We’re not just talking about NaCl. Salt is a configuration management system, capable of maintaining remote nodes in defined states, and a distributed remote execution system used to execute commands and query data on remote nodes.
It was developed in order to bring the best solutions found in the world of remote execution together and make them better, faster, and more malleable. Salt accomplishes this through its ability to handle large loads of information, and not just dozens but hundreds and even thousands of individual servers quickly through a simple and manageable interface.
Providing versatility between massive scale deployments and smaller systems may seem daunting, but Salt is very simple to set up and maintain, regardless of the size of the project. The architecture of Salt is designed to work with any number of servers, from a handful of local network systems to international deployments across different data centers. The topology is a simple server/client model with the needed functionality built into a single set of daemons. While the default configuration will work with little to no modification, Salt can be fine tuned to meet specific needs.
The core functions of Salt: enable commands to remote systems to be called in parallel rather than serially, use a secure and encrypted protocol, use the smallest and fastest network payloads possible, and provide a simple programming interface.
Salt also introduces more granular controls to the realm of remote execution, allowing systems to be targeted not just by hostname, but also by system properties.
Salt takes advantage of a number of technologies and techniques. The networking layer is built with the excellent ZeroMQ networking library, so the Salt daemon includes a viable and transparent AMQ broker. Salt uses public keys for authentication with the master daemon, then uses faster AES encryption for payload communication; authentication and encryption are integral to Salt. Salt takes advantage of communication via msgpack, enabling fast and light network traffic.
In order to allow for simple expansion, Salt execution routines can be written as plain Python modules. The data collected from Salt executions can be sent back to the master server, or to any arbitrary program. Salt can be called from a simple Python API, or from the command line, so that Salt can be used to execute one-off commands as well as operate as an integral part of a larger application.
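As a brief, hedged sketch of that Python API (assuming a configured, running master, a connected minion, and permission to read the master configuration), a one-off command can be issued with LocalClient:
# Equivalent to the CLI command: salt '*' test.ping
import salt.client
local = salt.client.LocalClient()
result = local.cmd('*', 'test.ping')
print(result)  # e.g. {'minion1': True}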
The result is a system that can execute commands at high speed on target server groups ranging from one to very many servers. Salt is very fast, easy to set up, amazingly malleable and provides a single remote execution architecture that can manage the diverse requirements of any number of servers. The Salt infrastructure brings together the best of the remote execution world, amplifies its capabilities and expands its range, resulting in a system that is as versatile as it is practical, suitable for any network.
Salt is developed under the Apache 2.0 license, and can be used for open and proprietary projects. Please submit your expansions back to the Salt project so that we can all benefit together as Salt grows. Please feel free to sprinkle Salt around your systems and let the deliciousness come forth.
Join the Salt!
There are many ways to participate in and communicate with the Salt community.
Salt has an active IRC channel and a mailing list.
Join the salt-users mailing list. It is the best place to ask questions about Salt and see what's going on with Salt development! The Salt mailing list is hosted by Google Groups. It is open to new members.
The #salt IRC channel is hosted on the popular Freenode network. You can use the Freenode webchat client right from your browser.
Logs of the IRC channel activity are being collected courtesy of Moritz Lenz.
If you wish to discuss the development of Salt itself, join us in #salt-devel.
The Salt code is developed via GitHub. Follow Salt for constant updates on what is happening in Salt development:
SaltStack Inc. keeps a blog with recent news and advancements:
http://www.saltstack.com/blog/
Thomas Hatch also shares news and thoughts on Salt and related projects in his personal blog The Red45:
The official salt-states repository is:
https://github.com/saltstack/salt-states
A few examples of salt states from the community:
If you want to get involved with the development of source code or the documentation efforts, please review the hacking section!
See also
Installing Salt for development and contributing to the project.
On most distributions, you can set up a Salt Minion with the Salt Bootstrap.
These guides go into detail how to install Salt on a given platform.
Salt (stable) is currently available via the Arch Linux Official repositories. There are currently -git packages available in the Arch User Repository (AUR) as well.
Install Salt stable releases from the Arch Linux Official repositories as follows:
pacman -S salt-zmq
To install Salt stable releases using the RAET protocol, use the following:
pacman -S salt-raet
Note
transports
Unlike other Linux distributions, please be aware that Arch Linux's package manager, pacman, defaults to RAET as the Salt transport. If you want to use ZeroMQ instead, make sure to enter the associated number for the salt-zmq repository when prompted.
To install the bleeding edge version of Salt (may include bugs!), install the -git package as follows:
wget https://aur.archlinux.org/packages/sa/salt-git/salt-git.tar.gz
tar xf salt-git.tar.gz
cd salt-git/
makepkg -is
Note
yaourt
If a tool such as Yaourt is used, the dependencies will be gathered and built automatically.
The command to install salt using the yaourt tool is:
yaourt salt-git
systemd
Activate the Salt Master and/or Minion via systemctl as follows:
systemctl enable salt-master.service
systemctl enable salt-minion.service
Start the Master
Once you've completed all of these steps you're ready to start your Salt Master. You should be able to start your Salt Master now using the command seen here:
systemctl start salt-master
Now go to the Configuring Salt page.
Currently the latest packages for Debian Old Stable, Stable, and Unstable (Squeeze, Wheezy, and Sid) are published in our (saltstack.com) Debian repository.
For squeeze, you will need to enable the Debian backports repository as well as the debian.saltstack.com repository. To do so, add the following to /etc/apt/sources.list or a file in /etc/apt/sources.list.d:
deb http://debian.saltstack.com/debian squeeze-saltstack main
deb http://backports.debian.org/debian-backports squeeze-backports main
For wheezy, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:
deb http://debian.saltstack.com/debian wheezy-saltstack main
For jessie, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:
deb http://debian.saltstack.com/debian jessie-saltstack main
For sid, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:
deb http://debian.saltstack.com/debian unstable main
You will need to import the key used for signing.
wget -q -O- "http://debian.saltstack.com/debian-salt-team-joehealy.gpg.key" | apt-key add -
Note
You can optionally verify the key integrity with sha512sum using the public key signature shown here, e.g.:
echo "b702969447140d5553e31e9701be13ca11cc0a7ed5fe2b30acb8491567560ee62f834772b5095d735dfcecb2384a5c1a20045f52861c417f50b68dd5ff4660e6 debian-salt-team-joehealy.gpg.key" | sha512sum -c
After importing the key, update the package management database:
apt-get update
Install the Salt master, minion, or syndic from the repository with the apt-get command. These examples each install one daemon, but more than one package name may be given at a time:
apt-get install salt-master
apt-get install salt-minion
apt-get install salt-syndic
Now, go to the Configuring Salt page.
Beginning with version 0.9.4, Salt has been available in the primary Fedora repositories and EPEL. It is installable using yum. Fedora will have more up to date versions of Salt than other members of the Red Hat family, which makes it a great place to help improve Salt!
WARNING: Fedora 19 comes with systemd 204. Systemd has known bugs fixed in later revisions that prevent the salt-master from starting reliably or opening the network connections that it needs to. It's not likely that a salt-master will start or run reliably on any distribution that uses systemd version 204 or earlier. Running salt-minions should be OK.
Salt can be installed using yum and is available in the standard Fedora repositories.
Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions.
yum install salt-master
yum install salt-minion
updates-testing
When a new Salt release is packaged, it is first admitted into the updates-testing repository before being moved to the stable repo.
To install from updates-testing, use the enablerepo argument for yum:
yum --enablerepo=updates-testing install salt-master
yum --enablerepo=updates-testing install salt-minion
Master
To have the Master start automatically at boot time:
systemctl enable salt-master.service
To start the Master:
systemctl start salt-master.service
Minion
To have the Minion start automatically at boot time:
systemctl enable salt-minion.service
To start the Minion:
systemctl start salt-minion.service
Now go to the Configuring Salt page.
Salt was added to the FreeBSD ports tree Dec 26th, 2011 by Christer Edwards <christer.edwards@gmail.com>. It has been tested on FreeBSD 7.4, 8.2, 9.0, and 9.1 releases.
Salt is dependent on the following additional ports. These will be installed as dependencies of the sysutils/py-salt port:
/devel/py-yaml
/devel/py-pyzmq
/devel/py-Jinja2
/devel/py-msgpack
/security/py-pycrypto
/security/py-m2crypto
On FreeBSD 10 and later, to install Salt from the FreeBSD pkgng repo, use the command:
pkg install py27-salt
On older versions of FreeBSD, to install Salt from the FreeBSD ports tree, use the command:
make -C /usr/ports/sysutils/py-salt install clean
Master
Copy the sample configuration file:
cp /usr/local/etc/salt/master.sample /usr/local/etc/salt/master
rc.conf
Activate the Salt Master in /etc/rc.conf or /etc/rc.conf.local by adding:
salt_master_enable="YES"
Start the Master
Start the Salt Master as follows:
service salt_master start
Minion
Copy the sample configuration file:
cp /usr/local/etc/salt/minion.sample /usr/local/etc/salt/minion
rc.conf
Activate the Salt Minion in /etc/rc.conf or /etc/rc.conf.local by adding:
salt_minion_enable="YES"
salt_minion_paths="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin"
Start the Minion
Start the Salt Minion as follows:
service salt_minion start
Now go to the Configuring Salt page.
Salt can be easily installed on Gentoo via Portage:
emerge app-admin/salt
Now go to the Configuring Salt page.
Salt was added to the OpenBSD ports tree on Aug 10th 2013. It has been tested on OpenBSD 5.5 onwards.
Salt is dependent on the following additional ports. These will be installed as dependencies of the sysutils/salt port:
/net/py-msgpack
/net/py-zmq
/security/py-M2Crypto
/security/py-crypto
/textproc/py-MarkupSafe
/textproc/py-yaml
/www/py-jinja2
/www/py-requests
Master
To have the Master start automatically at boot time:
rcctl enable salt_master
To start the Master:
rcctl start salt_master
Minion
To have the Minion start automatically at boot time:
rcctl enable salt_minion
To start the Minion:
rcctl start salt_minion
Now go to the Configuring Salt page.
It should be noted that Homebrew explicitly discourages the use of sudo:
Homebrew is designed to work without using sudo. You can decide to use it but we strongly recommend not to do so. If you have used sudo and run into a bug then it is likely to be the cause. Please don’t file a bug report unless you can reproduce it after reinstalling Homebrew from scratch without using sudo.
So when using Homebrew, if you want support from the Homebrew community, install this way:
brew install saltstack
When using MacPorts, install this way:
sudo port install salt
When only using the OS X system's pip, install this way:
sudo pip install salt
To run salt-master on OS X, the root user maxfiles limit must be increased:
Note
On OS X 10.10 (Yosemite) and higher, maxfiles should not be adjusted. The default limits are sufficient in all but the most extreme scenarios. Overriding these values with the setting below will cause system instability!
sudo launchctl limit maxfiles 4096 8192
Then add this configuration option to the /etc/salt/master file:
max_open_files: 8192
Now the salt-master should run without errors:
sudo salt-master --log-level=all
Now go to the Configuring Salt page.
Since Salt is on PyPI, it can be installed using pip, though most users prefer to install using RPMs (which can be installed from EPEL). Installation from pip is easy:
pip install salt
Warning
If installing from pip (or from source using setup.py install), be advised that the yum-utils package is needed for Salt to manage packages. Also, if the Python dependencies are not already installed, then you will need additional libraries/tools installed to build some of them. More information on this can be found here.
Due to the removal of some of Salt's dependencies from EPEL5, we have created a repository on Fedora COPR. Moving forward, this will be the official means of installing Salt on RHEL5-based systems. Information on how to enable this repository can be found here.
Beginning with version 0.9.4, Salt has been available in EPEL. It is installable using yum. Salt should work properly with all mainstream derivatives of RHEL, including CentOS, Scientific Linux, Oracle Linux and Amazon Linux. Report any bugs or issues on the issue tracker.
On RHEL6, the proper Jinja package 'python-jinja2' was moved from EPEL to the "RHEL Server Optional Channel". Verify this repository is enabled before installing salt on RHEL6.
If the EPEL repository is not installed on your system, you can download the RPM from here for RHEL/CentOS 6 (or here for RHEL/CentOS 7) and install it using the following command:
rpm -Uvh epel-release-X-Y.rpm
Replace epel-release-X-Y.rpm with the appropriate filename.
Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions.
On the salt-master, run this:
yum install salt-master
On each salt-minion, run this:
yum install salt-minion
epel-testing
When a new Salt release is packaged, it is first admitted into the epel-testing repository before being moved to the stable repo.
To install from epel-testing, use the enablerepo argument for yum:
yum --enablerepo=epel-testing install salt-minion
We recommend using ZeroMQ 4 where available. SaltStack provides ZeroMQ 4.0.4 and pyzmq 14.3.1 in a COPR repository. Instructions for adding this repository (as well as for upgrading ZeroMQ and pyzmq on existing minions) can be found here.
If this repo is added before Salt is installed, then installing either salt-master or salt-minion will automatically pull in ZeroMQ 4.0.4, and additional states to upgrade ZeroMQ and pyzmq are unnecessary.
Warning
RHEL/CentOS 5 Users
Using COPR repos on RHEL/CentOS 5 requires that the python-hashlib package be installed. Not having it present will result in checksum errors because YUM will not be able to process the SHA256 checksums used by COPR.
Note
For RHEL/CentOS 5 installations, if using the new repository to install Salt (as detailed above), then it is not necessary to enable the zeromq4 COPR, as the new EL5 repository includes ZeroMQ 4.
Salt's interface to yum makes heavy use of the repoquery utility, from the yum-utils package. This package will be installed as a dependency if salt is installed via EPEL. However, if salt has been installed using pip, or a host is being managed using salt-ssh, then as of version 2014.7.0 yum-utils will be installed automatically to satisfy this dependency.
Master
To have the Master start automatically at boot time:
chkconfig salt-master on
To start the Master:
service salt-master start
Minion
To have the Minion start automatically at boot time:
chkconfig salt-minion on
To start the Minion:
service salt-minion start
Now go to the Configuring Salt page.
Salt was added to the OpenCSW package repository in September of 2012 by Romeo Theriault <romeot@hawaii.edu> at version 0.10.2 of Salt. It has mainly been tested on Solaris 10 (sparc), though it is built for and has been tested minimally on Solaris 10 (x86), Solaris 9 (sparc/x86), and Solaris 11 (sparc/x86). (Please let me know if you're using it on these platforms!) Most of the testing has focused on the minion, though it has been verified that the master starts up successfully on Solaris 10.
Comments and patches for better support on these platforms are very welcome.
As of version 0.10.4, Solaris is well supported under salt, with all of the following working well:
Salt is dependent on the following additional packages. These will automatically be installed as dependencies of the py_salt package:
To install Salt from the OpenCSW package repository you first need to install pkgutil, assuming you don't already have it installed:
On Solaris 10:
pkgadd -d http://get.opencsw.org/now
On Solaris 9:
wget http://mirror.opencsw.org/opencsw/pkgutil.pkg
pkgadd -d pkgutil.pkg all
Once pkgutil is installed you'll need to edit its config file /etc/opt/csw/pkgutil.conf to point it at the unstable catalog:
- #mirror=http://mirror.opencsw.org/opencsw/testing
+ mirror=http://mirror.opencsw.org/opencsw/unstable
OK, time to install salt.
# Update the catalog
root> /opt/csw/bin/pkgutil -U
# Install salt
root> /opt/csw/bin/pkgutil -i -y py_salt
Now that salt is installed you can find its configuration files in /etc/opt/csw/salt/.
You'll want to edit the minion config file to set the name of your salt master server:
- #master: salt
+ master: your-salt-server
If you would like to use pkgutil as the default package provider for your Solaris minions, you can do so using the providers option in the minion config file, as shown in the sketch below.
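A minimal sketch of that config snippet (the pkgutil provider name here assumes a Salt version that ships the pkgutil module):
# in the minion config file: use pkgutil for all pkg states
providers:
  pkg: pkgutil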
You can now start the salt minion like so:
On Solaris 10:
svcadm enable salt-minion
On Solaris 9:
/etc/init.d/salt-minion start
You should now be able to log onto the salt master and check to see if the salt-minion key is awaiting acceptance:
salt-key -l un
Accept the key:
salt-key -a <your-salt-minion>
Run a simple test against the minion:
salt '<your-salt-minion>' test.ping
Logs are in /var/log/salt.
The latest packages for Ubuntu are published in the saltstack PPA. If you have the add-apt-repository utility, you can add the repository and import the key in one step:
sudo add-apt-repository ppa:saltstack/salt
In addition to the main repository, there are secondary repositories for each individual major release. These repositories receive security and point releases but will not upgrade to any subsequent major release. There are currently four available repos: salt16, salt17, salt2014-1, salt2014-7. For example, to follow 2014.7.x releases:
sudo add-apt-repository ppa:saltstack/salt2014-7
add-apt-repository: command not found?
The add-apt-repository command is not always present on Ubuntu systems. This can be fixed by installing python-software-properties:
sudo apt-get install python-software-properties
The following may be required as well:
sudo apt-get install software-properties-common
Note that since Ubuntu 12.10 (Quantal Quetzal), add-apt-repository is found in the software-properties-common package and is part of the base install. Thus, add-apt-repository should be able to be used out-of-the-box to add the PPA.
Alternately, manually add the repository and import the PPA key with these commands:
echo deb http://ppa.launchpad.net/saltstack/salt/ubuntu `lsb_release -sc` main | sudo tee /etc/apt/sources.list.d/saltstack.list
wget -q -O- "http://keyserver.ubuntu.com:11371/pks/lookup?op=get&search=0x4759FA960E27C0A6" | sudo apt-key add -
After adding the repository, update the package management database:
sudo apt-get update
Install the Salt master, minion, or syndic from the repository with the apt-get command. These examples each install one daemon, but more than one package name may be given at a time:
sudo apt-get install salt-master
sudo apt-get install salt-minion
sudo apt-get install salt-syndic
Some core components are packaged separately in the Ubuntu repositories. These should be installed as well: salt-cloud, salt-ssh, salt-api
sudo apt-get install salt-cloud
sudo apt-get install salt-ssh
sudo apt-get install salt-api
ZeroMQ 4 is available by default for Ubuntu 14.04 and newer. However, for Ubuntu 12.04 LTS, starting with Salt version 2014.7.5, ZeroMQ 4 is included with the Salt installation package and nothing additional needs to be done.
Now go to the Configuring Salt page.
Salt has full support for running the Salt Minion on Windows.
There are no plans for the foreseeable future to develop a Salt Master on Windows. For now you must run your Salt Master on a supported operating system to control your Salt Minions on Windows.
Many of the standard Salt modules have been ported to work on Windows and many of the Salt States currently work on Windows, as well.
Salt Minion Windows installers can be found here. The output of md5sum <salt minion exe> should match the contents of the corresponding md5 file.
Download here
Note
The 2014.7.0 installers have been removed because of a regression. Please use the 2014.7.1 release instead.
Note
The executables above will install dependencies that the Salt minion requires.
The 64bit installer has been tested on Windows 7 64bit and Windows Server 2008R2 64bit. The 32bit installer has been tested on Windows 2003 Server 32bit. Please file a bug report on our GitHub repo if issues for other platforms are found.
The installer asks for two pieces of information: the master hostname and the minion name. The installer will update the minion config with these options and then start the minion.
The salt-minion service will appear in the Windows Service Manager and can be started and stopped there or with the command line program sc like any other Windows service.
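For example, from an elevated command prompt:
sc start salt-minion
sc stop salt-minion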
If the minion won't start, try installing the Microsoft Visual C++ 2008 x64 SP1 redistributable. Applying all Windows updates will also help salt-minion run smoothly.
The installer can be run silently by providing the /S option at the command line. The options /master and /minion-name allow for configuring the master hostname and minion name, respectively. Here's an example of using the silent installer:
Salt-Minion-0.17.0-Setup-amd64.exe /S /master=yoursaltmaster /minion-name=yourminionname
This document will explain how to set up a development environment for salt on Windows. The development environment allows you to work with the source code to customize or fix bugs. It will also allow you to build your own installation.
To do this the easy way you only need to install Git for Windows.
Clone the Salt-Windows-Dev repo from github.
Open a command line and type:
git clone https://github.com/saltstack/salt-windows-dev
Build the Python Environment
Go into the salt-windows-dev directory. Right-click the file named dev_env.ps1 and select Run with PowerShell
If you get an error, you may need to change the execution policy.
Open a powershell window and type the following:
Set-ExecutionPolicy RemoteSigned
This will download and install Python with all the dependencies needed to develop and build salt.
Build the Salt Environment
Right-click on the file named dev_env_salt.ps1 and select Run with Powershell
This will clone salt into C:\Salt-Dev\salt and set it to the 2015.5 branch. You could optionally run the command from a powershell window with a -Version switch to pull a different version. For example:
dev_env_salt.ps1 -Version '2014.7'
To view a list of available branches and tags, open a command prompt in your C:\Salt-Dev\salt directory and type:
git branch -a
git tag -n
Install the following software:
Download the Prerequisite zip file for your CPU architecture from the SaltStack download site:
These files contain all software required to build and develop salt. Unzip the contents of the file to C:\Salt-Dev\temp.
Build the Python Environment
Install Python:
Browse to the C:\Salt-Dev\temp directory and find the Python installation file for your CPU architecture under the corresponding subfolder. Double-click the file to install Python.
Make sure the following are in your PATH environment variable:
C:\Python27
C:\Python27\Scripts
Install Pip
Open a command prompt and navigate to C:\Salt-Dev\temp
Run the following command:
python get-pip.py
Easy Install compiled binaries.
M2Crypto, PyCrypto, and PyWin32 need to be installed using Easy Install.
Open a command prompt and navigate to C:\Salt-Dev\temp\<cpuarch>.
Run the following commands:
easy_install -Z <M2Crypto file name>
easy_install -Z <PyCrypto file name>
easy_install -Z <PyWin32 file name>
Note
You can type the first part of the file name and then press the tab key to auto-complete the name of the file.
Pip Install Additional Prerequisites
All remaining prerequisites need to be pip installed. These prerequisites are as follows:
Open a command prompt and navigate to C:\Salt-Dev\temp. Run the following commands:
pip install <cpuarch>\<MarkupSafe file name>
pip install <Jinja file name>
pip install <cpuarch>\<MsgPack file name>
pip install <cpuarch>\<psutil file name>
pip install <cpuarch>\<PyYAML file name>
pip install <cpuarch>\<pyzmq file name>
pip install <WMI file name>
pip install <requests file name>
pip install <certifi file name>
Build the Salt Environment
Clone Salt
Open a command prompt and navigate to C:\Salt-Dev. Run the following command to clone salt:
git clone https://github.com/saltstack/salt
Checkout Branch
Checkout the branch or tag of salt you want to work on or build. Open a command prompt and navigate to C:\Salt-Dev\salt. Get a list of available tags and branches by running the following commands:
git fetch --all
To view a list of available branches:
git branch -a
To view a list of available tags:
git tag -n
Checkout the branch or tag by typing the following command:
git checkout <branch/tag name>
Clean the Environment
When switching between branches residual files can be left behind that will interfere with the functionality of salt. Therefore, after you check out the branch you want to work on, type the following commands to clean the salt environment:
There are two ways to develop with salt. You can run salt's setup.py each time you make a change to source code or you can use the setup tools develop mode.
Both methods require that the minion configuration be in the C:\salt directory. Copy the conf and var directories from C:\Salt-Dev\salt\pkg\windows\buildenv to C:\salt. Now go into the C:\salt\conf directory and edit the file named minion (no extension). You need to configure the master and id parameters in this file. Edit the following lines:
master: <ip or name of your master>
id: <name of your minion>
Go into the C:\Salt-Dev\salt directory from a cmd prompt and type:
python setup.py install --force
This will install Salt into your Python installation at C:\Python27.
Every time you make an edit to your source code, you'll have to stop the minion, run the setup, and start the minion.
To start the salt-minion go into C:\Python27\Scripts from a cmd prompt and type:
salt-minion
For debug mode type:
salt-minion -l debug
To stop the minion press Ctrl+C.
To use the Setup Tools Develop Mode go into C:\Salt-Dev\salt from a cmd prompt and type:
pip install -e .
This will install pointers to your source code that resides at C:\Salt-Dev\salt. When you edit your source code you only have to restart the minion.
This is the method of building the installer as of version 2014.7.4.
Make sure you don't have any leftover salt files from previous versions of salt in your Python directory: check both the C:\Python27\Scripts directory and the C:\Python27\Lib\site-packages directory, and remove anything salt-related.
Install salt using salt's setup.py. From the C:\Salt-Dev\salt directory type the following command:
python setup.py install --force
From a cmd prompt go into the C:\Salt-Dev\salt\pkg\windows directory. Type the following command for the branch or tag of salt you're building:
BuildSalt.bat <branch or tag>
This will copy python with salt installed to the buildenv\bin directory, make it portable, and then create the windows installer. The .exe for the windows installer will be placed in the installer directory.
Create the directory C:\salt (if it doesn't exist already)
Copy the example conf and var directories from pkg/windows/buildenv/ into C:\salt
Edit C:\salt\conf\minion
master: ipaddress or hostname of your salt-master
Start the salt-minion
cd C:\Python27\Scripts
python salt-minion
On the salt-master accept the new minion's key
sudo salt-key -A
This accepts all unaccepted keys. If you're concerned about security just accept the key for this specific minion.
Test that your minion is responding
On the salt-master run:
sudo salt '*' test.ping
You should get the following response: {'your minion hostname': True}
On a 64 bit Windows host the following script makes an unattended install of salt, including all dependencies:
Not up to date.
This script is not up to date. Please use the installer found above
# (All in one line.)
"PowerShell (New-Object System.Net.WebClient).DownloadFile('http://csa-net.dk/salt/bootstrap64.bat','C:\bootstrap.bat');(New-Object -com Shell.Application).ShellExecute('C:\bootstrap.bat');"
You can execute the above command remotely from a Linux host using winexe:
winexe -U "administrator" //fqdn "PowerShell (New-Object ......);"
For more info check http://csa-net.dk/salt
On Windows Server 2003, you need to install the optional component "WMI Windows Installer Provider" to get a full list of installed packages. If you don't have this, salt-minion can't report some installed software.
With openSUSE 13.1, Salt 0.16.4 has been available in the primary repositories. The devel:languages:python repo will have more up-to-date versions of salt; all package development will be done there.
Salt can be installed using zypper and is available in the standard openSUSE 13.1 repositories.
Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions.
zypper install salt-master
zypper install salt-minion
Master
To have the Master start automatically at boot time:
systemctl enable salt-master.service
To start the Master:
systemctl start salt-master.service
Minion
To have the Minion start automatically at boot time:
systemctl enable salt-minion.service
To start the Minion:
systemctl start salt-minion.service
Master
To have the Master start automatically at boot time:
chkconfig salt-master on
To start the Master:
rcsalt-master start
Minion
To have the Minion start automatically at boot time:
chkconfig salt-minion on
To start the Minion:
rcsalt-minion start
For openSUSE Factory run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_Factory/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
For openSUSE 13.1 run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_13.1/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
For openSUSE 12.3 run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_12.3/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
For openSUSE 12.2 run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_12.2/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
For openSUSE 12.1 run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_12.1/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
For bleeding edge python Factory run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/bleeding_edge_python_Factory/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
For SLE 12 run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_12/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
For SLE 11 SP3 run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_11_SP3/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
For SLE 11 SP2 run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_11_SP2/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
Now go to the Configuring Salt page.
Salt should run on any Unix-like platform so long as the dependencies are met.
Depending on the chosen Salt transport, ZeroMQ or RAET, dependencies vary:
Salt defaults to the ZeroMQ transport, and the choice can be made at install time, for example:
python setup.py --salt-transport=raet install
This way, only the required dependencies are pulled by the setup script if need be.
If installing using pip, the --salt-transport install option can be provided like:
pip install --install-option="--salt-transport=raet" salt
When upgrading Salt, the master(s) should always be upgraded first. Backward compatibility for minions running newer versions of salt than their masters is not guaranteed.
Whenever possible, backward compatibility between new masters and old minions will be preserved. Generally, the only exception to this policy is in case of a security vulnerability.
Running a masterless salt-minion lets you use Salt's configuration management for a single machine without calling out to a Salt master on another machine.
Since the Salt minion contains such extensive functionality it can be useful to run it standalone. A standalone minion can be used to do a number of things:
It is also useful for testing out state trees before deploying to a production setup.
The salt-bootstrap script makes bootstrapping a server with Salt simple for any OS with a Bourne shell:
curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh
See the salt-bootstrap documentation for other one liners. When using Vagrant to test out salt, the Vagrant salt provisioner will provision the VM for you.
To instruct the minion to not look for a master, the file_client configuration option needs to be set in the minion configuration file. By default the file_client is set to remote so that the minion gathers file server and pillar data from the salt master. When setting the file_client option to local, the minion is configured to not gather this data from the master.
file_client: local
Now the salt minion will not look for a master and will assume that the local system has all of the file and pillar resources.
Note
When running Salt in masterless mode, do not run the salt-minion daemon. Otherwise, it will attempt to connect to a master and fail. The salt-call command stands on its own and does not need the salt-minion daemon.
Following the successful installation of a salt-minion, the next step is to create a state tree, which is where the SLS files that comprise the possible states of the minion are stored.
The following example walks through the steps necessary to create a state tree that ensures that the server has the Apache webserver installed.
Note
For a complete explanation on Salt States, see the tutorial.
First, create the top.sls file:
/srv/salt/top.sls:
base:
'*':
- webserver
/srv/salt/webserver.sls:
apache: # ID declaration
pkg: # state declaration
- installed # function declaration
Note
The apache package has different names on different platforms: on Debian/Ubuntu it is apache2, on Fedora/RHEL it is httpd, and on Arch it is apache.
The only thing left is to provision our minion using salt-call and the highstate command.
The salt-call command is used to run module functions locally on a minion instead of executing them from the master. Normally the salt-call command checks into the master to retrieve file server and pillar data, but when running standalone salt-call needs to be instructed to not check the master for this data:
salt-call --local state.highstate
The --local flag tells the salt-minion to look for the state tree in the local file system and not to contact a Salt Master for instructions.
To provide verbose output, use -l debug:
salt-call --local state.highstate -l debug
The minion first examines the top.sls file and determines that it is a part of the group matched by the * glob and that the webserver SLS should be applied. It then examines the webserver.sls file and finds the apache state, which installs the Apache package.
The minion should now have Apache installed, and the next step is to begin learning how to write more complex states.
The state system can be easily run without a Salt master, with all needed files local to the minion. To do this the minion configuration file needs to be set up to know how to return file_roots information like the master. The file_roots setting defaults to /srv/salt for the base environment just like on the master:
file_roots:
base:
- /srv/salt
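As a sketch, file_roots can define additional environments beyond base on a masterless minion exactly as it can on a master (the dev environment and its path here are illustrative, not required):
file_roots:
  base:
    - /srv/salt
  dev:
    - /srv/salt/dev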
Now set up the Salt State Tree, top file, and SLS modules in the same way that they would be set up on a master. With the file_client option set to local and an available state tree, calls to functions in the state module will use the information in the file_roots on the minion instead of checking in with the master.
Remember that when creating a state tree on a minion there are no syntax or path changes needed; SLS modules written to be used from a master do not need to be modified in any way to work with a minion.
This makes it easy to "script" deployments with Salt states without having to set up a master, and allows for these SLS modules to be easily moved into a Salt master as the deployment grows.
The declared state can now be executed with:
salt-call state.highstate
Or the salt-call command can be executed with the --local flag; this makes it unnecessary to change the configuration file:
salt-call state.highstate --local
The Salt master communicates with the minions using an AES-encrypted ZeroMQ connection. These communications are done over TCP ports 4505 and 4506, which need to be accessible on the master only. This document outlines suggested firewall rules for allowing these incoming connections to the master.
Note
No firewall configuration needs to be done on Salt minions. These changes refer to the master only.
Starting with Fedora 18, FirewallD is the tool that is used to dynamically manage the firewall rules on a host. It has support for IPv4/6 settings and the separation of runtime and permanent configurations. To interact with FirewallD use the command line client firewall-cmd.
firewall-cmd example:
firewall-cmd --permanent --zone=<zone> --add-port=4505-4506/tcp
Please choose the desired zone according to your setup. Don't forget to reload after you make your changes.
firewall-cmd --reload
The lokkit command packaged with some Linux distributions makes opening iptables firewall ports very simple via the command line. Just be careful to not lock out access to the server by neglecting to open the ssh port.
lokkit example:
lokkit -p 22:tcp -p 4505:tcp -p 4506:tcp
The system-config-firewall-tui command provides a text-based interface to modifying the firewall.
system-config-firewall-tui:
system-config-firewall-tui
Salt installs firewall rules in /etc/sysconfig/SuSEfirewall2.d/services/salt. Enable with:
SuSEfirewall2 open
SuSEfirewall2 start
If you have an older package of Salt where the above configuration file is not included, the SuSEfirewall2 command makes opening iptables firewall ports very simple via the command line.
SuSEfirewall example:
SuSEfirewall2 open EXT TCP 4505
SuSEfirewall2 open EXT TCP 4506
The firewall module in YaST2 provides a text-based interface to modifying the firewall.
YaST2:
yast2 firewall
Different Linux distributions store their iptables (also known as netfilter) rules in different places, which makes it difficult to standardize firewall documentation. Included are some of the more common locations, but your mileage may vary.
Fedora / RHEL / CentOS:
/etc/sysconfig/iptables
Arch Linux:
/etc/iptables/iptables.rules
Debian
Follow these instructions: https://wiki.debian.org/iptables
Once you've found your firewall rules, you'll need to add the two lines below to allow traffic on tcp/4505 and tcp/4506:
-A INPUT -m state --state new -m tcp -p tcp --dport 4505 -j ACCEPT
-A INPUT -m state --state new -m tcp -p tcp --dport 4506 -j ACCEPT
Ubuntu
Salt installs firewall rules in /etc/ufw/applications.d/salt.ufw. Enable with:
ufw allow salt
The BSD family of operating systems uses packet filter (pf). The following example describes the additions to pf.conf needed to access the Salt master.
pass in on $int_if proto tcp from any to $int_if port 4505
pass in on $int_if proto tcp from any to $int_if port 4506
Once these additions have been made to pf.conf, the rules will need to be reloaded. This can be done using the pfctl command.
pfctl -vf /etc/pf.conf
There are situations where you want to selectively allow Minion traffic from specific hosts or networks into your Salt Master. The first scenario which comes to mind is to prevent unwanted traffic to your Master out of security concerns, but another scenario is to handle Minion upgrades when there are backwards incompatible changes between the installed Salt versions in your environment.
Here is an example Linux iptables ruleset to be set on the Master:
# Allow Minions from these networks
-I INPUT -s 10.1.2.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT
-I INPUT -s 10.1.3.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT
# Allow Salt to communicate with Master on the loopback interface
-A INPUT -i lo -p tcp -m multiport --dports 4505,4506 -j ACCEPT
# Reject everything else
-A INPUT -p tcp -m multiport --dports 4505,4506 -j REJECT
Note
The important thing to note here is that the salt command needs to communicate with the listening network socket of salt-master on the loopback interface. Without this you will see no outgoing Salt traffic from the master, even for a simple salt '*' test.ping, because the salt client never reached the salt-master to tell it to carry out the execution.
The Salt Minion can initiate its own highstate using the salt-call command.
salt-call state.highstate
This will cause the minion to check in with the master and ensure it is in the correct 'state'.
If you would like the Salt Minion to regularly check in with the master you can use the venerable cron to run the salt-call command.
# PATH=/bin:/sbin:/usr/bin:/usr/sbin
00 00 * * * salt-call state.highstate
The above cron entry will run a highstate every day at midnight.
Note
Be aware that you may need to ensure the PATH for cron includes any scripts or commands that need to be executed.
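The schedule is ordinary cron syntax, so it can be adjusted freely; for example, to run a highstate at the top of every hour instead:
# PATH=/bin:/sbin:/usr/bin:/usr/sbin
0 * * * * salt-call state.highstate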
Before continuing make sure you have a working Salt installation by following the installation and the configuration instructions.
Stuck?
There are many ways to get help from the Salt community including our mailing list and our IRC channel #salt.
Now that you have a master and at least one minion communicating with each other you can perform commands on the minion via the salt command. Salt calls are composed of three main components:
salt '<target>' <function> [arguments]
See also
The target component allows you to filter which minions should run the following function. The default filter is a glob on the minion id. For example:
salt '*' test.ping
salt '*.example.org' test.ping
Targets can be based on minion system information using the Grains system:
salt -G 'os:Ubuntu' test.ping
See also
Targets can be filtered by regular expression:
salt -E 'virtmach[0-9]' test.ping
Targets can be explicitly specified in a list:
salt -L 'foo,bar,baz,quo' test.ping
Or multiple target types can be combined in one command:
salt -C 'G@os:Ubuntu and webser* or E@database.*' test.ping
A function is some functionality provided by a module. Salt ships with a large collection of available functions. List all available functions on your minions:
salt '*' sys.doc
Here are some examples:
Show all currently available minions:
salt '*' test.ping
Run an arbitrary shell command:
salt '*' cmd.run 'uname -a'
See also
Space-delimited arguments to the function:
salt '*' cmd.exec_code python 'import sys; print sys.version'
Optional keyword arguments are also supported:
salt '*' pip.install salt timeout=5 upgrade=True
They are always in the form of kwarg=argument.
Note
This walkthrough assumes that the reader has already completed the initial Salt walkthrough.
Pillars are tree-like structures of data defined on the Salt Master and passed through to minions. They allow confidential, targeted data to be securely sent only to the relevant minion.
Note
Grains and Pillar are sometimes confused; just remember that Grains are data about a minion which is stored or generated from the minion. This is why information like the OS and CPU type are found in Grains. Pillar is information about a minion or many minions stored or generated on the Salt Master.
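To see the distinction in practice, compare the output of these two commands; the first returns data generated on each minion, the second data assigned by the master:
salt '*' grains.items
salt '*' pillar.items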
Pillar data is useful for:
Pillar is therefore one of the most important systems when using Salt. This walkthrough is designed to get a simple Pillar up and running in a few minutes and then to dive into the capabilities of Pillar and where the data is available.
The pillar is already running in Salt by default. To see the minion's pillar data:
salt '*' pillar.items
Note
Prior to version 0.16.2, this function is named pillar.data. This function name is still supported for backwards compatibility.
By default the contents of the master configuration file are loaded into pillar for all minions. This enables the master configuration file to be used for global configuration of minions.
Similar to the state tree, the pillar is comprised of sls files and has a top file. The default location for the pillar is in /srv/pillar.
Note
The pillar location can be configured via the pillar_roots option inside the master configuration file. It must not be in a subdirectory of the state tree.
To start setting up the pillar, the /srv/pillar directory needs to be present:
mkdir /srv/pillar
Now create a simple top file, following the same format as the top file used for states:
/srv/pillar/top.sls:
base:
'*':
- data
This top file associates the data.sls file to all minions. Now the /srv/pillar/data.sls file needs to be populated:
/srv/pillar/data.sls:
info: some data
To ensure that the minions have the new pillar data, issue a command to them asking that they fetch their pillars from the master:
salt '*' saltutil.refresh_pillar
Now that the minions have the new pillar, it can be retrieved:
salt '*' pillar.items
The key info should now appear in the returned pillar data.
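A single key can also be queried directly; for example, using the pillar.item function (available in the same releases that provide pillar.items):
salt '*' pillar.item info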
Unlike states, pillar files do not need to define formulas. This example sets up user data with a UID:
/srv/pillar/users/init.sls:
users:
thatch: 1000
shouse: 1001
utahdave: 1002
redbeard: 1003
Note
The same directory lookups that exist in states exist in pillar, so the
file users/init.sls
can be referenced with users
in the top
file.
The top file will need to be updated to include this sls file:
/srv/pillar/top.sls:
base:
'*':
- data
- users
Now the data will be available to the minions. To use the pillar data in a state, you can use Jinja:
/srv/salt/users/init.sls:
{% for user, uid in pillar.get('users', {}).items() %}
{{user}}:
user.present:
- uid: {{uid}}
{% endfor %}
This approach allows for users to be safely defined in a pillar and then the user data is applied in an sls file.
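For illustration, with the pillar data above the Jinja loop renders into plain YAML before the state is run; the first user, for example, becomes:
thatch:
  user.present:
    - uid: 1000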
Pillar data can be accessed in state files to customise behavior for each minion. All pillar (and grain) data applicable to each minion is substituted into the state files through templating before being run. Typical uses include setting directories appropriate for the minion and skipping states that don't apply.
A simple example is to set up a mapping of package names in pillar for separate Linux distributions:
/srv/pillar/pkg/init.sls:
pkgs:
{% if grains['os_family'] == 'RedHat' %}
apache: httpd
vim: vim-enhanced
{% elif grains['os_family'] == 'Debian' %}
apache: apache2
vim: vim
{% elif grains['os'] == 'Arch' %}
apache: apache
vim: vim
{% endif %}
The new pkg sls needs to be added to the top file:
/srv/pillar/top.sls:
base:
'*':
- data
- users
- pkg
Now the minions will auto map values based on respective operating systems inside of the pillar, so sls files can be safely parameterized:
/srv/salt/apache/init.sls:
apache:
pkg.installed:
- name: {{ pillar['pkgs']['apache'] }}
Or, if no pillar is available a default can be set as well:
Note
The function pillar.get used in this example was added to Salt in version 0.14.0.
/srv/salt/apache/init.sls:
apache:
pkg.installed:
- name: {{ salt['pillar.get']('pkgs:apache', 'httpd') }}
In the above example, if the pillar value pillar['pkgs']['apache'] is not set in the minion's pillar, then the default of httpd will be used.
Note
Under the hood, pillar is just a Python dict, so Python dict methods such as get and items can be used.
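As a sketch of that equivalence in plain Python (with made-up pillar data):
# pillar behaves like an ordinary dict on the minion
pillar = {'pkgs': {'apache': 'httpd'}}
print(pillar.get('pkgs', {}).get('apache', 'httpd'))  # httpd
print(list(pillar.items()))  # [('pkgs', {'apache': 'httpd'})]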
One of the design goals of pillar is to make simple sls formulas easily grow into more flexible formulas without refactoring or complicating the states.
A simple formula:
/srv/salt/edit/vim.sls:
vim:
pkg.installed: []
/etc/vimrc:
file.managed:
- source: salt://edit/vimrc
- mode: 644
- user: root
- group: root
- require:
- pkg: vim
Can be easily transformed into a powerful, parameterized formula:
/srv/salt/edit/vim.sls:
vim:
pkg.installed:
- name: {{ pillar['pkgs']['vim'] }}
/etc/vimrc:
file.managed:
- source: {{ pillar['vimrc'] }}
- mode: 644
- user: root
- group: root
- require:
- pkg: vim
Where the vimrc source location can now be changed via pillar:
/srv/pillar/edit/vim.sls:
{% if grains['id'].startswith('dev') %}
vimrc: salt://edit/dev_vimrc
{% elif grains['id'].startswith('qa') %}
vimrc: salt://edit/qa_vimrc
{% else %}
vimrc: salt://edit/vimrc
{% endif %}
This ensures that the right vimrc is sent out to the correct minions.
Pillar data can be set on the command line like so:
salt '*' state.highstate pillar='{"foo": "bar"}'
The state.sls command can also be used to set pillar values via the command line:
salt '*' state.sls my_sls_file pillar='{"hello": "world"}'
Lists can be passed in pillar as well:
salt '*' state.highstate pillar='["foo", "bar", "baz"]'
Note
If a key is passed on the command line that already exists on the minion, the key that is passed in will overwrite the entire value of that key, rather than merging only the specified value set via the command line.
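For example, assuming a minion whose pillar already defines a nested value under foo (hypothetical data), the following replaces foo wholesale rather than merging into it:
salt '*' state.highstate pillar='{"foo": {"only": "this"}}'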
Pillar data is generated on the Salt master and securely distributed to minions. Salt is not restricted to the pillar sls files when defining the pillar but can retrieve data from external sources. This can be useful when information about an infrastructure is stored in a separate location.
Reference information on pillar and the external pillar interface can be found in the Salt documentation:
Minion configuration options can be set on pillars. Any option that you want to modify should be in the first level of the pillar, in the same way you set the options in the config file. For example, to configure the MySQL root password to be used by the MySQL Salt execution module:
mysql.pass: hardtoguesspassword
This is very convenient when you need some dynamic configuration change that you want to be applied on the fly. For example, there is a chicken-and-egg problem if you do this:
mysql-admin-passwd:
mysql_user.present:
- name: root
- password: somepasswd
mydb:
mysql_db.present
The second state will fail, because you changed the root password and the minion didn't notice it. Setting mysql.pass in the pillar will help to sort out the issue. But always change the root admin password in the first place.
This is very helpful for any module that needs credentials to apply state changes: mysql, keystone, etc.
Simplicity, Simplicity, Simplicity
Many of the most powerful and useful engineering solutions are founded on simple principles. Salt States strive to do just that: K.I.S.S. (Keep It Stupidly Simple)
The core of the Salt State system is the SLS, or SaLt State file. The SLS is a representation of the state a system should be in, and is set up to contain this data in a simple format. This is often called configuration management.
Note
This is just the beginning of using states; make sure to read up on Pillar next.
Before delving into the particulars, it will help to understand that the SLS file is just a data structure under the hood. While understanding that the SLS is just a data structure isn't critical for understanding and making use of Salt States, it should help bolster knowledge of where the real power is.
SLS files are therefore, in reality, just dictionaries, lists, strings, and numbers. By using this approach Salt can be much more flexible. As one writes more state files, it becomes clearer exactly what is being written. The result is a system that is easy to understand, yet grows with the needs of the admin or developer.
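As a sketch of that idea, a YAML SLS deserializes into ordinary Python data structures; this example uses the PyYAML library that Salt itself depends on:
import yaml
sls = yaml.safe_load("""
apache:
  pkg.installed: []
""")
print(sls)  # {'apache': {'pkg.installed': []}}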
The example SLS files in the below sections can be assigned to hosts using a file called top.sls. This file is described in-depth here.
By default Salt represents the SLS data in what is one of the simplest serialization formats available - YAML.
A typical SLS file will often look like this in YAML:
Note
These demos use some generic service and package names; different distributions often use different names for packages and services. For instance, apache should be replaced with httpd on a Red Hat system. Salt uses the name of the init script, systemd name, upstart name, etc., based on the underlying service management system for the platform. To get a list of the available service names on a platform, execute the service.get_all salt function.
Information on how to make states work with multiple distributions is later in the tutorial.
apache:
pkg.installed: []
service.running:
- require:
- pkg: apache
This SLS data will ensure that the package named apache is installed, and that the apache service is running. The components can be explained in a simple way.
The first line is the ID for a set of data, and it is called the ID Declaration. This ID sets the name of the thing that needs to be manipulated.
The second and third lines contain the state module function to be run, in the format <state_module>.<function>. The pkg.installed state module function ensures that a software package is installed via the system's native package manager. The service.running state module function ensures that a given system daemon is running.
Finally, on line five, is the word require. This is called a Requisite Statement, and it makes sure that the Apache service is only started after a successful installation of the apache package.
When setting up a service like an Apache web server, many more components may need to be added. The Apache configuration file will most likely be managed, and a user and group may need to be set up.
apache:
pkg.installed: []
service.running:
- watch:
- pkg: apache
- file: /etc/httpd/conf/httpd.conf
- user: apache
user.present:
- uid: 87
- gid: 87
- home: /var/www/html
- shell: /bin/nologin
- require:
- group: apache
group.present:
- gid: 87
- require:
- pkg: apache
/etc/httpd/conf/httpd.conf:
file.managed:
- source: salt://apache/httpd.conf
- user: root
- group: root
- mode: 644
This SLS data greatly extends the first example, and includes a config file, a user, a group, and a new requisite statement: watch.

Adding more states is easy. Since the new user and group states are under the Apache ID, the user and group will be the Apache user and group. The require statements will make sure that the user will only be made after the group, and that the group will be made only after the Apache package is installed.
Next, the require statement under service was changed to watch, and is now watching 3 states instead of just one. The watch statement does the same thing as require, making sure that the other states run before running the state with a watch, but it adds an extra component. The watch statement will run the state's watcher function for any changes to the watched states. So if the package was updated, the config file changed, or the user uid was modified, then the service state's watcher will be run. The service state's watcher just restarts the service, so in this case, a change in the config file will also trigger a restart of the respective service.
When setting up Salt States in a scalable manner, more than one SLS will need to be used. The above examples were in a single SLS file, but two or more SLS files can be combined to build out a State Tree. The above example also references a file with a strange source - salt://apache/httpd.conf. That file will need to be available as well.
The SLS files are laid out in a directory structure on the Salt master; an SLS is just a file and files to download are just files.
The Apache example would be laid out in the root of the Salt file server like this:
apache/init.sls
apache/httpd.conf
So the httpd.conf is just a file in the apache directory, and is referenced directly.
Do not use dots in SLS file names
The initial implementation of top.sls and the Include declaration followed the Python import model, where a slash is represented as a period. This means that an SLS file with a period in the name (besides the suffix period) cannot be referenced. For example, webserver_1.0.sls is not referenceable because webserver_1.0 would refer to the directory/file webserver_1/0.sls.
When using more than one SLS file, more components can be added to the toolkit. Consider this SSH example:
ssh/init.sls:
openssh-client:
pkg.installed
/etc/ssh/ssh_config:
file.managed:
- user: root
- group: root
- mode: 644
- source: salt://ssh/ssh_config
- require:
- pkg: openssh-client
ssh/server.sls:
include:
- ssh
openssh-server:
pkg.installed
sshd:
service.running:
- require:
- pkg: openssh-client
- pkg: openssh-server
- file: /etc/ssh/banner
- file: /etc/ssh/sshd_config
/etc/ssh/sshd_config:
file.managed:
- user: root
- group: root
- mode: 644
- source: salt://ssh/sshd_config
- require:
- pkg: openssh-server
/etc/ssh/banner:
file:
- managed
- user: root
- group: root
- mode: 644
- source: salt://ssh/banner
- require:
- pkg: openssh-server
Note
Notice that we use two similar ways of denoting that a file is managed by Salt. In the /etc/ssh/sshd_config state section above, we use the file.managed state declaration whereas with the /etc/ssh/banner state section, we use the file state declaration and add a managed attribute to that state declaration. Both ways produce an identical result; the first way -- using file.managed -- is merely a shortcut.
Now our State Tree looks like this:
apache/init.sls
apache/httpd.conf
ssh/init.sls
ssh/server.sls
ssh/banner
ssh/ssh_config
ssh/sshd_config
This example now introduces the include statement. The include statement includes another SLS file so that components found in it can be required, watched or, as will soon be demonstrated, extended.
The include statement allows for states to be cross linked. When an SLS has an include statement it is literally extended to include the contents of the included SLS files.
Note that some of the SLS files are called init.sls, while others are not. More info on what this means can be found in the States Tutorial.
Sometimes SLS data needs to be extended. Perhaps the apache service needs to watch additional resources, or under certain circumstances a different file needs to be placed.
In these examples, the first will add a custom banner to ssh and the second will add more watchers to apache to include mod_python.
ssh/custom-server.sls:
include:
- ssh.server
extend:
/etc/ssh/banner:
file:
- source: salt://ssh/custom-banner
python/mod_python.sls:
include:
- apache
extend:
apache:
service:
- watch:
- pkg: mod_python
mod_python:
pkg.installed
The custom-server.sls file uses the extend statement to overwrite where the banner is downloaded from, thereby changing which file is used to configure the banner.
In the new mod_python SLS the mod_python package is added, but more importantly the apache service was extended to also watch the mod_python package.
Using extend with require or watch
The extend statement works differently for require or watch. It appends to, rather than replaces, the requisite component.
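For example, after the python/mod_python.sls extend above is applied, the effective watch list on the apache service is the merged result. This is a sketch of the compiled data, not something written by hand:

# Effective requisites on the apache service after the extend:
- watch:
  - pkg: apache
  - file: /etc/httpd/conf/httpd.conf
  - user: apache
  - pkg: mod_python    # appended by the extend, not a replacement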
Since SLS data is simply that (data), it does not need to be represented with YAML. Salt defaults to YAML because it is very straightforward and easy to learn and use. But the SLS files can be rendered from almost any imaginable medium, so long as a renderer module is provided.
The default rendering system is the yaml_jinja renderer. The yaml_jinja renderer will first pass the template through the Jinja2 templating system, and then through the YAML parser. The benefit here is that full programming constructs are available when creating SLS files.

Other renderers available are yaml_mako and yaml_wempy, which each use the Mako or Wempy templating system respectively rather than the Jinja templating system, and, more notably, the pure Python or py, pydsl, and pyobjects renderers.

The py renderer allows for SLS files to be written in pure Python, allowing for the utmost level of flexibility and power when preparing SLS data; the pydsl renderer provides a flexible, domain-specific language for authoring SLS data in Python; and the pyobjects renderer gives you a "Pythonic" interface to building state data.
Note
The templating engines described above aren't just available in SLS files. They can also be used in file.managed states, making file management much more dynamic and flexible. Some examples for using templates in managed files can be found in the documentation for the file states, as well as the MooseFS example below.
The default renderer - yaml_jinja - allows for use of the Jinja templating system. A guide to the Jinja templating system can be found here: http://jinja.pocoo.org/docs
When working with renderers a few very useful bits of data are passed in. In the case of templating engine based renderers, three critical components are available: salt, grains, and pillar. The salt object allows for any Salt function to be called from within the template, and grains allows for the Grains to be accessed from within the template. A few examples:
apache/init.sls:
apache:
pkg.installed:
{% if grains['os'] == 'RedHat' %}
- name: httpd
{% endif %}
service.running:
{% if grains['os'] == 'RedHat' %}
- name: httpd
{% endif %}
- watch:
- pkg: apache
- file: /etc/httpd/conf/httpd.conf
- user: apache
user.present:
- uid: 87
- gid: 87
- home: /var/www/html
- shell: /bin/nologin
- require:
- group: apache
group.present:
- gid: 87
- require:
- pkg: apache
/etc/httpd/conf/httpd.conf:
file.managed:
- source: salt://apache/httpd.conf
- user: root
- group: root
- mode: 644
This example is simple. If the os grain states that the operating system is Red Hat, then the name of the Apache package and service needs to be httpd.
A more aggressive way to use Jinja can be found here, in a module to set up a MooseFS distributed filesystem chunkserver:
moosefs/chunk.sls:
include:
- moosefs
{% for mnt in salt['cmd.run']('ls /dev/data/moose*').split() %}
/mnt/moose{{ mnt[-1] }}:
mount.mounted:
- device: {{ mnt }}
- fstype: xfs
- mkmnt: True
file.directory:
- user: mfs
- group: mfs
- require:
- user: mfs
- group: mfs
{% endfor %}
/etc/mfshdd.cfg:
file.managed:
- source: salt://moosefs/mfshdd.cfg
- user: root
- group: root
- mode: 644
- template: jinja
- require:
- pkg: mfs-chunkserver
/etc/mfschunkserver.cfg:
file.managed:
- source: salt://moosefs/mfschunkserver.cfg
- user: root
- group: root
- mode: 644
- template: jinja
- require:
- pkg: mfs-chunkserver
mfs-chunkserver:
pkg.installed: []
mfschunkserver:
service.running:
- require:
{% for mnt in salt['cmd.run']('ls /dev/data/moose*').split() %}
- mount: /mnt/moose{{ mnt[-1] }}
- file: /mnt/moose{{ mnt[-1] }}
{% endfor %}
- file: /etc/mfschunkserver.cfg
- file: /etc/mfshdd.cfg
- file: /var/lib/mfs
This example shows much more of the available power of Jinja. Multiple for loops are used to dynamically detect available hard drives and set them up to be mounted, and the salt object is used multiple times to call shell commands to gather data.
Sometimes the chosen default renderer might not have enough logical power to accomplish the needed task. When this happens, the Python renderer can be used. Normally a YAML renderer should be used for the majority of SLS files, but an SLS file set to use another renderer can be easily added to the tree.
This example shows a very basic Python SLS file:
python/django.sls:
#!py
def run():
'''
Install the django package
'''
return {'include': ['python'],
'django': {'pkg': ['installed']}}
This is a very simple example; the first line has an SLS shebang that tells Salt to not use the default renderer, but to use the py renderer. Then the run function is defined; the return value from the run function must be a Salt-friendly data structure, better known as a Salt HighState data structure.
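Because the SLS is plain Python, ordinary control flow can be used while building the data. A minimal sketch using the __grains__ dictionary that the py renderer injects (the package-name logic here is illustrative):

#!py

def run():
    '''
    Pick the correct Apache package name for the platform
    '''
    # __grains__ is injected into the py renderer's execution context
    pkg_name = 'httpd' if __grains__['os'] == 'RedHat' else 'apache'
    return {pkg_name: {'pkg': ['installed']}}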
Alternatively, using the pydsl renderer, the above example can be written more succinctly as:
#!pydsl
include('python', delayed=True)
state('django').pkg.installed()
The pyobjects renderer provides a "Pythonic" object based approach for building the state data. The above example could be written as:
#!pyobjects
include('python')
Pkg.installed("django")
These Python examples would look like this if they were written in YAML:
include:
- python
django:
pkg.installed
This example clearly illustrates two things: one, using the YAML renderer by default is a wise decision, and two, unbridled power can be obtained where needed by using a pure Python SLS.
Once the rules in an SLS are ready, they should be tested to ensure they work properly. To invoke these rules, simply execute salt '*' state.highstate on the command line. If you get back only hostnames with a : after, but no return, chances are there is a problem with one or more of the sls files. On the minion, use the salt-call command to examine the output for errors:

salt-call state.highstate -l debug

This should help troubleshoot the issue. The minions can also be started in the foreground in debug mode: salt-minion -l debug.
With an understanding of states, the next recommendation is to become familiar with Salt's pillar interface.
The purpose of this tutorial is to demonstrate how quickly you can configure a system to be managed by Salt States. For detailed information about the state system please refer to the full states reference.
This tutorial will walk you through using Salt to configure a minion to run the Apache HTTP server and to ensure the server is running.
Before continuing make sure you have a working Salt installation by following the installation and the configuration instructions.
Stuck?
There are many ways to get help from the Salt community including our mailing list and our IRC channel #salt.
States are stored in text files on the master and transferred to the minions on demand via the master's File Server. The collection of state files make up the State Tree.
To start using a central state system in Salt, the Salt File Server must first be set up. Edit the master config file (file_roots) and uncomment the following lines:
file_roots:
base:
- /srv/salt
Note
If you are deploying on FreeBSD via ports, the file_roots path defaults to /usr/local/etc/salt/states.
Restart the Salt master in order to pick up this change:
pkill salt-master
salt-master -d
On the master, in the directory uncommented in the previous step (/srv/salt by default), create a new file called top.sls and add the following:
base:
'*':
- webserver
The top file is separated into environments (discussed later). The default environment is base. Under the base environment a collection of minion matches is defined; for now simply specify all hosts (*).
Targeting minions
The expressions can use any of the targeting mechanisms used by Salt — minions can be matched by glob, PCRE regular expression, or by grains. For example:
base:
'os:Fedora':
- match: grain
- webserver
Create an sls file

In the same directory as the top file, create a file named webserver.sls, containing the following:
apache: # ID declaration
pkg: # state declaration
- installed # function declaration
The first line, called the ID declaration, is an arbitrary identifier. In this case it defines the name of the package to be installed.
Note
The package name for the Apache httpd web server may differ depending on OS or distro — for example, on Fedora it is httpd but on Debian/Ubuntu it is apache2.
The second line, called the State declaration, defines which of the Salt States we are using. In this example, we are using the pkg state to ensure that a given package is installed.

The third line, called the Function declaration, defines which function in the pkg state module to call.
Renderers
States sls files can be written in many formats. Salt requires only a simple data structure and is not concerned with how that data structure is built. Templating languages and DSLs are a dime-a-dozen and everyone has a favorite.
Building the expected data structure is the job of Salt renderers and they are dead-simple to write.
In this tutorial we will be using YAML in Jinja2 templates, which is the default format. The default can be changed by editing renderer in the master configuration file.
Next, let's run the state we created. Open a terminal on the master and run:
% salt '*' state.highstate
Our master is instructing all targeted minions to run state.highstate. When a minion executes a highstate call it will download the top file and attempt to match the expressions. When it does match an expression, the modules listed for it will be downloaded, compiled, and executed.
Once completed, the minion will report back with a summary of all actions taken and all changes made.
Warning
If you have created custom grain modules, they will not be available in the top file until after the first highstate. To make custom grains available on a minion's first highstate, it is recommended to use this example to ensure that the custom grains are synced when the minion starts.
SLS File Namespace
Note that in the example above, the SLS file webserver.sls was referred to simply as webserver. The namespace for SLS files when referenced in top.sls or an Include declaration follows a few simple rules:

- The .sls is discarded (i.e. webserver.sls becomes webserver).
- Subdirectories are represented by dots, so webserver/dev.sls can also be referred to as webserver.dev.
- A file called init.sls in a subdirectory is referred to by the path of the directory. So, webserver/init.sls is referred to as webserver.
- If both webserver.sls and webserver/init.sls happen to exist, webserver/init.sls will be ignored and webserver.sls will be the file referred to as webserver.
Troubleshooting Salt
If the expected output isn't seen, the following tips can help to narrow down the problem.
Salt can be quite chatty when you change the logging setting to debug:
salt-minion -l debug
By not starting the minion in daemon mode (-d) one can view any output from the minion as it works:
salt-minion &
Increase the default timeout value when running salt. For example, to change the default timeout to 60 seconds:
salt -t 60
For best results, combine all three:
salt-minion -l debug & # On the minion
salt '*' state.highstate -t 60 # On the master
Note
This tutorial builds on topics covered in part 1. It is recommended that you begin there.
In the last part of the Salt States tutorial we covered the basics of installing a package. We will now modify our webserver.sls file to have requirements, and use even more Salt States.
You can specify multiple State declarations under an ID declaration. For example, a quick modification to our webserver.sls to also start Apache if it is not running:
apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache
Try stopping Apache before running state.highstate once again and observe the output.
We now have a working installation of Apache so let's add an HTML file to customize our website. It isn't exactly useful to have a website without a webserver so we don't want Salt to install our HTML file until Apache is installed and running. Include the following at the bottom of your webserver/init.sls file:
 1  apache:
 2    pkg.installed: []
 3    service.running:
 4      - require:
 5        - pkg: apache
 6
 7  /var/www/index.html:      # ID declaration
 8    file:                   # state declaration
 9      - managed             # function
10      - source: salt://webserver/index.html # function arg
11      - require:            # requisite declaration
12        - pkg: apache       # requisite reference
Line 7 is the ID declaration. In this example it is the location we want to install our custom HTML file. (Note: the default location that Apache serves may differ from the above on your OS or distro. /srv/www could also be a likely place to look.)
Line 8 is the State declaration. This example uses the Salt file state.
Line 9 is the Function declaration. The managed function will download a file from the master and install it in the location specified.

Line 10 is a Function arg declaration which, in this example, passes the source argument to the managed function.

Line 11 is a Requisite declaration.

Line 12 is a Requisite reference which refers to a state and an ID. In this example, it is referring to the ID declaration from our example in part 1. This declaration tells Salt not to install the HTML file until Apache is installed.
Next, create the index.html file and save it in the webserver directory:
<!DOCTYPE html>
<html>
<head><title>Salt rocks</title></head>
<body>
<h1>This file brought to you by Salt</h1>
</body>
</html>
Last, call state.highstate again and the minion will fetch and execute the highstate as well as our HTML file from the master using Salt's File Server:
salt '*' state.highstate
Verify that Apache is now serving your custom HTML.
require vs. watch

There are two Requisite declarations, "require" and "watch". Not every state supports "watch". The service state does support "watch" and will restart a service based on the watch condition.
For example, if you use Salt to install an Apache virtual host configuration file and want to restart Apache whenever that file is changed you could modify our Apache example from earlier as follows:
/etc/httpd/extra/httpd-vhosts.conf:
file.managed:
- source: salt://webserver/httpd-vhosts.conf
apache:
pkg.installed: []
service.running:
- watch:
- file: /etc/httpd/extra/httpd-vhosts.conf
- require:
- pkg: apache
If the pkg and service names differ on your OS or distro of choice you can specify each one separately using a Name declaration, which is explained in Part 3.
Note
This tutorial builds on topics covered in part 1 and part 2. It is recommended that you begin there.
This part of the tutorial will cover more advanced templating and configuration techniques for sls files.
SLS modules may require programming logic or inline execution. This is accomplished with module templating. The default module templating system used is Jinja2 and may be configured by changing the renderer value in the master config.
All states are passed through a templating system when they are initially read. To make use of the templating system, simply add some templating markup. An example of an sls module with templating markup may look like this:
{% for usr in ['moe','larry','curly'] %}
{{ usr }}:
user.present
{% endfor %}
This templated sls file once generated will look like this:
moe:
user.present
larry:
user.present
curly:
user.present
Here's a more complex example:
{% for usr in 'moe','larry','curly' %}
{{ usr }}:
group:
- present
user:
- present
- gid_from_name: True
- require:
- group: {{ usr }}
{% endfor %}
Often, a state will need to behave differently on different systems. Salt grains objects are made available in the template context. The grains can be used from within sls modules:
apache:
pkg.installed:
{% if grains['os'] == 'RedHat' %}
- name: httpd
{% elif grains['os'] == 'Ubuntu' %}
- name: apache2
{% endif %}
All of the Salt modules loaded by the minion are available within the templating system. This allows data to be gathered in real time on the target system. It also allows for shell commands to be run easily from within the sls modules.
The Salt module functions are also made available in the template context as salt:
moe:
user.present:
- gid: {{ salt['file.group_to_gid']('some_group_that_exists') }}
Note that for the above example to work, some_group_that_exists must exist before the state file is processed by the templating engine.
Below is an example that uses the network.hw_addr function to retrieve the MAC address for eth0:
salt['network.hw_addr']('eth0')
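Such a call can be embedded anywhere Jinja is rendered. A hypothetical sketch that records the MAC address in a managed file (the ID and path are illustrative, and file.managed's contents argument is assumed to be available):

eth0_mac:
  file.managed:
    - name: /etc/eth0_mac.txt
    - contents: "{{ salt['network.hw_addr']('eth0') }}"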
Lastly, we will cover some incredibly useful techniques for more complex State trees.
A previous example showed how to spread a Salt tree across several files. Similarly, requisites span multiple files by using an Include declaration. For example:
python/python-libs.sls:
python-dateutil:
pkg.installed
python/django.sls:
include:
- python.python-libs
django:
pkg.installed:
- require:
- pkg: python-dateutil
You can modify previous declarations by using an Extend declaration. For example the following modifies the Apache tree to also restart Apache when the vhosts file is changed:
apache/apache.sls:
apache:
pkg.installed
apache/mywebsite.sls:
include:
- apache.apache
extend:
apache:
service:
- running
- watch:
- file: /etc/httpd/extra/httpd-vhosts.conf
/etc/httpd/extra/httpd-vhosts.conf:
file.managed:
- source: salt://apache/httpd-vhosts.conf
Using extend with require or watch
The extend statement works differently for require or watch. It appends to, rather than replaces, the requisite component.
You can override the ID declaration by using a Name declaration. For example, the previous example is a bit more maintainable if rewritten as follows:
apache/mywebsite.sls:
include:
- apache.apache
extend:
apache:
service:
- running
- watch:
- file: mywebsite
mywebsite:
file.managed:
- name: /etc/httpd/extra/httpd-vhosts.conf
- source: salt://apache/httpd-vhosts.conf
Even more powerful is using a Names declaration to override the ID declaration for multiple states at once. This often can remove the need for looping in a template. For example, the first example in this tutorial can be rewritten without the loop:
stooges:
user.present:
- names:
- moe
- larry
- curly
In part 4 we will discuss how to use salt's file_roots to set up a workflow in which states can be "promoted" from dev, to QA, to production.
Note
This tutorial builds on topics covered in part 1, part 2 and part 3. It is recommended that you begin there.
This part of the tutorial will show how to use salt's file_roots to set up a workflow in which states can be "promoted" from dev, to QA, to production.
Salt's fileserver allows for more than one root directory per environment, like in the below example, which uses both a local directory and a secondary location shared to the salt master via NFS:
# In the master config file (/etc/salt/master)
file_roots:
base:
- /srv/salt
- /mnt/salt-nfs/base
Salt's fileserver collapses the list of root directories into a single virtual environment containing all files from each root. If the same file exists at the same relative path in more than one root, then the top-most match "wins". For example, if /srv/salt/foo.txt and /mnt/salt-nfs/base/foo.txt both exist, then salt://foo.txt will point to /srv/salt/foo.txt.
Note
When using multiple fileserver backends, the order in which they are listed in the fileserver_backend parameter also matters. If both roots and git backends contain a file with the same relative path, and roots appears before git in the fileserver_backend list, then the file in roots will "win", and the file in gitfs will be ignored.
A more thorough explanation of how Salt's modular fileserver works can be found here. We recommend reading this.
Configure a multiple-environment setup like so:
file_roots:
base:
- /srv/salt/prod
qa:
- /srv/salt/qa
- /srv/salt/prod
dev:
- /srv/salt/dev
- /srv/salt/qa
- /srv/salt/prod
Given the path inheritance described above, files within /srv/salt/prod would be available in all environments. Files within /srv/salt/qa would be available in both qa and dev. Finally, the files within /srv/salt/dev would only be available within the dev environment.

Based on the order in which the roots are defined, new files/states can be placed within /srv/salt/dev, and pushed out to the dev hosts for testing.

Those files/states can then be moved to the same relative path within /srv/salt/qa, and they are now available only in the dev and qa environments, allowing them to be pushed to QA hosts and tested.

Finally, if moved to the same relative path within /srv/salt/prod, the files are now available in all three environments.
As an example, consider a simple website, installed to /var/www/foobarcom. Below is a top.sls that can be used to deploy the website:
/srv/salt/prod/top.sls:
base:
'web*prod*':
- webserver.foobarcom
qa:
'web*qa*':
- webserver.foobarcom
dev:
'web*dev*':
- webserver.foobarcom
Using pillar, roles can be assigned to the hosts:
/srv/pillar/top.sls:
base:
'web*prod*':
- webserver.prod
'web*qa*':
- webserver.qa
'web*dev*':
- webserver.dev
/srv/pillar/webserver/prod.sls:
webserver_role: prod
/srv/pillar/webserver/qa.sls:
webserver_role: qa
/srv/pillar/webserver/dev.sls:
webserver_role: dev
And finally, the SLS to deploy the website:
/srv/salt/prod/webserver/foobarcom.sls:
{% if pillar.get('webserver_role', '') %}
/var/www/foobarcom:
file.recurse:
- source: salt://webserver/src/foobarcom
- env: {{ pillar['webserver_role'] }}
- user: www
- group: www
- dir_mode: 755
- file_mode: 644
{% endif %}
Given the above SLS, the source for the website should initially be placed in /srv/salt/dev/webserver/src/foobarcom.

First, let's deploy to dev. Given the configuration in the top file, this can be done using state.highstate:
salt --pillar 'webserver_role:dev' state.highstate
However, in the event that it is not desirable to apply all states configured in the top file (which could be likely in more complex setups), it is possible to apply just the states for the foobarcom website, using state.sls:
salt --pillar 'webserver_role:dev' state.sls webserver.foobarcom
Once the site has been tested in dev, then the files can be moved from /srv/salt/dev/webserver/src/foobarcom to /srv/salt/qa/webserver/src/foobarcom, and deployed using the following:
salt --pillar 'webserver_role:qa' state.sls webserver.foobarcom
Finally, once the site has been tested in qa, then the files can be moved from /srv/salt/qa/webserver/src/foobarcom to /srv/salt/prod/webserver/src/foobarcom, and deployed using the following:
salt --pillar 'webserver_role:prod' state.sls webserver.foobarcom
Thanks to Salt's fileserver inheritance, even though the files have been moved to within /srv/salt/prod, they are still available from the same salt:// URI in both the qa and dev environments.
The best way to continue learning about Salt States is to read through the reference documentation and to look through examples of existing state trees. Many pre-configured state trees can be found on GitHub in the saltstack-formulas collection of repositories.
If you have any questions, suggestions, or just want to chat with other people who are using Salt, we have a very active community and we'd love to hear from you.
In addition, by continuing to part 5, you can learn about the powerful orchestration of which Salt is capable.
Note
This tutorial builds on some of the topics covered in the earlier States Walkthrough pages. It is recommended to start with Part 1 if you are not familiar with how to use states.
Orchestration is accomplished in salt primarily through the Orchestrate Runner. Added in version 0.17.0, this Salt Runner can use the full suite of requisites available in states, and can also execute states/functions using salt-ssh. This runner replaces the OverState.
New in version 0.17.0.
As noted above in the introduction, the Orchestrate Runner (originally called the state.sls runner) offers all the functionality of the OverState, but with a couple of advantages.
The Orchestrate Runner was added with the intent to eventually deprecate the OverState system, however the OverState will still be maintained for the foreseeable future.
The configuration differs slightly from that of the OverState, and more closely resembles the configuration schema used for states.
To execute a state, use salt.state:
install_nginx:
salt.state:
- tgt: 'web*'
- sls:
- nginx
To execute a function, use salt.function:
cmd.run:
salt.function:
- tgt: '*'
- arg:
- rm -rf /tmp/foo
Whereas with the OverState, a Highstate is run by simply omitting an sls or function argument, with the Orchestrate Runner the Highstate must explicitly be requested by using highstate: True:
webserver_setup:
salt.state:
- tgt: 'web*'
- highstate: True
The Orchestrate Runner can be executed using the state.orchestrate runner function. state.orch also works, for those that would like to type less. Assuming that your base environment is located at /srv/salt, and you have placed a configuration file in /srv/salt/orchestration/webserver.sls, then the following could both be used:
salt-run state.orchestrate orchestration.webserver
salt-run state.orch orchestration.webserver
Changed in version 2014.1.1: The runner function was renamed to state.orchestrate. In versions 0.17.0 through 2014.1.0, state.sls must be used. This was renamed to avoid confusion with the state.sls execution function.
salt-run state.sls orchestration.webserver
Many states/functions can be configured in a single file, which when combined with the full suite of requisites, can be used to easily configure complex orchestration tasks. Additionally, the states/functions will be executed in the order in which they are defined, unless prevented from doing so by any requisites, as is the default in SLS files since 0.17.0.
cmd.run:
salt.function:
- tgt: 10.0.0.0/24
- tgt_type: ipcidr
- arg:
- bootstrap
storage_setup:
salt.state:
- tgt: 'role:storage'
- tgt_type: grain
- sls: ceph
- require:
- salt: webserver_setup
webserver_setup:
salt.state:
- tgt: 'web*'
- highstate: True
Given the above setup, the orchestration will be carried out as follows:

1. The shell command bootstrap will be executed on all minions in the 10.0.0.0/24 subnet.
2. A Highstate will be run on all minions whose ID starts with "web", since the storage_setup state requires it.
3. Finally, the ceph SLS target will be executed on all minions which have a grain called role with a value of storage.
.Warning
The OverState runner is deprecated, and will be removed in the feature release of Salt codenamed Boron. (Three feature releases after 2014.7.0, which is codenamed Helium)
Often, servers need to be set up and configured in a specific order, and systems should only be set up if systems earlier in the sequence have been set up without any issues.
The OverState system can be used to orchestrate deployment in a smooth and reliable way across multiple systems in small to large environments.
The OverState system is managed by an SLS file named overstate.sls, located in the root of a Salt fileserver environment.

The overstate.sls configures an unordered list of stages; each stage defines the minions on which to execute the state, and can define what sls files to run, execute a state.highstate, or execute a function. Here's a sample overstate.sls:
mysql:
match: 'db*'
sls:
- mysql.server
- drbd
webservers:
match: 'web*'
require:
- mysql
all:
match: '*'
require:
- mysql
- webservers
Note
The match argument uses compound matching.
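So a stage is not limited to globs. A sketch of a stage using a grain in a compound expression (the grain value here is illustrative):

webservers:
  match: 'web* and G@os:Debian'
  require:
    - mysql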
Given the above setup, the OverState will be carried out as follows:

1. The mysql stage will be executed first because it is required by the webservers and all stages. It will execute state.sls once for each of the two listed SLS targets (mysql.server and drbd). These states will be executed on all minions whose minion ID starts with "db".
2. The webservers stage will then be executed, but only if the mysql stage executes without any failures. The webservers stage will execute a state.highstate on all minions whose minion IDs start with "web".
3. Last, the all stage will execute, running state.highstate on all systems, if, and only if, the mysql and webservers stages completed without any failures.

Any failure in the above steps would cause the requires to fail, preventing the dependent stages from executing.
In the above example, you'll notice that the stages lacking an sls entry run a state.highstate. As mentioned earlier, it is also possible to execute other functions in a stage. This functionality was added in version 0.15.0.
Running a function is easy:
http:
function:
pkg.install:
- httpd
The list of function arguments is defined after the declared function. So, the above stage would run pkg.install httpd. Requisites only function properly if the given function supports returning a custom return code.
Since the OverState is a Runner, it is executed using the salt-run command. The runner function for the OverState is state.over.
salt-run state.over
The function will by default look in the root of the base environment (as defined in file_roots) for a file called overstate.sls, and then execute the stages defined within that file.
Different environments and paths can be used as well, by adding them as positional arguments:
salt-run state.over dev /root/other-overstate.sls
The above would run an OverState using the dev fileserver environment, with the stages defined in /root/other-overstate.sls.
Warning
Since these are positional arguments, when defining the path to the overstate file the environment must also be specified, even if it is the base environment.
Note
Remember, salt-run is always executed on the master.
The syslog_ng state module is for generating syslog-ng configurations. Among other things, you can generate configuration statements from YAML and include existing configuration snippets.
There is also an execution module, which can check the syntax of the configuration, get the version and other information about syslog-ng.
Users can create syslog-ng configuration statements with the syslog_ng.config function. It requires a name and a config parameter. The name parameter determines the name of the generated statement and the config parameter holds a parsed YAML structure. A statement can be declared in the following forms (both are equivalent):
source.s_localhost:
syslog_ng.config:
- config:
- tcp:
- ip: "127.0.0.1"
- port: 1233
s_localhost:
syslog_ng.config:
- config:
source:
- tcp:
- ip: "127.0.0.1"
- port: 1233
The first one is called the short form, because it needs less typing. Users can use lists and dictionaries to specify their configuration. The format is quite self-describing and there are more examples at the end of this document.
"string"
in the generated configuration, it should be like '"string"'
in the YAML document"'string'"
to get 'string'
in the generated configurationThe following configuration is an example, how a complete syslog-ng configuration looks like:
# Set the location of the configuration file
set_location:
module.run:
- name: syslog_ng.set_config_file
- m_name: "/home/tibi/install/syslog-ng/etc/syslog-ng.conf"
# The syslog-ng and syslog-ng-ctl binaries are here. You needn't use
# this method if these binaries can be found in a directory in your PATH.
set_bin_path:
module.run:
- name: syslog_ng.set_binary_path
- m_name: "/home/tibi/install/syslog-ng/sbin"
# Writes the first lines into the config file, also erases its previous
# content
write_version:
module.run:
- name: syslog_ng.write_version
- m_name: "3.6"
# There is a shorter form to set the above variables
set_variables:
module.run:
- name: syslog_ng.set_parameters
- version: "3.6"
- binary_path: "/home/tibi/install/syslog-ng/sbin"
- config_file: "/home/tibi/install/syslog-ng/etc/syslog-ng.conf"
# Some global options
options.global_options:
syslog_ng.config:
- config:
- time_reap: 30
- mark_freq: 10
- keep_hostname: "yes"
source.s_localhost:
syslog_ng.config:
- config:
- tcp:
- ip: "127.0.0.1"
- port: 1233
destination.d_log_server:
syslog_ng.config:
- config:
- tcp:
- "127.0.0.1"
- port: 1234
log.l_log_to_central_server:
syslog_ng.config:
- config:
- source: s_localhost
- destination: d_log_server
some_comment:
module.run:
- name: syslog_ng.write_config
- config: |
# Multi line
# comment
# An other mode to use comments or existing configuration snippets
config.other_comment_form:
syslog_ng.config:
- config: |
# Multi line
# comment
The syslog_ng.config function generates syslog-ng configuration from YAML. If the statement (source, destination, parser, etc.) has a name, this function uses the id as the name; otherwise (log statement) its purpose is like a mandatory comment.
After executing this example, the syslog_ng state will generate this file:
#Generated by Salt on 2014-08-18 00:11:11
@version: 3.6
options {
time_reap(
30
);
mark_freq(
10
);
keep_hostname(
yes
);
};
source s_localhost {
tcp(
ip(
127.0.0.1
),
port(
1233
)
);
};
destination d_log_server {
tcp(
127.0.0.1,
port(
1234
)
);
};
log {
source(
s_localhost
);
destination(
d_log_server
);
};
# Multi line
# comment
# Multi line
# comment
Users can include arbitrary texts in the generated configuration by using the config statement (see the example above).
You can use syslog_ng.set_binary_path to set the directory which contains the syslog-ng and syslog-ng-ctl binaries. If this directory is in your PATH, you don't need to use this function. There is also a syslog_ng.set_config_file function to set the location of the configuration file.
source s_tail {
file(
"/var/log/apache/access.log",
follow_freq(1),
flags(no-parse, validate-utf8)
);
};
s_tail:
# Salt will call the source function of syslog_ng module
syslog_ng.config:
- config:
source:
- file:
- file: '"/var/log/apache/access.log"'
- follow_freq : 1
- flags:
- no-parse
- validate-utf8
OR
s_tail:
syslog_ng.config:
- config:
source:
- file:
- '"/var/log/apache/access.log"'
- follow_freq : 1
- flags:
- no-parse
- validate-utf8
OR
source.s_tail:
syslog_ng.config:
- config:
- file:
- '"/var/log/apache/access.log"'
- follow_freq : 1
- flags:
- no-parse
- validate-utf8
source s_gsoc2014 {
tcp(
ip("0.0.0.0"),
port(1234),
flags(no-parse)
);
};
s_gsoc2014:
syslog_ng.config:
- config:
source:
- tcp:
- ip: 0.0.0.0
- port: 1234
- flags: no-parse
filter f_json {
match(
"@json:"
);
};
f_json:
syslog_ng.config:
- config:
filter:
- match:
- '"@json:"'
template t_demo_filetemplate {
template(
"$ISODATE $HOST $MSG "
);
template_escape(
no
);
};
t_demo_filetemplate:
syslog_ng.config:
- config:
template:
- template:
- '"$ISODATE $HOST $MSG\n"'
- template_escape:
- "no"
rewrite r_set_message_to_MESSAGE {
set(
"${.json.message}",
value("$MESSAGE")
);
};
r_set_message_to_MESSAGE:
syslog_ng.config:
- config:
rewrite:
- set:
- '"${.json.message}"'
- value : '"$MESSAGE"'
options {
time_reap(30);
mark_freq(10);
keep_hostname(yes);
};
global_options:
syslog_ng.config:
- config:
options:
- time_reap: 30
- mark_freq: 10
- keep_hostname: "yes"
log {
source(s_gsoc2014);
junction {
channel {
filter(f_json);
parser(p_json);
rewrite(r_set_json_tag);
rewrite(r_set_message_to_MESSAGE);
destination {
file(
"/tmp/json-input.log",
template(t_gsoc2014)
);
};
flags(final);
};
channel {
filter(f_not_json);
parser {
syslog-parser(
);
};
rewrite(r_set_syslog_tag);
flags(final);
};
};
destination {
file(
"/tmp/all.log",
template(t_gsoc2014)
);
};
};
l_gsoc2014:
syslog_ng.config:
- config:
log:
- source: s_gsoc2014
- junction:
- channel:
- filter: f_json
- parser: p_json
- rewrite: r_set_json_tag
- rewrite: r_set_message_to_MESSAGE
- destination:
- file:
- '"/tmp/json-input.log"'
- template: t_gsoc2014
- flags: final
- channel:
- filter: f_not_json
- parser:
- syslog-parser: []
- rewrite: r_set_syslog_tag
- flags: final
- destination:
- file:
- "/tmp/all.log"
- template: t_gsoc2014
Note
Welcome to SaltStack! I am excited that you are interested in Salt and starting down the path to better infrastructure management. I developed (and am continuing to develop) Salt with the goal of making the best software available to manage computers of almost any kind. I hope you enjoy working with Salt and that the software can solve your real world needs!
Salt is a different approach to infrastructure management, founded on the idea that high-speed communication with large numbers of systems can open up new capabilities. This approach makes Salt a powerful multitasking system that can solve many specific problems in an infrastructure.
The backbone of Salt is the remote execution engine, which creates a high-speed, secure and bi-directional communication net for groups of systems. On top of this communication system, Salt provides an extremely fast, flexible, and easy-to-use configuration management system called Salt States.
SaltStack has been made to be very easy to install and get started. Setting up Salt should be as easy as installing Salt via distribution packages on Linux or via the Windows installer. The installation documents cover platform-specific installation in depth.
Salt functions on a master/minion topology. A master server acts as a central control bus for the clients, which are called minions. The minions connect back to the master.
Turning on the Salt Master is easy -- just turn it on! The default configuration is suitable for the vast majority of installations. The Salt Master can be controlled by the local Linux/Unix service manager:
On Systemd based platforms (OpenSuse, Fedora):
systemctl start salt-master
On Upstart based systems (Ubuntu, older Fedora/RHEL):
service salt-master start
On SysV Init systems (Debian, Gentoo etc.):
/etc/init.d/salt-master start
Alternatively, the Master can be started directly on the command-line:
salt-master -d
The Salt Master can also be started in the foreground in debug mode, thus greatly increasing the command output:
salt-master -l debug
The Salt Master needs to bind to two TCP network ports on the system. These ports are 4505 and 4506. For more in-depth information on firewalling these ports, the firewall tutorial is available here.
Note
The Salt Minion can operate with or without a Salt Master. This walk-through assumes that the minion will be connected to the master, for information on how to run a master-less minion please see the master-less quick-start guide:
The Salt Minion only needs to be aware of one piece of information to run, the network location of the master.
By default the minion will look for the DNS name salt for the master, making the easiest approach to set internal DNS to resolve the name salt back to the Salt Master IP. Otherwise, the minion configuration file will need to be edited so that the configuration option master points to the DNS name or the IP of the Salt Master:
Note
The default location of the configuration files is /etc/salt. Most platforms adhere to this convention, but platforms such as FreeBSD and Microsoft Windows place this file in different locations.
/etc/salt/minion:
master: saltmaster.example.com
Now that the master can be found, start the minion in the same way as the master; with the platform init system or via the command line directly:
As a daemon:
salt-minion -d
In the foreground in debug mode:
salt-minion -l debug
When the minion is started, it will generate an id value, unless it has been generated on a previous run and cached in the configuration directory, which is /etc/salt by default. This is the name by which the minion will attempt to authenticate to the master. The following steps are attempted, in order, to try to find a value that is not localhost:

1. socket.getfqdn() is run
2. /etc/hostname is checked (non-Windows only)
3. /etc/hosts (%WINDIR%\system32\drivers\etc\hosts on Windows hosts) is checked for hostnames that map to anything within 127.0.0.0/8.

If none of the above are able to produce an id which is not localhost, then a sorted list of IP addresses on the minion (excluding any within 127.0.0.0/8) is inspected. The first publicly-routable IP address is used, if there is one. Otherwise, the first privately-routable IP address is used.

If all else fails, then localhost is used as a fallback.
Note
Overriding the id

The minion id can be manually specified using the id parameter in the minion config file. If this configuration value is specified, it will override all other sources for the id.
Now that the minion is started, it will generate cryptographic keys and attempt to connect to the master. The next step is to venture back to the master server and accept the new minion's public key.
Salt authenticates minions using public-key encryption and authentication. For a minion to start accepting commands from the master, the minion keys need to be accepted by the master.
The salt-key command is used to manage all of the keys on the master. To list the keys that are on the master:
salt-key -L
The keys that have been rejected, accepted, and pending acceptance are listed. The easiest way to accept the minion key is to accept all pending keys:
salt-key -A
Note
Keys should be verified! The secure thing to do before accepting a key is to run salt-key -f minion-id to print the fingerprint of the minion's public key. This fingerprint can then be compared against the fingerprint generated on the minion.
On the master:
# salt-key -f foo.domain.com
Unaccepted Keys:
foo.domain.com: 39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9
On the minion:
# salt-call key.finger --local
local:
39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9
If they match, approve the key with salt-key -a foo.domain.com
.
Now that the minion is connected to the master and authenticated, the master can start to command the minion.
Salt commands allow for a vast set of functions to be executed and for specific minions and groups of minions to be targeted for execution.
The salt command is comprised of command options, target specification, the function to execute, and arguments to the function.
A simple command to start with looks like this:
salt '*' test.ping
The * is the target, which specifies all minions. test.ping tells the minion to run the test.ping function. In the case of test.ping, test refers to an execution module, and ping refers to the ping function contained in the aforementioned test module.
Note
Execution modules are the workhorses of Salt. They do the work on the system to perform various tasks, such as manipulating files and restarting services.
The result of running this command will be the master instructing all of the minions to execute test.ping in parallel and return the result. This is not an actual ICMP ping, but rather a simple function which returns True. Using test.ping is a good way of confirming that a minion is connected.
Note
Each minion registers itself with a unique minion ID. This ID defaults to the minion's hostname, but can be explicitly defined in the minion config as well by using the id parameter.
Of course, there are hundreds of other modules that can be called just as test.ping can. For example, the following would return disk usage on all targeted minions:
salt '*' disk.usage
Salt comes with a vast library of functions available for execution, and Salt functions are self-documenting. To see what functions are available on the minions, execute the sys.doc function:
salt '*' sys.doc
This will display a very large list of available functions and documentation on them.
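The output can be narrowed to a single module or function by passing its name to sys.doc:

salt '*' sys.doc disk.usage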
Note
Module documentation is also available on the web.
These functions cover everything from shelling out to package management to manipulating database servers. They comprise a powerful system management API which is the backbone to Salt configuration management and many other aspects of Salt.
Note
Salt comes with many plugin systems. The functions that are available via the salt command are called Execution Modules.

The cmd module contains functions to shell out on minions, such as cmd.run and cmd.run_all:
salt '*' cmd.run 'ls -l /etc'
The pkg functions automatically map local system package managers to the same salt functions. This means that pkg.install will install packages via yum on Red Hat based systems, apt on Debian systems, etc.:
salt '*' pkg.install vim
Note
Some custom Linux spins and derivatives of other distributions are not properly detected by Salt. If the above command returns an error message saying that pkg.install is not available, then you may need to override the pkg provider. This process is explained here.
The network.interfaces function will list all interfaces on a minion, along with their IP addresses, netmasks, MAC addresses, etc:
salt '*' network.interfaces
The default output format used for most Salt commands is called the nested outputter, but there are several other outputters that can be used to change the way the output is displayed. For instance, the pprint outputter can be used to display the return data using Python's pprint module:
root@saltmaster:~# salt myminion grains.item pythonpath --out=pprint
{'myminion': {'pythonpath': ['/usr/lib64/python2.7',
'/usr/lib/python2.7/plat-linux2',
'/usr/lib64/python2.7/lib-tk',
'/usr/lib/python2.7/lib-tk',
'/usr/lib/python2.7/site-packages',
'/usr/lib/python2.7/site-packages/gst-0.10',
'/usr/lib/python2.7/site-packages/gtk-2.0']}}
The full list of Salt outputters, as well as example output, can be found here.
salt-call

The examples so far have described running commands from the Master using the salt command, but when troubleshooting it can be more beneficial to login to the minion directly and use salt-call.

Doing so allows you to see the minion log messages specific to the command you are running (which are not part of the return data you see when running the command from the Master using salt), making it unnecessary to tail the minion log. More information on salt-call and how to use it can be found here.
Salt uses a system called Grains to build up static data about minions. This data includes information about the operating system that is running, CPU architecture and much more. The grains system is used throughout Salt to deliver platform data to many components and to users.
Grains can also be statically set; this makes it easy to assign values to minions for grouping and managing. A common practice is to assign grains to minions to specify the role or roles a minion fills. These static grains can be set in the minion configuration file or via the grains.setval function.
Salt allows for minions to be targeted based on a wide range of criteria. The default targeting system uses glob expressions to match minions; hence, if there are minions named larry1, larry2, curly1, and curly2, a glob of larry* will match larry1 and larry2, and a glob of *1 will match larry1 and curly1.
Many other targeting systems can be used besides globs; these include matching by PCRE regular expression, by grain data, and by compound expressions.
The concepts of targets are used on the command line with Salt, but also function in many other areas as well, including the state system and the systems used for ACLs and user permissions.
Many of the functions available accept arguments which can be passed in on the command line:
salt '*' pkg.install vim
This example passes the argument vim to the pkg.install function. Since many functions can accept more complex input than just a string, the arguments are parsed through YAML, allowing for more complex data to be sent on the command line:
salt '*' test.echo 'foo: bar'
In this case Salt translates the string 'foo: bar' into the dictionary {'foo': 'bar'}.
Note
Any line that contains a newline will not be parsed by YAML.
Now that the basics are covered, the time has come to evaluate States. Salt States, or the State System, is the component of Salt made for configuration management.
The state system is already available with a basic Salt setup, no additional configuration is required. States can be set up immediately.
Note
Before diving into the state system, a brief overview of how states are constructed will make many of the concepts clearer. Salt states are based on data modeling and build on a low level data structure that is used to execute each state function. Then more logical layers are built on top of each other.
The high layers of the state system, which this tutorial will cover, consist of everything that needs to be known to use states. The two high layers covered here are the sls layer and the highest layer, highstate.
Understanding the layers of data management in the State System will help with understanding states, but they never need to be used. Just as understanding how a compiler functions assists when learning a programming language, understanding what is going on under the hood of a configuration management system will also prove to be a valuable asset.
The state system is built on SLS formulas. These formulas are built out in files on Salt's file server. To make a very basic SLS formula open up a file under /srv/salt named vim.sls. The following state ensures that vim is installed on a system to which that state has been applied.
/srv/salt/vim.sls:
vim:
pkg.installed
Now install vim on the minions by calling the SLS directly:
salt '*' state.sls vim
This command will invoke the state system and run the vim SLS.

Now, to beef up the vim SLS formula, a vimrc can be added:
/srv/salt/vim.sls:
vim:
pkg.installed: []
/etc/vimrc:
file.managed:
- source: salt://vimrc
- mode: 644
- user: root
- group: root
Now the desired vimrc needs to be copied into the Salt file server to /srv/salt/vimrc. In Salt, everything is a file, so no path redirection needs to be accounted for. The vimrc file is placed right next to the vim.sls file. The same command as above can be executed again; the vim SLS formula will now also manage the file.
Note
Salt does not need to be restarted/reloaded or have the master manipulated in any way when changing SLS formulas. They are instantly available.
Obviously maintaining SLS formulas right in a single directory at the root of the file server will not scale out to reasonably sized deployments. This is why more depth is required. Start by making an nginx formula a better way: make an nginx subdirectory and add an init.sls file:
/srv/salt/nginx/init.sls:
nginx:
pkg.installed: []
service.running:
- require:
- pkg: nginx
A few concepts are introduced in this SLS formula.
First is the service statement which ensures that the nginx service is running. Of course, the nginx service can't be started unless the package is installed -- hence the require statement which sets up a dependency between the two. The require statement makes sure that the required component is executed before the requiring state, and that it completed successfully.
Note
The require option belongs to a family of options called requisites. Requisites are a powerful component of Salt States, for more information on how requisites work and what is available see: Requisites
Also evaluation ordering is available in Salt as well: Ordering States
This new sls formula has a special name -- init.sls. When an SLS formula is named init.sls it inherits the name of the directory path that contains it. This formula can be referenced via the following command:
salt '*' state.sls nginx
Note
Reminder!

Just as one could call the test.ping or disk.usage execution modules, state.sls is simply another execution module. It simply takes the name of an SLS file as an argument.
Now that subdirectories can be used, the vim.sls formula can be cleaned up. To make things more flexible, move the vim.sls and vimrc into a new subdirectory called edit and change the vim.sls file to reflect the change:
/srv/salt/edit/vim.sls:
vim:
pkg.installed
/etc/vimrc:
file.managed:
- source: salt://edit/vimrc
- mode: 644
- user: root
- group: root
Only the source path to the vimrc file has changed. Now the formula is referenced as edit.vim because it resides in the edit subdirectory. Now the edit subdirectory can contain formulas for emacs, nano, joe or any other editor that may need to be deployed.
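Applying it follows the same pattern as before:

salt '*' state.sls edit.vim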
Two walk-throughs are specifically recommended at this point. First, a deeper run through States, followed by an explanation of Pillar.
An understanding of Pillar is extremely helpful in using States.
Two more in-depth States tutorials exist, which delve much more deeply into States functionality, including templating SLS formulas and more.
This concludes the initial Salt walk-through, but there are many more things still to learn! These documents will cover important core aspects of Salt:
A few more tutorials are also available:
This still is only scratching the surface, many components such as the reactor and event systems, extending Salt, modular components and more are not covered here. For an overview of all Salt features and documentation, look at the Table of Contents.
New in version 2014.1.0.
Sometimes, you might need to propagate files that are generated on a minion. Salt already has a feature to send files from a minion to the master:
salt 'minion-id' cp.push /path/to/the/file
This command will store the file, including its full path, under cachedir/master/minions/minion-id/files. With the default cachedir the example file above would be stored as /var/cache/salt/master/minions/minion-id/files/path/to/the/file.
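For example, assuming the default cachedir and a hypothetical minion id of web1, pushing /etc/fstab:

salt 'web1' cp.push /etc/fstab
# the file is then stored on the master as:
# /var/cache/salt/master/minions/web1/files/etc/fstab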
Note
This walkthrough assumes basic knowledge of Salt and cp.push. To get up to speed, check out the walkthrough.
Since it is not a good idea to expose the whole cachedir, MinionFS should be used to send these files to other minions.
To use the minionfs backend, only two configuration changes are required on the master. The fileserver_backend option needs to contain a value of minion and file_recv needs to be set to true:
fileserver_backend:
  - roots
  - minion

file_recv: True
These changes require a restart of the master; then new requests for the salt://minion-id/ protocol will send files that are pushed by cp.push from minion-id to the master.
Note
All of the files that are pushed to the master are going to be available to all of the minions. If this is not what you want, please remove minion from fileserver_backend in the master config file.
Note
Having directories with the same name as your minions in the root that can be accessed like salt://minion-id/ might cause confusion.
Let's assume that we are going to generate SSH keys on a minion called minion-source and put the public part in ~/.ssh/authorized_keys of the root user of a minion called minion-destination.
First, let's make sure that /root/.ssh exists and has the right permissions:
[root@salt-master file]# salt '*' file.mkdir dir_path=/root/.ssh user=root group=root mode=700
minion-source:
    None
minion-destination:
    None
We create an RSA key pair without a passphrase [*]:
[root@salt-master file]# salt 'minion-source' cmd.run 'ssh-keygen -N "" -f /root/.ssh/id_rsa'
minion-source:
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
9b:cd:1c:b9:c2:93:8e:ad:a3:52:a0:8b:0a:cc:d4:9b root@minion-source
The key's randomart image is:
+--[ RSA 2048]----+
| |
| |
| |
| o . |
| o o S o |
|= + . B o |
|o+ E B = |
|+ . .+ o |
|o ...ooo |
+-----------------+
and we send the public part to the master to be available to all minions:
[root@salt-master file]# salt 'minion-source' cp.push /root/.ssh/id_rsa.pub
minion-source:
    True
Now it can be seen by everyone:
[root@salt-master file]# salt 'minion-destination' cp.list_master_dirs
minion-destination:
    - .
    - etc
    - minion-source/root
    - minion-source/root/.ssh
Let's copy that as the only authorized key to minion-destination:
[root@salt-master file]# salt 'minion-destination' cp.get_file salt://minion-source/root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
minion-destination:
    /root/.ssh/authorized_keys
Or we can use a more elegant and salty way to add an SSH key:
[root@salt-master file]# salt 'minion-destination' ssh.set_auth_key_from_file user=root source=salt://minion-source/root/.ssh/id_rsa.pub
minion-destination:
    new
[*] Yes, that was the actual key on my server, but the server is already destroyed.
New in version 0.10.3.d.
Salt has support for the Esky application freezing and update tool. This tool allows one to build a complete zipfile out of the salt scripts and all their dependencies - including shared objects / DLLs.
To build frozen applications, a suitable build environment will be needed for each platform. You should probably set up a virtualenv in order to limit the scope of Q/A.
This process does work on Windows. See https://github.com/saltstack/salt-windows-install for details on installing Salt in Windows. Only the 32-bit Python and dependencies have been tested, but they have been tested on 64-bit Windows.
Install bbfreeze, and then esky, from PyPI in order to enable the bdist_esky command in setup.py. Salt itself must also be installed, in addition to its dependencies.
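A minimal sketch of that setup, assuming pip is available inside the build virtualenv:

pip install bbfreeze
pip install esky
pip install salt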
Once you have your tools installed and the environment configured, use setup.py to prepare the distribution files.
python setup.py sdist
python setup.py bdist
Once the distribution files are in place, Esky can be used to traverse the module tree and pack all the scripts up into a redistributable.
python setup.py bdist_esky
There will be an appropriately versioned salt-VERSION.zip in dist/ if everything went smoothly.
C:\Python27\lib\site-packages\zmq will need to be added to the PATH variable. This helps bbfreeze find the zmq DLL so it can pack it up.
Unpack the zip file in the desired install location. Scripts like salt-minion and salt-call will be in the root of the zip file. The associated libraries and bootstrapping will be in the directories at the same level. (Check the Esky documentation for more information.)
To support updating your minions in the wild, put the builds on a web server that the minions can reach. salt.modules.saltutil.update() will trigger an update and (optionally) a restart of the minion service under the new version.
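As a sketch, the update could then be triggered from the master once the new build is published (saltutil.update also accepts an optional version argument):

salt '*' saltutil.update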
The process dispatch on Windows is slower than it is on *nix. It may be necessary to add '-t 15' to salt commands to give minions plenty of time to return.
The Visual C++ 2008 32-bit redistributable will need to be installed on all Windows minions. Esky has an option to pack the library into the zipfile, but OpenSSL does not seem to acknowledge the new location. If a no OPENSSL_Applink error appears on the console when trying to start a frozen minion, the redistributable is not installed.
The Yum Python module doesn't appear to be available on any of the standard Python package mirrors. If RHEL/CentOS systems need to be supported, the frozen build should be created on that platform to support all the Linux nodes. Remember to build the virtualenv with --system-site-packages so that the yum module is included.
Automatic (Python) module discovery does not work with the late-loaded scheme that Salt uses for (Salt) modules. Any misbehaving modules will need to be explicitly added to the freezer_includes in Salt's setup.py. Always check the zipped application to make sure that the necessary modules were included.
As of Salt 0.16.0, the ability to connect minions to multiple masters has been made available. The multi-master system allows for redundancy of Salt masters and facilitates multiple points of communication out to minions. When using a multi-master setup, all masters are running hot, and any active master can be used to send commands out to the minions.
Note
If you need failover capabilities with multiple masters, there is also a MultiMaster-PKI setup available that uses a different topology: see the MultiMaster-PKI with Failover Tutorial
In 0.16.0, the masters do not share any information: keys need to be accepted on both masters, and shared files need to be shared manually or with tools like the git fileserver backend to ensure that the file_roots are kept consistent.
The first task is to prepare the redundant master. If the redundant master is already running, stop it. There is only one requirement when preparing a redundant master, which is that masters share the same private key. When the first master was created, the master's identifying key pair was generated and placed in the master's pki_dir. The default location of the master's key pair is /etc/salt/pki/master/. Take the private key, master.pem, and copy it to the same location on the redundant master. Do the same for the master's public key, master.pub. Assuming that no minions have yet been connected to the new redundant master, it is safe to delete any existing key in this location and replace it.
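A sketch of the copy step, assuming SSH access to the redundant master (the hostname saltmaster2.example.com is illustrative):

scp /etc/salt/pki/master/master.pem saltmaster2.example.com:/etc/salt/pki/master/
scp /etc/salt/pki/master/master.pub saltmaster2.example.com:/etc/salt/pki/master/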
Note
There is no logical limit to the number of redundant masters that can be used.
Once the new key is in place, the redundant master can be safely started.
Since minions need to be master-aware, the new master needs to be added to the minion configurations. Simply update the minion configurations to list all connected masters:
master:
  - saltmaster1.example.com
  - saltmaster2.example.com
Now the minion can be safely restarted.
Now the minions will check into the original master and also check into the new redundant master. Both masters are first-class and have rights to the minions.
Note
Minions can automatically detect failed masters and attempt to reconnect to them quickly. To enable this functionality, set master_alive_interval in the minion config and specify a number of seconds to poll the masters for connection status.
If this option is not set, minions will still reconnect to failed masters, but the first command sent after a master comes back up may be lost while the minion authenticates.
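For example, in the minion config (the 30-second interval is illustrative):

master_alive_interval: 30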
Salt does not automatically share files between multiple masters. A number of files should be shared or sharing of these files should be strongly considered.
Minion keys can be accepted the normal way using salt-key on both masters. Keys accepted, deleted, or rejected on one master will NOT be automatically managed on redundant masters; this needs to be taken care of by running salt-key on both masters or sharing the /etc/salt/pki/master/{minions,minions_pre,minions_rejected} directories between masters.
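One way to keep the accepted-key directories in sync is a periodic rsync between the masters (hostname illustrative); note the warning below about not sharing the entire pki directory:

rsync -a /etc/salt/pki/master/minions/ saltmaster2.example.com:/etc/salt/pki/master/minions/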
Note
While sharing the /etc/salt/pki/master directory will work, it is strongly discouraged, since allowing access to the master.pem key outside of Salt creates a SERIOUS security risk.
The file_roots contents should be kept consistent between masters. Otherwise state runs will not always be consistent on minions, since instructions managed by one master will not agree with other masters.
The recommended way to sync these is to use a fileserver backend like gitfs or to keep these files on shared storage.
Pillar roots should be given the same considerations as file_roots.
While reasons may exist to maintain separate master configurations, it is wise to remember that each master maintains independent control over minions. Therefore, access controls should be in sync between masters unless a valid reason otherwise exists to keep them inconsistent.
These access control options include but are not limited to:
This tutorial will explain how to run a salt environment where a single minion can have multiple masters and fail over between them if its current master fails.
The individual steps are:
- setup the master(s) to sign its auth-replies
- setup minion(s) to verify master-public-keys
- enable multiple masters on minion(s)
- enable master-check on minion(s)
Please note that it is advised to have good knowledge of the salt authentication and communication process to understand this tutorial. All of the settings described here go on top of the default authentication/communication process.
The default behaviour of a salt-minion is to connect to a master and accept the master's public key. With each publication, the master sends its public key for the minion to check, and if this public key ever changes, the minion complains and exits. Practically, this means that there can only be a single master at any given time.
Would it not be much nicer, if the minion could have any number of masters (1:n) and jump to the next master if its current master died because of a network or hardware failure?
Note
There is also a MultiMaster-Tutorial with a different approach and topology than this one that might also suit your needs, or might even be better suited: Multi-Master Tutorial
It is also desirable to add some sort of authenticity check to the very first public key a minion receives from a master. Currently, a minion takes the first master's public key for granted.
Setup the master to sign the public key it sends to the minions and enable the minions to verify this signature for authenticity.
For signing to work, both master and minion must have the signing and/or verification settings enabled. If the master signs the public key but the minion does not verify it, the minion will complain and exit. The same happens when the master does not sign but the minion tries to verify.
The easiest way to have the master sign its public key is to set:
master_sign_pubkey: True
After restarting the salt-master service, the master will automatically generate a new key-pair:
master_sign.pem
master_sign.pub
A custom name can be set for the signing key-pair by setting:
master_sign_key_name: <name_without_suffix>
The master will then generate that key-pair upon restart and use it for creating the public key's signature attached to the auth-reply.
The computation is done for every auth-request of a minion. If many minions auth very often, it is advised to use the conf_master:master_pubkey_signature and conf_master:master_use_pubkey_signature settings described below.
If multiple masters are in use and should sign their auth-replies, the signing key-pair master_sign.* has to be copied to each master. Otherwise a minion will fail to verify the master's public key when connecting to a different master than it did initially. That is because the public key's signature was created with a different signing key-pair.
The minion must have the public key (and only that one!) available to be able to verify a signature it receives. That public key (defaults to master_sign.pub) must be copied from the master to the minion's pki directory:
/etc/salt/pki/minion/master_sign.pub
DO NOT COPY THE master_sign.pem FILE. IT MUST STAY ON THE MASTER AND ONLY THERE!
When that is done, enable the signature checking in the minion's configuration:
verify_master_pubkey_sign: True
and restart the minion. For the first try, the minion should be run in manual debug mode.
$ salt-minion -l debug
Upon connecting to the master, the following lines should appear on the output:
[DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.10
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] salt.crypt.verify_signature: Loading public key
[DEBUG ] salt.crypt.verify_signature: Verifying signature
[DEBUG ] Successfully verified signature of master public key with verification public key master_sign.pub
[INFO ] Received signed and verified master pubkey from master 172.16.0.10
[DEBUG ] Decrypting the current master AES key
If the signature verification fails, something went wrong and it will look like this:
[DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.10
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] salt.crypt.verify_signature: Loading public key
[DEBUG ] salt.crypt.verify_signature: Verifying signature
[DEBUG ] Failed to verify signature of public key
[CRITICAL] The Salt Master server's public key did not authenticate!
In a case like this, it should be checked that the verification pubkey (master_sign.pub) on the minion is the same as the one on the master.
Once the verification is successful, the minion can be started in daemon mode again.
For the paranoid among us, it's also possible to verify the public key whenever it is received from the master. That is, for every single auth-attempt, which can be quite frequent. For example, just the start of the minion will force the signature to be checked 6 times for various things like auth, mine, highstate, etc.
If that is desired, enable the setting:
always_verify_signature: True
Configuring multiple masters on a minion is done by specifying two settings:
master:
  - 172.16.0.10
  - 172.16.0.11
  - 172.16.0.12

master_type: failover
This tells the minion that all the masters above are available for it to connect to. When started with this configuration, it will try the masters in the order they are defined. To randomize that order, set:
master_shuffle: True
The master-list will then be shuffled before the first connection attempt.
The first master that accepts the minion is used by the minion. If the master does not yet know the minion, that counts as accepted and the minion stays on that master.
For the minion to be able to detect if it is still connected to its current master, enable the check for it:
master_alive_interval: <seconds>
If the loss of the connection is detected, the minion will temporarily remove the failed master from the list and try one of the other masters defined (again shuffled if that is enabled).
At least two running masters are needed to test the failover setup.
Both masters should be running and the minion should be running on the command line in debug mode
$ salt-minion -l debug
The minion will connect to the first master from its master list
[DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.10
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] salt.crypt.verify_signature: Loading public key
[DEBUG ] salt.crypt.verify_signature: Verifying signature
[DEBUG ] Successfully verified signature of master public key with verification public key master_sign.pub
[INFO ] Received signed and verified master pubkey from master 172.16.0.10
[DEBUG ] Decrypting the current master AES key
A test.ping on the master the minion is currently connected to should be run to test connectivity. If successful, that master should be turned off. A firewall rule denying the minion's packets will also do the trick.
Depending on the configured conf_minion:master_alive_interval, the minion will notice the loss of the connection and log it to its logfile.
[INFO ] Connection to master 172.16.0.10 lost
[INFO ] Trying to tune in to next master from master-list
The minion will then remove the current master from the list and try connecting to the next master
[INFO ] Removing possibly failed master 172.16.0.10 from list of masters
[WARNING ] Master ip address changed from 172.16.0.10 to 172.16.0.11
[DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.11
If everything is configured correctly, the new master's public key will be verified successfully:
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] salt.crypt.verify_signature: Loading public key
[DEBUG ] salt.crypt.verify_signature: Verifying signature
[DEBUG ] Successfully verified signature of master public key with verification public key master_sign.pub
the authentication with the new master is successful:
[INFO ] Received signed and verified master pubkey from master 172.16.0.11
[DEBUG ] Decrypting the current master AES key
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[INFO ] Authentication with master successful!
and the minion can be pinged again from its new master.
With the setup described above, the master computes a signature for every auth-request of a minion. With many minions and many auth-requests, that can chew up quite a bit of CPU power.
To avoid that, the master can use a pre-created signature of its public-key. The signature is saved as a base64 encoded string which the master reads once when starting and attaches only that string to auth-replies.
Enabling this also gives paranoid users the possibility to have the signing key-pair on a different system than the actual salt-master and create the public key's signature there, probably on a system with more restrictive firewall rules, without internet access, fewer users, etc.
That signature can be created with:
$ salt-key --gen-signature
This will create a default signature file in the master's pki directory:
/etc/salt/pki/master/master_pubkey_signature
It is a simple text file with the binary signature converted to base64.
If no signing-pair is present yet, this will auto-create the signing pair and the signature file in one call:
$ salt-key --gen-signature --auto-create
Telling the master to use the pre-created signature is done with:
master_use_pubkey_signature: True
That requires the file 'master_pubkey_signature' to be present in the master's pki directory with the correct signature.
If the signature file is named differently, its name can be set with:
master_pubkey_signature: <filename>
With many masters and many public keys (default and signing), it is advised to use the salt master's hostname for the signature file's name. Signatures can be easily confused because they do not provide any information about the key the signature was created from.
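For example, on a master named saltmaster1 (the filename is illustrative):

master_use_pubkey_signature: True
master_pubkey_signature: master_pubkey_signature_saltmaster1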
Verifying that everything works is done the same way as above.
The default key-pair of the salt-master is:
/etc/salt/pki/master/master.pem
/etc/salt/pki/master/master.pub
To be able to create a signature of a message (in this case a public-key), another key-pair has to be added to the setup. Its default name is:
master_sign.pem
master_sign.pub
The combination of the master.* and master_sign.* key-pairs makes it possible to generate signatures. The signature of a given message is unique and can be verified if the public key of the signing key-pair is available to the recipient (the minion).
The signature of the master's public key in master.pub is computed with:
- master_sign.pem
- master.pub
- M2Crypto.EVP.sign_update()
This results in a binary signature which is converted to base64 and attached to the auth-reply sent to the minion.
With the signing-pair's public key available to the minion, the attached signature can be verified with:
- master_sign.pub
- master.pub
- M2Crypto's EVP.verify_update()
When running multiple masters, either the signing key-pair has to be present on all of them, or the master_pubkey_signature has to be pre-computed for each master individually (because they all have different public-keys).
DO NOT PUT THE SAME master.pub ON ALL MASTERS FOR EASE OF USE.
In some situations, it is not convenient to wait for a minion to start before accepting its key on the master. For instance, you may want the minion to bootstrap itself as soon as it comes online. You may also want to let your developers provision new development machines on the fly.
See also
Many ways to preseed minion keys
Salt has other ways to generate and pre-accept minion keys in addition to the manual steps outlined below.
salt-cloud performs these same steps automatically when new cloud VMs are created (unless instructed not to).
salt-api exposes an HTTP call to Salt's REST API to generate and download the new minion keys as a tarball.
There is a general four step process to do this:
root@saltmaster# salt-key --gen-keys=[key_name]
Pick a name for the key, such as the minion's id.
root@saltmaster# cp key_name.pub /etc/salt/pki/master/minions/[minion_id]
It is necessary that the public key file has the same name as your minion id. This is how Salt matches minions with their keys. Also note that the pki folder could be in a different location, depending on your OS or if specified in the master config file.
There is no single method to get the keypair to your minion. The difficulty is finding a distribution method which is secure. For Amazon EC2 only, an AWS best practice is to use IAM Roles to pass credentials. (See blog post, http://blogs.aws.amazon.com/security/post/Tx610S2MLVZWEA/Using-IAM-roles-to-distribute-non-AWS-credentials-to-your-EC2-instances )
Security Warning
Since the minion key is already accepted on the master, distributing the private key poses a potential security risk. A malicious party will have access to your entire state tree and other sensitive data if they gain access to a preseeded minion key.
You will want to place the minion keys before starting the salt-minion daemon:
/etc/salt/pki/minion/minion.pem
/etc/salt/pki/minion/minion.pub
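A minimal sketch of the whole flow, using the illustrative key name minion1:

# on the master
salt-key --gen-keys=minion1
cp minion1.pub /etc/salt/pki/master/minions/minion1
# after securely transferring both files to the minion
cp minion1.pem /etc/salt/pki/minion/minion.pem
cp minion1.pub /etc/salt/pki/minion/minion.pub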
Once in place, you should be able to start salt-minion and run salt-call state.highstate or any other salt commands that require master authentication.
The Salt Bootstrap script allows a user to install the Salt Minion or Master on a variety of system distributions and versions. This shell script, known as bootstrap-salt.sh, runs through a series of checks to determine the operating system type and version. It then installs the Salt binaries using the appropriate methods. The Salt Bootstrap script installs the minimum number of packages required to run Salt. This means that in the event you run the bootstrap to install via package, Git will not be installed. Installing the minimum number of packages helps ensure the script stays as lightweight as possible, assuming the user will install any other required packages after the Salt binaries are present on the system. The script source is available on GitHub: https://github.com/saltstack/salt-bootstrap
Note
In the event you do not see your distribution or version available, please review the develop branch on GitHub as it may contain updates that are not present in the stable release: https://github.com/saltstack/salt-bootstrap/tree/develop
If you're looking for the one-liner to install Salt, please scroll to the bottom and use the instructions for Installing via an Insecure One-Liner.
Note
In every two-step example, you would be well served to examine the downloaded file to ensure that it does what you expect.
Using curl to install the latest version from git:
curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh git develop
Using wget to install your distribution's stable packages:
wget -O install_salt.sh https://bootstrap.saltstack.com
sudo sh install_salt.sh
Install a specific version from git using wget:
wget -O install_salt.sh https://bootstrap.saltstack.com
sudo sh install_salt.sh -P git v0.16.4
If you already have Python installed (Python 2.6), then it's as easy as:
python -m urllib "https://bootstrap.saltstack.com" > install_salt.sh
sudo sh install_salt.sh git develop
All Python versions should support the following one-liner:
python -c 'import urllib; print urllib.urlopen("https://bootstrap.saltstack.com").read()' > install_salt.sh
sudo sh install_salt.sh git develop
On a FreeBSD base system you usually don't have either of the above binaries available. You do have fetch available though:
fetch -o install_salt.sh https://bootstrap.saltstack.com
sudo sh install_salt.sh
If all you want is to install a salt-master using the latest git:
curl -o install_salt.sh -L https://bootstrap.saltstack.com
sudo sh install_salt.sh -M -N git develop
If you want to install a specific release version (based on the git tags):
curl -o install_salt.sh -L https://bootstrap.saltstack.com
sudo sh install_salt.sh git v0.16.4
To install a specific branch from a git fork:
curl -o install_salt.sh -L https://bootstrap.saltstack.com
sudo sh install_salt.sh -g https://github.com/myuser/salt.git git mybranch
The following examples illustrate how to install Salt via a one-liner.
Note
Warning! These methods do not involve a verification step and assume that the delivered file is trustworthy.
Installing the latest develop branch of Salt:
curl -L https://bootstrap.saltstack.com | sudo sh -s -- git develop
Any of the examples above which use two lines can be made to run in a single line with minor modifications.
The Salt Bootstrap script has a wide variety of options that can be passed as well as several ways of obtaining the bootstrap script itself.
For example, using curl to install your distribution's stable packages:
curl -L https://bootstrap.saltstack.com | sudo sh
Using wget to install your distribution's stable packages:
wget -O - https://bootstrap.saltstack.com | sudo sh
Installing the latest version available from git with curl:
curl -L https://bootstrap.saltstack.com | sudo sh -s -- git develop
Install a specific version from git using wget:
wget -O - https://bootstrap.saltstack.com | sh -s -- -P git v0.16.4
If you already have Python installed (Python 2.6), then it's as easy as:
python -m urllib "https://bootstrap.saltstack.com" | sudo sh -s -- git develop
All Python versions should support the following one-liner:
python -c 'import urllib; print urllib.urlopen("https://bootstrap.saltstack.com").read()' | \
sudo sh -s -- git develop
On a FreeBSD base system you usually don't have either of the above binaries available. You do have fetch available though:
fetch -o - https://bootstrap.saltstack.com | sudo sh
If all you want is to install a salt-master using the latest git:
curl -L https://bootstrap.saltstack.com | sudo sh -s -- -M -N git develop
If you want to install a specific release version (based on the git tags):
curl -L https://bootstrap.saltstack.com | sudo sh -s -- git v0.16.4
Downloading the develop branch (from here standard command line options may be passed):
wget https://bootstrap.saltstack.com/develop
Here's a summary of the command line options:
$ sh bootstrap-salt.sh -h
Usage : bootstrap-salt.sh [options] <install-type> <install-type-args>

Installation types:
  - stable (default)
  - daily (ubuntu specific)
  - git

Examples:
  $ bootstrap-salt.sh
  $ bootstrap-salt.sh stable
  $ bootstrap-salt.sh daily
  $ bootstrap-salt.sh git
  $ bootstrap-salt.sh git develop
  $ bootstrap-salt.sh git v0.17.0
  $ bootstrap-salt.sh git 8c3fadf15ec183e5ce8c63739850d543617e4357

Options:
  -h  Display this message
  -v  Display script version
  -n  No colours.
  -D  Show debug output.
  -c  Temporary configuration directory
  -g  Salt repository URL. (default: git://github.com/saltstack/salt.git)
  -k  Temporary directory holding the minion keys which will pre-seed
      the master.
  -M  Also install salt-master
  -S  Also install salt-syndic
  -N  Do not install salt-minion
  -X  Do not start daemons after installation
  -C  Only run the configuration function. This option automatically
      bypasses any installation.
  -P  Allow pip based installations. On some distributions the required salt
      packages or its dependencies are not available as a package for that
      distribution. Using this flag allows the script to use pip as a last
      resort method. NOTE: This only works for functions which actually
      implement pip based installations.
  -F  Allow copied files to overwrite existing (config, init.d, etc)
  -U  If set, fully upgrade the system prior to bootstrapping salt
  -K  If set, keep the temporary files in the temporary directories specified
      with -c and -k.
  -I  If set, allow insecure connections while downloading any files. For
      example, pass '--no-check-certificate' to 'wget' or '--insecure' to 'curl'
  -A  Pass the salt-master DNS name or IP. This will be stored under
      ${BS_SALT_ETC_DIR}/minion.d/99-master-address.conf
  -i  Pass the salt-minion id. This will be stored under
      ${BS_SALT_ETC_DIR}/minion_id
  -L  Install the Apache Libcloud package if possible (required for salt-cloud)
  -p  Extra-package to install while installing salt dependencies. One package
      per -p flag. You're responsible for providing the proper package name.
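For example, to bootstrap both a master and a minion from git, pointing the minion at the local master (the address and minion id are illustrative):

curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh -M -A 127.0.0.1 -i minion1 git develop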
Note
This walkthrough assumes basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough.
The gitfs backend allows Salt to serve files from git repositories. It can be enabled by adding git to the fileserver_backend list, and configuring one or more repositories in gitfs_remotes.
Branches and tags become Salt fileserver environments.
Beginning with version 2014.7.0, both pygit2 and Dulwich are supported as alternatives to GitPython. The desired provider can be configured using the gitfs_provider parameter in the master config file. If gitfs_provider is not configured, then Salt will prefer pygit2 if a suitable version is available, followed by GitPython and Dulwich.
The minimum supported version of pygit2 is 0.20.3. Availability for this version of pygit2 is still limited, though the SaltStack team is working to get compatible versions available for as many platforms as possible.
For the Fedora/EPEL versions which have a new enough version packaged, the following command would be used to install pygit2:
# yum install python-pygit2
Provided a valid version is packaged for Debian/Ubuntu (which is not currently the case), the package name would be the same, and the following command would be used to install it:
# apt-get install python-pygit2
If pygit2 is not packaged for the platform on which the Master is running, the pygit2 website has installation instructions. Keep in mind, however, that following these instructions will install libgit2 and pygit2 without system packages. Additionally, keep in mind that SSH authentication in pygit2 requires libssh2 (not libssh) development libraries to be present before libgit2 is built. On some distros (Debian based), pkg-config is also required to link libgit2 with libssh2.
GitPython 0.3.0 or newer is required to use GitPython for gitfs. For RHEL-based Linux distros, a compatible version is available in EPEL, and can be easily installed on the master using yum:
# yum install GitPython
Ubuntu 14.04 LTS and Debian Wheezy (7.x) also have a compatible version packaged:
# apt-get install python-git
If your master is running an older version (such as Ubuntu 12.04 LTS or Debian Squeeze), then you will need to install GitPython using either pip or easy_install (it is recommended to use pip). Version 0.3.2.RC1 is now marked as the stable release in PyPI, so it should be a simple matter of running pip install GitPython (or easy_install GitPython) as root.
Warning
Keep in mind that if GitPython has been previously installed on the master using pip (even if it was subsequently uninstalled), then it may still exist in the build cache (typically /tmp/pip-build-root/GitPython) if the cache is not cleared after installation. The package in the build cache will override any requirement specifiers, so if you try upgrading to version 0.3.2.RC1 by running pip install 'GitPython==0.3.2.RC1' then it will ignore this and simply install the version from the cache directory. Therefore, it may be necessary to delete the GitPython directory from the build cache in order to ensure that the specified version is installed.
Dulwich 0.9.4 or newer is required to use Dulwich as backend for gitfs.
Dulwich is available in EPEL, and can be easily installed on the master using yum:
# yum install python-dulwich
For APT-based distros such as Ubuntu and Debian:
# apt-get install python-dulwich
Important
If switching to Dulwich from GitPython/pygit2, or switching from GitPython/pygit2 to Dulwich, it is necessary to clear the gitfs cache to avoid unpredictable behavior. This is probably a good idea whenever switching to a new gitfs_provider, but it is less important when switching between GitPython and pygit2.
Beginning in version 2015.5.0, the gitfs cache can be easily cleared using the fileserver.clear_cache runner.
salt-run fileserver.clear_cache backend=git
If the Master is running an earlier version, then the cache can be cleared by removing the gitfs and file_lists/gitfs directories (both paths relative to the master cache directory, usually /var/cache/salt/master).
rm -rf /var/cache/salt/master{,/file_lists}/gitfs
To use the gitfs backend, only two configuration changes are required on the master:
Include git in the fileserver_backend list in the master config file:
fileserver_backend:
  - git
Specify one or more git://, https://, file://, or ssh:// URLs in gitfs_remotes to configure which repositories to cache and search for requested files:
gitfs_remotes:
  - https://github.com/saltstack-formulas/salt-formula.git
SSH remotes can also be configured using scp-like syntax:
gitfs_remotes:
  - git@github.com:user/repo.git
  - ssh://user@domain.tld/path/to/repo.git
Information on how to authenticate to SSH remotes can be found here.
Note
Dulwich does not recognize ssh:// URLs; git+ssh:// must be used instead. Salt version 2015.5.0 and later will automatically add the git+ to the beginning of these URLs before fetching, but earlier Salt versions will fail to fetch unless the URL is specified using git+ssh://.
Restart the master to load the new configuration.
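For example, on a master managed by a SysV-style init (the exact command varies by platform and init system):

service salt-master restart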
Note
In a master/minion setup, files from a gitfs remote are cached once by the master, so minions do not need direct access to the git repository.
The gitfs_remotes option accepts an ordered list of git remotes to cache and search, in listed order, for requested files.
A simple scenario illustrates this cascading lookup behavior:
If the gitfs_remotes option specifies three remotes:
gitfs_remotes:
  - git://github.com/example/first.git
  - https://github.com/example/second.git
  - file:///root/third
And each repository contains some files:
first.git:
    top.sls
    edit/vim.sls
    edit/vimrc
    nginx/init.sls

second.git:
    edit/dev_vimrc
    haproxy/init.sls

third:
    haproxy/haproxy.conf
    edit/dev_vimrc
Salt will attempt to look up the requested file from each gitfs remote repository in the order in which they are defined in the configuration. The git://github.com/example/first.git remote will be searched first. If the requested file is found, then it is served and no further searching is executed. For example, a request for salt://haproxy/init.sls would be served from the second.git remote (first.git contains no haproxy directory), and a request for salt://edit/dev_vimrc would also come from second.git, since it precedes the third remote in the list.
Note
This example is purposefully contrived to illustrate the behavior of the gitfs backend. This example should not be read as a recommended way to lay out files and git repos.
The file:// prefix denotes a git repository in a local directory. However, it will still use the given file:// URL as a remote, rather than copying the git repo to the salt cache. This means that any refs you want accessible must exist as local refs in the specified repo.
Warning
Salt versions prior to 2014.1.0 are not tolerant of changing the order of remotes or modifying the URI of existing remotes. In those versions, when modifying remotes it is a good idea to remove the gitfs cache directory (/var/cache/salt/master/gitfs) before restarting the salt-master service.
New in version 2014.7.0.
The following master config parameters are global (that is, they apply to all configured gitfs remotes):
- gitfs_base
- gitfs_root
- gitfs_mountpoint (new in 2014.7.0)
- gitfs_user (pygit2 only, new in 2014.7.0)
- gitfs_password (pygit2 only, new in 2014.7.0)
- gitfs_insecure_auth (pygit2 only, new in 2014.7.0)
- gitfs_pubkey (pygit2 only, new in 2014.7.0)
- gitfs_privkey (pygit2 only, new in 2014.7.0)
- gitfs_passphrase (pygit2 only, new in 2014.7.0)

These parameters can now be overridden on a per-remote basis. This allows for a tremendous amount of customization. Here's some example usage:
gitfs_provider: pygit2
gitfs_base: develop
gitfs_remotes:
  - https://foo.com/foo.git
  - https://foo.com/bar.git:
    - root: salt
    - mountpoint: salt://foo/bar/baz
    - base: salt-base
  - http://foo.com/baz.git:
    - root: salt/states
    - user: joe
    - password: mysupersecretpassword
    - insecure_auth: True
Important
There are two important distinctions which should be noted for per-remote configuration:
- The URL of a remote which has per-remote configuration must be suffixed with a colon.
- Per-remote configuration parameters are named like their global counterparts, with the gitfs_ removed from the beginning.

In the example configuration above, the following is true:
- The first and third gitfs remotes will use the develop branch/tag as the base environment, while the second one will use the salt-base branch/tag as the base environment.
- The second remote will only serve files from the salt directory (and its subdirectories), while the third remote will only serve files from the salt/states directory (and its subdirectories).
- The files from the second remote will be located under salt://foo/bar/baz, while the files from the first and third remotes will be located under the root of the Salt fileserver namespace (salt://).

The gitfs_root parameter allows files to be served from a subdirectory within the repository. This allows for only part of a repository to be exposed to the Salt fileserver.
Assume the below layout:
.gitignore
README.txt
foo/
foo/bar/
foo/bar/one.txt
foo/bar/two.txt
foo/bar/three.txt
foo/baz/
foo/baz/top.sls
foo/baz/edit/vim.sls
foo/baz/edit/vimrc
foo/baz/nginx/init.sls
The below configuration would serve only the files under foo/baz, ignoring the other files in the repository:
gitfs_remotes:
  - git://mydomain.com/stuff.git

gitfs_root: foo/baz
The root can also be configured on a per-remote basis.
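Using the per-remote syntax introduced above, the same restriction could instead be expressed like this:

gitfs_remotes:
  - git://mydomain.com/stuff.git:
    - root: foo/baz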
New in version 2014.7.0.
The gitfs_mountpoint parameter will prepend the specified path to the files served from gitfs. This allows an existing repository to be used, rather than needing to reorganize a repository or design it around the layout of the Salt fileserver.
Before the addition of this feature, if a file being served up via gitfs was deeply nested within the root directory (for example, salt://webapps/foo/files/foo.conf), it would be necessary to ensure that the file was properly located in the remote repository, and that all of the parent directories were present (for example, the directories webapps/foo/files/ would need to exist at the root of the repository).
The below example would allow for a file foo.conf at the root of the repository to be served up from the Salt fileserver path salt://webapps/foo/files/foo.conf.
gitfs_remotes:
  - https://mydomain.com/stuff.git

gitfs_mountpoint: salt://webapps/foo/files
Mountpoints can also be configured on a per-remote basis.
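Again using the per-remote syntax, the mountpoint above could instead be written as:

gitfs_remotes:
  - https://mydomain.com/stuff.git:
    - mountpoint: salt://webapps/foo/files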
Sometimes it may make sense to use multiple backends; for instance, if sls files are stored in git but larger files are stored directly on the master. The cascading lookup logic used for multiple remotes is also used with multiple backends. If the fileserver_backend option contains multiple backends:
fileserver_backend:
  - roots
  - git
Then the roots backend (the default backend of files in /srv/salt) will be searched first for the requested file; then, if it is not found on the master, each configured git remote will be searched.
When using the gitfs backend, branches and tags will be mapped to environments using the branch/tag name as an identifier.
There is one exception to this rule: the master branch is implicitly mapped to the base environment.
So, for a typical base, qa, dev setup, the following branches could be used:
master
qa
dev
top.sls files from different branches will be merged into one at runtime. Since this can lead to overly complex configurations, the recommended setup is to have a separate repository, containing only the top.sls file with just one single master branch.
To map a branch other than master as the base environment, use the gitfs_base parameter.
gitfs_base: salt-base
The base can also be configured on a per-remote basis.
New in version 2014.7.0.
The gitfs_env_whitelist and gitfs_env_blacklist parameters allow for greater control over which branches/tags are exposed as fileserver environments. Exact matches, globs, and regular expressions are supported, and are evaluated in that order. If using a regular expression, ^ and $ must be omitted, and the expression must match the entire branch/tag.
gitfs_env_whitelist:
  - base
  - v1.*
  - 'mybranch\d+'
Note
v1.*, in this example, will match as both a glob and a regular expression (though it will have been matched as a glob, since globs are evaluated before regular expressions).
The behavior of the blacklist/whitelist will differ depending on which combination of the two options is used:
- If only gitfs_env_whitelist is used, then only branches/tags which match the whitelist will be available as environments
- If only gitfs_env_blacklist is used, then the branches/tags which match the blacklist will not be available as environments

New in version 2014.7.0.
Both HTTPS and SSH authentication are supported as of version 0.20.3, which is the earliest version of pygit2 supported by Salt for gitfs.
Note
The examples below make use of per-remote configuration parameters, a feature new to Salt 2014.7.0. More information on these can be found here.
For HTTPS repositories which require authentication, the username and password can be provided like so:
gitfs_remotes:
  - https://domain.tld/myrepo.git:
    - user: git
    - password: mypassword
If the repository is served over HTTP instead of HTTPS, then Salt will by default refuse to authenticate to it. This behavior can be overridden by adding an insecure_auth parameter:
gitfs_remotes:
  - http://domain.tld/insecure_repo.git:
    - user: git
    - password: mypassword
    - insecure_auth: True
SSH repositories can be configured using the ssh:// protocol designation, or using scp-like syntax. So, the following two configurations are equivalent:
ssh://git@github.com/user/repo.git
git@github.com:user/repo.git
Both gitfs_pubkey and gitfs_privkey (or their per-remote counterparts) must be configured in order to authenticate to SSH-based repos. If the private key is protected with a passphrase, it can be configured using gitfs_passphrase (or simply passphrase if being configured per-remote). For example:
gitfs_remotes:
  - git@github.com:user/repo.git:
    - pubkey: /root/.ssh/id_rsa.pub
    - privkey: /root/.ssh/id_rsa
    - passphrase: myawesomepassphrase
Finally, the SSH host key must be added to the known_hosts file.
With GitPython, only passphrase-less SSH public key authentication is supported. The auth parameters (pubkey, privkey, etc.) shown in the pygit2 authentication examples above do not work with GitPython.
gitfs_remotes:
  - ssh://git@github.com/example/salt-states.git
Since GitPython wraps the git CLI, the private key must be located in ~/.ssh/id_rsa for the user under which the Master is running, and should have permissions of 0600. Also, in the absence of a user in the repo URL, GitPython will (just as SSH does) attempt to login as the current user (in other words, the user under which the Master is running, usually root).
If a key needs to be used, then ~/.ssh/config can be configured to use the desired key. Information on how to do this can be found by viewing the manpage for ssh_config. Here's an example entry which can be added to the ~/.ssh/config to use an alternate key for gitfs:
Host github.com
    IdentityFile /root/.ssh/id_rsa_gitfs
The Host parameter should be a hostname (or hostname glob) that matches the domain name of the git repository.
It is also necessary to add the SSH host key to the known_hosts file. The exception to this would be if strict host key checking is disabled, which can be done by adding StrictHostKeyChecking no to the entry in ~/.ssh/config:

Host github.com
    IdentityFile /root/.ssh/id_rsa_gitfs
    StrictHostKeyChecking no
However, this is generally regarded as insecure, and is not recommended.
To use SSH authentication, it is necessary to have the remote repository's SSH host key in the ~/.ssh/known_hosts file. If the master is also a minion, this can be done using the ssh.set_known_host function:
# salt mymaster ssh.set_known_host user=root hostname=github.com
mymaster:
    ----------
    new:
        ----------
        enc:
            ssh-rsa
        fingerprint:
            16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
        hostname:
            |1|OiefWWqOD4kwO3BhoIGa0loR5AA=|BIXVtmcTbPER+68HvXmceodDcfI=
        key:
            AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
    old:
        None
    status:
        updated
If not, then the easiest way to add the key is to su to the user (usually root) under which the salt-master runs and attempt to login to the server via SSH:
$ su
Password:
# ssh github.com
The authenticity of host 'github.com (192.30.252.128)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,192.30.252.128' (RSA) to the list of known hosts.
Permission denied (publickey).
It doesn't matter if the login was successful, as answering yes will write the fingerprint to the known_hosts file.
To verify that the correct fingerprint was added, it is a good idea to look it up. One way to do this is to use nmap:
$ nmap github.com --script ssh-hostkey
Starting Nmap 5.51 ( http://nmap.org ) at 2014-08-18 17:47 CDT
Nmap scan report for github.com (192.30.252.129)
Host is up (0.17s latency).
Not shown: 996 filtered ports
PORT STATE SERVICE
22/tcp open ssh
| ssh-hostkey: 1024 ad:1c:08:a4:40:e3:6f:9c:f5:66:26:5d:4b:33:5d:8c (DSA)
|_2048 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 (RSA)
80/tcp open http
443/tcp open https
9418/tcp open git
Nmap done: 1 IP address (1 host up) scanned in 28.78 seconds
Another way is to check one's own known_hosts file, using this one-liner:
$ ssh-keygen -l -f /dev/stdin <<<`ssh-keyscan -t rsa github.com 2>/dev/null` | awk '{print $2}'
16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
By default, Salt updates the remote fileserver backends every 60 seconds. However, if it is desirable to refresh quicker than that, the Reactor System can be used to signal the master to update the fileserver on each push, provided that the git server is also a Salt minion. There are three steps to this process:
On the master, create a file /srv/reactor/update_fileserver.sls, with the following contents:
update_fileserver:
  runner.fileserver.update
Add the following reactor configuration to the master config file:
reactor:
  - 'salt/fileserver/gitfs/update':
    - /srv/reactor/update_fileserver.sls
On the git server, add a post-receive hook with the following contents:
#!/usr/bin/env sh
salt-call event.fire_master update salt/fileserver/gitfs/update
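Note that the hook file must be executable; for a bare repository that would look something like this (the repository path is illustrative):

chmod +x /srv/git/myrepo.git/hooks/post-receive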
The update argument right after event.fire_master in this example can really be anything, as it represents the data being passed in the event, and the passed data is ignored by this reactor. Similarly, the tag name salt/fileserver/gitfs/update can be replaced by anything, so long as the usage is consistent.
Git repositories can also be used to provide Pillar data, using the External Pillar system. Note that this is different from gitfs, and is not yet at feature parity with it.
To define a git external pillar, add a section like the following to the salt master config file:
ext_pillar:
  - git: <branch> <repo> [root=<gitroot>]
Changed in version 2014.7.0: The optional root parameter was added.
The <branch> param is the branch containing the pillar SLS tree. The <repo> param is the URI for the repository. To add the master branch of the specified repo as an external pillar source:
ext_pillar:
  - git: master https://domain.com/pillar.git
Use the root parameter to use pillars from a subdirectory of a git repository:
ext_pillar:
  - git: master https://domain.com/pillar.git root=subdirectory
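Inside the pillar repository (or the configured root subdirectory), the branch holds an ordinary pillar tree. A minimal sketch with illustrative file names and values:

# top.sls
base:
  '*':
    - common

# common.sls
common_setting: example_value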
More information on the git external pillar can be found in the salt.pillar.git_pillar docs.
In versions 0.16.3 and older, when using the git fileserver backend, certain versions of GitPython may generate errors when fetching, which Salt fails to catch. While not fatal to the fetch process, these interrupt the fileserver update that takes place before custom types are synced, and thus interrupt the sync itself. Try disabling the git fileserver backend in the master config, restarting the master, and attempting the sync again.
This issue is worked around in Salt 0.16.4 and newer.
This document provides a step-by-step guide to installing a Salt cluster consisting of one master, and one minion running on a local VM hosted on Mac OS X.
Note
This guide is aimed at developers who wish to run Salt in a virtual machine. The official (Linux) walkthrough can be found here.
Since you're here you've probably already heard about Salt, so you already know Salt lets you configure and run commands on hordes of servers easily. Here's a brief overview of a Salt cluster:
Salt works by having a "master" server sending commands to one or multiple "minion" servers [1]. The master server is the "command center". It is going to be the place where you store your configuration files, aka: "which server is the db, which is the web server, and what libraries and software they should have installed". The minions receive orders from the master. Minions are the servers actually performing work for your business.
Salt has two types of configuration files:
1. the "salt communication channels" or "meta" or "config" configuration files (not official names): one for the master (usually is /etc/salt/master , on the master server), and one for minions (default is /etc/salt/minion or /etc/salt/minion.conf, on the minion servers). Those files are used to determine things like the Salt Master IP, port, Salt folder locations, etc.. If these are configured incorrectly, your minions will probably be unable to receive orders from the master, or the master will not know which software a given minion should install.
2. the "business" or "service" configuration files (once again, not an official name): these are configuration files, ending with ".sls" extension, that describe which software should run on which server, along with particular configuration properties for the software that is being installed. These files should be created in the /srv/salt folder by default, but their location can be changed using ... /etc/salt/master configuration file!
Note
This tutorial contains a third important configuration file, not to be confused with the previous two: the virtual machine provisioning configuration file. This in itself is not specifically tied to Salt, but it also contains some Salt configuration. More on that in step 3. Also note that all configuration files are YAML files. So indentation matters.
[1] Salt also works with "masterless" configuration where a minion is autonomous (in which case salt can be seen as a local configuration tool), or in "multiple master" configuration. See the documentation for more on that.
The "Salt master" server is going to be the Mac OS machine, directly. Commands will be run from a terminal app, so Salt will need to be installed on the Mac. This is going to be more convenient for toying around with configuration files.
We'll only have one "Salt minion" server. It is going to be running on a Virtual Machine running on the Mac, using VirtualBox. It will run an Ubuntu distribution.
Because Salt has a lot of dependencies that are not built in Mac OS X, we will use Homebrew to install Salt. Homebrew is a package manager for Mac, it's great, use it (for this tutorial at least!). Some people spend a lot of time installing libs by hand to better understand dependencies, and then realize how useful a package manager is once they're configuring a brand new machine and have to do it all over again. It also lets you uninstall things easily.
Note
Brew is a Ruby program (Ruby is installed by default with your Mac). Brew downloads, compiles, and links software. The linking phase is when compiled software is deployed on your machine. It may conflict with manually installed software, especially in the /usr/local directory. It's ok, remove the manually installed version then refresh the link by typing brew link 'packageName'. Brew has a brew doctor command that can help you troubleshoot. It's a great command, use it often. Brew requires xcode command line tools. When you run brew the first time it asks you to install them if they're not already on your system. Brew installs software in /usr/local/bin (system bins are in /usr/bin). In order to use those bins you need your $PATH to search there first. Brew tells you if your $PATH needs to be fixed.
Tip
Use the keyboard shortcut cmd + shift + period in the "open" Mac OS X dialog box to display hidden files and folders, such as .profile.
Install Homebrew from http://brew.sh/, or just type:
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
Now type the following commands in your terminal (you may want to type brew doctor after each to make sure everything's fine):
brew install python
brew install swig
brew install zmq
Note
zmq is ZeroMQ. It's a fantastic library used for server to server network communication and is at the core of Salt efficiency.
You should now have everything ready to launch this command:
pip install salt
Note
There should be no need for sudo pip install salt. Brew installed Python for your user, so you should have all the access. In case you would like to check, type which python to ensure that it's /usr/local/bin/python, and which pip, which should be /usr/local/bin/pip.
Now type python in a terminal, then import salt. There should be no errors. Now exit the Python terminal using exit().
If the default /etc/salt/master configuration file was not created, copy-paste it from here: http://docs.saltstack.com/ref/configuration/examples.html#configuration-examples-master
Note
/etc/salt/master is a file, not a folder.
Salt Master configuration changes. The Salt master needs a few customizations to be able to run on Mac OS X:
sudo launchctl limit maxfiles 4096 8192
In the /etc/salt/master file, change max_open_files to 8192 (or just add the line max_open_files: 8192 (no quotes) if it doesn't already exist).
You should now be able to launch the Salt master:
sudo salt-master --log-level=all
There should be no errors when running the above command.
Note
This command is supposed to be a daemon, but for toying around, we'll keep it running on a terminal to monitor the activity.
Now that the master is set, let's configure a minion on a VM.
The Salt minion is going to run on a virtual machine. There are a lot of software options that let you run virtual machines on a Mac, but for this tutorial we're going to use VirtualBox. In addition to VirtualBox, we will use Vagrant, which allows you to create the base VM configuration.
Vagrant lets you build ready-to-use VM images, starting from an OS image and customizing it using "provisioners". In our case, we'll use it to set up the minion VM and provision it with Salt.
Go get VirtualBox here: https://www.virtualbox.org/wiki/Downloads (click on "VirtualBox for OS X hosts" => x86/amd64).
Go get Vagrant here: http://downloads.vagrantup.com/ and choose the latest version (1.3.5 at the time of writing), then the .dmg file. Double-click to install it.
Make sure the vagrant command is found when run in the terminal. Type vagrant. It should display a list of commands.
Create a folder in which you will store your minion's VM. In this tutorial, it's going to be a minion folder in the $home directory.
cd $home
mkdir minion
From the minion folder, type
vagrant init
This command creates a default Vagrantfile configuration file. This configuration file will be used to pass configuration parameters to the Salt provisioner in Step 3.
vagrant box add precise64 http://files.vagrantup.com/precise64.box
Note
This box is added at the global Vagrant level. You only need to do it once as each VM will use this same file.
Modify ./minion/Vagrantfile to use the precise64 box. Change the config.vm.box line to:
config.vm.box = "precise64"
Uncomment the line creating a host-only IP. This is the IP of your minion (you can change it to something else if that IP is already in use):
config.vm.network :private_network, ip: "192.168.33.10"
At this point you should have a VM that can run, although there won't be much in it. Let's check that.
From the $home/minion folder type:
vagrant up
A log showing the VM booting should be displayed. Once it's done you'll be back at the terminal:
ping 192.168.33.10
The VM should respond to your ping request.
Now log into the VM in ssh using Vagrant again:
vagrant ssh
You should see the shell prompt change to something similar to vagrant@precise64:~$, meaning you're inside the VM. From there, enter the following:
ping 10.0.2.2
Note
That IP is the IP of your VM host (the Mac OS X machine). The number is a VirtualBox default and is displayed in the log after the vagrant ssh command. We'll use that IP to tell the minion where the Salt master is.
Once you're done, end the ssh session by typing exit.
It's now time to connect the VM to the Salt master.
Create the /etc/salt/minion file. In that file, put the following lines, giving the ID for this minion and the IP of the master:
master: 10.0.2.2
id: 'minion1'
file_client: remote
Minions authenticate with the master using keys. Keys are generated automatically if you don't provide them, and the master can accept them later on. However, this requires accepting the minion key every time the minion is destroyed or created (which could be quite often). A better way is to create those keys in advance, feed them to the minion, and authorize them once.
From the minion folder on your Mac run:
sudo salt-key --gen-keys=minion1
This should create two files: minion1.pem and minion1.pub. Since those files have been created using sudo, but will be used by vagrant, you need to change ownership:
sudo chown youruser:yourgroup minion1.pem
sudo chown youruser:yourgroup minion1.pub
Then copy the .pub file into the list of accepted minions:
sudo cp minion1.pub /etc/salt/pki/master/minions/minion1
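As a quick sanity check, you can list the keys known to the master; minion1 should now appear among the accepted keys (salt-key -L lists accepted, unaccepted, and rejected keys):
sudo salt-key -L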
Let's now modify the Vagrantfile used to provision the Salt VM. Add the following section in the Vagrantfile (note: it should be at the same indentation level as the other properties):
# salt-vagrant config
config.vm.provision :salt do |salt|
  salt.run_highstate = true
  salt.minion_config = "/etc/salt/minion"
  salt.minion_key = "./minion1.pem"
  salt.minion_pub = "./minion1.pub"
end
Now destroy the VM and recreate it from the /minion folder:
vagrant destroy
vagrant up
If everything is fine you should see the following message:
"Bootstrapping Salt... (this may take a while)
Salt successfully configured and installed!"
To make sure the master and minion are talking to each other, enter the following:
sudo salt '*' test.ping
You should see your minion answering the ping. It's now time to do some configuration.
In this step we'll use the Salt master to instruct our minion to install Nginx.
First, make sure that an HTTP server is not already installed on the minion: when opening a browser directed at http://192.168.33.10/, you should get an error saying the site cannot be reached.
System configuration is done in the /srv/salt/top.sls file (and subfiles/folders), and then applied by running the state.highstate command, which has the Salt master give orders so minions update their instructions and run the associated commands.
First, create an empty file on your Salt master (the Mac OS X machine):
touch /srv/salt/top.sls
While the file is empty, or if no configuration is found for our minion, an error is reported:
sudo salt 'minion1' state.highstate
This should return an error stating: "No Top file or external nodes data matches found".
Now is finally the time to enter the real meat of our server's configuration. For this tutorial our minion will be treated as a web server that needs to have Nginx installed.
Insert the following lines into the /srv/salt/top.sls file (which should currently be empty):
base:
  'minion1':
    - bin.nginx
Now create a /srv/salt/bin/nginx.sls file containing the following:
nginx:
  pkg.installed:
    - name: nginx
  service.running:
    - enable: True
    - reload: True
Finally run the state.highstate command again:
sudo salt 'minion1' state.highstate
You should see a log showing that the Nginx package has been installed and the service configured. To prove it, open your browser and navigate to http://192.168.33.10/; you should see the standard Nginx welcome page.
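As an extra check from the master, you can also query the service directly (service.status is a standard Salt execution function; it should return True for the minion):
sudo salt 'minion1' service.status nginx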
Congratulations!
A full description of configuration management within Salt (sls files among other things) is available here: http://docs.saltstack.com/index.html#configuration-management
Note
THIS TUTORIAL IS A WORK IN PROGRESS
Salt comes with a powerful integration and unit test suite. The test suite allows for the fully automated run of integration and/or unit tests from a single interface. The integration tests are surprisingly easy to write and can be written to be either destructive or non-destructive.
To walk through adding an integration test, start by getting the latest development code and the test system from GitHub:
Note
The develop branch often has failing tests and should always be considered a staging area. For a checkout on which the tests should all pass, please check out a specific release tag (such as v2014.1.4).
git clone git@github.com:saltstack/salt.git
pip install git+https://github.com/saltstack/salt-testing.git#egg=SaltTesting
Now that a fresh checkout is available, run the test suite.
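A minimal sketch of invoking the suite, assuming the checkout's standard tests/runtests.py entry point (the exact flags vary between Salt versions; consult python tests/runtests.py --help):
cd salt
python tests/runtests.py --unit-tests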
Since Salt is used to change the settings and behavior of systems, often, the best approach to run tests is to make actual changes to an underlying system. This is where the concept of destructive integration tests comes into play. Tests can be written to alter the system they are running on. This capability is what fills in the gap needed to properly test aspects of system management like package installation.
To write a destructive test, import and use the destructiveTest decorator for the test method:
import integration
from salttesting.helpers import destructiveTest


class PkgTest(integration.ModuleCase):
    @destructiveTest
    def test_pkg_install(self):
        ret = self.run_function('pkg.install', name='finch')
        self.assertSaltTrueReturn(ret)
        ret = self.run_function('pkg.purge', name='finch')
        self.assertSaltTrueReturn(ret)
SaltStack maintains a Jenkins server which can be viewed at http://jenkins.saltstack.com. The tests executed from this Jenkins server create fresh virtual machines for each test run, then execute the destructive tests on the new clean virtual machine. This allows for the execution of tests across supported platforms.
This tutorial demonstrates using the various HTTP modules available in Salt. These modules wrap the Python tornado, urllib2, and requests libraries, extending them in a manner that is more consistent with Salt workflows.
The salt.utils.http Library
This library forms the core of the HTTP modules. Since it is designed to be used from the minion as an execution module, in addition to the master as a runner, it was abstracted into this multi-use library. This library can also be imported by 3rd-party programs wishing to take advantage of its extended functionality.
Core functionality of the execution, state, and runner modules is derived from this library, so common usages between them are described here. Documentation specific to each module is described below.
This library can be imported with:
import salt.utils.http
This library can make use of either tornado, which is required by Salt; urllib2, which ships with Python; or requests, which can be installed separately. By default, tornado will be used. In order to switch to urllib2, set the following variable:
backend: urllib2
In order to switch to requests, set the following variable:
backend: requests
This can be set in the master or minion configuration file, or passed as an option directly to any http.query() functions.
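For example, to select the requests backend for a single call rather than in the configuration file (a minimal sketch using the backend option described above):
import salt.utils.http

salt.utils.http.query('http://example.com', backend='requests')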
The salt.utils.http.query() Function
This function forms a basic query, but with some add-ons not present in the tornado, urllib2, and requests libraries. Not all functionality currently available in these libraries has been added, but it can be in future iterations.
A basic query can be performed by calling this function with no more than a single URL:
salt.utils.http.query('http://example.com')
By default the query will be performed with a GET method. The method can be overridden with the method argument:
salt.utils.http.query('http://example.com/delete/url', 'DELETE')
When using the POST method (and others, such as PUT), extra data is usually sent as well. This data can be sent directly, in whatever format is required by the remote server (XML, JSON, plain text, etc.):
salt.utils.http.query(
    'http://example.com/delete/url',
    method='POST',
    data=json.dumps(mydict)
)
Bear in mind that this data must be sent pre-formatted; this function will not format it for you. However, a templated file stored on the local system may be passed through, along with variables to populate it with. To pass through only the file (untemplated):
salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.xml'
)
To pass through a file that contains jinja + yaml templating (the default):
salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.jinja',
    data_render=True,
    template_data={'key1': 'value1', 'key2': 'value2'}
)
To pass through a file that contains mako templating:
salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.mako',
    data_render=True,
    data_renderer='mako',
    template_data={'key1': 'value1', 'key2': 'value2'}
)
Because this function uses Salt's own rendering system, any Salt renderer can be used. Because Salt's renderer requires __opts__ to be set, an opts dictionary should be passed in. If it is not, then the default __opts__ values for the node type (master or minion) will be used. Because this library is intended primarily for use by minions, the default node type is minion. However, this can be changed to master if necessary.
salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.jinja',
    data_render=True,
    template_data={'key1': 'value1', 'key2': 'value2'},
    opts=__opts__
)

salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.jinja',
    data_render=True,
    template_data={'key1': 'value1', 'key2': 'value2'},
    node='master'
)
Headers may also be passed through, either as a header_list, a header_dict, or as a header_file. As with the data_file, the header_file may also be templated. Take note that because HTTP headers are normally syntactically correct YAML, they will automatically be imported as a Python dict.
salt.utils.http.query(
    'http://example.com/delete/url',
    method='POST',
    header_file='/srv/salt/headers.jinja',
    header_render=True,
    header_renderer='jinja',
    template_data={'key1': 'value1', 'key2': 'value2'}
)
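Passing headers inline as a header_dict is often simpler than maintaining a header file; a minimal sketch (the header shown is illustrative):
salt.utils.http.query(
    'http://example.com',
    method='POST',
    header_dict={'Content-Type': 'application/json'}
)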
Because much of the data that would be templated between headers and data may be the same, the template_data is the same for both. Correcting possible variable name collisions is up to the user.
The query() function supports basic HTTP authentication. A username and password may be passed in as username and password, respectively.
salt.utils.http.query(
    'http://example.com',
    username='larry',
    password='5700g3543v4r',
)
Cookies are also supported, using Python's built-in cookielib. However, they are turned off by default. To turn cookies on, set cookies to True.
salt.utils.http.query(
    'http://example.com',
    cookies=True
)
By default cookies are stored in Salt's cache directory, normally /var/cache/salt, as a file called cookies.txt. However, this location may be changed with the cookie_jar argument:
salt.utils.http.query(
    'http://example.com',
    cookies=True,
    cookie_jar='/path/to/cookie_jar.txt'
)
By default, the format of the cookie jar is LWP (aka, lib-www-perl). This default was chosen because it is a human-readable text file. If desired, the format of the cookie jar can be set to Mozilla:
salt.utils.http.query(
    'http://example.com',
    cookies=True,
    cookie_jar='/path/to/cookie_jar.txt',
    cookie_format='mozilla'
)
Because Salt commands are normally one-off commands that are piped together, this library cannot normally behave as a normal browser, with session cookies that persist across multiple HTTP requests. However, the session can be persisted in a separate cookie jar. The default filename for this file, inside Salt's cache directory, is cookies.session.p. This can also be changed.
salt.utils.http.query(
    'http://example.com',
    persist_session=True,
    session_cookie_jar='/path/to/jar.p'
)
The format of this file is msgpack, which is consistent with much of the rest of Salt's internal structure. Historically, the extension for this file is .p. There are no current plans to make this configurable.
By default, query() will attempt to decode the return data. Because it was designed to be used with REST interfaces, it will attempt to decode the data received from the remote server. First it will check the Content-type header to try and find references to XML. If it does not find any, it will look for references to JSON. If it does not find any, it will fall back to plain text, which will not be decoded.
JSON data is translated into a dict using Python's built-in json library. XML is translated using salt.utils.xml_util, which will use Python's built-in XML libraries to attempt to convert the XML into a dict. In order to force either JSON or XML decoding, the decode_type may be set:
salt.utils.http.query(
    'http://example.com',
    decode_type='xml'
)
Once translated, the return dict from query() will include a dict called dict.
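In other words, the decoded payload lives under the 'dict' key of the return value. A minimal sketch (the URL and the JSON shape are illustrative):
result = salt.utils.http.query(
    'http://example.com/api',
    decode=True,
    decode_type='json'
)
data = result.get('dict', {})  # the decoded JSON, as a Python dict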
If the data is not to be translated using one of these methods, decoding may be turned off.
salt.utils.http.query(
    'http://example.com',
    decode=False
)
If decoding is turned on, and references to JSON or XML cannot be found, then this module will default to plain text, and return the undecoded data as text (even if text is set to False; see below).
The query() function can return the HTTP status code, headers, and/or text as required. However, each must individually be turned on.
salt.utils.http.query(
    'http://example.com',
    status=True,
    headers=True,
    text=True
)
The return from these will be found in the return dict as status, headers, and text, respectively.
It is possible to write either the return data or headers to files, as soon as the response is received from the server, by specifying file locations via the text_out or headers_out arguments. text and headers do not need to be returned to the user in order to do this.
salt.utils.http.query(
    'http://example.com',
    text=False,
    headers=False,
    text_out='/path/to/url_download.txt',
    headers_out='/path/to/headers_download.txt',
)
By default, this function will verify SSL certificates. However, for testing or debugging purposes, SSL verification can be turned off.
salt.utils.http.query(
    'https://example.com',
    ssl_verify=False,
)
The requests library has its own method of detecting which CA (certificate authority) bundle file to use. Usually this is implemented by the packager for the specific operating system distribution that you are using. However, urllib2 requires a little more work under the hood. By default, Salt will try to auto-detect the location of this file. However, if it is not in an expected location, or a different path needs to be specified, it may be done using the ca_bundle variable.
salt.utils.http.query(
    'https://example.com',
    ca_bundle='/path/to/ca_bundle.pem',
)
The update_ca_bundle()
function can be used to update the bundle file at a
specified location. If the target location is not specified, then it will
attempt to auto-detect the location of the bundle file. If the URL to download
the bundle from does not exist, a bundle will be downloaded from the cURL
website.
CAUTION: The target and the source should always be specified! Failure to specify the target may result in the file being written to the wrong location on the local system. Failure to specify the source may cause the upstream URL to receive excess unnecessary traffic, and may cause a file to be downloaded which is hazardous or does not meet the needs of the user.
salt.utils.http.update_ca_bundle(
    target='/path/to/ca-bundle.crt',
    source='https://example.com/path/to/ca-bundle.crt',
    opts=__opts__,
)
The opts parameter should also always be specified. If it is, then the target and the source may be specified in the relevant configuration file (master or minion) as ca_bundle and ca_bundle_url, respectively:
ca_bundle: /path/to/ca-bundle.crt
ca_bundle_url: https://example.com/path/to/ca-bundle.crt
If Salt is unable to auto-detect the location of the CA bundle, it will raise an error.
The update_ca_bundle() function can also be passed a string or a list of strings which represent files on the local system, which should be appended (in the specified order) to the end of the CA bundle file. This is useful in environments where private certs need to be made available, and are not otherwise reasonable to add to the bundle file.
salt.utils.http.update_ca_bundle(
    opts=__opts__,
    merge_files=[
        '/etc/ssl/private_cert_1.pem',
        '/etc/ssl/private_cert_2.pem',
        '/etc/ssl/private_cert_3.pem',
    ]
)
This function may be run in test mode. This mode will perform all work up until the actual HTTP request. By default, instead of performing the request, an empty dict will be returned. Using this function with TRACE logging turned on will reveal the contents of the headers and POST data to be sent.
Rather than returning an empty dict, an alternate test_url may be passed in. If this is detected, then test mode will replace the url with the test_url, set test to True in the return data, and perform the rest of the requested operations as usual. This allows a custom, non-destructive URL to be used for testing when necessary.
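For example, a minimal sketch of test mode (assuming the test kwarg enables it, as described above; the URLs and data are illustrative):
salt.utils.http.query(
    'http://example.com/api',
    method='POST',
    data='some POST data',
    test=True,
    test_url='http://127.0.0.1:8000/capture'
)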
The http execution module is a very thin wrapper around the salt.utils.http library. The opts can be passed through as well, but if they are not specified, the minion defaults will be used as necessary. Because passing complete data structures from the command line can be tricky at best and dangerous (in terms of execution injection attacks) at worst, the data_file and header_file arguments are likely to see more use here.
All methods for the library are available in the execution module, as kwargs.
salt myminion http.query http://example.com/restapi method=POST \
username='larry' password='5700g3543v4r' headers=True text=True \
status=True decode_type=xml data_render=True \
header_file=/tmp/headers.txt data_file=/tmp/data.txt \
header_render=True cookies=True persist_session=True
Like the execution module, the http runner module is a very thin wrapper around the salt.utils.http library. The only significant difference is that because runners execute on the master instead of a minion, a target is not required, and default opts will be derived from the master config, rather than the minion config.
All methods for the library are available in the runner module, as kwargs.
salt-run http.query http://example.com/restapi method=POST \
username='larry' password='5700g3543v4r' headers=True text=True \
status=True decode_type=xml data_render=True \
header_file=/tmp/headers.txt data_file=/tmp/data.txt \
header_render=True cookies=True persist_session=True
The state module is a wrapper around the runner module, which applies stateful logic to a query. All kwargs as listed above are specified as usual in state files, but two more kwargs are available to apply stateful logic. A required parameter is match, which specifies a pattern to look for in the return text. By default, this performs a simple substring check for the value of match in the return text. In Python terms this looks like:
if match in html_text:
    return True
If more complex pattern matching is required, a regular expression can be used by specifying a match_type. By default this is set to string, but it can be manually set to pcre instead. Please note that despite the name, this will use Python's re.search() rather than re.match().
Therefore, the following states are valid:
http://example.com/restapi:
  http.query:
    - match: 'SUCCESS'
    - username: 'larry'
    - password: '5700g3543v4r'
    - data_render: True
    - header_file: /tmp/headers.txt
    - data_file: /tmp/data.txt
    - header_render: True
    - cookies: True
    - persist_session: True
http://example.com/restapi:
  http.query:
    - match_type: pcre
    - match: '(?i)succe(ss|ed)'
    - username: 'larry'
    - password: '5700g3543v4r'
    - data_render: True
    - header_file: /tmp/headers.txt
    - data_file: /tmp/data.txt
    - header_render: True
    - cookies: True
    - persist_session: True
In addition to, or instead of, a match pattern, the status code for a URL can be checked. This is done using the status argument:
http://example.com/:
  http.query:
    - status: '200'
If both are specified, both will be checked, but if only one is True and the other is False, then False will be returned. In this case, the comments in the return data will contain information for troubleshooting.
Because this is a monitoring state, it will return extra data to code that expects it. This data will always include text and status. Optionally, headers and dict may also be requested by setting the headers and decode arguments to True, respectively.
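Putting these together, a minimal sketch of a monitoring state that checks both a match pattern and the status code, and also requests headers and decoded data (the URL is illustrative):
http://example.com/api:
  http.query:
    - match: 'SUCCESS'
    - status: '200'
    - headers: True
    - decode: True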
Note
This walkthrough assumes basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough.
Warning
Some features are only currently available in the develop branch, and are new in the upcoming 2015.5.0 release. These new features will be clearly labeled. Even in the 2015.5 release, you will need the latest changeset of this stable branch for the salt-cloud functionality to work correctly.
Manipulation of LXC containers in Salt requires the minion to have an LXC version of at least 1.0 (an alpha or beta release of LXC 1.0 is acceptable). The following distributions are known to have new enough versions of LXC packaged:
Profiles allow for a sort of shorthand for commonly-used
configurations to be defined in the minion config file, grains, pillar, or the master config file. The
profile is retrieved by Salt using the config.get
function, which looks in those locations, in that
order. This allows for profiles to be defined centrally in the master config
file, with several options for overriding them (if necessary) on groups of
minions or individual minions.
There are two types of profiles:
- One for defining the parameters used in container creation/clone.
- One for defining the container's network interface(s) settings.
LXC container profiles are defined underneath the lxc.container_profile config option:
lxc.container_profile:
  centos:
    template: centos
    backing: lvm
    vgname: vg1
    lvname: lxclv
    size: 10G
  centos_big:
    template: centos
    backing: lvm
    vgname: vg1
    lvname: lxclv
    size: 20G
Profiles are retrieved using the config.get function, with the recurse merge strategy. This means that a profile can be defined at a lower level (for example, the master config file) and then parts of it can be overridden at a higher level (for example, in pillar data).
Consider the following container profile data:
In the Master config file:
lxc.container_profile:
  centos:
    template: centos
    backing: lvm
    vgname: vg1
    lvname: lxclv
    size: 10G
In the Pillar data:
lxc.container_profile:
  centos:
    size: 20G
Any minion with the above Pillar data would have the size parameter in the centos profile overridden to 20G, while those minions without the above Pillar data would have the 10G size value. This is another way of achieving the same result as the centos_big profile above, without having to define another whole profile that differs in just one value.
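To inspect the merged result on a given minion, the profile can be queried with config.get (colon-delimited nested lookups are standard, though the exact output formatting varies by version):
salt myminion config.get lxc.container_profile:centos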
Note
In the 2014.7.x release cycle and earlier, container profiles are defined under lxc.profile. This parameter will still work in version 2015.5.0, but is deprecated and will be removed in a future release. Please note however that the profile merging feature described above will only work with profiles defined under lxc.container_profile, and only in versions 2015.5.0 and later.
Additionally, in version 2015.5.0 container profiles have been expanded to support passing template-specific CLI options to lxc.create. Below is a table describing the parameters which can be configured in container profiles:
Parameter | 2015.5.0 and Newer | 2014.7.x and Earlier
---|---|---
template | Yes | Yes
options | Yes | No
image | Yes | Yes
backing | Yes | Yes
snapshot | Yes | Yes
lvname | Yes | Yes
fstype | Yes | Yes
size | Yes | Yes
LXC network profiles are defined underneath the lxc.network_profile config option. By default, the module uses a DHCP-based configuration and tries to guess a bridge to get connectivity.
Warning
On versions earlier than 2015.5.2, you need to specify the network bridge explicitly:
lxc.network_profile:
  centos:
    eth0:
      link: br0
      type: veth
      flags: up
  ubuntu:
    eth0:
      link: lxcbr0
      type: veth
      flags: up
As with container profiles, network profiles are retrieved using the config.get function, with the recurse merge strategy. Consider the following network profile data:
In the Master config file:
lxc.network_profile:
  centos:
    eth0:
      link: br0
      type: veth
      flags: up
In the Pillar data:
lxc.network_profile:
  centos:
    eth0:
      link: lxcbr0
Any minion with the above Pillar data would use the lxcbr0 interface as the bridge interface for any container configured using the centos network profile, while those minions without the above Pillar data would use the br0 interface for the same.
Note
In the 2014.7.x release cycle and earlier, network profiles are defined under lxc.nic. This parameter will still work in version 2015.5.0, but is deprecated and will be removed in a future release. Please note however that the profile merging feature described above will only work with profiles defined under lxc.network_profile, and only in versions 2015.5.0 and later.
The following are parameters which can be configured in network profiles. These will directly correspond to a parameter in an LXC configuration file (see man 5 lxc.container.conf).
Interface-specific options (MAC address, IPv4/IPv6, etc.) must be passed on a container-by-container basis, for instance using the nic_opts argument to lxc.create:
salt myminion lxc.create container1 profile=centos network_profile=centos nic_opts='{eth0: {ipv4: 10.0.0.20/24, gateway: 10.0.0.1}}'
Warning
The ipv4, ipv6, gateway, and link (bridge) settings in network profiles / nic_opts will only work if the container doesn't redefine the network configuration (for example in /etc/sysconfig/network-scripts/ifcfg-<interface_name> on RHEL/CentOS, or /etc/network/interfaces on Debian/Ubuntu/etc.). Use these with caution. The container images installed using the download template, for instance, typically are configured for eth0 to use DHCP, which will conflict with static IP addresses set at the container level.
Note
For LXC < 1.0.7 and DHCP support, set ipv4.gateway: 'auto' in your network profile, e.g.:
lxc.network_profile.nic:
  debian:
    eth0:
      link: lxcbr0
      ipv4.gateway: 'auto'
With saltstack 2015.5.2 and above, this setting is normally autoselected, but on earlier versions you'll need to teach your network profile to set lxc.network.ipv4.gateway to auto when using a classic IPv4 configuration. Thus you'll need:
lxc.network_profile.foo:
  eth0:
    link: lxcbr0
    ipv4.gateway: auto
This example covers how to make a container with both an internal IP and a public routable IP, wired on two veth pairs.
The interface which directly receives a public routable IP can't be the first interface, which we reserve for private inter-LXC networking.
lxc.network_profile.foo:
  eth0: {gateway: null, bridge: lxcbr0}
  eth1:
    # replace this with your main interface
    'link': 'br0'
    'mac': '00:16:5b:01:24:e1'
    'gateway': '2.20.9.14'
    'ipv4': '2.20.9.1'
LXC is commonly distributed with several template scripts in /usr/share/lxc/templates. Some distros may package these separately in an lxc-templates package, so make sure to check if this is the case.
There are LXC template scripts for several different operating systems, but some of them are designed to use tools specific to a given distribution. For instance, the ubuntu template uses debootstrap, the centos template uses yum, etc., making these templates impractical when a container from a different OS is desired.
The lxc.create function is used to create containers using a template script. To create a CentOS container named container1 on a CentOS minion named mycentosminion, using the centos LXC template, one can simply run the following command:
salt mycentosminion lxc.create container1 template=centos
For these instances, there is a download template which retrieves minimal container images for several different operating systems. To use this template, it is necessary to provide an options parameter when creating the container, with three values:
- dist - the Linux distribution (e.g. ubuntu or centos)
- release - the release of the distribution (e.g. trusty or 6)
- arch - the CPU architecture (e.g. amd64 or i386)
The lxc.images function (new in version 2015.5.0) can be used to list the available images. Alternatively, the releases can be viewed on http://images.linuxcontainers.org/images/. The images are organized in such a way that the dist, release, and arch can be determined using the following URL format: http://images.linuxcontainers.org/images/dist/release/arch. For example, http://images.linuxcontainers.org/images/centos/6/amd64 would correspond to a dist of centos, a release of 6, and an arch of amd64.
Therefore, to use the download template to create a new 64-bit CentOS 6 container, the following command can be used:
salt myminion lxc.create container1 template=download options='{dist: centos, release: 6, arch: amd64}'
Note
These command-line options can be placed into a container profile, like so:
lxc.container_profile.cent6:
  template: download
  options:
    dist: centos
    release: 6
    arch: amd64
The options parameter is not supported in profiles for the 2014.7.x release cycle and earlier, so it would still need to be provided on the command line.
To clone a container, use the lxc.clone function:
salt myminion lxc.clone container2 orig=container1
While cloning is a good way to create new containers from a common base
container, the source container that is being cloned needs to already exist on
the minion. This makes deploying a common container across minions difficult.
For this reason, Salt's lxc.create is capable of installing a container from a tar archive of another container's rootfs. To create an image of a container named cent6, run the following command as root:
tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs
Note
Before doing this, it is recommended that the container be stopped.
The resulting tarball can then be placed alongside the files in the salt fileserver and referenced using a salt:// URL. To create a container using an image, use the image parameter with lxc.create:
salt myminion lxc.create new-cent6 image=salt://path/to/cent6.tar.gz
Note
Making images of containers with LVM backing
For containers with LVM backing, the rootfs is not mounted, so it is necessary to mount it first before creating the tar archive. When a container is created using LVM backing, an empty rootfs dir is handily created within /var/lib/lxc/container_name, so this can be used as the mountpoint. The location of the logical volume for the container will be /dev/vgname/lvname, where vgname is the name of the volume group, and lvname is the name of the logical volume. Therefore, assuming a volume group of vg1, a logical volume of lxc-cent6, and a container name of cent6, the following commands can be used to create a tar archive of the rootfs:
mount /dev/vg1/lxc-cent6 /var/lib/lxc/cent6/rootfs
tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs
umount /var/lib/lxc/cent6/rootfs
Warning
One caveat of using this method of container creation is that /etc/hosts is left unmodified. This could cause confusion for some distros if salt-minion is later installed on the container, as the functions that determine the hostname take /etc/hosts into account. Additionally, when creating a rootfs image, be sure to remove /etc/salt/minion_id and make sure that id is not defined in /etc/salt/minion, as this will cause similar issues.
The above examples illustrate a few ways to create containers on the CLI, but often it is desirable to also have the new container run as a Minion. To do this, the lxc.init function can be used. This function creates the container and bootstraps a Salt Minion inside it.
By default, the new container will be pointed at the same Salt Master as the host machine on which the container was created. It will then request to authenticate with the Master like any other bootstrapped Minion, at which point it can be accepted.
salt myminion lxc.init test1 profile=centos
salt-key -a test1
For even greater convenience, the LXC runner contains a runner function of the same name (lxc.init), which creates a keypair, seeds the new minion with it, and pre-accepts the key, allowing for the new Minion to be created and authorized in a single step:
salt-run lxc.init test1 host=myminion profile=centos
For containers which are not running their own Minion, commands can be run within the container in a manner similar to using cmd.run. The means of doing this have been changed significantly in version 2015.5.0 (though the deprecated behavior will still be supported for a few releases). Both the old and new usage are documented below.
New functions have been added to mimic the behavior of the functions in the cmd module. Below is a table with the cmd functions and their lxc module equivalents:
Description | cmd module | lxc module
---|---|---
Run a command and get all output | cmd.run | lxc.run
Run a command and get just stdout | cmd.run_stdout | lxc.run_stdout
Run a command and get just stderr | cmd.run_stderr | lxc.run_stderr
Run a command and get just the retcode | cmd.retcode | lxc.retcode
Run a command and get all information | cmd.run_all | lxc.run_all
Earlier Salt releases use a single function (lxc.run_cmd) to run commands within containers. Whether stdout, stderr, etc. are returned depends on how the function is invoked.
To run a command and return the stdout:
salt myminion lxc.run_cmd web1 'tail /var/log/messages'
To run a command and return the stderr:
salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=True
To run a command and return the retcode:
salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=False
To run a command and return all information:
salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=True stderr=True
Under the hood, Salt Cloud uses the Salt runner and execution module to manage containers. Please see the Salt Cloud documentation for details.
Several states are being renamed or otherwise modified in version 2015.5.0. The information in this tutorial refers to the new states. For 2014.7.x and earlier, please refer to the documentation for the LXC states.
To ensure the existence of a named container, use the lxc.present state. Here are some examples:
# Using a template
web1:
  lxc.present:
    - template: download
    - options:
        dist: centos
        release: 6
        arch: amd64

# Cloning
web2:
  lxc.present:
    - clone_from: web-base

# Using a rootfs image
web3:
  lxc.present:
    - image: salt://path/to/cent6.tar.gz

# Using profiles
web4:
  lxc.present:
    - profile: centos_web
    - network_profile: centos
Warning
The lxc.present state will not modify an existing container (in other words, it will not re-create the container). If an lxc.present state is run on an existing container, there will be no change and the state will return a True result.
The lxc.present state also includes an optional running parameter which can be used to ensure that a container is running/stopped. Note that there are standalone lxc.running and lxc.stopped states which can be used for this purpose.
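For example, a minimal sketch of the running parameter in use (the ID and profile name are illustrative):
web5:
  lxc.present:
    - profile: centos_web
    - running: True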
To ensure that a named container is not present, use the lxc.absent state. For example:
web1:
  lxc.absent
Containers can be in one of three states: running, frozen, or stopped. Salt has three states (lxc.running, lxc.frozen, and lxc.stopped) which can be used to ensure a container is in one of these states:
web1:
  lxc.running

# Restart the container if it was already running
web2:
  lxc.running:
    - restart: True

web3:
  lxc.stopped

# Explicitly kill all tasks in container instead of gracefully stopping
web4:
  lxc.stopped:
    - kill: True

web5:
  lxc.frozen

# If container is stopped, do not start it (in which case the state will fail)
web6:
  lxc.frozen:
    - start: False
In Salt 0.14.0, an advanced cloud control system was introduced, allowing private cloud VMs to be managed directly with Salt. This system is generally referred to as Salt Virt.
The Salt Virt system already exists and is installed within Salt itself; this means that besides setting up Salt, no additional salt code needs to be deployed.
The main goal of Salt Virt is to facilitate a very fast and simple cloud that can scale and is fully featured. Salt Virt comes with the ability to set up and manage complex virtual machine networking, powerful image and disk management, as well as virtual machine migration with and without shared storage.
This means that Salt Virt can be used to create a cloud from a blade center and a SAN, but can also create a cloud out of a swarm of Linux Desktops without a single shared storage system. Salt Virt can make clouds from truly commodity hardware, but can also stand up the power of specialized hardware as well.
The first step to set up the hypervisors involves getting the correct software installed and setting up the hypervisor network interfaces.
Salt Virt is made to be hypervisor agnostic but currently the only fully implemented hypervisor is KVM via libvirt.
The required software for a hypervisor is libvirt and kvm. For advanced features, install libguestfs or qemu-nbd.
Note
Libguestfs and qemu-nbd allow for virtual machine images to be mounted before startup and pre-seeded with configurations and a Salt Minion.
This sls will set up the needed software for a hypervisor, and run the routines to set up the libvirt pki keys.
Note
The package names and setup used are Red Hat specific; different package names will be required for different platforms.
libvirt:
  pkg.installed: []
  file.managed:
    - name: /etc/sysconfig/libvirtd
    - contents: 'LIBVIRTD_ARGS="--listen"'
    - require:
      - pkg: libvirt
  libvirt.keys:
    - require:
      - pkg: libvirt
  service.running:
    - name: libvirtd
    - require:
      - pkg: libvirt
      - network: br0
      - libvirt: libvirt
    - watch:
      - file: libvirt

libvirt-python:
  pkg.installed: []

libguestfs:
  pkg.installed:
    - pkgs:
      - libguestfs
      - libguestfs-tools
The hypervisors will need to be running a network bridge to serve up network devices for virtual machines. This formula will set up a standard bridge on a hypervisor, connecting the bridge to eth0:
eth0:
  network.managed:
    - enabled: True
    - type: eth
    - bridge: br0

br0:
  network.managed:
    - enabled: True
    - type: bridge
    - proto: dhcp
    - require:
      - network: eth0
Salt Virt comes with a system to model the network interfaces used by the deployed virtual machines; by default a single interface is created for the deployed virtual machine and is bridged to br0. To get going with the default networking setup, ensure that the bridge interface named br0 exists on the hypervisor and is bridged to an active network device.
Note
To use more advanced networking in Salt Virt, read the Salt Virt Networking document.
One of the challenges of deploying a libvirt based cloud is the distribution of libvirt certificates. These certificates allow for virtual machine migration. Salt comes with a system used to auto deploy these certificates. Salt manages the signing authority key and generates keys for libvirt clients on the master, signs them with the certificate authority, and uses pillar to distribute them. This is managed via the libvirt state. Simply execute this formula on the minion to ensure that the certificate is in place and up to date:
Note
The above formula includes the calls needed to set up libvirt keys.
libvirt_keys:
  libvirt.keys
Salt Virt requires that virtual machine images be provided, as these are not generated on the fly. Generating these virtual machine images differs greatly based on the underlying platform.
Virtual machine images can be manually created using KVM and running through the installer, but this process is not recommended since it is very manual and prone to errors.
Virtual Machine generation applications are available for many platforms:
https://wiki.debian.org/VMBuilder
Once virtual machine images are available, the easiest way to make them available to Salt Virt is to place them in the Salt file server. Just copy an image into /srv/salt and it can now be used by Salt Virt. For purposes of this demo, the file name centos.img will be used.
Many existing Linux distributions distribute virtual machine images which can be used with Salt Virt. Please be advised that NONE OF THESE IMAGES ARE SUPPORTED BY SALTSTACK.
These images have been prepared for OpenNebula but should work without issue with Salt Virt, only the raw qcow image file is needed: http://wiki.centos.org/Cloud/OpenNebula
Images for Fedora Linux can be found here: http://fedoraproject.org/en/get-fedora#clouds
Images for Ubuntu Linux can be found here: http://cloud-images.ubuntu.com/
With hypervisors set up and virtual machine images ready, Salt can start issuing cloud commands.
Start by running a Salt Virt hypervisor info command:
salt-run virt.hyper_info
This will query what the running hypervisor stats are and display information for all configured hypervisors. This command will also validate that the hypervisors are properly configured.
Now that hypervisors are available, a virtual machine can be provisioned. The virt.init routine will create a new virtual machine:
salt-run virt.init centos1 2 512 salt://centos.img
This command assumes that the CentOS virtual machine image is sitting in the root of the Salt fileserver. Salt Virt will now select a hypervisor to deploy the new virtual machine on and copy the virtual machine image down to the hypervisor.
Once the VM image has been copied down, the new virtual machine will be seeded. Seeding the VM involves setting pre-authenticated Salt keys on the new VM and, if needed, installing the Salt Minion on the new VM before it is started.
Note
The biggest bottleneck in starting VMs is when the Salt Minion needs to be installed. Making sure that the source VM images already have Salt installed will GREATLY speed up virtual machine deployment.
Now that the new VM has been prepared, it can be seen via the virt.query command:
salt-run virt.query
This command will return data about all of the hypervisors and respective virtual machines.
Now that the new VM is booted, it should have contacted the Salt Master. A test.ping will reveal if the new VM is running.
Salt Virt comes with full support for virtual machine migration, and using the libvirt state in the above formula makes migration possible.
A few things need to be available to support migration. Many operating systems turn on firewalls when originally set up; the firewall needs to be opened up to allow libvirt and kvm to cross communicate and execute migration routines. On Red Hat based hypervisors in particular, port 16514 needs to be opened on the hypervisors:
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 16514 -j ACCEPT
Note
More in-depth information regarding distribution-specific firewall settings can be found in the relevant distribution documentation.
Salt also needs an additional flag to be turned on: the virt.tunnel option. This flag tells Salt to run migrations securely via the libvirt TLS tunnel and to use port 16514. Without virt.tunnel, libvirt tries to bind to random ports when running migrations. To turn on virt.tunnel, simply apply it to the master config file:
virt.tunnel: True
Once the master config has been updated, restart the master and send out a call to the minions to refresh the pillar to pick up on the change:
salt \* saltutil.refresh_modules
Now, migration routines can be run! To migrate a VM, simply run the Salt Virt migrate routine:
salt-run virt.migrate centos <new hypervisor>
Salt Virt also sets up VNC consoles by default, allowing remote visual consoles to be opened. The information from a virt.query routine will display the VNC console port for the specific VMs:
centos
  CPU: 2
  Memory: 524288
  State: running
  Graphics: vnc - hyper6:5900
  Disk - vda:
    Size: 2.0G
    File: /srv/salt-images/ubuntu2/system.qcow2
    File Format: qcow2
  Nic - ac:de:48:98:08:77:
    Source: br0
    Type: bridge
The line Graphics: vnc - hyper6:5900 holds the key. First, the port named (in this case 5900) will need to be available in the hypervisor's firewall. Once the port is open, the console can be easily opened via vncviewer:
vncviewer hyper6:5900
By default there is no VNC security set up on these ports, so it is suggested that they be kept firewalled and that SSH tunnels be required to access these VNC interfaces. Keep in mind that activity on a VNC interface can be viewed by any other user who accesses that same VNC interface, and any other user logging in can also operate alongside the logged-in user on the virtual machine.
Now with Salt Virt running, new hypervisors can be seamlessly added just by running the above states on new bare metal machines, and these machines will be instantly available to Salt Virt.
Note
This walkthrough assumes basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough.
Warning
Some features are only currently available in the develop
branch, and
are new in the upcoming 2015.5.0 release. These new features will be
clearly labeled.
Even in 2015.5 release, you will need up to the last changeset of this
stable branch for the salt-cloud stuff to work correctly.
Manipulation of LXC containers in Salt requires the minion to have an LXC version of at least 1.0 (an alpha or beta release of LXC 1.0 is acceptable). The following distributions are known to have new enough versions of LXC packaged:
Profiles allow for a sort of shorthand for commonly-used
configurations to be defined in the minion config file, grains, pillar, or the master config file. The
profile is retrieved by Salt using the config.get
function, which looks in those locations, in that
order. This allows for profiles to be defined centrally in the master config
file, with several options for overriding them (if necessary) on groups of
minions or individual minions.
There are two types of profiles:
- One for defining the parameters used in container creation/clone.
- One for defining the container's network interface(s) settings.
LXC container profiles are defined defined underneath the
lxc.container_profile
config option:
lxc.container_profile:
centos:
template: centos
backing: lvm
vgname: vg1
lvname: lxclv
size: 10G
centos_big:
template: centos
backing: lvm
vgname: vg1
lvname: lxclv
size: 20G
Profiles are retrieved using the config.get
function, with the recurse merge strategy. This means that a profile can be
defined at a lower level (for example, the master config file) and then parts
of it can be overridden at a higher level (for example, in pillar data).
Consider the following container profile data:
In the Master config file:
lxc.container_profile:
centos:
template: centos
backing: lvm
vgname: vg1
lvname: lxclv
size: 10G
In the Pillar data
lxc.container_profile:
centos:
size: 20G
Any minion with the above Pillar data would have the size parameter in the centos profile overriden to 20G, while those minions without the above Pillar data would have the 10G size value. This is another way of achieving the same result as the centos_big profile above, without having to define another whole profile that differs in just one value.
Note
In the 2014.7.x release cycle and earlier, container profiles are defined
under lxc.profile
. This parameter will still work in version 2015.5.0,
but is deprecated and will be removed in a future release. Please note
however that the profile merging feature described above will only work
with profiles defined under lxc.container_profile
, and only in versions
2015.5.0 and later.
Additionally, in version 2015.5.0 container profiles have been expanded to
support passing template-specific CLI options to lxc.create
. Below is a table describing the parameters which
can be configured in container profiles:
Parameter | 2015.5.0 and Newer | 2014.7.x and Earlier |
---|---|---|
template1 | Yes | Yes |
options1 | Yes | No |
image1 | Yes | Yes |
backing | Yes | Yes |
snapshot2 | Yes | Yes |
lvname1 | Yes | Yes |
fstype1 | Yes | Yes |
size | Yes | Yes |
LXC network profiles are defined defined underneath the lxc.network_profile
config option.
By default, the module uses a DHCP based configuration and try to guess a bridge to
get connectivity.
Warning
on pre 2015.5.2, you need to specify explitly the network bridge
lxc.network_profile:
centos:
eth0:
link: br0
type: veth
flags: up
ubuntu:
eth0:
link: lxcbr0
type: veth
flags: up
As with container profiles, network profiles are retrieved using the
config.get
function, with the recurse
merge strategy. Consider the following network profile data:
In the Master config file:
lxc.network_profile:
centos:
eth0:
link: br0
type: veth
flags: up
In the Pillar data
lxc.network_profile:
centos:
eth0:
link: lxcbr0
Any minion with the above Pillar data would use the lxcbr0 interface as the bridge interface for any container configured using the centos network profile, while those minions without the above Pillar data would use the br0 interface for the same.
Note
In the 2014.7.x release cycle and earlier, network profiles are defined
under lxc.nic
. This parameter will still work in version 2015.5.0, but
is deprecated and will be removed in a future release. Please note however
that the profile merging feature described above will only work with
profiles defined under lxc.network_profile
, and only in versions
2015.5.0 and later.
The following are parameters which can be configured in network profiles. These
will directly correspond to a parameter in an LXC configuration file (see man
5 lxc.container.conf
).
Interface-specific options (MAC address, IPv4/IPv6, etc.) must be passed on a
container-by-container basis, for instance using the nic_opts
argument to
lxc.create
:
salt myminion lxc.create container1 profile=centos network_profile=centos nic_opts='{eth0: {ipv4: 10.0.0.20/24, gateway: 10.0.0.1}}'
Warning
The ipv4
, ipv6
, gateway
, and link
(bridge) settings in
network profiles / nic_opts will only work if the container doesnt redefine
the network configuration (for example in
/etc/sysconfig/network-scripts/ifcfg-<interface_name>
on RHEL/CentOS,
or /etc/network/interfaces
on Debian/Ubuntu/etc.). Use these with
caution. The container images installed using the download
template,
for instance, typically are configured for eth0 to use DHCP, which will
conflict with static IP addresses set at the container level.
Note
For LXC < 1.0.7 and DHCP support, set ipv4.gateway: 'auto'
is your
network profile, ie.:
lxc.network_profile.nic:
debian:
eth0:
link: lxcbr0
ipv4.gateway: 'auto'
With saltstack 2015.5.2 and above, normally the setting is autoselected, but before, you'll need to teach your network profile to set lxc.network.ipv4.gateway to auto when using a classic ipv4 configuration.
Thus you'll need
lxc.network_profile.foo:
etho:
link: lxcbr0
ipv4.gateway: auto
This example covers how to make a container with both an internal ip and a public routable ip, wired on two veth pairs.
The another interface which receives directly a public routable ip can't be on the first interface that we reserve for private inter LXC networking.
lxc.network_profile.foo:
eth0: {gateway: null, bridge: lxcbr0}
eth1:
# replace that by your main interface
'link': 'br0'
'mac': '00:16:5b:01:24:e1'
'gateway': '2.20.9.14'
'ipv4': '2.20.9.1'
LXC is commonly distributed with several template scripts in /usr/share/lxc/templates. Some distros may package these separately in an lxc-templates package, so make sure to check if this is the case.
There are LXC template scripts for several different operating systems, but
some of them are designed to use tools specific to a given distribution. For
instance, the ubuntu
template uses deb_bootstrap, the centos
template
uses yum, etc., making these templates impractical when a container from a
different OS is desired.
The lxc.create
function is used to create
containers using a template script. To create a CentOS container named
container1
on a CentOS minion named mycentosminion
, using the
centos
LXC template, one can simply run the following command:
salt mycentosminion lxc.create container1 template=centos
For these instances, there is a download
template which retrieves minimal
container images for several different operating systems. To use this template,
it is necessary to provide an options
parameter when creating the
container, with three values:
ubuntu
or centos
)trusty
or 6
)amd64
or i386
)The lxc.images
function (new in version
2015.5.0) can be used to list the available images. Alternatively, the releases
can be viewed on http://images.linuxcontainers.org/images/. The images are
organized in such a way that the dist, release, and arch can be
determined using the following URL format:
http://images.linuxcontainers.org/images/dist/release/arch
. For example,
http://images.linuxcontainers.org/images/centos/6/amd64
would correspond to
a dist of centos
, a release of 6
, and an arch of amd64
.
Therefore, to use the download
template to create a new 64-bit CentOS 6
container, the following command can be used:
salt myminion lxc.create container1 template=download options='{dist: centos, release: 6, arch: amd64}'
Note
These command-line options can be placed into a container profile, like so:
lxc.container_profile.cent6:
template: download
options:
dist: centos
release: 6
arch: amd64
The options
parameter is not supported in profiles for the 2014.7.x
release cycle and earlier, so it would still need to be provided on the
command-line.
To clone a container, use the lxc.clone
function:
salt myminion lxc.clone container2 orig=container1
While cloning is a good way to create new containers from a common base
container, the source container that is being cloned needs to already exist on
the minion. This makes deploying a common container across minions difficult.
For this reason, Salt's lxc.create
is capable
of installing a container from a tar archive of another container's rootfs. To
create an image of a container named cent6
, run the following command as
root:
tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs
Note
Before doing this, it is recommended that the container is stopped.
The resulting tarball can then be placed alongside the files in the salt
fileserver and referenced using a salt://
URL. To create a container using
an image, use the image
parameter with lxc.create
:
salt myminion lxc.create new-cent6 image=salt://path/to/cent6.tar.gz
Note
Making images of containers with LVM backing
For containers with LVM backing, the rootfs is not mounted, so it is
necessary to mount it first before creating the tar archive. When a
container is created using LVM backing, an empty rootfs
dir is handily
created within /var/lib/lxc/container_name
, so this can be used as the
mountpoint. The location of the logical volume for the container will be
/dev/vgname/lvname
, where vgname
is the name of the volume group,
and lvname
is the name of the logical volume. Therefore, assuming a
volume group of vg1
, a logical volume of lxc-cent6
, and a container
name of cent6
, the following commands can be used to create a tar
archive of the rootfs:
mount /dev/vg1/lxc-cent6 /var/lib/lxc/cent6/rootfs
tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs
umount /var/lib/lxc/cent6/rootfs
Warning
One caveat of using this method of container creation is that
/etc/hosts
is left unmodified. This could cause confusion for some
distros if salt-minion is later installed on the container, as the
functions that determine the hostname take /etc/hosts
into account.
Additionally, when creating a rootfs image, be sure to remove
/etc/salt/minion_id
and make sure that id
is not defined in
/etc/salt/minion
, as this will cause similar issues.
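A minimal cleanup sketch for the warning above, using the cent6 example paths from earlier (adjust to your container and configuration):
# remove the stale minion_id and check for a hardcoded id before archiving
rm -f /var/lib/lxc/cent6/rootfs/etc/salt/minion_id
grep '^id:' /var/lib/lxc/cent6/rootfs/etc/salt/minion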
The above examples illustrate a few ways to create containers on the CLI, but
often it is desirable to also have the new container run as a Minion. To do
this, the lxc.init
function can be used. This
function will do the following:
1. Create a new container
2. Optionally set password and/or DNS
3. Bootstrap the minion (using either salt-bootstrap or a custom command)
By default, the new container will be pointed at the same Salt Master as the host machine on which the container was created. It will then request to authenticate with the Master like any other bootstrapped Minion, at which point it can be accepted.
salt myminion lxc.init test1 profile=centos
salt-key -a test1
For even greater convenience, the LXC runner
contains
a runner function of the same name (lxc.init
),
which creates a keypair, seeds the new minion with it, and pre-accepts the key,
allowing for the new Minion to be created and authorized in a single step:
salt-run lxc.init test1 host=myminion profile=centos
For containers which are not running their own Minion, commands can be run
within the container in a manner similar to using cmd.run. The means of doing this have been changed
significantly in version 2015.5.0 (though the deprecated behavior will still be
supported for a few releases). Both the old and new usage are documented
below.
New functions have been added to mimic the behavior of the functions in the
cmd
module. Below is a table with the cmd
functions and their lxc
module
equivalents:
Description | cmd module | lxc module
---|---|---
Run a command and get all output | cmd.run | lxc.run
Run a command and get just stdout | cmd.run_stdout | lxc.run_stdout
Run a command and get just stderr | cmd.run_stderr | lxc.run_stderr
Run a command and get just the retcode | cmd.retcode | lxc.retcode
Run a command and get all information | cmd.run_all | lxc.run_all
Earlier Salt releases use a single function (lxc.run_cmd
) to run commands within containers. Whether stdout,
stderr, etc. are returned depends on how the function is invoked.
To run a command and return the stdout:
salt myminion lxc.run_cmd web1 'tail /var/log/messages'
To run a command and return the stderr:
salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=True
To run a command and return the retcode:
salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=False
To run a command and return all information:
salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=True stderr=True
Under the hood, Salt Cloud uses the Salt runner and execution module to manage containers; please see the relevant chapter of the Salt Cloud documentation for details.
Several states are being renamed or otherwise modified in version 2015.5.0. The
information in this tutorial refers to the new states. For
2014.7.x and earlier, please refer to the documentation for the LXC
states
.
To ensure the existence of a named container, use the lxc.present
state. Here are some examples:
# Using a template
web1:
lxc.present:
- template: download
- options:
dist: centos
release: 6
arch: amd64
# Cloning
web2:
lxc.present:
- clone_from: web-base
# Using a rootfs image
web3:
lxc.present:
- image: salt://path/to/cent6.tar.gz
# Using profiles
web4:
lxc.present:
- profile: centos_web
- network_profile: centos
Warning
The lxc.present
state will not modify an
existing container (in other words, it will not re-create the container).
If an lxc.present
state is run on an
existing container, there will be no change and the state will return a
True
result.
The lxc.present
state also includes an
optional running
parameter which can be used to ensure that a container is
running/stopped. Note that there are standalone lxc.running
and lxc.stopped
states which can be used for this purpose.
To ensure that a named container is not present, use the lxc.absent
state. For example:
web1:
lxc.absent
Containers can be in one of three states:
- running - the container is running and active
- frozen - the container is running, but all processes are frozen
- stopped - the container is not running
Salt has three states (lxc.running
,
lxc.frozen
, and lxc.stopped
) which can be used to ensure a container is in one
of these states:
web1:
lxc.running
# Restart the container if it was already running
web2:
lxc.running:
- restart: True
web3:
lxc.stopped
# Explicitly kill all tasks in container instead of gracefully stopping
web4:
lxc.stopped:
- kill: True
web5:
lxc.frozen
# If container is stopped, do not start it (in which case the state will fail)
web6:
lxc.frozen:
- start: False
The focus of this tutorial will be building a Salt infrastructure for handling large numbers of minions. This will include tuning, topology, and best practices.
For instructions on installing the salt-master, see: Installing saltstack
Note
This tutorial is intended for large installations; although these same settings won't hurt smaller installations, they may not be worth the added complexity.
When used with minions, the term 'many' refers to at least a thousand and 'a few' always means 500.
For simplicity, this tutorial will default to the standard ports used by Salt.
The most common problems on the salt-master are:
1. too many minions authing at once
2. too many minions re-authing at once
3. too many minions re-connecting at once
4. too many minions returning at once
5. too few resources (CPU/HDD)
The first three are all "thundering herd" problems. To mitigate these issues we must configure the minions to back-off appropriately when the master is under heavy load.
The fourth is caused by masters with little hardware resources in combination with a possible bug in ZeroMQ. At least that's what it looks like as of this writing (Issue 118651, Issue 5948, Mail thread).
To fully understand each problem, it is important to understand how Salt works.
Very briefly, the saltmaster offers two services to the minions:
- a concurrent publisher on port 4505
- an open port 4506 to handle the minions' returns
All minions are always connected to the publisher on port 4505 and only connect to the open return port 4506 if necessary. On an idle master, there will only be connections on port 4505.
When the minion service is first started up, it will connect to its master's publisher on port 4505. If too many minions are started at once, this can cause a "thundering herd". This can be avoided by not starting too many minions at once.
The connection itself usually isn't the culprit, the more likely cause of master-side issues is the authentication that the minion must do with the master. If the master is too heavily loaded to handle the auth request it will time it out. The minion will then wait acceptance_wait_time to retry. If acceptance_wait_time_max is set then the minion will increase its wait time by the acceptance_wait_time each subsequent retry until reaching acceptance_wait_time_max.
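The back-off described above is controlled by two minion-config options; a sketch with illustrative values (not recommendations):
acceptance_wait_time: 10
acceptance_wait_time_max: 60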
This is most likely to happen in the testing phase, when all minion keys have already been accepted, the framework is being tested and parameters change frequently in the masters configuration file.
In a few cases (master restart, remove minion key, etc.) the salt-master generates a new AES key to encrypt its publications with. The minions aren't notified of this but will realize it on the next pub job they receive. When the minion receives such a job it will then re-auth with the master. Since Salt does minion-side filtering this means that all the minions will re-auth on the next command published on the master, causing another "thundering herd". This can be avoided by setting
random_reauth_delay: 60
in the minion's configuration file to a higher value, staggering the re-auth attempts. Increasing this value will of course increase the time it takes until all minions are reachable via salt commands.
By default the zmq socket will re-connect every 100ms which for some larger installations may be too quick. This will control how quickly the TCP session is re-established, but has no bearing on the auth load.
To tune the minion's socket reconnect attempts, there are a few values in the sample configuration file (default values; times are in milliseconds):
recon_default: 100
recon_max: 5000
recon_randomize: True
To tune these values to an existing environment, a few decisions have to be made:
1. How long can one wait until the minions should be online and reachable via salt?
2. How many reconnects can the master handle without a syn flood?
These questions can not be answered generally. Their answers depend on the hardware and the administrator's requirements.
Here is an example scenario with the goal of having all minions reconnect within a 60-second time frame on a salt-master service restart.
recon_default: 1000
recon_max: 59000
recon_randomize: True
Each minion will have a randomized reconnect value between 'recon_default' and 'recon_default + recon_max', which in this example means between 1000ms and 60000ms (or between 1 and 60 seconds). The generated random-value will be doubled after each attempt to reconnect (ZeroMQ default behavior).
Let's say the generated random value is 11 seconds (or 11000ms).
reconnect 1: wait 11 seconds
reconnect 2: wait 22 seconds
reconnect 3: wait 33 seconds
reconnect 4: wait 44 seconds
reconnect 5: wait 55 seconds
reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
reconnect 7: wait 11 seconds
reconnect 8: wait 22 seconds
reconnect 9: wait 33 seconds
reconnect x: etc.
With a thousand minions, this will mean roughly
1000/60 = ~16
connection attempts per second. These values should be altered to match your environment. Keep in mind, though, that it may grow over time and that more minions might raise the problem again.
This can also happen during the testing phase. If all minions are addressed at once with
$ salt '*' test.ping
it may cause thousands of minions to try to return their data to the salt-master's open port 4506 at once, causing a syn flood if the master can't handle that many returns.
This can be easily avoided with salts batch mode:
$ salt '*' test.ping -b 50
This will only address 50 minions at once while looping through all addressed minions.
The master's resources always have to match the environment. There is no way to give good advice without knowing the environment the master is supposed to run in. But here are some general tuning tips for different situations:
Salt uses RSA key pairs on both the master's and the minion's end. Both generate 4096-bit key pairs on first start. While the key size for the master is currently not configurable, the minion's key size can be configured via the keysize setting. For example, with a 2048-bit key:
keysize: 2048
With thousands of decryptions, the amount of time that can be saved on the master's end should not be neglected. See Pull Request 9235 for reference on how much influence the key size can have.
Downsizing the salt-master's key is not that important, because the minions do not encrypt as many messages as the master does.
By default, the master saves every minion's return for every job in its job cache. The cache can then be used later to look up results for previous jobs. The default directory for this is:
cachedir: /var/cache/salt
and then in the /proc
directory.
Each job return for every minion is saved in a single file. Over time this directory can grow quite large, depending on the number of published jobs. The amount of files and directories will scale with the number of jobs published and the retention time defined by
keep_jobs: 24
250 jobs/day * 2000 minion returns = 500,000 files a day
If no job history is needed, the job cache can be disabled:
job_cache: False
If the job cache is necessary there are (currently) 2 options:
- ext_job_cache: this will have the minions store their return data directly into a returner (not sent through the master)
- master_job_cache (new in version 2014.7.0): this will make the master store the job data using a returner (instead of the local job cache on disk)
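For instance, assuming a redis returner is configured on the master, a sketch redirecting the master-side cache to it would be a single option:
master_job_cache: redis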
Targeting minions is specifying which minions should run a command or execute a state by matching against hostnames, or system information, or defined groups, or even combinations thereof.
For example the command salt web1 apache.signal restart
to restart the
Apache httpd server specifies the machine web1
as the target and the
command will only be run on that one minion.
Similarly when using States, the following top file specifies that only
the web1
minion should execute the contents of webserver.sls
:
base:
'web1':
- webserver
There are many ways to target individual minions or groups of minions in Salt:
minion id¶
Each minion needs a unique identifier. By default when a minion starts for the
first time it chooses its FQDN as that
identifier. The minion id can be overridden via the minion's id
configuration setting.
Tip
minion id and minion keys
The minion id is used to generate the minion's public/private keys and if it ever changes the master must then accept the new key as though the minion was a new host.
The default matching that Salt utilizes is shell-style globbing
around the minion id. This also works for states
in the top file.
Note
You must wrap salt calls that use globbing in single-quotes to prevent the shell from expanding the globs before Salt is invoked.
Match all minions:
salt '*' test.ping
Match all minions in the example.net domain or any of the example domains:
salt '*.example.net' test.ping
salt '*.example.*' test.ping
Match all the webN
minions in the example.net domain (web1.example.net
,
web2.example.net
… webN.example.net
):
salt 'web?.example.net' test.ping
Match the web1
through web5
minions:
salt 'web[1-5]' test.ping
Match the web1
and web3
minions:
salt 'web[1,3]' test.ping
Match the web-x
, web-y
, and web-z
minions:
salt 'web-[x-z]' test.ping
Note
For additional targeting methods please review the compound matchers documentation.
Minions can be matched using Perl-compatible regular expressions
(which is globbing on steroids and a ton of caffeine).
Match both web1-prod
and web1-devel
minions:
salt -E 'web1-(prod|devel)' test.ping
When using regular expressions in a State's top file, you must specify
the matcher as the first option. The following example executes the contents of
webserver.sls
on the above-mentioned minions.
base:
'web1-(prod|devel)':
- match: pcre
- webserver
At the most basic level, you can specify a flat list of minion IDs:
salt -L 'web1,web2,web3' test.ping
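List matching works in top files as well; a minimal sketch using the same minion IDs:
base:
  'web1,web2,web3':
    - match: list
    - webserver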
Salt comes with an interface to derive information about the underlying system. This is called the grains interface, because it presents salt with grains of information.
The grains interface is made available to Salt modules and components so that the right salt minion commands are automatically available on the right systems.
It is important to remember that grains are bits of information loaded when the salt minion starts, so this information is static. Grains are therefore best suited to data that does not change, such as the running kernel or the operating system.
Note
Grains resolve to lowercase letters. For example, FOO
, and foo
target the same grain.
Match all CentOS minions:
salt -G 'os:CentOS' test.ping
Match all minions with 64-bit CPUs, and return number of CPU cores for each matching minion:
salt -G 'cpuarch:x86_64' grains.item num_cpus
Additionally, globs can be used in grain matches, and grains that are nested in
a dictionary can be matched by adding a colon for
each level that is traversed. For example, the following will match hosts that
have a grain called ec2_tags
, which itself is a
dict with a key named environment
, which
has a value that contains the word production
:
salt -G 'ec2_tags:environment:*production*'
Available grains can be listed by using the 'grains.ls' module:
salt '*' grains.ls
Grains data can be listed by using the 'grains.items' module:
salt '*' grains.items
Grains can also be statically assigned within the minion configuration file.
Just add the option grains
and pass options to it:
grains:
roles:
- webserver
- memcache
deployment: datacenter4
cabinet: 13
cab_u: 14-15
Then status data specific to your servers can be retrieved via Salt, or used inside of the State system for matching. It also makes it possible to target minions, as in the example above, based on specific data about your deployment.
If you do not want to place your custom static grains in the minion config
file, you can also put them in /etc/salt/grains
on the minion. They are configured in the
same way as in the above example, only without a top-level grains:
key:
roles:
- webserver
- memcache
deployment: datacenter4
cabinet: 13
cab_u: 14-15
With correctly configured grains on the Minion, the top file used in Pillar or during Highstate can be made very efficient. For example, consider the following configuration:
'node_type:web':
- match: grain
- webserver
'node_type:postgres':
- match: grain
- database
'node_type:redis':
- match: grain
- redis
'node_type:lb':
- match: grain
- lb
For this example to work, you would need to have defined the grain
node_type
for the minions you wish to match. This simple example is nice,
but too much of the code is similar. To go one step further, Jinja templating
can be used to simplify the top file.
{% set the_node_type = salt['grains.get']('node_type', '') %}
{% if the_node_type %}
'node_type:{{ the_node_type }}':
- match: grain
- {{ the_node_type }}
{% endif %}
Using Jinja templating, only one match entry needs to be defined.
Note
The example above uses the grains.get
function to account for minions which do not have the node_type
grain
set.
The grains interface is derived by executing all of the "public" functions found in the modules located in the grains package or the custom grains directory. The functions in the modules of the grains must return a Python dict, where the keys in the dict are the names of the grains and the values are the values.
Custom grains should be placed in a _grains
directory located under the
file_roots
specified by the master config file. The default path
would be /srv/salt/_grains
. Custom grains will be
distributed to the minions when state.highstate
is run, or by executing the
saltutil.sync_grains
or
saltutil.sync_all
functions.
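For example, to push newly added custom grains out to all minions without running a highstate:
salt '*' saltutil.sync_grains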
Grains are easy to write, and only need to return a dictionary. A common approach would be to write code similar to the following:
#!/usr/bin/env python
def yourfunction():
# initialize a grains dictionary
grains = {}
# Some code for logic that sets grains like
grains['yourcustomgrain'] = True
grains['anothergrain'] = 'somevalue'
return grains
Before adding a grain to Salt, consider what the grain is and remember that grains need to be static data. If the data is something that is likely to change, consider using Pillar instead.
Warning
Custom grains will not be available in the top file until after the first highstate. To make custom grains available on a minion's first highstate, it is recommended to use this example to ensure that the custom grains are synced when the minion starts.
Core grains can be overridden by custom grains. As there are several ways of defining custom grains, there is an order of precedence which should be kept in mind when defining them. The order of evaluation is as follows:
1. Core grains.
2. Custom grain modules in the _grains directory, synced to minions.
3. Custom grains in /etc/salt/grains.
4. Custom grains in /etc/salt/minion.
Each successive evaluation overrides the previous ones, so any grains defined
by custom grains modules synced to minions that have the same name as a core
grain will override that core grain. Similarly, grains from
/etc/salt/grains
override both core grains and custom grain modules, and
grains in /etc/salt/minion
will override any grains of the same name.
The core module in the grains package is where the main grains are loaded by the Salt minion and provides the principal example of how to write grains:
https://github.com/saltstack/salt/blob/develop/salt/grains/core.py
Syncing grains can be done a number of ways. They are automatically synced when
state.highstate
is called, or (as noted
above) the grains can be manually synced and reloaded by calling the
saltutil.sync_grains
or
saltutil.sync_all
functions.
Minions can easily be matched based on IP address, or by subnet (using CIDR notation).
salt -S 192.168.40.20 test.ping
salt -S 10.0.0.0/24 test.ping
Ipcidr matching can also be used in compound matches:
salt -C 'S@10.0.0.0/24 and G@os:Debian' test.ping
It is also possible to use ipcidr matching in both pillar and state top files:
'172.16.0.0/12':
- match: ipcidr
- internal
Note
Only IPv4 matching is supported at this time.
Compound matchers allow very granular minion targeting using any of Salt's
matchers. The default matcher is a glob
match, just as
with CLI and top file matching. To match using anything other than a
glob, prefix the match string with the appropriate letter from the table below,
followed by an @
sign.
Letter | Delimiter | Match Type | Example
---|---|---|---
G | x | Grains glob | G@os:Ubuntu
E | | PCRE Minion ID | E@web\d+\.(dev|qa|prod)\.loc
P | x | Grains PCRE | P@os:(RedHat|Fedora|CentOS)
L | | List of minions | L@minion1.example.com,minion3.domain.com or bl*.domain.com
I | x | Pillar glob | I@pdata:foobar
J | x | Pillar PCRE | J@pdata:^(foo|bar)$
S | | Subnet/IP address | S@192.168.1.0/24 or S@192.168.1.100
R | | Range cluster | R@%foo.bar
(An 'x' in the Delimiter column indicates that an alternate delimiter, described below, may be used with that matcher.)
Matchers can be joined using boolean and
, or
, and not
operators.
For example, the following string matches all Debian minions with a hostname
that begins with webserv
, as well as any minions that have a hostname which
matches the regular expression
web-dc1-srv.*
:
salt -C 'webserv* and G@os:Debian or E@web-dc1-srv.*' test.ping
That same example expressed in a top file looks like the following:
base:
'webserv* and G@os:Debian or E@web-dc1-srv.*':
- match: compound
- webserver
New in version Beryllium.
Excluding a minion based on its ID is also possible:
salt -C 'not web-dc1-srv' test.ping
In versions prior to Beryllium, a leading not
was not supported in compound
matches. Instead, something like the following was required:
salt -C '* and not G@kernel:Darwin' test.ping
Excluding a minion based on its ID was also possible:
salt -C '* and not web-dc1-srv' test.ping
Matches can be grouped together with parentheses to explicitly declare precedence amongst groups.
salt -C '( ms-1 or G@id:ms-3 ) and G@id:ms-3' test.ping
Note
Be certain to note that spaces are required between the parentheses and targets. Failing to obey this rule may result in incorrect targeting!
New in version Beryllium.
Some matchers allow an optional delimiter character specified between the
leading matcher character and the @
pattern separator character. This
can be essential when the globbing or PCRE pattern may use the default
delimiter character :
. This avoids incorrect interpretation of the
pattern as part of the grain or pillar data structure traversal.
salt -C 'J|@foo|bar|^foo:bar$ or J!@gitrepo!https://github.com:example/project.git' test.ping
Nodegroups are declared using a compound target specification. The compound target documentation can be found here.
The nodegroups
master config file parameter is used to define
nodegroups. Here's an example nodegroup configuration within
/etc/salt/master
:
nodegroups:
group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
group2: 'G@os:Debian and foo.domain.com'
group3: 'G@os:Debian and N@group1'
group4:
- 'G@foo:bar'
- 'or'
- 'G@foo:baz'
Note
The L
within group1 is matching a list of minions, while the G
in
group2 is matching specific grains. See the compound matchers documentation for more details.
New in version Beryllium.
Note
Nodegroups can reference other nodegroups as seen in group3
. Ensure
that you do not have circular references. Circular references will be
detected and cause partial expansion with a logged error message.
New in version Beryllium.
Compound nodegroups can be either string values or lists of string values. When the nodegroup is a string value, it will be tokenized by splitting on whitespace. This may be a problem if whitespace is necessary as part of a pattern. When a nodegroup is a list of strings, tokenization will happen for each list element as a whole.
To match a nodegroup on the CLI, use the -N
command-line option:
salt -N group1 test.ping
To match a nodegroup in your top file, make sure to put - match:
nodegroup
on the line directly following the nodegroup name.
base:
group1:
- match: nodegroup
- webserver
Note
When adding or modifying nodegroups to a master configuration file, the master must be restarted for those changes to be fully recognized.
A limited amount of functionality, such as targeting with -N from the command-line, may be available without a restart.
The -b
(or --batch-size
) option allows commands to be executed on only
a specified number of minions at a time. Both percentages and finite numbers are
supported.
salt '*' -b 10 test.ping
salt -G 'os:RedHat' --batch-size 25% apache.signal restart
This will only run test.ping on 10 of the targeted minions at a time and then
restart apache on 25% of the minions matching os:RedHat
at a time and work
through them all until the task is complete. This makes jobs like rolling web
server restarts behind a load balancer or doing maintenance on BSD firewalls
using carp much easier with salt.
The batch system maintains a window of running minions. If there are a total of 150 minions targeted and the batch size is 10, then the command is sent to 10 minions; when one minion returns, the command is sent to one additional minion, so that the job is constantly running on 10 minions.
SECO range is a cluster-based metadata store developed and maintained by Yahoo!
The Range project is hosted here:
https://github.com/ytoolshed/range
Learn more about range here:
https://github.com/ytoolshed/range/wiki/
To utilize range support in Salt, a range server is required. Setting up a range server is outside the scope of this document. Apache modules are included in the range distribution.
With a working range server, cluster files must be defined. These files are written in YAML and define hosts contained inside a cluster. Full documentation on writing YAML range files is here:
https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec
Additionally, the Python seco range libraries must be installed on the salt master. One can verify that they have been installed correctly via the following command:
python -c 'import seco.range'
If no errors are returned, range is installed successfully on the salt master.
Range support must be enabled on the salt master by setting the hostname and port of the range server inside the master configuration file:
range_server: my.range.server.com:80
Following this, the master must be restarted for the change to have an effect.
Once a cluster has been defined, it can be targeted with a salt command by
using the -R
or --range
flags.
For example, given the following range YAML file being served from a range server:
$ cat /etc/range/test.yaml
CLUSTER: host1..100.test.com
APPS:
- frontend
- backend
- mysql
One might target host1 through host100 in the test.com domain with Salt as follows:
salt --range %test:CLUSTER test.ping
The following salt command would target three hosts: frontend
, backend
, and mysql
:
salt --range %test:APPS test.ping
Pillar is an interface for Salt designed to offer global values that can be distributed to all minions. Pillar data is managed in a similar way as the Salt State Tree.
Pillar was added to Salt in version 0.9.8
Note
Storing sensitive data
Unlike the state tree, pillar data is only available for the targeted minion specified by the matcher type. This makes it useful for storing sensitive data specific to a particular minion.
The Salt Master server maintains a pillar_roots setup that matches the
structure of the file_roots used in the Salt file server. Like the
Salt file server the pillar_roots
option in the master config is based
on environments mapping to directories. The pillar data is then mapped to
minions based on matchers in a top file which is laid out in the same way
as the state top file. Salt pillars can use the same matcher types as the
standard top file.
The configuration for the pillar_roots
in the master config file
is identical in behavior and function as file_roots
:
pillar_roots:
base:
- /srv/pillar
This example configuration declares that the base environment will be located
in the /srv/pillar
directory. It must not be in a subdirectory of the
state tree.
The top file used matches the name of the top file used for States, and has the same structure:
/srv/pillar/top.sls
base:
'*':
- packages
In the above top file, it is declared that in the base
environment, the
glob matching all minions will have the pillar data found in the packages
pillar available to it. Assuming the pillar_roots
value of /srv/pillar
taken from above, the packages
pillar would be located at
/srv/pillar/packages.sls
.
Another example shows how to use other standard top matching types to deliver specific salt pillar data to minions with different properties.
Here is an example using the grains
matcher to target pillars to minions
by their os
grain:
dev:
'os:Debian':
- match: grain
- servers
/srv/pillar/packages.sls
{% if grains['os'] == 'RedHat' %}
apache: httpd
git: git
{% elif grains['os'] == 'Debian' %}
apache: apache2
git: git-core
{% endif %}
company: Foo Industries
The above pillar sets two key/value pairs. If a minion is running RedHat, then
the apache
key is set to httpd
and the git
key is set to the value
of git
. If the minion is running Debian, those values are changed to
apache2
and git-core
respectively. All minions that have this pillar
targeting to them via a top file will have the key of company
with a value
of Foo Industries
.
Consequently this data can be used from within modules, renderers, State SLS files, and more via the shared pillar dict:
apache:
pkg.installed:
- name: {{ pillar['apache'] }}
git:
pkg.installed:
- name: {{ pillar['git'] }}
Finally, the above states can utilize the values provided to them via Pillar. All pillar values targeted to a minion are available via the 'pillar' dictionary. As seen in the above example, Jinja substitution can then be utilized to access the keys and values in the Pillar dictionary.
Note that you cannot just list key/value information in top.sls
. Instead,
target a minion to a pillar file and then list the keys and values in the
pillar. Here is an example top file that illustrates this point:
base:
'*':
- common_pillar
And the actual pillar file at '/srv/pillar/common_pillar.sls':
foo: bar
boo: baz
The separate pillar files all share the same namespace. Given a top.sls
of:
base:
'*':
- packages
- services
a packages.sls
file of:
bind: bind9
and a services.sls
file of:
bind: named
Then a request for the bind
pillar will only return named
; the
bind9
value is not available. It is better to structure your pillar files
with more hierarchy. For example, your packages.sls file could look like:
file could look like:
packages:
bind: bind9
With some care, the pillar namespace can merge content from multiple pillar files under a single key, so long as conflicts are avoided as described above.
For example, if the above example were modified as follows, the values are merged below a single key:
base:
'*':
- packages
- services
And a packages.sls
file like:
bind:
package-name: bind9
version: 9.9.5
And a services.sls
file like:
bind:
port: 53
listen-on: any
The resulting pillar will be as follows:
$ salt-call pillar.get bind
local:
----------
listen-on:
any
package-name:
bind9
port:
53
version:
9.9.5
Note
Remember: conflicting keys will be overwritten in a non-deterministic manner!
New in version 0.16.0.
Pillar SLS files may include other pillar files, similar to State files. Two syntaxes are available for this purpose. The simple form simply includes the additional pillar as if it were part of the same file:
include:
- users
The full include form allows two additional options -- passing default values to the templating engine for the included pillar file as well as an optional key under which to nest the results of the included pillar:
include:
- users:
defaults:
sudo: ['bob', 'paul']
key: users
With this form, the included file (users.sls) will be nested within the 'users' key of the compiled pillar. Additionally, the 'sudo' value will be available as a template variable to users.sls.
Once the pillar is set up, the data can be viewed on the minion via the pillar module. The pillar module comes with the functions pillar.items and pillar.raw. pillar.items will return a freshly reloaded pillar, and pillar.raw will return the current pillar without a refresh:
salt '*' pillar.items
Note
Prior to version 0.16.2, this function is named pillar.data
. This
function name is still supported for backwards compatibility.
New in version 0.14.0.
The pillar.get
function works much in the same
way as the get
method in a python dict, but with an enhancement: nested
dict components can be extracted using a : delimiter.
If a structure like this is in pillar:
foo:
bar:
baz: qux
Extracting it from the raw pillar in an sls formula or file template is done this way:
{{ pillar['foo']['bar']['baz'] }}
Now, with the new pillar.get
function the data
can be safely gathered and a default can be set, allowing the template to fall
back if the value is not available:
{{ salt['pillar.get']('foo:bar:baz', 'qux') }}
This makes handling nested structures much easier.
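The same lookup, with the same fallback default, also works from the command line:
salt '*' pillar.get foo:bar:baz qux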
Note
pillar.get()
vs salt['pillar.get']()
It should be noted that within templating, the pillar
variable is just
a dictionary. This means that calling pillar.get()
inside of a
template will just use the default dictionary .get()
function which
does not include the extra :
delimiter functionality. It must be
called using the above syntax (salt['pillar.get']('foo:bar:baz',
'qux')
) to get the salt function, instead of the default dictionary
behavior.
When pillar data is changed on the master the minions need to refresh the data
locally. This is done with the saltutil.refresh_pillar
function.
salt '*' saltutil.refresh_pillar
This function triggers the minion to asynchronously refresh the pillar and will
always return None
.
Pillar data can be used when targeting minions. This allows for ultimate control and flexibility when targeting minions.
salt -I 'somekey:specialvalue' test.ping
Like with Grains, it is possible to use globbing
as well as match nested values in Pillar, by adding colons for each level that
is being traversed. The below example would match minions with a pillar named
foo
, which is a dict containing a key bar
, with a value beginning with
baz
:
salt -I 'foo:bar:baz*' test.ping
Pillar data can be set at the command line like the following example:
salt '*' state.highstate pillar='{"cheese": "spam"}'
This will create a dict with a key of 'cheese' and a value of 'spam'. A list can be created like this:
salt '*' state.highstate pillar='["cheese", "milk", "bread"]'
For convenience the data stored in the master configuration file is made available in all minion's pillars. This makes global configuration of services and systems very easy but may not be desired if sensitive data is stored in the master configuration.
To disable the master config from being added to the pillar set pillar_opts
to False
:
pillar_opts: False
Minion configuration options can be set on pillars. Any option that you want to modify, should be in the first level of the pillars, in the same way you set the options in the config file. For example, to configure the MySQL root password to be used by MySQL Salt execution module, set the following pillar variable:
mysql.pass: hardtoguesspassword
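With that pillar in place, functions in the MySQL execution module can connect without further per-call configuration; for example (assuming the remaining mysql.* connection settings are also set):
salt 'db*' mysql.db_list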
By default if there is an error rendering a pillar, the detailed error is hidden and replaced with:
Rendering SLS 'my.sls' failed. Please see master log for details.
The detailed error is hidden because it may contain templating data that would give the minion information it shouldn't have, such as a password!
To have the master provide the detailed error that could potentially carry protected data, set pillar_safe_render_error to False:
pillar_safe_render_error: False
Salt version 0.11.0 introduced the reactor system. The premise behind the reactor system is that with Salt's events and the ability to execute commands, a logic engine could be put in place to allow events to trigger actions, or more accurately, reactions.
This system binds sls files to event tags on the master. These sls files then define reactions. This means that the reactor system has two parts. First, the reactor option needs to be set in the master configuration file. The reactor option allows for event tags to be associated with sls reaction files. Second, these reaction files use highdata (like the state system) to define reactions to be executed.
A basic understanding of the event system is required to understand reactors. The event system is a local ZeroMQ PUB interface which fires salt events. This event bus is an open system used for sending information notifying Salt and other systems about operations.
The event system fires events with a very specific criteria. Every event has a tag. Event tags allow for fast top level filtering of events. In addition to the tag, each event has a data structure. This data structure is a dict, which contains information about the event.
Reactor SLS files and event tags are associated in the master config file. By default this is /etc/salt/master, or /etc/salt/master.d/reactor.conf.
New in version 2014.7.0: Added Reactor support for salt://
file paths.
In the master config section 'reactor:' is a list of event tags to be matched and each event tag has a list of reactor SLS files to be run.
reactor: # Master config section "reactor"
- 'salt/minion/*/start': # Match tag "salt/minion/*/start"
- /srv/reactor/start.sls # Things to do when a minion starts
- /srv/reactor/monitor.sls # Other things to do
- 'salt/cloud/*/destroyed': # Globs can be used to match tags
- /srv/reactor/destroy/*.sls # Globs can be used to match file names
- 'myco/custom/event/tag': # React to custom event tags
- salt://reactor/mycustom.sls # Put reactor files under file_roots
Reactor sls files are similar to state and pillar sls files. They are by default yaml + Jinja templates and are passed familiar context variables.
They differ because of the addition of the tag and data variables.
- The tag variable is just the tag in the fired event.
- The data variable is the event's data dict.
Here is a simple reactor sls:
{% if data['id'] == 'mysql1' %}
highstate_run:
local.state.highstate:
- tgt: mysql1
{% endif %}
This simple reactor file uses Jinja to further refine the reaction to be made.
If the id
in the event data is mysql1
(in other words, if the name of
the minion is mysql1
) then the following reaction is defined. The same
data structure and compiler used for the state system is used for the reactor
system. The only difference is that the data is matched up to the salt command
API and the runner system. In this example, a command is published to the
mysql1
minion with a function of state.highstate
. Similarly, a runner
can be called:
{% if data['data']['overstate'] == 'refresh' %}
overstate_run:
runner.state.over
{% endif %}
This example will execute the state.overstate runner and initiate an overstate execution.
To fire an event from a minion call event.send
salt-call event.send 'foo' '{overstate: refresh}'
After this is called, any reactor sls files matching event tag foo
will
execute with {{ data['data']['overstate'] }}
equal to 'refresh'
.
See salt.modules.event
for more information.
The best way to see exactly what events are fired and what data is available in
each event is to use the state.event runner
.
See also
Example usage:
salt-run state.event pretty=True
Example output:
salt/job/20150213001905721678/new {
"_stamp": "2015-02-13T00:19:05.724583",
"arg": [],
"fun": "test.ping",
"jid": "20150213001905721678",
"minions": [
"jerry"
],
"tgt": "*",
"tgt_type": "glob",
"user": "root"
}
salt/job/20150213001910749506/ret/jerry {
"_stamp": "2015-02-13T00:19:11.136730",
"cmd": "_return",
"fun": "saltutil.find_job",
"fun_args": [
"20150213001905721678"
],
"id": "jerry",
"jid": "20150213001910749506",
"retcode": 0,
"return": {},
"success": true
}
The best window into the Reactor is to run the master in the foreground with debug logging enabled. The output will include when the master sees the event, what the master does in response to that event, and it will also include the rendered SLS file (or any errors generated while rendering the SLS file).
Stop the master.
Start the master manually:
salt-master -l debug
Look for log entries in the form:
[DEBUG ] Gathering reactors for tag foo/bar
[DEBUG ] Compiling reactions for tag foo/bar
[DEBUG ] Rendered data from file: /path/to/the/reactor_file.sls:
<... Rendered output appears here. ...>
The rendered output is the result of the Jinja parsing and is a good way to view the result of referencing Jinja variables. If the result is empty then Jinja produced an empty result and the Reactor will ignore it.
That is: when to use arg and kwarg, and when to specify the function arguments directly.
While the reactor system uses the same basic data structure as the state system, the functions that will be called using that data structure are different functions than are called via Salt's state system. The Reactor can call Runner modules using the runner prefix, Wheel modules using the wheel prefix, and can also cause minions to run Execution modules using the local prefix.
Changed in version 2014.7.0: The cmd
prefix was renamed to local
for consistency with other
parts of Salt. A backward-compatible alias was added for cmd
.
The Reactor runs on the master and calls functions that exist on the master. In the case of Runner and Wheel functions the Reactor can just call those functions directly since they exist on the master and are run on the master.
In the case of functions that exist on minions and are run on minions, the Reactor still needs to call a function on the master in order to send the necessary data to the minion so the minion can execute that function.
The Reactor calls functions exposed in Salt's Python API, and thus the structure of Reactor files very transparently reflects the function signatures of those functions.
The Reactor sends commands down to minions in the exact same way Salt's CLI interface does. It calls a function locally on the master that sends the name of the function as well as a list of any arguments and a dictionary of any keyword arguments that the minion should use to execute that function.
Specifically, the Reactor calls the asynchronous version of this function (cmd_async, described below). That function has arg and kwarg parameters, both of which are values that are sent down to the minion.
Executing remote commands maps to the LocalClient interface which is used by the salt command. This interface more specifically maps to the cmd_async method inside of the LocalClient class. This means that the arguments passed are being passed to the cmd_async method, not the remote method. A field starting with local indicates that the LocalClient subsystem should be used. The result is that, to execute a remote command, a reactor formula would look like this:
clean_tmp:
local.cmd.run:
- tgt: '*'
- arg:
- rm -rf /tmp/*
The arg
option takes a list of arguments as they would be presented on the
command line, so the above declaration is the same as running this salt
command:
salt '*' cmd.run 'rm -rf /tmp/*'
Use the expr_form
argument to specify a matcher:
clean_tmp:
local.cmd.run:
- tgt: 'os:Ubuntu'
- expr_form: grain
- arg:
- rm -rf /tmp/*
clean_tmp:
local.cmd.run:
- tgt: 'G@roles:hbase_master'
- expr_form: compound
- arg:
- rm -rf /tmp/*
Any other parameters in the LocalClient().cmd()
method can be specified as well.
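Other LocalClient parameters follow the same pattern; for example, a sketch that also hands the job's returns to a returner (assuming the smtp_return returner is configured, as in the example later in this document):
clean_tmp:
  local.cmd.run:
    - tgt: '*'
    - arg:
      - rm -rf /tmp/*
    - ret: smtp_return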
Calling Runner modules and Wheel modules from the Reactor uses a more direct syntax since the function is being executed locally instead of sending a command to a remote system to be executed there. There are no 'arg' or 'kwarg' parameters (unless the Runner function or Wheel function accepts a parameter with either of those names).
For example:
clear_the_grains_cache_for_all_minions:
runner.cache.clear_grains
If the runner takes arguments
then
they can be specified as well:
spin_up_more_web_machines:
runner.cloud.profile:
- prof: centos_6
- instances:
- web11 # These VM names would be generated via Jinja in a
- web12 # real-world example.
An interesting trick to pass data from the Reactor script to
state.highstate
or state.sls
is to pass it as inline Pillar data since
both functions take a keyword argument named pillar
.
The following example uses Salt's Reactor to listen for the event that is fired
when the key for a new minion is accepted on the master using salt-key
.
/etc/salt/master.d/reactor.conf
:
reactor:
- 'salt/key':
- /srv/salt/haproxy/react_new_minion.sls
The Reactor then fires a state.sls
command targeted to the HAProxy servers
and passes the ID of the new minion from the event to the state file via inline
Pillar.
/srv/salt/haproxy/react_new_minion.sls
:
{% if data['act'] == 'accept' and data['id'].startswith('web') %}
add_new_minion_to_pool:
local.state.sls:
- tgt: 'haproxy*'
- arg:
- haproxy.refresh_pool
- kwarg:
pillar:
new_minion: {{ data['id'] }}
{% endif %}
The above command is equivalent to the following command at the CLI:
salt 'haproxy*' state.sls haproxy.refresh_pool 'pillar={new_minion: minionid}'
This works with Orchestrate files as well:
call_some_orchestrate_file:
runner.state.orchestrate:
- mods: some_orchestrate_file
- pillar:
stuff: things
Which is equivalent to the following command at the CLI:
salt-run state.orchestrate some_orchestrate_file pillar='{stuff: things}'
Finally, that data is available in the state file using the normal Pillar
lookup syntax. The following example is grabbing web server names and IP
addresses from Salt Mine. If this state is invoked from the
Reactor then the custom Pillar value from above will be available and the new
minion will be added to the pool but with the disabled
flag so that HAProxy
won't yet direct traffic to it.
/srv/salt/haproxy/refresh_pool.sls
:
{% set new_minion = salt['pillar.get']('new_minion') %}
listen web *:80
balance source
{% for server,ip in salt['mine.get']('web*', 'network.interfaces', ['eth0']).items() %}
{% if server == new_minion %}
server {{ server }} {{ ip }}:80 disabled
{% else %}
server {{ server }} {{ ip }}:80 check
{% endif %}
{% endfor %}
In this example, we're going to assume that we have a group of servers that will come online at random and need to have keys automatically accepted. We'll also add that we don't want all servers being automatically accepted. For this example, we'll assume that all hosts that have an id that starts with 'ink' will be automatically accepted and have state.highstate executed. On top of this, we're going to add that a host coming up that was replaced (meaning a new key) will also be accepted.
Our master configuration will be rather simple. All minions that attempt to authenticate will match the salt/auth tag. When it comes to the minion key being accepted, we get a more refined tag that includes the minion id, which we can use for matching.
/etc/salt/master.d/reactor.conf
:
reactor:
- 'salt/auth':
- /srv/reactor/auth-pending.sls
- 'salt/minion/ink*/start':
- /srv/reactor/auth-complete.sls
In this sls file, we say that if the key was rejected we will delete the key on the master and then also tell the master to ssh in to the minion and tell it to restart the minion, since a minion process will die if the key is rejected.
We also say that if the key is pending and the id starts with ink we will accept the key. A minion that is waiting on a pending key will retry authentication every ten seconds by default.
/srv/reactor/auth-pending.sls
:
{# Ink server failed to authenticate -- remove accepted key #}
{% if not data['result'] and data['id'].startswith('ink') %}
minion_remove:
wheel.key.delete:
- match: {{ data['id'] }}
minion_rejoin:
local.cmd.run:
- tgt: salt-master.domain.tld
- arg:
- ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "{{ data['id'] }}" 'sleep 10 && /etc/init.d/salt-minion restart'
{% endif %}
{# Ink server is sending new key -- accept this key #}
{% if 'act' in data and data['act'] == 'pend' and data['id'].startswith('ink') %}
minion_add:
wheel.key.accept:
- match: {{ data['id'] }}
{% endif %}
No if statements are needed here because we already limited this action to just Ink servers in the master configuration.
/srv/reactor/auth-complete.sls
:
{# When an Ink server connects, run state.highstate. #}
highstate_run:
local.state.highstate:
- tgt: {{ data['id'] }}
- ret: smtp_return
The above will also return the highstate result data using the smtp_return
returner. The returner needs to be configured on the minion for this to
work. See salt.returners.smtp_return
documentation for
that.
Salt will sync all custom types (by running a saltutil.sync_all
) on every highstate. However, there is a
chicken-and-egg issue where, on the initial highstate, a minion will not yet
have these custom types synced when the top file is first compiled. This can be
worked around with a simple reactor which watches for minion_start
events,
which each minion fires when it first starts up and connects to the master.
On the master, create /srv/reactor/sync_grains.sls with the following contents:
sync_grains:
local.saltutil.sync_grains:
- tgt: {{ data['id'] }}
And in the master config file, add the following reactor configuration:
reactor:
- 'minion_start':
- /srv/reactor/sync_grains.sls
This will cause the master to instruct each minion to sync its custom grains when it starts, making these grains available when the initial highstate is executed.
Other types can be synced by replacing local.saltutil.sync_grains
with
local.saltutil.sync_modules
, local.saltutil.sync_all
, or whatever else
suits the intended use case.
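For instance, a variant of the reactor above that syncs every custom type rather than just grains (same pattern, different function):
sync_all:
  local.saltutil.sync_all:
    - tgt: {{ data['id'] }}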
The Salt Mine is used to collect arbitrary data from minions and store it on
the master. This data is then made available to all minions via the
salt.modules.mine
module.
The data is gathered on the minion and sent back to the master where only the most recent data is maintained (if long term data is required use returners or the external job cache).
To enable the Salt Mine the mine_functions option needs to be applied to a minion. This option can be applied via the minion's configuration file, or the minion's Pillar. The mine_functions option dictates what functions are being executed and allows for arguments to be passed in. If no arguments are passed, an empty list must be added:
mine_functions:
test.ping: []
network.ip_addrs:
interface: eth0
cidr: '10.0.0.0/8'
Function aliases can be used to provide friendly names, usage intentions or to allow multiple calls of the same function with different arguments. There is a different syntax for passing positional and key-value arguments. Mixing positional and key-value arguments is not supported.
New in version 2014.7.
mine_functions:
network.ip_addrs: [eth0]
networkplus.internal_ip_addrs: []
internal_ip_addrs:
mine_function: network.ip_addrs
cidr: 192.168.0.0/16
ip_list:
- mine_function: grains.get
- ip_interfaces
The Salt Mine functions are executed when the minion starts and at a given interval by the scheduler. The default interval is every 60 minutes and can be adjusted for the minion via the mine_interval option:
mine_interval: 60
As of the 2015.5.0 release of salt, salt-ssh supports mine.get
.
Because the minions cannot provide their own mine_functions
configuration,
we retrieve the args for specified mine functions in one of three places, searched in the following order:
1. Roster data
2. Pillar
3. Master config
The mine_functions
are formatted exactly the same as in normal salt, just
stored in a different location. Here is an example of a flat roster containing
mine_functions
:
test:
host: 104.237.131.248
user: root
mine_functions:
cmd.run: ['echo "hello!"']
network.ip_addrs:
interface: eth0
Note
Because of the differences in the architecture of salt-ssh, mine.get
calls are somewhat inefficient. Salt must make a new salt-ssh call to each
of the minions in question to retrieve the requested data, much like a
publish call. However, unlike publish, it must run the requested function
as a wrapper function, so we can retrieve the function args from the pillar
of the minion in question. This results in a non-trivial delay in
retrieving the requested data.
One way to use data from Salt Mine is in a State. The values can be retrieved via Jinja and used in the SLS file. The following example is a partial HAProxy configuration file and pulls IP addresses from all minions with the "web" grain to add them to the pool of load balanced servers.
/srv/pillar/top.sls
:
base:
'G@roles:web':
- web
/srv/pillar/web.sls
:
mine_functions:
network.ip_addrs: [eth0]
/etc/salt/minion.d/mine.conf
:
mine_interval: 5
/srv/salt/haproxy.sls
:
haproxy_config:
file.managed:
- name: /etc/haproxy/config
- source: salt://haproxy_config
- template: jinja
/srv/salt/haproxy_config
:
<...file contents snipped...>
{% for server, addrs in salt['mine.get']('roles:web', 'network.ip_addrs', expr_form='grain').items() %}
server {{ server }} {{ addrs[0] }}:80 check
{% endfor %}
<...file contents snipped...>
Salt's External Authentication System (eAuth) allows for Salt to pass through command authorization to any external authentication system, such as PAM or LDAP.
Note
eAuth using the PAM external auth system requires salt-master to be run as root as this system needs root access to check authentication.
The external authentication system allows for specific users to be granted access to execute specific functions on specific minions. Access is configured in the master configuration file and uses the access control system:
external_auth:
pam:
thatch:
- 'web*':
- test.*
- network.*
steve:
- .*
The above configuration allows the user thatch
to execute functions
in the test and network modules on the minions that match the web* target.
User steve
is given unrestricted access to minion commands.
Note
The PAM module does not allow authenticating as root
.
To allow access to wheel modules or runner
modules the following @
syntax must be used:
external_auth:
pam:
thatch:
- '@wheel' # to allow access to all wheel modules
- '@runner' # to allow access to all runner modules
- '@jobs' # to allow access to the jobs runner and/or wheel module
Note
The runner/wheel markup is different, since there are no minions to scope the acl to.
Note
Globs will not match wheel or runners! They must be explicitly allowed with @wheel or @runner.
The external authentication system can then be used from the command-line by
any user on the same system as the master with the -a
option:
$ salt -a pam web\* test.ping
The system will ask the user for the credentials required by the authentication system and then publish the command.
To apply permissions to a group of users in an external authentication system,
append a %
to the ID:
external_auth:
pam:
admins%:
- '*':
- 'pkg.*'
With external authentication alone, the authentication credentials will be required with every call to Salt. This can be alleviated with Salt tokens.
Tokens are short term authorizations and can be easily created by just
adding a -T
option when authenticating:
$ salt -T -a pam web\* test.ping
Now a token will be created that has an expiration of 12 hours (by default).
This token is stored in a file named salt_token
in the active user's home
directory.
Once the token is created, it is sent with all subsequent communications. User authentication does not need to be entered again until the token expires.
Token expiration time can be set in the Salt master config file.
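The relevant master option is token_expire, expressed in seconds; for example, stating the 12-hour default explicitly:
token_expire: 43200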
Note
LDAP usage requires that you have installed python-ldap.
Salt supports both user and group authentication for LDAP (and Active Directory accessed via its LDAP interface)
LDAP configuration happens in the Salt master configuration file.
Server configuration values and their defaults:
auth.ldap.server: localhost
auth.ldap.port: 389
auth.ldap.tls: False
auth.ldap.scope: 2
auth.ldap.uri: ''
auth.ldap.tls: False
auth.ldap.no_verify: False
auth.ldap.anonymous: False
auth.ldap.groupou: 'Groups'
auth.ldap.groupclass: 'posixGroup'
auth.ldap.accountattributename: 'memberUid'
# These are only for Active Directory
auth.ldap.activedirectory: False
auth.ldap.persontype: 'person'
Salt also needs to know which Base DN to search for users and groups and the DN to bind to:
auth.ldap.basedn: dc=saltstack,dc=com
auth.ldap.binddn: cn=admin,dc=saltstack,dc=com
To bind to a DN, a password is required
auth.ldap.bindpw: mypassword
Salt uses a filter to find the DN associated with a user. Salt
substitutes the {{ username }}
value for the username when querying LDAP
auth.ldap.filter: uid={{ username }}
For OpenLDAP, to determine group membership, one can specify an OU that contains group data. This is prepended to the basedn to create a search path. The results are then filtered against auth.ldap.groupclass (default posixGroup) and the account's 'name' attribute (memberUid by default).
auth.ldap.groupou: Groups
Active Directory handles group membership differently, and does not utilize the
groupou
configuration variable. AD needs the following options in
the master config:
auth.ldap.activedirectory: True
auth.ldap.filter: sAMAccountName={{username}}
auth.ldap.accountattributename: sAMAccountName
auth.ldap.groupclass: group
auth.ldap.persontype: person
To determine group membership in AD, the username and password entered when LDAP is requested as the eAuth mechanism on the command line are used to bind to AD's LDAP interface. If this bind fails, the user is denied access regardless of group membership. Next, the distinguishedName of the user is looked up with the following LDAP search:
(&(<value of auth.ldap.accountattributename>={{username}})
(objectClass=<value of auth.ldap.persontype>)
)
This should return a distinguishedName that we can use to filter for group membership. Then the following LDAP query is executed:
(&(member=<distinguishedName from search above>)
(objectClass=<value of auth.ldap.groupclass>)
)
Individual LDAP users are then granted access in the external_auth block, as with other eAuth backends:
external_auth:
ldap:
test_ldap_user:
- '*':
- test.ping
To configure an LDAP group, append a %
to the ID:
external_auth:
ldap:
test_ldap_group%:
- '*':
- test.echo
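As with PAM, the LDAP backend is then selected on the command line with the -a option, and the user is prompted for LDAP credentials:
$ salt -a ldap web\* test.ping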
New in version 0.10.4.
Salt maintains a standard system used to open granular control to non-administrative users to execute Salt commands. The access control system has been applied to all systems used to configure access to non-administrative control interfaces in Salt. These interfaces include the peer system, the external auth system, and the client acl system.
The access control system mandates a standard configuration syntax used in all three of the aforementioned systems. While this adds functionality to the configuration in 0.10.4, it does not negate the old configuration.
Now specific functions can be opened up to specific minions from specific users in the case of external auth and client ACLs, and for specific minions in the case of the peer system.
The access controls are manifested using matchers in these configurations:
client_acl:
fred:
- web\*:
- pkg.list_pkgs
- test.*
- apache.*
In the above example, fred is able to send commands only to minions which match the specified glob target. This can be expanded to include other functions for other minions based on standard targets.
external_auth:
pam:
dave:
- test.ping
- mongo\*:
- network.*
- log\*:
- network.*
- pkg.*
- 'G@os:RedHat':
- kmod.*
steve:
- .*
The above allows for all minions to be hit by test.ping by dave, and adds a few functions that dave can execute on other minions. It also allows steve unrestricted access to salt commands.
New in version 0.9.7.
Since Salt executes jobs on many systems, it needs to be able to manage the jobs running on all of them.
Salt Minions maintain a proc directory in the Salt cachedir. The proc directory holds files named after the executed job ID. These files contain information about the currently running jobs on the minion and allow for jobs to be looked up. The proc directory lives under the cachedir; with a default configuration it is /var/cache/salt/proc.
Salt 0.9.7 introduced a few new functions to the saltutil module for managing jobs. These functions are:

running
Returns the data of all running jobs that are found in the proc directory.

find_job
Returns specific data about a certain job based on job id.

signal_job
Allows for a given jid to be sent a signal.

term_job
Sends a termination signal (SIGTERM, 15) to the process controlling the specified job.

kill_job
Sends a kill signal (SIGKILL, 9) to the process controlling the specified job.

These functions make up the core of the back end used to manage jobs at the minion level.
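For example, the running data can be inspected across all minions by calling the execution function directly:
# salt '*' saltutil.running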
A convenience runner front end and reporting system has been added as well. The jobs runner contains a number of functions to make viewing data easier and cleaner.
The active function runs saltutil.running on all minions and formats the return data about all running jobs in a much more usable and compact format. The active function will also compare jobs that have returned and jobs that are still running, making it easier to see what systems have completed a job and what systems are still being waited on.
# salt-run jobs.active
When jobs are executed the return data is sent back to the master and cached. By default it is cached for 24 hours, but this can be configured via the keep_jobs option in the master configuration.
Using the lookup_jid runner will display the same return data that the initial
job invocation with the salt command would display.
# salt-run jobs.lookup_jid <job id number>
Before finding a historic job, it may be required to find the job id. list_jobs will parse the cached execution data and display all of the job data for jobs that have already returned, in whole or in part.
# salt-run jobs.list_jobs
In Salt versions greater than 0.12.0, the scheduling system allows incremental executions on minions or the master. The schedule system exposes the execution of any execution function on minions or any runner on the master.
Scheduling is enabled via the schedule option in either the master or minion config files, or via a minion's pillar data. Schedules implemented via pillar data only require refreshing the minion's pillar data, for example by using saltutil.refresh_pillar. Schedules implemented in the master or minion config require restarting the application for the schedule to take effect.
Note
The scheduler executes different functions on the master and minions. When running on the master the functions reference runner functions, when running on the minion the functions specify execution functions.
A scheduled run has no output on the minion unless the config is set to info level or higher. Refer to minion logging settings.
Specify maxrunning to ensure that there are no more than N copies of a particular routine running. Use this for jobs that may be long-running and could step on each other or otherwise double execute. The default for maxrunning is 1.
States are executed on the minion, as all states are. You can pass positional arguments and provide a yaml dict of named arguments.
schedule:
job1:
function: state.sls
seconds: 3600
args:
- httpd
kwargs:
test: True
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour)
schedule:
job1:
function: state.sls
seconds: 3600
args:
- httpd
kwargs:
test: True
splay: 15
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) splaying the time between 0 and 15 seconds
schedule:
job1:
function: state.sls
seconds: 3600
args:
- httpd
kwargs:
test: True
splay:
start: 10
end: 15
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) splaying the time between 10 and 15 seconds
New in version 2014.7.0.
Frequency of jobs can also be specified using date strings supported by the python dateutil library. This requires python-dateutil to be installed on the minion.
schedule:
job1:
function: state.sls
args:
- httpd
kwargs:
test: True
when: 5:00pm
This will schedule the command: state.sls httpd test=True at 5:00pm minion localtime.
schedule:
job1:
function: state.sls
args:
- httpd
kwargs:
test: True
when:
- Monday 5:00pm
- Tuesday 3:00pm
- Wednesday 5:00pm
- Thursday 3:00pm
- Friday 5:00pm
This will schedule the command: state.sls httpd test=True at 5pm on Monday, Wednesday, and Friday, and 3pm on Tuesday and Thursday.
schedule:
job1:
function: state.sls
seconds: 3600
args:
- httpd
kwargs:
test: True
range:
start: 8:00am
end: 5:00pm
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) between the hours of 8am and 5pm. The range parameter must be a dictionary with the date strings using the dateutil format. This requires python-dateutil to be installed on the minion.
New in version 2014.7.0.
The scheduler also supports ensuring that there are no more than N copies of a particular routine running. Use this for jobs that may be long-running and could step on each other or pile up in case of infrastructure outage.
The default for maxrunning is 1.
schedule:
long_running_job:
function: big_file_transfer
jid_include: True
schedule:
log-loadavg:
function: cmd.run
seconds: 3660
args:
- 'logger -t salt < /proc/loadavg'
kwargs:
stateful: False
shell: /bin/sh
To set up a highstate to run on a minion every 60 minutes set this in the minion config or pillar:
schedule:
highstate:
function: state.highstate
minutes: 60
Time intervals can be specified as seconds, minutes, hours, or days.
Runner executions can also be specified on the master within the master configuration file:
schedule:
overstate:
function: state.over
seconds: 35
minutes: 30
hours: 3
The above configuration will execute the state.over runner every 3 hours, 30 minutes and 35 seconds, or every 12,635 seconds.
The scheduler is also useful for tasks like gathering monitoring data about a minion. This schedule option will gather status data and send it to a MySQL returner database:
schedule:
uptime:
function: status.uptime
seconds: 60
returner: mysql
meminfo:
function: status.meminfo
minutes: 5
returner: mysql
Since specifying the returner repeatedly can be tiresome, the
schedule_returner
option is available to specify one or a list of global
returners to be used by the minions when scheduling.
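For example, a minimal sketch in the minion config, reusing the mysql returner from above (a list of returners may be given instead of a single name):
schedule_returner: mysql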
The Salt Master maintains a job cache of all job executions which can be queried via the jobs runner. The way this job cache is managed is very pluggable via Salt's underlying returner interface.
A number of options are available when configuring the job cache. The default caching system uses local storage on the Salt Master and can be found in the job cache directory (on Linux systems this is typically /var/cache/salt/master/jobs). The default caching system is suitable for most deployments as it does not typically require any further configuration or management.
The default job cache is a temporary cache and jobs will be stored for 24 hours. If the default cache needs to store jobs for a different period, the time can be easily adjusted by changing the keep_jobs parameter in the Salt Master configuration file. The value is measured in hours:
keep_jobs: 24
Many deployments may wish to use an external database to maintain a long term register of executed jobs. Salt comes with two main mechanisms to do this: the master job cache and the external job cache. The difference is how the external data store is accessed.
New in version 2014.7.
The master job cache setting makes the built in job cache on the master modular. This system allows for the default cache to be swapped out by the Salt returner system. To configure the master job cache, set up an external returner database based on the instructions included with each returner and then simply add the following configuration to the master configuration file:
master_job_cache: mysql
The external job cache setting instructs the minions to directly contact the
data store. This scenario is helpful when the data store needs to be made
available to the minions. This can be an effective way to share historic data
across an infrastructure as data can be retrieved from the external job cache
via the ret
execution module.
To configure the external job cache, set up a returner database in the manner described in the specific returner documentation. Ensure that the returner database is accessible from the minions, and set the ext_job_cache setting in the master configuration file:
ext_job_cache: redis
The SDB interface is designed to store and retrieve data that, unlike pillars and grains, is not necessarily minion-specific. The initial design goal was to allow passwords to be stored in a secure database, such as one managed by the keyring package, rather than as plain-text files. However, as a generic database interface, it could conceptually be used for a number of other purposes.
SDB was added to Salt in version 2014.7.0. SDB is currently experimental, and should probably not be used in production.
In order to use the SDB interface, a configuration profile must be set up in either the master or minion configuration file. The configuration stanza includes the name/ID that the profile will be referred to as, a driver setting, and any other arguments that are necessary for the SDB module that will be used. For instance, a profile called mykeyring, which uses the system service in the keyring module, would look like:
mykeyring:
driver: keyring
service: system
It is recommended to keep the name of the profile simple, as it is used in the SDB URI as well.
SDB is designed to make small database queries (hence the name, SDB) using a compact URL. This allows users to reference a database value quickly inside a number of Salt configuration areas, without a lot of overhead. The basic format of an SDB URI is:
sdb://<profile>/<args>
The profile refers to the configuration profile defined in either the master or the minion configuration file. The args are specific to the module referred to in the profile, but will typically only need to refer to the key of a key/value pair inside the database. This is because the profile itself should define as many other parameters as possible.
For example, a profile might be set up to reference credentials for a specific OpenStack account. The profile might look like:
kevinopenstack:
driver: keyring
service: salt.cloud.openstack.kevin
And the URI used to reference the password might look like:
sdb://kevinopenstack/password
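Assuming the sdb execution module is available in your version of Salt, such a value can be fetched for testing with salt-call:
$ salt-call sdb.get sdb://kevinopenstack/password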
There is currently one function that MUST exist in any SDB module (get()) and one that MAY exist (set_()). If using a set_() function, a __func_alias__ dictionary MUST be declared in the module as well:
__func_alias__ = {
'set_': 'set',
}
This is because set is a Python built-in, and therefore functions should not be created which are called set(). The __func_alias__ functionality is provided via Salt's loader interfaces, and allows legally-named functions to be referred to using names that would otherwise be unwise to use.
The get() function is required, as it will be called via functions in other areas of the code which make use of the sdb:// URI. For example, the config.get function in the config execution module uses this function.
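For instance (the key name here is hypothetical), a minion config value can point at SDB and be resolved transparently through config.get:
# in the minion config
openstack_password: sdb://kevinopenstack/password
$ salt-call config.get openstack_password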
The set_() function may be provided, but is not required, as some sources may be read-only, or may be otherwise unwise to access via a URI (for instance, because of SQL injection attacks).
A simple example of an SDB module is salt/sdb/keyring_db.py, as it provides basic examples of most, if not all, of the types of functionality that are available not only for SDB modules, but for Salt modules in general.
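To make those rules concrete, below is a minimal sketch of a custom SDB module. The in-memory store is purely illustrative; a real driver would use the connection details from the configured profile, which Salt passes in via the profile argument.

# sketch of a custom SDB module backed by an in-memory dict
__func_alias__ = {
    'set_': 'set',
}

_STORE = {}


def get(key, profile=None):
    '''
    Return the value stored under key, or None if it is not set.
    '''
    return _STORE.get(key)


def set_(key, value, profile=None):
    '''
    Store value under key and return it.
    '''
    _STORE[key] = value
    return value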
The Salt Event System is used to fire off events enabling third party applications or external processes to react to behavior within Salt.
The event system is composed of two primary components:
- The event sockets, which publish events.
- The event library, which can listen to events and send events into the salt system.
These events are fired on the Salt Master event bus. This list is not comprehensive.
salt/auth
Fired when a minion performs an authentication check with the master.
Note
Minions fire auth events on fairly regular basis for a number
of reasons. Writing reactors to respond to events through
the auth cycle can lead to infinite reactor event loops
(minion tries to auth, reactor responds by doing something
that generates another auth event, minion sends auth event,
etc.). Consider reacting to salt/key
or salt/minion/<MID>/start
or firing a custom event tag instead.
salt/minion/<MID>/start
Fired every time a minion connects to the Salt master.
Variables: id -- The minion ID.
salt/key
Fired when accepting and rejecting minions keys on the Salt master.
Warning
If a master is in auto_accept mode, salt/key events will not be fired when the keys are accepted. In addition, pre-seeding keys (as happens through Salt-Cloud) will not cause firing of these events.
salt/job/<JID>/new
Fired as a new job is sent out to minions.
salt/job/<JID>/ret/<MID>
Fired each time a minion returns data for a job.
salt/presence/present
Events fired on a regular interval about currently connected, newly
connected, or recently disconnected minions. Requires the
presence_events
setting to be enabled.
Variables: present -- A list of minions that are currently connected to the Salt master.
salt/presence/change
Fired when the Presence system detects new minions connect or disconnect.
Unlike other Master events, salt-cloud events are not fired on behalf of a Salt Minion. Instead, salt-cloud events are fired on behalf of a VM. This is because the minion-to-be may not yet exist to fire events, or may have already been destroyed.
This behavior is reflected by the name
variable in the event data for
salt-cloud
events as compared to the id
variable for Salt
Minion-triggered events.
salt/cloud/<VM NAME>/creating
Fired when salt-cloud starts the VM creation process.
salt/cloud/<VM NAME>/deploying
Fired when the VM is available and salt-cloud begins deploying Salt to the new VM.
salt/cloud/<VM NAME>/requesting
Fired when salt-cloud sends the request to create a new VM.
salt/cloud/<VM NAME>/querying
Fired when salt-cloud queries data for a new instance.
salt/cloud/<VM NAME>/tagging
Fired when salt-cloud tags a new instance.
salt/cloud/<VM NAME>/waiting_for_ssh
Fired while the salt-cloud deploy process is waiting for ssh to become available on the new instance.
salt/cloud/<VM NAME>/deploy_script
Fired once the deploy script is finished.
Variables: event -- description of the event.
salt/cloud/<VM NAME>/created
Fired once the new instance has been fully created.
salt/cloud/<VM NAME>/destroying
Fired when salt-cloud requests the destruction of an instance.
salt/cloud/<VM NAME>/destroyed
Fired when an instance has been destroyed.
Salt's Event Bus is used heavily within Salt and it is also written to integrate heavily with existing tooling and scripts. There are a variety of ways to consume it.
The quickest way to watch the event bus is by calling the state.event runner:
salt-run state.event pretty=True
That runner is designed to interact with the event bus from external tools and shell scripts. See the documentation for more examples.
Salt's event bus can also be consumed as an HTTP stream from external tools or services via salt.netapi.rest_cherrypy.app.Events.
curl -SsNk https://salt-api.example.com:8000/events?token=05A3
Python scripts can access the event bus only as the same system user that Salt is running as. The event system is accessed via the event library. To listen to events, a SaltEvent object needs to be created and then the get_event function needs to be run. The SaltEvent object needs to know the location of the Salt Unix sockets; in the configuration this is the sock_dir option, which defaults to /var/run/salt/master on most systems.
The following code will check for a single event:
import salt.config
import salt.utils.event
opts = salt.config.client_config('/etc/salt/master')
event = salt.utils.event.get_event(
'master',
sock_dir=opts['sock_dir'],
transport=opts['transport'],
opts=opts)
data = event.get_event()
Events will also use a "tag". Tags allow for events to be filtered by prefix. By default all events will be returned. If only authentication events are desired, then pass the tag "salt/auth".
The get_event method has a default poll time of 5 seconds. To change this time, set the "wait" option.
The following example will only listen for auth events and will wait for 10 seconds instead of the default 5.
data = event.get_event(wait=10, tag='salt/auth')
To retrieve the tag as well as the event data, pass full=True:
evdata = event.get_event(wait=10, tag='salt/job', full=True)
tag, data = evdata['tag'], evdata['data']
Instead of looking for a single event, the iter_events method can be used to make a generator which will continually yield salt events. The iter_events method also accepts a tag, but not a wait time:
for data in event.iter_events(tag='salt/auth'):
    print(data)
And finally event tags can be globbed, such as they can be in the Reactor, using the fnmatch library.
import fnmatch
import salt.config
import salt.utils.event
opts = salt.config.client_config('/etc/salt/master')
sevent = salt.utils.event.get_event(
'master',
sock_dir=opts['sock_dir'],
transport=opts['transport'],
opts=opts)
while True:
    ret = sevent.get_event(full=True)
    if ret is None:
        continue
    if fnmatch.fnmatch(ret['tag'], 'salt/job/*/ret/*'):
        do_something_with_job_return(ret['data'])
It is possible to fire events on either the minion's local bus or to fire events intended for the master.
To fire a local event from the minion on the command line call the
event.fire
execution function:
salt-call event.fire '{"data": "message to be sent in the event"}' 'tag'
To fire an event to be sent up to the master from the minion call the
event.send
execution function. Remember
YAML can be used at the CLI in function arguments:
salt-call event.send 'myco/mytag/success' '{success: True, message: "It works!"}'
If a process is listening on the minion, it may be useful for a user on the master to fire an event to it:
# Job on minion
import salt.utils.event
event = salt.utils.event.MinionEvent(**__opts__)
for evdata in event.iter_events(tag='customtag/'):
    return evdata  # do your processing here...
salt minionname event.fire '{"data": "message for the minion"}' 'customtag/african/unladen'
Events can be very useful when writing execution modules, in order to inform various processes on the master when a certain task has taken place. This is easily done using the normal cross-calling syntax:
# /srv/salt/_modules/my_custom_module.py
def do_something():
    '''
    Do something and fire an event to the master when finished

    CLI Example::

        salt '*' my_custom_module.do_something
    '''
    # do something!
    __salt__['event.send']('myco/my_custom_module/finished', {
        'finished': True,
        'message': "The something is finished!",
    })
Firing events from custom Python code is quite simple and mirrors how it is done at the CLI:
import salt.client
caller = salt.client.Caller()
caller.sminion.functions['event.send'](
'myco/myevent/success',
{
'success': True,
'message': "It works!",
}
)
The beacon system allows the minion to hook into system processes and
continually translate external events into the salt event bus. The
primary example of this is the inotify
beacon. This
beacon uses inotify to watch configured files or directories on the minion for
changes, creation, deletion etc.
This allows for the changes to be sent up to the master where the reactor can respond to changes.
The beacon system, like many others in Salt, can be configured via the minion pillar, grains, or local config file:
beacons:
inotify:
/etc/httpd/conf.d: {}
/opt: {}
Optionally, a beacon can be run on an interval other than the default loop_interval, which is typically set to 1 second. To run a beacon every 5 seconds, for example, provide an interval argument to a beacon.
beacons:
inotify:
/etc/httpd/conf.d: {}
/opt: {}
interval: 5
load:
- 1m:
- 0.0
- 2.0
- 5m:
- 0.0
- 1.5
- 15m:
- 0.1
- 1.0
- interval: 10
Beacon plugins use the standard Salt loader system, meaning that many of the constructs from other plugin systems hold true, such as the __virtual__ function.
The important function in the Beacon Plugin is the beacon function. When the beacon is configured to run, this function will be executed repeatedly by the minion. The beacon function therefore cannot block and should be as lightweight as possible. The beacon function also must return a list of dicts; each dict in the list will be translated into an event on the master. Please see the inotify beacon as an example.
The beacons system will look for a function named beacon in the module. If this function is not present then the beacon will not be fired. This function is called on a regular basis and defaults to being called on every iteration of the minion, which can be tens to hundreds of times a second. This means that the beacon function cannot block and should not be CPU or IO intensive.
The beacon function will be passed in the configuration for the executed beacon. This makes it easy to establish a flexible configuration for each called beacon. This is also the preferred way to ingest the beacon's configuration as it allows for the configuration to be dynamically updated while the minion is running by configuring the beacon in the minion's pillar.
The information returned from the beacon is expected to follow a predefined structure. The returned value needs to be a list of dictionaries (standard python dictionaries are preferred, no ordered dicts are needed).
The dictionaries represent individual events to be fired on the minion and master event buses. Each dict is a single event. The dict can contain any arbitrary keys but the 'tag' key will be extracted and added to the tag of the fired event.
The return data structure would look something like this:
[{'changes': ['/foo/bar'], 'tag': 'foo'},
{'changes': ['/foo/baz'], 'tag': 'bar'}]
Execution modules are still the preferred location for all work and system interaction to happen in Salt. For this reason the __salt__ variable is available inside the beacon.
Please be careful when calling functions in __salt__, while this is the preferred means of executing complicated routines in Salt not all of the execution modules have been written with beacons in mind. Watch out for execution modules that may be CPU intense or IO bound. Please feel free to add new execution modules and functions to back specific beacons.
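As a minimal sketch of these rules, a hypothetical beacon module (the watch_paths name and config shape are illustrative, not a bundled beacon) could look like:

# hypothetical beacon: fire an event for each configured path
# that does not exist on the minion
import os


def beacon(config):
    '''
    Expects minion config shaped like:

    beacons:
      watch_paths:
        paths:
          - /etc/important.conf
    '''
    ret = []
    for path in config.get('paths', []):
        if not os.path.exists(path):
            # each dict becomes one event; 'tag' is appended to the
            # tag of the fired event
            ret.append({'tag': 'missing', 'path': path})
    return ret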
In addition to the processes that the Salt Master automatically spawns, it is possible to configure it to start additional custom processes.
This is useful if a dedicated process is needed that should run throughout
the life of the Salt Master. For periodic independent tasks, a
scheduled runner
may be more appropriate.
Processes started in this way will be restarted if they die and will be killed when the Salt Master is shut down.
Processes are declared in the master config file with the ext_processes option. Processes will be started in the order they are declared.
ext_processes:
- mymodule.TestProcess
- mymodule.AnotherProcess
# Import python libs
import time
import logging
from multiprocessing import Process
# Import Salt libs
from salt.utils.event import SaltEvent
log = logging.getLogger(__name__)
class TestProcess(Process):
    def __init__(self, opts):
        Process.__init__(self)
        self.opts = opts

    def run(self):
        self.event = SaltEvent('master', self.opts['sock_dir'])
        i = 0
        while True:
            self.event.fire_event({'iteration': i}, 'ext_processes/test{0}'.format(i))
            i += 1
            time.sleep(60)
The Salt Syndic interface is a powerful tool which allows for the construction of Salt command topologies. A basic Salt setup has a Salt Master commanding a group of Salt Minions. The Syndic interface is a special passthrough minion: it is run on a master and connects to another master; the master that the Syndic minion is listening to can then control the minions attached to the master running the syndic.
Support for many layouts is not presented with the intent of proposing the use of any single topology, but to allow a more flexible method of controlling many systems.
Since the Syndic only needs to be attached to a higher level master the
configuration is very simple. On a master that is running a syndic to connect
to a higher level master the syndic_master
option needs to be
set in the master config file. The syndic_master
option contains the
hostname or IP address of the master server that can control the master that
the syndic is running on.
The master that the syndic connects to sees the syndic as an ordinary minion, and treats it as such. The higher level master will need to accept the syndic's minion key like any other minion. This master will also need to set the order_masters value in the configuration to True. The order_masters option in the config on the higher level master is very important: to control a syndic, extra information needs to be sent with the publications, and the order_masters option makes sure that the extra data is sent out.
To sum up, you have these configuration options available on the master side:
- syndic_master: MasterOfMaster IP/address
- syndic_master_port: MasterOfMaster ret_port
- syndic_log_file: path to the logfile (absolute or not)
- syndic_pidfile: path to the pidfile (absolute or not)
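For example, a minimal sketch of the two sides (the address is illustrative):
# /etc/salt/master on the lower-level master running salt-syndic
syndic_master: 10.10.0.1
# /etc/salt/master on the Master of Masters
order_masters: True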
Each Syndic must provide its own file_roots
directory. Files will not be
automatically transferred from the master-master.
The Syndic is a separate daemon that needs to be started on the master that is controlled by a higher master. Starting the Syndic daemon is the same as starting the other Salt daemons.
# salt-syndic
Note
If you have an exceptionally large infrastructure or many layers of syndics, you may find that the CLI doesn't wait long enough for the syndics to return their events. If you think this is the case, you can set the syndic_wait value in the upper master config. The default value is 1, and should work for the majority of deployments.
The salt-syndic is little more than a command and event forwarder. When a command is issued from a higher-level master, it will be received by the configured syndics on lower-level masters, and propagated to their minions, and to other syndics that are bound to them further down in the hierarchy. When events and job return data are generated by minions, they are aggregated back, through the same syndic(s), to the master which issued the command.
The master sitting at the top of the hierarchy (the Master of Masters) will not
be running the salt-syndic
daemon. It will have the salt-master
daemon running, and optionally, the salt-minion
daemon. Each syndic
connected to an upper-level master will have both the salt-master
and the
salt-syndic
daemon running, and optionally, the salt-minion
daemon.
Nodes on the lowest points of the hierarchy (minions which do not propagate
data to another level) will only have the salt-minion
daemon running. There
is no need for either salt-master
or salt-syndic
to be running on a
standard minion.
In order for the high-level master to return information from minions that are
below the syndic(s), the CLI requires a short wait time in order to allow the
syndic(s) to gather responses from their minions. This value is defined in the
syndic_wait
and has a default of five seconds.
While it is possible to run a syndic without a minion installed on the same machine,
it is recommended, for a faster CLI response time, to do so. Without a minion
installed on the syndic, the timeout value of syndic_wait
increases
significantly - about three-fold. With a minion installed on the syndic, the CLI
timeout resides at the value defined in syndic_wait
.
Note
To reduce the amount of time the CLI waits for minions to respond, install a minion
on the syndic or tune the value of the syndic_wait
configuration.
Proxy minions are a developing Salt feature that enables controlling devices that, for whatever reason, cannot run a standard salt-minion. Examples include network gear that has an API but runs a proprietary OS, devices with limited CPU or memory, or devices that could run a minion, but for security reasons, will not.
Proxy minions are not an "out of the box" feature. Because there are an infinite number of controllable devices, you will most likely have to write the interface yourself. Fortunately, this is only as difficult as the actual interface to the proxied device. Devices that have an existing Python module (PyUSB for example) would be relatively simple to interface. Code to control a device that has an HTML REST-based interface should be easy. Code to control your typical housecat would be excellent source material for a PhD thesis.
Salt proxy-minions provide the 'plumbing' that allows device enumeration and discovery, control, status, remote execution, and state management.
The following diagram may be helpful in understanding the structure of a Salt installation that includes proxy-minions:
The key thing to remember is the left-most section of the diagram. Salt's nature is to have a minion connect to a master, then the master may control the minion. However, for proxy minions, the target device cannot run a minion, and thus must rely on a separate minion to fire up the proxy-minion and make the initial and persistent connection.
After the proxy minion is started and initiates its connection to the 'dumb' device, it connects back to the salt-master and ceases to be affiliated in any way with the minion that started it.
To create support for a proxied device one needs to create four things:
Proxy minions require no configuration parameters in /etc/salt/master.
Salt's Pillar system is ideally suited for configuring proxy-minions. Proxies can either be designated via a pillar file in pillar_roots, or through an external pillar. External pillars afford the opportunity for interfacing with a configuration management system, database, or other knowledgeable system that may already contain all the details of proxy targets. To use static files in pillar_roots, pattern your files after the following examples, which are based on the diagram above:
/srv/pillar/top.sls
base:
minioncontroller1:
- networkswitches
minioncontroller2:
- reallydumbdevices
minioncontroller3:
- smsgateway
/srv/pillar/networkswitches.sls
proxy:
dumbdevice1:
proxytype: networkswitch
host: 172.23.23.5
username: root
passwd: letmein
dumbdevice2:
proxytype: networkswitch
host: 172.23.23.6
username: root
passwd: letmein
dumbdevice3:
proxytype: networkswitch
host: 172.23.23.7
username: root
passwd: letmein
/srv/pillar/reallydumbdevices.sls
proxy:
dumbdevice4:
proxytype: i2c_lightshow
i2c_address: 1
dumbdevice5:
proxytype: i2c_lightshow
i2c_address: 2
dumbdevice6:
proxytype: 433mhz_wireless
/srv/pillar/smsgateway.sls
proxy:
minioncontroller3:
dumbdevice7:
proxytype: sms_serial
deventry: /dev/tty04
Note the contents of each minioncontroller key may differ widely based on the type of device that the proxy-minion is managing.
In the above example, because of the way pillar works, each of the salt-minions that fork off the proxy minions will only see the keys specific to the proxies it will be handling. That is, minioncontroller1 will only see the connection information for dumbdevices 1, 2, and 3; minioncontroller2 will see configuration data for dumbdevices 4, 5, and 6; and minioncontroller3 will be privy to dumbdevice7.
Also, in general, proxy-minions are lightweight, so the machines that run them could conceivably control a large number of devices. The example above is just to illustrate that it is possible for the proxy services to be spread across many machines if necessary, or intentionally run on machines that need to control devices because of some physical interface (e.g. i2c and serial above). Another reason to divide proxy services might be security. In more secure environments only certain machines may have a network path to certain devices.
Now our salt-minions know if they are supposed to spawn a proxy-minion process to control a particular device. That proxy-minion process will initiate a connection back to the master to enable control.
A proxytype is a Python class called 'Proxyconn' that encapsulates all the code necessary to interface with a device. Proxytypes are located inside the salt.proxy module. At a minimum a proxytype object must implement the following methods:
proxytype(self): Returns a string with the name of the proxy type.

proxyconn(self, **kwargs): Provides the primary way to connect and communicate with the device. Some proxyconns instantiate a particular object that opens a network connection to a device and leaves the connection open for communication. Others simply abstract a serial connection or even implement endpoints to communicate via REST over HTTP.

id(self, opts): Returns a unique, unchanging id for the controlled device. This is the "name" of the device, and is used by the salt-master for targeting and key authentication.

Optionally, the class may define a shutdown(self, opts) method if the controlled device should be informed when the minion goes away cleanly.
It is highly recommended that the test.ping
execution module also be defined
for a proxytype. The code for ping
should contact the controlled device and make
sure it is really available.
Here is an example proxytype used to interface to Juniper Networks devices that run the Junos operating system. Note the additional library requirements--most of the "hard part" of talking to these devices is handled by the jnpr.junos, jnpr.junos.utils, and jnpr.junos.cfg modules.
# Import python libs
import logging
import os

# Import third party libs; guard the import so HAS_JUNOS can be
# checked when the libraries are missing
try:
    import jnpr.junos
    import jnpr.junos.utils
    import jnpr.junos.cfg
    HAS_JUNOS = True
except ImportError:
    HAS_JUNOS = False
class Proxyconn(object):
    def __init__(self, details):
        self.conn = jnpr.junos.Device(user=details['username'], host=details['host'], password=details['passwd'])
        self.conn.open()
        self.conn.bind(cu=jnpr.junos.cfg.Resource)

    def proxytype(self):
        return 'junos'

    def id(self, opts):
        return self.conn.facts['hostname']

    def ping(self):
        return self.conn.connected

    def shutdown(self, opts):
        print('Proxy module {} shutting down!!'.format(opts['id']))
        try:
            self.conn.close()
        except Exception:
            pass
Grains are data about minions. Most proxied devices will have a paltry amount of data as compared to a typical Linux server. Because proxy-minions are started by a regular minion, they inherit a sizeable number of grain settings which can be useful, especially when targeting (PYTHONPATH, for example).
All proxy minions set a grain called 'proxy'. If it is present, you know the minion is controlling another device. To add more grains to your proxy minion for a particular device, create a file in salt/grains named [proxytype].py and place inside it the different functions that need to be run to collect the data you are interested in.
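A minimal sketch of such a grains file follows (the junos name, function, and returned keys are illustrative):

# salt/grains/junos.py -- sketch; each public function returns a
# dict of grains merged into the proxy minion's grain data


def facts():
    # a real implementation would query the proxied device;
    # these values are placeholders
    return {'vendor': 'Juniper', 'os_family': 'junos'}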
Salt states and execution modules, by and large, cannot "automatically" work
with proxied devices. Execution modules like pkg
or sqlite3
have no
meaning on a network switch or a housecat. For a state/execution module to be
available to a proxy-minion, the __proxyenabled__
variable must be defined
in the module as an array containing the names of all the proxytypes that this
module can support. The array can contain the special value *
to indicate
that the module supports all proxies.
If no __proxyenabled__
variable is defined, then by default, the
state/execution module is unavailable to any proxy.
Here is an excerpt from a module that was modified to support proxy-minions:
def ping():
    if 'proxyobject' in __opts__:
        if hasattr(__opts__['proxyobject'], 'ping'):
            return __opts__['proxyobject'].ping()
        else:
            return False
    else:
        return True
And then in salt.proxy.junos we find
def ping(self):
    return self.connected
The Junos API layer lacks the ability to do a traditional 'ping', so the example simply checks the connection object field that indicates if the ssh connection was successfully made to the device.
Note
The RAET transport is in very early development. It is functional, but no promises are yet made as to its reliability or security. As for reliability and security, the encryption used has been audited and our tests show that RAET is reliable; with this said, we are still conducting more security audits and pushing the reliability. This document outlines the encryption used in RAET.
New in version 2014.7.0.
The Reliable Asynchronous Event Transport, or RAET, is an alternative transport medium developed specifically with Salt in mind. It has been developed to allow queuing to happen up on the application layer and comes with socket layer encryption. It also abstracts a great deal of control over the socket layer and makes it easy to bubble up errors and exceptions.
RAET also offers very powerful message routing capabilities, allowing for messages to be routed between processes on a single machine all the way up to processes on multiple machines. Messages can also be restricted, allowing processes to be sent messages of specific types from specific sources allowing for trust to be established.
Using RAET in Salt is easy, the main difference is that the core dependencies change, instead of needing pycrypto, M2Crypto, ZeroMQ, and PYZMQ, the packages libsodium, libnacl, ioflo, and raet are required. Encryption is handled very cleanly by libnacl, while the queueing and flow control is handled by ioflo. Distribution packages are forthcoming, but libsodium can be easily installed from source, or many distributions do ship packages for it. The libnacl and ioflo packages can be easily installed from pypi, distribution packages are in the works.
Once the new dependencies are installed, the 2014.7 release or higher of Salt needs to be installed.
Once installed, modify the configuration files for the minion and master to set the transport to raet:
/etc/salt/master:
transport: raet
/etc/salt/minion:
transport: raet
Now start salt as it would normally be started, the minion will connect to the master and share long term keys, which can then in turn be managed via salt-key. Remote execution and salt states will function in the same way as with Salt over ZeroMQ.
The 2014.7 release of RAET is not complete! The Syndic and Multi Master have not been completed yet and these are slated for completion in the 2015.5.0 release.
Also, Salt-Raet allows for more control over the client, but these hooks have not been implemented yet; therefore the client still uses the same system as the ZeroMQ client. This means that the extra reliability that RAET exposes has not yet been implemented in the CLI client.
Why make an alternative transport for Salt? There are many reasons, but the primary motivation came from customer requests: many large companies asked to run Salt over an alternative transport. The reasoning varied, from performance and scaling improvements to licensing concerns. These customers have partnered with SaltStack to make RAET a reality.
RAET has been designed to allow Salt to have greater communication capabilities. It has been designed to allow for development of features which our ZeroMQ topologies can't match.
Many of the proposed features are still under development and will be announced as they enter proof of concept phases, but these features include salt-fuse - a filesystem over salt, salt-vt - a parallel api driven shell over the salt transport and many others.
RAET is reliable, hence the name (Reliable Asynchronous Event Transport).
The concern posed by some over RAET reliability is based on the fact that RAET uses UDP instead of TCP and UDP does not have built in reliability.
RAET itself implements the needed reliability layers that are not natively present in UDP, this allows RAET to dynamically optimize packet delivery in a way that keeps it both reliable and asynchronous.
When using RAET, ZeroMQ is not required. RAET is a complete networking replacement. It is noteworthy that RAET is not a ZeroMQ replacement in a general sense; the ZeroMQ constructs are not reproduced in RAET, but they are instead implemented in a way that is specific to Salt's needs.
RAET is primarily an async communication layer over truly async connections, defaulting to UDP. ZeroMQ is over TCP and abstracts async constructs within the socket layer.
Salt is not dropping ZeroMQ support and has no immediate plans to do so.
RAET uses Dan Bernstein's NACL encryption libraries and CurveCP handshake. The libnacl python binding binds to both libsodium and tweetnacl to execute the underlying cryptography. This allows us to completely rely on an externally developed cryptography system.
For more information on libsodium and CurveCP please see: http://doc.libsodium.org/ http://curvecp.org/
Raet Programming Introduction
The Salt Windows Software Repository provides a package manager and software repository similar to what is provided by yum and apt on Linux.
It permits the installation of software using the installers on remote Windows machines. In many senses, the operation is similar to that of the other package managers Salt is aware of:
- pkg.installed and similar states work on Windows.
- pkg.install and similar module functions work on Windows.
- Each Windows machine needs to have pkg.refresh_db executed against it to pick up the latest version of the package database.
High-level differences to yum and apt are:
- The repository metadata (SLS files) is hosted through either Salt or git.
- Packages can be downloaded from within the Salt repository, a git repository, or from HTTP(S) or FTP URLs.
- No dependencies are managed; dependencies between packages must be handled manually.
The install state/module function of the Windows package manager works roughly as follows:
1. Execute pkg.list_pkgs and store the result.
2. Check if any supplied packages are already installed (by comparing against the stored pkg.list_pkgs results).
3. Install the packages with their installers.
4. Execute pkg.list_pkgs and compare to the result stored from before installation.
5. Success and changes are determined by the difference between the pre and post pkg.list_pkgs results.
If there are any problems in using the package manager, it is likely to be due to the data in your sls files not matching the difference between the pre and post pkg.list_pkgs results.
By default, the Windows software repository is found at /srv/salt/win/repo. This can be changed in the master config file (default location is /etc/salt/master) by modifying the win_repo variable. Each piece of software should have its own directory which contains the installers and a package definition file. This package definition file is a YAML file named init.sls.
The package definition file should look similar to this example for Firefox:
/srv/salt/win/repo/firefox/init.sls
Firefox:
17.0.1:
installer: 'salt://win/repo/firefox/English/Firefox Setup 17.0.1.exe'
full_name: Mozilla Firefox 17.0.1 (x86 en-US)
locale: en_US
reboot: False
install_flags: ' -ms'
uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
uninstall_flags: ' /S'
16.0.2:
installer: 'salt://win/repo/firefox/English/Firefox Setup 16.0.2.exe'
full_name: Mozilla Firefox 16.0.2 (x86 en-US)
locale: en_US
reboot: False
install_flags: ' -ms'
uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
uninstall_flags: ' /S'
15.0.1:
installer: 'salt://win/repo/firefox/English/Firefox Setup 15.0.1.exe'
full_name: Mozilla Firefox 15.0.1 (x86 en-US)
locale: en_US
reboot: False
install_flags: ' -ms'
uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
uninstall_flags: ' /S'
More examples can be found here: https://github.com/saltstack/salt-winrepo
The version number and full_name need to match the output from pkg.list_pkgs so that the status can be verified when running highstate. Note: it is still possible to successfully install packages using pkg.install even if they don't match, which can make this hard to troubleshoot.
salt 'test-2008' pkg.list_pkgs
test-2008
----------
7-Zip 9.20 (x64 edition):
9.20.00.0
Microsoft .NET Framework 4 Client Profile:
4.0.30319,4.0.30319
Microsoft .NET Framework 4 Extended:
4.0.30319,4.0.30319
Microsoft Visual C++ 2008 Redistributable - x64 9.0.21022:
9.0.21022
Mozilla Firefox 17.0.1 (x86 en-US):
17.0.1
Mozilla Maintenance Service:
17.0.1
NSClient++ (x64):
0.3.8.76
Notepad++:
6.4.2
Salt Minion 0.16.0:
0.16.0
If any of these preinstalled packages already exist in winrepo the full_name will be automatically renamed to their package name during the next update (running highstate or installing another package).
test-2008:
----------
7zip:
9.20.00.0
Microsoft .NET Framework 4 Client Profile:
4.0.30319,4.0.30319
Microsoft .NET Framework 4 Extended:
4.0.30319,4.0.30319
Microsoft Visual C++ 2008 Redistributable - x64 9.0.21022:
9.0.21022
Mozilla Maintenance Service:
17.0.1
Notepad++:
6.4.2
Salt Minion 0.16.0:
0.16.0
firefox:
17.0.1
nsclient:
0.3.9.328
Add msiexec: True if using an MSI installer requiring the use of msiexec /i to install and msiexec /x to uninstall.
The install_flags and uninstall_flags are flags passed to the software installer to cause it to perform a silent install. These can often be found by adding /? or /h when running the installer from the command line. A great resource for finding these silent install flags is the WPKG project's wiki.
7zip:
9.20.00.0:
installer: salt://win/repo/7zip/7z920-x64.msi
full_name: 7-Zip 9.20 (x64 edition)
reboot: False
install_flags: ' /q '
msiexec: True
uninstaller: salt://win/repo/7zip/7z920-x64.msi
uninstall_flags: ' /qn'
Add cache_dir: True when the installer requires multiple source files. The directory containing the installer file will be recursively cached on the minion. This only applies to salt: installer URLs.
sqlexpress:
12.0.2000.8:
installer: 'salt://win/repo/sqlexpress/setup.exe'
full_name: Microsoft SQL Server 2014 Setup (English)
reboot: False
install_flags: ' /ACTION=install /IACCEPTSQLSERVERLICENSETERMS /Q'
cache_dir: True
Once the sls file has been created, generate the repository cache file with the winrepo runner:
salt-run winrepo.genrepo
Then update the repository cache file on your minions, exactly how it's done for the Linux package managers:
salt '*' pkg.refresh_db
Now you can query the available version of Firefox using the Salt pkg module.
salt '*' pkg.available_version Firefox
{'Firefox': {'15.0.1': 'Mozilla Firefox 15.0.1 (x86 en-US)',
'16.0.2': 'Mozilla Firefox 16.0.2 (x86 en-US)',
'17.0.1': 'Mozilla Firefox 17.0.1 (x86 en-US)'}}
As you can see, there are three versions of Firefox available for installation.
You can refer to a software package by its name or by its full_name surrounded by single quotes.
salt '*' pkg.install 'Firefox'
The above line will install the latest version of Firefox.
salt '*' pkg.install 'Firefox' version=16.0.2
The above line will install version 16.0.2 of Firefox.
If a different version of the package is already installed it will be replaced with the version in winrepo (only if the package itself supports live updating).
You can also specify the full name:
salt '*' pkg.install 'Mozilla Firefox 17.0.1 (x86 en-US)'
Uninstall software using the pkg module:
salt '*' pkg.remove 'Firefox'
salt '*' pkg.purge 'Firefox'
pkg.purge
just executes pkg.remove
on Windows. At some point in the
future pkg.purge
may direct the installer to remove all configs and
settings for software packages that support that option.
In order to facilitate managing a Salt Windows software repo with Salt on a standalone minion on Windows, a new module named winrepo has been added to Salt. winrepo matches what is available in the salt runner and allows you to manage the Windows software repo contents. Example:
salt '*' winrepo.genrepo
Windows software package definitions can also be hosted in one or more git repositories. The default repo is one hosted on GitHub.com by SaltStack, Inc., which includes package definitions for open source software. This repo points to the HTTP or FTP locations of the installer files. Anyone is welcome to send a pull request to this repo to add new package definitions. Browse the repo here: https://github.com/saltstack/salt-winrepo .
Configure which git repos the master can search for package definitions by
modifying or extending the win_gitrepos
configuration option list in the
master config.
Check out each git repo in win_gitrepos, compile your package repository cache, and then refresh each minion's package cache:
salt-run winrepo.update_git_repos
salt-run winrepo.genrepo
salt '*' pkg.refresh_db
If the package seems to install properly but salt reports a failure, it is likely you have a version or full_name mismatch. Check the exact full_name and version used by the package. Use pkg.list_pkgs to check that the names and versions exactly match what is installed.
Ensure you have (re)generated the repository cache file and then updated the repository cache on the relevant minions:
salt-run winrepo.genrepo
salt 'MINION' pkg.refresh_db
On Windows Server 2003, you need to install the optional Windows component "WMI Windows Installer Provider" to have a full list of installed packages. If you don't have this, salt-minion can't report some installed software.
Salt is capable of managing Windows systems; however, due to various differences between the operating systems, there are some things you need to keep in mind.
This document will contain any quirks that apply across Salt or generally across multiple module functions. Any Windows-specific behavior for particular module functions will be documented in the module function documentation. Therefore this document should be read in conjunction with the module function documentation.
Salt was originally written for managing Unix-based systems, and therefore the file module functions were designed around that security model. Rather than trying to shoehorn that model onto Windows, Salt ignores these parameters and makes non-applicable module functions unavailable instead.
One of the commonly ignored parameters is the group
parameter for managing
files. Under Windows, while files do have a 'primary group' property, this is
rarely used. It generally has no bearing on permissions unless intentionally
configured and is most commonly used to provide Unix compatibility (e.g.
Services For Unix, NFS services).
Because of this, any file module functions that typically require a group do not under Windows. Attempts to directly use file module functions that operate
on the group (e.g. file.chgrp
) will return a pseudo-value and cause a log
message to appear. No group parameters will be acted on.
If you do want to access and change the 'primary group' property and understand
the implications, use the file.get_pgid
or file.get_pgroup
functions or
the pgroup
parameter on the file.chown
module function.
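For example, a sketch using a hypothetical path and group:
salt '*' file.chown C:\Temp\test.txt Administrator pgroup='Administrators'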
Windows is case-insensitive but preserves the case of names, and it is this preserved form that is returned from system functions. This causes some issues with Salt because it assumes case-sensitive names. These issues generally occur in the state functions and can cause bizarre-looking errors.
To avoid such issues, always pretend Windows is case-sensitive and use the right
case for names, e.g. specify user=Administrator
instead of
user=administrator
.
Follow issue 11801 for any changes to this behavior.
Salt does not understand the various forms that Windows usernames can come in, e.g. username, mydomain\username, and username@mydomain.tld can all refer to the same user. In fact, Salt generally only considers the raw username value, i.e. the username without the domain or host information.
Using these alternative forms will likely confuse Salt and cause odd errors to happen. Use only the raw username value in the correct case to avoid problems.
Follow issue 11801 for any changes to this behavior.
Each Windows system has a built-in _None_ group. This is the default 'primary group' for files for users not in a domain environment.
Unfortunately, the word _None_ has special meaning in Python - it is a special
value indicating 'nothing', similar to null
or nil
in other languages.
To specify the None group, it must be specified in quotes, e.g.
salt '*' file.chpgrp C:\path\to\file "'None'"
Under Windows, if any symbolic link loops are detected or if there are too many levels of symlinks (defaults to 64), an error is always raised.
For some functions, this behavior differs from the behavior on Unix platforms. In general, avoid symlink loops on either platform.
There is no support in Salt for modifying ACLs, and therefore no support for changing file permissions, besides modifying the owner/user.
Salt Cloud is now part of Salt proper. It was merged in as of Salt version 2014.1.0.
On Ubuntu, install Salt Cloud by using the following commands:
sudo add-apt-repository ppa:saltstack/salt
sudo apt-get update
sudo apt-get install salt-cloud
If using Salt Cloud on OS X, curl-ca-bundle
must be installed. Presently,
this package is not available via brew
, but it is available using MacPorts:
sudo port install curl-ca-bundle
Salt Cloud depends on apache-libcloud
. Libcloud can be installed via pip
with pip install apache-libcloud
.
Installing Salt for development enables Salt Cloud development as well; just make sure apache-libcloud is installed as described above.
See these instructions: Installing Salt for development.
Salt Cloud needs at least one configured provider and profile to be functional.
To create a VM with Salt Cloud, use the following command:
salt-cloud -p <profile> name_of_vm
Assuming there is a profile configured as follows:
fedora_rackspace:
provider: rackspace
image: Fedora 17
size: 256 server
script: bootstrap-salt
Then the command to create a new VM named fedora_http_01 is:
salt-cloud -p fedora_rackspace fedora_http_01
To destroy a VM created by Salt Cloud, use the following command:
salt-cloud -d name_of_vm
For example, to delete the VM created in the above example, use:
salt-cloud -d fedora_http_01
Salt Cloud designates virtual machines inside the profile configuration file. The profile configuration file defaults to /etc/salt/cloud.profiles and is a YAML configuration. The syntax for declaring profiles is simple:
fedora_rackspace:
provider: rackspace
image: Fedora 17
size: 256 server
script: bootstrap-salt
It should be noted that the script
option defaults to bootstrap-salt
,
and does not normally need to be specified. Further examples in this document
will not show the script
option.
A few key pieces of information need to be declared and can change based on the public cloud provider. A number of additional parameters can also be inserted:
centos_rackspace:
provider: rackspace
image: CentOS 6.2
size: 1024 server
minion:
master: salt.example.com
append_domain: webs.example.com
grains:
role: webserver
The image must be selected from available images. Similarly, sizes must be selected from the list of sizes. To get a list of available images and sizes use the following command:
salt-cloud --list-images openstack
salt-cloud --list-sizes openstack
Some parameters can be specified in the main Salt cloud configuration file and then are applied to all cloud profiles. For instance if only a single cloud provider is being used then the provider option can be declared in the Salt cloud configuration file.
In addition to /etc/salt/cloud.profiles
, profiles can also be specified in
any file matching cloud.profiles.d/*.conf in a sub-directory relative to the profiles configuration file (with the above configuration file as an example, /etc/salt/cloud.profiles.d/*.conf). This allows for more extensible configuration, and plays nicely with various configuration management tools as well as version control systems.
rhel_ec2:
provider: ec2
image: ami-e565ba8c
size: t1.micro
minion:
cheese: edam
ubuntu_ec2:
provider: ec2
image: ami-7e2da54e
size: t1.micro
minion:
cheese: edam
ubuntu_rackspace:
provider: rackspace
image: Ubuntu 12.04 LTS
size: 256 server
minion:
cheese: edam
fedora_rackspace:
provider: rackspace
image: Fedora 17
size: 256 server
minion:
cheese: edam
cent_linode:
provider: linode
image: CentOS 6.2 64bit
size: Linode 512
cent_gogrid:
provider: gogrid
image: 12834
size: 512MB
cent_joyent:
provider: joyent
image: centos-6
size: Small 1GB
A number of options exist when creating virtual machines. They can be managed directly from profiles and the command line execution, or a more complex map file can be created. The map file allows for a number of virtual machines to be created and associated with specific profiles.
Map files have a simple format, specify a profile and then a list of virtual machines to make from said profile:
fedora_small:
- web1
- web2
- web3
fedora_high:
- redis1
- redis2
- redis3
cent_high:
- riak1
- riak2
- riak3
This map file can then be called to roll out all of these virtual machines. Map files are called from the salt-cloud command with the -m option:
$ salt-cloud -m /path/to/mapfile
Remember that, as with direct profile provisioning, the -P option can be passed to create the virtual machines in parallel:
$ salt-cloud -m /path/to/mapfile -P
A map file can also be enforced to represent the total state of a cloud deployment by using the --hard option. When using the hard option, any VMs that exist but are not specified in the map file will be destroyed:
$ salt-cloud -m /path/to/mapfile -P -H
Be careful with this argument, it is very dangerous! In fact, it is so dangerous that in order to use it, you must explicitly enable it in the main configuration file.
enable_hard_maps: True
A map file can include grains and minion configuration options:
fedora_small:
- web1:
minion:
log_level: debug
grains:
cheese: tasty
omelet: du fromage
- web2:
minion:
log_level: warn
grains:
cheese: more tasty
omelet: with peppers
A map file may also be used with the various query options:
$ salt-cloud -m /path/to/mapfile -Q
{'ec2': {'web1': {'id': 'i-e6aqfegb',
'image': None,
'private_ips': [],
'public_ips': [],
'size': None,
'state': 0}},
'web2': {'Absent'}}
...or with the delete option:
$ salt-cloud -m /path/to/mapfile -d
The following virtual machines are set to be destroyed:
web1
web2
Proceed? [N/y]
Warning
Specifying Nodes with Maps on the Command Line
Specifying the name of a node or nodes with the maps options on the command
line is not supported. This is especially important to remember when
using --destroy
with maps; salt-cloud
will ignore any arguments
passed in which are not directly relevant to the map file. When using --destroy with a map, every node in the map file will be deleted!
Maps don't provide any useful information for destroying individual nodes,
and should not be used to destroy a subset of a map.
Bootstrapping a new master in the map is as simple as:
fedora_small:
- web1:
make_master: True
- web2
- web3
Notice that ALL bootstrapped minions from the map will answer to the newly created salt-master.
To make any of the bootstrapped minions answer to the bootstrapping salt-master as opposed to the newly created salt-master, as an example:
fedora_small:
- web1:
make_master: True
minion:
master: <the local master ip address>
local_master: True
- web2
- web3
The above says that the minion running on the newly created salt-master responds to the local master, i.e., the master used to bootstrap these VMs.
Another example:
fedora_small:
- web1:
make_master: True
- web2
- web3:
minion:
master: <the local master ip address>
local_master: True
The above example makes the web3
minion answer to the local master, not the
newly created master.
Once a VM has been created, there are a number of actions that can be performed on it. The "reboot" action can be used across all providers, but all other actions are specific to the cloud provider. In order to perform an action, you may specify it from the command line, including the name(s) of the VM to perform the action on:
$ salt-cloud -a reboot vm_name
$ salt-cloud -a reboot vm1 vm2 vm3
Or you may specify a map which includes all VMs to perform the action on:
$ salt-cloud -a reboot -m /path/to/mapfile
The following is a list of actions currently supported by salt-cloud:
all providers:
- reboot
ec2:
- start
- stop
joyent:
- stop
Another useful reference for viewing more salt-cloud actions is the Salt Cloud Feature Matrix.
Cloud functions work much the same way as cloud actions, except that they don't perform an operation on a specific instance, and so do not need a machine name to be specified. However, since they perform an operation on a specific cloud provider, that provider must be specified.
$ salt-cloud -f show_image ec2 image=ami-fd20ad94
There are three universal salt-cloud functions that are extremely useful for gathering information about instances on a provider basis:
list_nodes
Returns some general information about the instances for the given provider.
list_nodes_full
Returns all information about the instances for the given provider.
list_nodes_select
Returns select information about the instances for the given provider.
$ salt-cloud -f list_nodes linode
$ salt-cloud -f list_nodes_full linode
$ salt-cloud -f list_nodes_select linode
Another useful reference for viewing salt-cloud functions is the Salt Cloud Feature Matrix.
A number of core configuration options and some options that are global to the
VM profiles can be set in the cloud configuration file. By default this file is
located at /etc/salt/cloud
.
When salt cloud is operating in parallel mode via the -P
argument, you can
control the thread pool size by specifying the pool_size
parameter with
a positive integer value.
By default, the thread pool size will be set to the number of VMs that salt cloud is operating on.
pool_size: 10
The default minion configuration is set up in this file. Minions created by salt-cloud derive their configuration from this file. Almost all parameters found in Configuring the Salt Minion can be used here.
minion:
master: saltmaster.example.com
In particular, this is where to specify the location of the salt master and its listening port, if the port is not set to the default.
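For example, a sketch pointing minions at a master on a non-default port (master_port is a standard minion option; the port value here is hypothetical):
minion:
  master: saltmaster.example.com
  master_port: 5506   # hypothetical non-default port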
The data specific to interacting with public clouds is set up here.
Cloud provider configuration syntax can live in several places. The first is in
/etc/salt/cloud
:
# /etc/salt/cloud
providers:
my-aws-migrated-config:
id: HJGRYCILJLKJYG
key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
keyname: test
securitygroup: quick-start
private_key: /root/test.pem
provider: aws
Cloud provider configuration data can also be housed in /etc/salt/cloud.providers
or any file matching /etc/salt/cloud.providers.d/*.conf
. All files in any of these
locations will be parsed for cloud provider data.
Using the example configuration above:
# /etc/salt/cloud.providers
# or could be /etc/salt/cloud.providers.d/*.conf
my-aws-config:
id: HJGRYCILJLKJYG
key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
keyname: test
securitygroup: quick-start
private_key: /root/test.pem
provider: aws
Note
Salt Cloud provider configurations within /etc/salt/cloud.providers.d/ should not specify the providers starting key.
It is also possible to have multiple cloud configuration blocks within the same alias block. For example:
production-config:
- id: HJGRYCILJLKJYG
key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
keyname: test
securitygroup: quick-start
private_key: /root/test.pem
provider: aws
- user: example_user
apikey: 123984bjjas87034
provider: rackspace
However, using this configuration method requires a change to the profile configuration blocks. The provider alias needs to have the provider key value appended, as in the following example:
rhel_aws_dev:
provider: production-config:aws
image: ami-e565ba8c
size: t1.micro
rhel_aws_prod:
provider: production-config:aws
image: ami-e565ba8c
size: High-CPU Extra Large Instance
database_prod:
provider: production-config:rackspace
image: Ubuntu 12.04 LTS
size: 256 server
Notice that because of the multiple entries, one has to be explicit about the provider alias and name; from the above example, production-config:aws.
This data interacts with the salt-cloud binary regarding its --list-locations, --list-images, and --list-sizes options, each of which needs a cloud provider as an argument. The argument used should be the configured cloud provider alias. If the provider alias has multiple entries, <provider-alias>:<provider-name> should be used.
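For example, using the multi-entry alias defined above:
salt-cloud --list-images production-config:aws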
To allow for a more extensible configuration, --providers-config
, which defaults to
/etc/salt/cloud.providers
, was added to the cli parser. It allows for the providers'
configuration to be added on a per-file basis.
It is possible to configure cloud providers using pillars. This is only used when inside the cloud
module. You can set up a variable called cloud
that contains your profile and provider to pass
that information to the cloud servers instead of having to copy the full configuration to every
minion. In your pillar file, you would use something like this:
cloud:
ssh_key_name: saltstack
ssh_key_file: /root/.ssh/id_rsa
update_cachedir: True
diff_cache_events: True
change_password: True
providers:
my-nova:
identity_url: https://identity.api.rackspacecloud.com/v2.0/
compute_region: IAD
user: myuser
api_key: apikey
tenant: 123456
provider: nova
my-openstack:
identity_url: https://identity.api.rackspacecloud.com/v2.0/tokens
user: user2
apikey: apikey2
tenant: 654321
compute_region: DFW
provider: openstack
compute_name: cloudServersOpenStack
profiles:
ubuntu-nova:
provider: my-nova
size: performance1-8
image: bb02b1a3-bc77-4d17-ab5b-421d89850fca
script_args: git develop
ubuntu-openstack:
provider: my-openstack
size: performance1-8
image: bb02b1a3-bc77-4d17-ab5b-421d89850fca
script_args: git develop
To use Salt Cloud with Scaleway, you need to get an access key
and an API token
. API tokens
are unique identifiers associated with your Scaleway account.
To retrieve your access key
and API token
, log in to the Scaleway control panel, open the pull-down menu on your account name, and click on the "My Credentials" link.
If you do not have an API token, you can create one by clicking the "Create New Token" button in the right corner.
my-scaleway-config:
access_key: 15cf404d-4560-41b1-9a0c-21c3d5c4ff1f
token: a7347ec8-5de1-4024-a5e3-24b77d1ba91d
provider: scaleway
Note
In the cloud profile that uses this provider configuration, the syntax for the
provider
required field would be provider: my-scaleway-config
.
Rackspace cloud requires two configuration options: a user
and an apikey
:
my-rackspace-config:
user: example_user
apikey: 123984bjjas87034
provider: rackspace
Note
In the cloud profile that uses this provider configuration, the syntax for the
provider
required field would be provider: my-rackspace-config
.
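For example, a minimal profile sketch using this provider; the image and size values match the Rackspace examples earlier in this document:
ubuntu_rackspace:
  provider: my-rackspace-config
  image: Ubuntu 12.04 LTS
  size: 256 server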
A number of configuration options are required for Amazon AWS including id
,
key
, keyname
, securitygroup
, and private_key
:
my-aws-quick-start:
id: HJGRYCILJLKJYG
key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
keyname: test
securitygroup: quick-start
private_key: /root/test.pem
provider: aws
my-aws-default:
id: HJGRYCILJLKJYG
key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
keyname: test
securitygroup: default
private_key: /root/test.pem
provider: aws
Note
In the cloud profile that uses this provider configuration, the syntax for the
provider
required field would be either provider: my-aws-quick-start
or provider: my-aws-default
.
Linode requires a single API key, but the default root password also needs to be set:
my-linode-config:
apikey: asldkgfakl;sdfjsjaslfjaklsdjf;askldjfaaklsjdfhasldsadfghdkf
password: F00barbaz
ssh_pubkey: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKHEOLLbeXgaqRQT9NBAopVz366SdYc0KKX33vAnq+2R user@host
ssh_key_file: ~/.ssh/id_ed25519
provider: linode
The password needs to be 8 characters long and contain lowercase letters, uppercase letters, and numbers.
Note
In the cloud profile that uses this provider configuration, the syntax for the
provider
required field would be provider: my-linode-config
The Joyent cloud requires three configuration parameters: the username and password that are used to log into the Joyent system, and the location of the private SSH key associated with the Joyent account. The SSH key is needed to send the provisioning commands up to the freshly created virtual machine.
my-joyent-config:
user: fred
password: saltybacon
private_key: /root/joyent.pem
provider: joyent
Note
In the cloud profile that uses this provider configuration, the syntax for the
provider
required field would be provider: my-joyent-config
To use Salt Cloud with GoGrid, log into the GoGrid web interface and create an API key. Do this by clicking on "My Account" and then going to the API Keys tab.
The apikey
and the sharedsecret
configuration parameters need to
be set in the configuration file to enable interfacing with GoGrid:
my-gogrid-config:
apikey: asdff7896asdh789
sharedsecret: saltybacon
provider: gogrid
Note
In the cloud profile that uses this provider configuration, the syntax for the
provider
required field would be provider: my-gogrid-config
.
OpenStack configuration differs between providers, and at the moment several options need to be specified. This module has been officially tested against the HP and the Rackspace implementations, and some examples are provided for both.
# For HP
my-openstack-hp-config:
identity_url:
'https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/'
compute_name: Compute
compute_region: 'az-1.region-a.geo-1'
tenant: myuser-tenant1
user: myuser
ssh_key_name: mykey
ssh_key_file: '/etc/salt/hpcloud/mykey.pem'
password: mypass
provider: openstack
# For Rackspace
my-openstack-rackspace-config:
identity_url: 'https://identity.api.rackspacecloud.com/v2.0/tokens'
compute_name: cloudServersOpenStack
protocol: ipv4
compute_region: DFW
user: myuser
tenant: 5555555
password: mypass
provider: openstack
If you have an API key for your provider, it may be specified instead of a password:
my-openstack-hp-config:
apikey: 901d3f579h23c8v73q9
my-openstack-rackspace-config:
apikey: 901d3f579h23c8v73q9
Note
In the cloud profile that uses this provider configuration, the syntax for the
provider
required field would be either provider: my-openstack-hp-config
or provider: my-openstack-rackspace-config
.
You will certainly need to configure the user
, tenant
, and either
password
or apikey
.
If your OpenStack instances only have private IP addresses and the CIDR range of private addresses is not reachable from the salt-master, you may set your preference to have Salt ignore it:
my-openstack-config:
ignore_cidr: 192.168.0.0/16
For an in-house OpenStack Essex installation, libcloud needs the service_type:
my-openstack-config:
identity_url: 'http://control.openstack.example.org:5000/v2.0/'
compute_name : Compute Service
service_type : compute
Using Salt for DigitalOcean requires a personal_access_token, as shown in the example below. This can be found in the DigitalOcean web interface, in the "My Settings" section, under the API Access tab.
my-digitalocean-config:
provider: digital_ocean
personal_access_token: xxx
location: New York 1
Note
In the cloud profile that uses this provider configuration, the syntax for the
provider
required field would be provider: my-digitalocean-config
.
Using Salt with Parallels requires a user
, password, and URL
. These
can be obtained from your cloud provider.
my-parallels-config:
user: myuser
password: xyzzy
url: https://api.cloud.xmission.com:4465/paci/v1.0/
provider: parallels
Note
In the cloud profile that uses this provider configuration, the syntax for the
provider
required field would be provider: my-parallels-config
.
Using Salt with Proxmox requires a user
, password
, and URL
. These can be
obtained from your cloud provider. Both PAM and PVE users can be used.
my-proxmox-config:
provider: proxmox
user: saltcloud@pve
password: xyzzy
url: your.proxmox.host
Note
In the cloud profile that uses this provider configuration, the syntax for the
provider
required field would be provider: my-proxmox-config
.
The lxc driver uses Saltify to install salt and attach the lxc container as a new lxc minion; as with Saltify on bare metal, the operation is managed over SSH. You can also destroy those containers via this driver.
devhost10-lxc:
target: devhost10
provider: lxc
And in the map file:
devhost10-lxc:
provider: devhost10-lxc
from_container: ubuntu
backing: lvm
sudo: True
size: 3g
ip: 10.0.3.9
minion:
master: 10.5.0.1
master_port: 4506
lxc_conf:
- lxc.utsname: superlxc
Note
In the cloud profile that uses this provider configuration, the syntax for the
provider
required field would be provider: devhost10-lxc
.
The Saltify driver is a new, experimental driver for installing Salt on
existing machines (virtual or bare metal). Because it does not use an actual
cloud provider, it needs no configuration in the main cloud config file.
However, it does still require a profile to be set up, and is most useful when
used inside a map file. The key parameters to be set are ssh_host
,
ssh_username
and either ssh_keyfile
or ssh_password
. These may all
be set in either the profile or the map. An example configuration might use the
following in cloud.profiles:
make_salty:
provider: saltify
And in the map file:
make_salty:
- myinstance:
ssh_host: 54.262.11.38
ssh_username: ubuntu
ssh_keyfile: '/etc/salt/mysshkey.pem'
sudo: True
Note
In the cloud profile that uses this provider configuration, the syntax for the
provider
required field would be provider: make_salty
.
As of 0.8.7, the option to extend both the profiles and cloud providers
configuration and avoid duplication was added. The extends feature works on the
current profiles configuration, but, regarding the cloud providers
configuration, only works in the new syntax and respective configuration
files, i.e. /etc/salt/cloud.providers
or
/etc/salt/cloud.providers.d/*.conf
.
Note
Extending cloud profiles and providers is not recursive. For example, a profile that is extended by a second profile is possible, but the second profile cannot be extended by a third profile.
Also, if a profile (or provider) is extending another profile and each contains a list of values, the lists from the extending profile will override the list from the original profile. The lists are not merged together.
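A hypothetical sketch of that list-override behavior:
base-instances:
  provider: my-ec2-config
  securitygroup:
    - default
    - web

db-instances:
  extends: base-instances
  securitygroup:
    - db    # parsed result contains only ['db']; the lists are not merged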
Some example usage of extends with profiles. Consider /etc/salt/cloud.profiles containing:
development-instances:
provider: my-ec2-config
size: t1.micro
ssh_username: ec2_user
securitygroup:
- default
deploy: False
Amazon-Linux-AMI-2012.09-64bit:
image: ami-54cf5c3d
extends: development-instances
Fedora-17:
image: ami-08d97e61
extends: development-instances
CentOS-5:
provider: my-aws-config
image: ami-09b61d60
extends: development-instances
The above configuration, once parsed, would generate the following profiles data:
[{'deploy': False,
'image': 'ami-08d97e61',
'profile': 'Fedora-17',
'provider': 'my-ec2-config',
'securitygroup': ['default'],
'size': 't1.micro',
'ssh_username': 'ec2_user'},
{'deploy': False,
'image': 'ami-09b61d60',
'profile': 'CentOS-5',
'provider': 'my-aws-config',
'securitygroup': ['default'],
'size': 't1.micro',
'ssh_username': 'ec2_user'},
{'deploy': False,
'image': 'ami-54cf5c3d',
'profile': 'Amazon-Linux-AMI-2012.09-64bit',
'provider': 'my-ec2-config',
'securitygroup': ['default'],
'size': 't1.micro',
'ssh_username': 'ec2_user'},
{'deploy': False,
'profile': 'development-instances',
'provider': 'my-ec2-config',
'securitygroup': ['default'],
'size': 't1.micro',
'ssh_username': 'ec2_user'}]
Pretty cool, right?
Some example usage of extends within the cloud providers configuration. Consider /etc/salt/cloud.providers containing:
my-develop-envs:
- id: HJGRYCILJLKJYG
key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
keyname: test
securitygroup: quick-start
private_key: /root/test.pem
location: ap-southeast-1
availability_zone: ap-southeast-1b
provider: aws
- user: myuser@mycorp.com
password: mypass
ssh_key_name: mykey
ssh_key_file: '/etc/salt/ibm/mykey.pem'
location: Raleigh
provider: ibmsce
my-productions-envs:
- extends: my-develop-envs:ibmsce
user: my-production-user@mycorp.com
location: us-east-1
availability_zone: us-east-1
The above configuration, once parsed, would generate the following providers data:
'providers': {
'my-develop-envs': [
{'availability_zone': 'ap-southeast-1b',
'id': 'HJGRYCILJLKJYG',
'key': 'kdjgfsgm;woormgl/aserigjksjdhasdfgn',
'keyname': 'test',
'location': 'ap-southeast-1',
'private_key': '/root/test.pem',
'provider': 'aws',
'securitygroup': 'quick-start'
},
{'location': 'Raleigh',
'password': 'mypass',
'provider': 'ibmsce',
'ssh_key_file': '/etc/salt/ibm/mykey.pem',
'ssh_key_name': 'mykey',
'user': 'myuser@mycorp.com'
}
],
'my-productions-envs': [
{'availability_zone': 'us-east-1',
'location': 'us-east-1',
'password': 'mypass',
'provider': 'ibmsce',
'ssh_key_file': '/etc/salt/ibm/mykey.pem',
'ssh_key_name': 'mykey',
'user': 'my-production-user@mycorp.com'
}
]
}
It is possible to use Salt Cloud to spin up Windows instances, and then install Salt on them. This functionality is available on all cloud providers that are supported by Salt Cloud. However, it may not necessarily be available on all Windows images.
Salt Cloud makes use of impacket and winexe to set up the Windows Salt Minion installer.
impacket is usually available as either the impacket or the python-impacket package, depending on the distribution. More information on impacket can be found at the project home:
winexe is less commonly available in distribution-specific repositories. However, it is currently being built for various distributions in 3rd party channels:
Optionally WinRM can be used instead of winexe if the python module pywinrm is available and WinRM is supported on the target Windows version. Information on pywinrm can be found at the project home:
Additionally, a copy of the Salt Minion Windows installer must be present on the system on which Salt Cloud is running. This installer may be downloaded from saltstack.com:
Because Salt Cloud makes use of smbclient and winexe, port 445 must be open on the target image. This port is not generally open by default on a standard Windows distribution, and care must be taken to use an image in which this port is open, or the Windows firewall is disabled.
If supported by the cloud provider, a PowerShell script may be used to open up this port automatically, using the cloud provider's userdata. The following script would open up port 445, and apply the changes:
<powershell>
New-NetFirewallRule -Name "SMB445" -DisplayName "SMB445" -Protocol TCP -LocalPort 445
Set-Item (dir wsman:\localhost\Listener\*\Port -Recurse).pspath 445 -Force
Restart-Service winrm
</powershell>
For EC2, this script may be saved as a file, and specified in the provider or profile configuration as userdata_file. For instance:
userdata_file: /etc/salt/windows-firewall.ps1
Configuration is set as usual, with some extra configuration settings. The location of the Windows installer on the machine that Salt Cloud is running on must be specified. This may be done in any of the regular configuration files (main, providers, profiles, maps). For example:
Setting the installer in /etc/salt/cloud.providers
:
my-softlayer:
provider: softlayer
user: MYUSER1138
apikey: 'e3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9'
minion:
master: saltmaster.example.com
win_installer: /root/Salt-Minion-2014.7.0-AMD64-Setup.exe
win_username: Administrator
win_password: letmein
smb_port: 445
The default Windows user is Administrator, and the default Windows password is blank.
If WinRM is to be used, use_winrm needs to be set to True.
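For example, alongside the win_* settings in the provider or profile configuration above:
use_winrm: True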
On EC2, when the win_password is set to auto, Salt Cloud will query EC2 for an auto-generated password. This password is expected to take at least 4 minutes to generate, adding additional time to the deploy process.
When the EC2 API is queried for the auto-generated password, it will be returned in a message encrypted with the specified keyname. This requires that the appropriate private_key file is also specified. Such a profile configuration might look like:
windows-server-2012:
provider: my-ec2-config
image: ami-c49c0dac
size: m1.small
securitygroup: windows
keyname: mykey
private_key: /root/mykey.pem
userdata_file: /etc/salt/windows-firewall.ps1
win_installer: /root/Salt-Minion-2014.7.0-AMD64-Setup.exe
win_username: Administrator
win_password: auto
The Aliyun ECS (Elastic Compute Service) is one of the most popular public cloud providers in China. This cloud provider can be used to manage Aliyun instances using salt-cloud.
This driver requires the Python requests
library to be installed.
Using Salt for Aliyun ECS requires an Aliyun access key ID and key secret. These can be found in the Aliyun web interface, in the "User Center" section, under the "My Service" tab.
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.
my-aliyun-config:
# aliyun Access Key ID
id: wDGEwGregedg3435gDgxd
# aliyun Access Key Secret
key: GDd45t43RDBTrkkkg43934t34qT43t4dgegerGEgg
location: cn-qingdao
provider: aliyun
Set up an initial profile at /etc/salt/cloud.profiles
or in the
/etc/salt/cloud.profiles.d/
directory:
aliyun_centos:
provider: my-aliyun-config
size: ecs.t1.small
location: cn-qingdao
securitygroup: G1989096784427999
image: centos6u3_64_20G_aliaegis_20130816.vhd
Sizes can be obtained using the --list-sizes
option for the salt-cloud
command:
# salt-cloud --list-sizes my-aliyun-config
my-aliyun-config:
----------
aliyun:
----------
ecs.c1.large:
----------
CpuCoreCount:
8
InstanceTypeId:
ecs.c1.large
MemorySize:
16.0
...SNIP...
Images can be obtained using the --list-images
option for the salt-cloud
command:
# salt-cloud --list-images my-aliyun-config
my-aliyun-config:
----------
aliyun:
----------
centos5u8_64_20G_aliaegis_20131231.vhd:
----------
Architecture:
x86_64
Description:
ImageId:
centos5u8_64_20G_aliaegis_20131231.vhd
ImageName:
CentOS 5.8 64位
ImageOwnerAlias:
system
ImageVersion:
1.0
OSName:
CentOS 5.8 64位
Platform:
CENTOS5
Size:
20
Visibility:
public
...SNIP...
Locations can be obtained using the --list-locations option for the salt-cloud command:
# salt-cloud --list-locations my-aliyun-config
my-aliyun-config:
----------
aliyun:
----------
cn-beijing:
----------
LocalName:
北京
RegionId:
cn-beijing
cn-hangzhou:
----------
LocalName:
杭州
RegionId:
cn-hangzhou
cn-hongkong:
----------
LocalName:
香港
RegionId:
cn-hongkong
cn-qingdao:
----------
LocalName:
青岛
RegionId:
cn-qingdao
Security groups can be obtained using the -f list_securitygroup
option
for the salt-cloud
command:
# salt-cloud --location=cn-qingdao -f list_securitygroup my-aliyun-config
my-aliyun-config:
----------
aliyun:
----------
G1989096784427999:
----------
Description:
G1989096784427999
SecurityGroupId:
G1989096784427999
Note
Aliyun ECS REST API documentation is available from Aliyun ECS API.
New in version 2014.1.0.
Azure is a cloud service by Microsoft providing virtual machines, SQL services, media services, and more. This document describes how to use Salt Cloud to create a virtual machine on Azure, with Salt installed.
More information about Azure is located at http://www.windowsazure.com/.
Set up the provider config at /etc/salt/cloud.providers.d/azure.conf
:
# Note: This example is for /etc/salt/cloud.providers.d/azure.conf
my-azure-config:
provider: azure
subscription_id: 3287abc8-f98a-c678-3bde-326766fd3617
certificate_path: /etc/salt/azure.pem
# Set up the location of the salt master
#
minion:
master: saltmaster.example.com
# Optional
management_host: management.core.windows.net
The certificate used must be generated by the user. OpenSSL can be used to create the management certificates. Two certificates are needed: a .cer file, which is uploaded to Azure, and a .pem file, which is stored locally.
To create the .pem file, execute the following command:
openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout /etc/salt/azure.pem -out /etc/salt/azure.pem
To create the .cer file, execute the following command:
openssl x509 -inform pem -in /etc/salt/azure.pem -outform der -out /etc/salt/azure.cer
After creating these files, the .cer file will need to be uploaded to Azure via the "Upload a Management Certificate" action of the "Management Certificates" tab within the "Settings" section of the management portal.
Optionally, a management_host
may be configured, if necessary for the region.
Set up an initial profile at /etc/salt/cloud.profiles
:
azure-ubuntu:
provider: my-azure-config
image: 'b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-12_04_3-LTS-amd64-server-20131003-en-us-30GB'
size: Small
location: 'East US'
ssh_username: azureuser
ssh_password: verybadpass
slot: production
media_link: 'http://portalvhdabcdefghijklmn.blob.core.windows.net/vhds'
These options are described in more detail below. Once configured, the profile can be realized with a salt command:
salt-cloud -p azure-ubuntu newinstance
This will create a salt minion instance named newinstance
in Azure. If
the command was executed on the salt-master, its Salt key will automatically
be signed on the master.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
salt newinstance test.ping
The following options are currently available for Azure.
The name of the provider as configured in /etc/salt/cloud.providers.d/azure.conf.
The name of the image to use to create a VM. Available images can be viewed using the following command:
salt-cloud --list-images my-azure-config
The name of the size to use to create a VM. Available sizes can be viewed using the following command:
salt-cloud --list-sizes my-azure-config
The name of the location to create a VM in. Available locations can be viewed using the following command:
salt-cloud --list-locations my-azure-config
The name of the affinity group to create a VM in. Either a location
or an
affinity_group
may be specified, but not both. See Affinity Groups below.
The user to use to log into the newly-created VM to install Salt.
The password to use to log into the newly-created VM to install Salt.
The environment to which the hosted service is deployed. Valid values are staging or production. When set to production, the resulting URL of the new VM will be <vm_name>.cloudapp.net. When set to staging, the resulting URL will contain a generated hash instead.
This is the URL of the container that will store the disk that this VM uses. Currently, this container must already exist. If a VM has previously been created in the associated account, a container should already exist. In the web interface, go into the Storage area and click one of the available storage selections. Click the Containers link, and then copy the URL from the container that will be used. It generally looks like:
http://portalvhdabcdefghijklmn.blob.core.windows.net/vhds
The name of the service in which to create the VM. If this is not specified, then a service will be created with the same name as the VM.
This action is a thin wrapper around --full-query
, which displays details on
a single instance only. In an environment with several machines, this will save
a user from having to sort through all instance data, just to examine a single
instance.
salt-cloud -a show_instance myinstance
There are certain options which can be specified in the global cloud
configuration file (usually /etc/salt/cloud
) which affect Salt Cloud's
behavior when a VM is destroyed.
New in version Beryllium.
Default is False
. When set to True
, Salt Cloud will wait for the VM to
be destroyed, then attempt to destroy the main disk that is associated with the
VM.
New in version Beryllium.
Default is False
. Requires cleanup_disks
to be set to True
. When
also set to True
, Salt Cloud will ask Azure to delete the VHD associated
with the disk that is also destroyed.
New in version Beryllium.
Default is False
. Requires cleanup_disks
to be set to True
. When
also set to True
, Salt Cloud will wait for the disk to be destroyed, then
attempt to remove the service that is associated with the VM. Because the disk
belongs to the service, the disk must be destroyed before the service can be.
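A sketch of /etc/salt/cloud with all three behaviors enabled; cleanup_disks is named in this document, while cleanup_vhds and cleanup_services are assumed names for the other two options:
cleanup_disks: True
cleanup_vhds: True      # assumed option name; deletes the VHD along with the disk
cleanup_services: True  # assumed option name; removes the service after the disk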
New in version Beryllium.
An account can have one or more hosted services. A hosted service is required in order to create a VM. However, as mentioned above, if a hosted service is not specified when a VM is created, then one will automatically be created with the same name as the VM. The following functions are also available.
Create a hosted service. The following options are available.
Required. The name of the hosted service to create.
Required. A label to apply to the hosted service.
Optional. A longer description of the hosted service.
Required, if affinity_group
is not set. The location in which to create the
hosted service. Either the location
or the affinity_group
must be set,
but not both.
Required, if location
is not set. The affinity group in which to create the
hosted service. Either the location
or the affinity_group
must be set,
but not both.
Optional. Dictionary containing name/value pairs of hosted service properties. You can have a maximum of 50 extended property name/value pairs. The maximum length of the Name element is 64 characters, only alphanumeric characters and underscores are valid in the Name, and the name must start with a letter. The value has a maximum length of 255 characters.
The following example illustrates creating a hosted service.
salt-cloud -f create_service my-azure name=my-service label=my-service location='West US'
Return details about a specific hosted service. Can also be called with
get_service
.
salt-cloud -f show_service my-azure name=my-service
List all hosted services associated with the subscription.
salt-cloud -f list_services my-azure-config
Delete a specific hosted service.
salt-cloud -f delete_service my-azure name=my-service
New in version Beryllium.
Salt Cloud can manage storage accounts associated with the account. The following functions are available. Functions marked as deprecated are marked as such per the SDK documentation, but are still included for completeness with the SDK.
Create a storage account. The following options are supported.
Required. The name of the storage account to create.
Required. A label to apply to the storage account.
Optional. A longer description of the storage account.
Required, if affinity_group
is not set. The location in which to create the
storage account. Either the location
or the affinity_group
must be set,
but not both.
Required, if location
is not set. The affinity group in which to create the
storage account. Either the location
or the affinity_group
must be set,
but not both.
Optional. Dictionary containing name/value pairs of storage account properties. You can have a maximum of 50 extended property name/value pairs. The maximum length of the Name element is 64 characters, only alphanumeric characters and underscores are valid in the Name, and the name must start with a letter. The value has a maximum length of 255 characters.
Deprecated. Replaced by the account_type parameter.
Specifies whether the account supports locally-redundant storage, geo-redundant storage, zone-redundant storage, or read access geo-redundant storage. Possible values are:
The following example illustrates creating a storage account.
salt-cloud -f create_storage my-azure name=my-storage label=my-storage location='West US'
List all storage accounts associated with the subscription.
salt-cloud -f list_storage my-azure-config
Return details about a specific storage account. Can also be called with
get_storage
.
salt-cloud -f show_storage my-azure name=my-storage
Update details concerning a storage account. Any of the options available in
create_storage
can be used, but the name cannot be changed.
salt-cloud -f update_storage my-azure name=my-storage label=my-storage
Delete a specific storage account.
salt-cloud -f delete_storage my-azure name=my-storage
Returns the primary and secondary access keys for the specified storage account.
salt-cloud -f show_storage_keys my-azure name=my-storage
Regenerate storage account keys. Requires a key_type ("primary" or "secondary") to be specified.
salt-cloud -f regenerate_storage_keys my-azure name=my-storage key_type=primary
New in version Beryllium.
When a VM is created, a disk will also be created for it. The following functions are available for managing disks. Functions marked as deprecated are marked as such per the SDK documentation, but are still included for completeness with the SDK.
Return details about a specific disk. Can also be called with get_disk
.
salt-cloud -f show_disk my-azure name=my-disk
Update details for a disk. The following options are available.
Required. The name of the disk to update.
Deprecated.
Required. The label for the disk.
Deprecated. The location of the disk in the account, including the storage container that it is in. This should not need to be changed.
Deprecated. If renaming the disk, the new name.
Deprecated.
The following example illustrates updating a disk.
salt-cloud -f update_disk my-azure name=my-disk label=my-disk
New in version Beryllium.
Stored at the cloud service level, these certificates are used by your deployed services. For more information on service certificates, see the following link:
The following functions are available.
List service certificates associated with the account.
salt-cloud -f list_service_certificates my-azure
Show the data for a specific service certificate associated with the account.
The name
, thumbprint
, and thumbalgorithm
can be obtained from
list_service_certificates
. Can also be called with
get_service_certificate
.
salt-cloud -f show_service_certificate my-azure name=my_service_certificate \
thumbalgorithm=sha1 thumbprint=0123456789ABCDEF
Add a service certificate to the account. This requires that a certificate already exists, which is then added to the account. For more information on creating the certificate itself, see:
The following options are available.
Required. The name of the hosted service that the certificate will belong to.
Required. The base-64 encoded form of the pfx file.
Required. The service certificate format. The only supported value is pfx.
The certificate password.
salt-cloud -f add_service_certificate my-azure name=my-cert \
data='...CERT_DATA...' certificate_format=pfx password=verybadpass
Delete a service certificate from the account. The name
, thumbprint
,
and thumbalgorithm
can be obtained from list_service_certificates
.
salt-cloud -f delete_service_certificate my-azure \
name=my_service_certificate \
thumbalgorithm=sha1 thumbprint=0123456789ABCDEF
New in version Beryllium.
An Azure management certificate is an X.509 v3 certificate used to authenticate an agent, such as Visual Studio Tools for Windows Azure or a client application that uses the Service Management API, acting on behalf of the subscription owner to manage subscription resources. Azure management certificates are uploaded to Azure and stored at the subscription level. The management certificate store can hold up to 100 certificates per subscription. These certificates are used to authenticate your Windows Azure deployment.
For more information on management certificates, see the following link.
The following functions are available.
List management certificates associated with the account.
salt-cloud -f list_management_certificates my-azure
Show the data for a specific management certificate associated with the account.
The name
, thumbprint
, and thumbalgorithm
can be obtained from
list_management_certificates
. Can also be called with
get_management_certificate
.
salt-cloud -f show_management_certificate my-azure name=my_management_certificate \
thumbalgorithm=sha1 thumbprint=0123456789ABCDEF
Management certificates must have a key length of at least 2048 bits and should reside in the Personal certificate store. When the certificate is installed on the client, it should contain the private key of the certificate. To upload the certificate to the Microsoft Azure Management Portal, you must export it as a .cer format file that does not contain the private key. For more information on creating management certificates, see the following link:
The following options are available.
A base64 representation of the management certificate public key.
The thumb print that uniquely identifies the management certificate.
The certificate's raw data in base-64 encoded .cer format.
salt-cloud -f add_management_certificate my-azure public_key='...PUBKEY...' \
thumbprint=0123456789ABCDEF data='...CERT_DATA...'
Delete a management certificate from the account. The thumbprint
can be
obtained from list_management_certificates
.
salt-cloud -f delete_management_certificate my-azure thumbprint=0123456789ABCDEF
New in version Beryllium.
The following are functions for managing virtual networks.
List virtual networks associated with the deployment.
salt-cloud -f list_virtual_networks my-azure service=myservice deployment=mydeployment
New in version Beryllium.
Input endpoints are used to manage port access for roles. Because endpoints
cannot be managed by the Azure Python SDK, Salt Cloud uses the API directly.
With versions of Python before 2.7.9, the requests-python
package needs to
be installed in order for this to work. Additionally, the following needs to be
set in the master's configuration file:
requests_lib: True
The following functions are available.
List input endpoints associated with the deployment
salt-cloud -f list_input_endpoints my-azure service=myservice deployment=mydeployment
Show an input endpoint associated with the deployment
salt-cloud -f show_input_endpoint my-azure service=myservice \
deployment=mydeployment name=SSH
Add an input endpoint to the deployment. Please note that there may be a delay before the changes show up. The following options are available.
Required. The name of the hosted service which the VM belongs to.
Required. The name of the deployment that the VM belongs to. If the VM was created with Salt Cloud, the deployment name probably matches the VM name.
Required. The name of the role that the VM belongs to. If the VM was created with Salt Cloud, the role name probably matches the VM name.
Required. The name of the input endpoint. This typically matches the port that the endpoint is set to. For instance, port 22 would be called SSH.
Required. The public (Internet-facing) port that is used for the endpoint.
Optional. The private port on the VM itself that will be matched with the port.
This is typically the same as the port
. If this value is not specified, it
will be copied from port
.
Required. Either tcp
or udp
.
Optional. If an internal load balancer exists in the account, it can be used
with a direct server return. The default value is False
. Please see the
following article for an explanation of this option.
Optional. The default value is 4
. Please see the following article for an
explanation of this option.
The following example illustrates adding an input endpoint.
salt-cloud -f add_input_endpoint my-azure service=myservice \
deployment=mydeployment role=myrole name=HTTP local_port=80 \
port=80 protocol=tcp enable_direct_server_return=False \
timeout_for_tcp_idle_connection=4
Updates the details for a specific input endpoint. All options from
add_input_endpoint
are supported.
salt-cloud -f update_input_endpoint my-azure service=myservice \
deployment=mydeployment role=myrole name=HTTP local_port=80 \
port=80 protocol=tcp enable_direct_server_return=False \
timeout_for_tcp_idle_connection=4
Delete an input endpoint from the deployment. Please note that there may be a delay before the changes show up. The following items are required.
The name of the hosted service which the VM belongs to.
The name of the deployment that the VM belongs to. If the VM was created with Salt Cloud, the deployment name probably matches the VM name.
The name of the role that the VM belongs to. If the VM was created with Salt Cloud, the role name probably matches the VM name.
The name of the input endpoint. This typically matches the port that the endpoint is set to. For instance, port 22 would be called SSH.
The following example illustrates deleting an input endpoint.
salt-cloud -f delete_input_endpoint my-azure service=myservice \
deployment=mydeployment role=myrole name=HTTP
New in version Beryllium.
Affinity groups allow you to group your Azure services to optimize performance. All services and VMs within an affinity group will be located in the same region. For more information on Affinity groups, see the following link:
The following functions are available.
List affinity groups associated with the account
salt-cloud -f list_affinity_groups my-azure
Show an affinity group associated with the account
salt-cloud -f show_affinity_group my-azure service=myservice \
deployment=mydeployment name=SSH
Create a new affinity group. The following options are supported.
Required. The name of the new affinity group.
Required. The region in which the affinity group lives.
Required. A label describing the new affinity group.
Optional. A longer description of the affinity group.
salt-cloud -f create_affinity_group my-azure name=my_affinity_group \
label=my-affinity-group location='West US'
Update an affinity group's properties
salt-cloud -f update_affinity_group my-azure name=my_group label=my_group
Delete a specific affinity group associated with the account
salt-cloud -f delete_affinity_group my-azure name=my_affinity_group
New in version Beryllium.
Azure storage containers and their contents can be managed with Salt Cloud. This is not as elegant as using one of the other available clients in Windows, but it benefits Linux and Unix users, as there are fewer options available on those platforms.
Blob storage must be configured differently than the standard Azure
configuration. Both a storage_account
and a storage_key
must be
specified either through the Azure provider configuration (in addition to the
other Azure configuration) or via the command line.
storage_account: mystorage
storage_key: ffhj334fDSGFEGDFGFDewr34fwfsFSDFwe==
This is one of the storage accounts that is available via the list_storage
function.
Both a primary and a secondary storage_key
can be obtained by running the
show_storage_keys
function. Either key may be used.
The following functions are made available through Salt Cloud for managing blob storage.
Creates the URL to access a blob
salt-cloud -f make_blob_url my-azure container=mycontainer blob=myblob
Name of the container.
Name of the blob.
Name of the storage account. If not specified, derives the host base from the provider configuration.
Protocol to use: 'http' or 'https'. If not specified, derives the host base from the provider configuration.
Live host base URL. If not specified, derives the host base from the provider configuration.
List containers associated with the storage account
salt-cloud -f list_storage_containers my-azure
Create a storage container
salt-cloud -f create_storage_container my-azure name=mycontainer
Name of container to create.
Optional. A dict with name/value pairs to associate with the container as metadata. Example: {'Category': 'test'}
Optional. Possible values include: container, blob
Specify whether to throw an exception when the container exists.
Show a container associated with the storage account
salt-cloud -f show_storage_container my-azure name=myservice
Name of container to show.
Show a storage container's metadata
salt-cloud -f show_storage_container_metadata my-azure name=myservice
Name of container to show.
If specified, show_storage_container_metadata only succeeds if the container's lease is active and matches this ID.
Set a storage container's metadata
salt-cloud -f set_storage_container_metadata my-azure name=mycontainer \
x_ms_meta_name_values='{"my_name": "my_value"}'
Name of existing container.
meta_name_values
A dict containing name/value pairs for metadata. Example: {'category': 'test'}
lease_id
If specified, set_storage_container_metadata only succeeds if the container's lease is active and matches this ID.
Show a storage container's ACL
salt-cloud -f show_storage_container_acl my-azure name=myservice
Name of existing container.
If specified, show_storage_container_acl only succeeds if the container's lease is active and matches this ID.
Set a storage container's ACL
salt-cloud -f set_storage_container_acl my-azure name=mycontainer
Name of existing container.
A SignedIdentifiers instance.
Optional. Possible values include: container, blob
If specified, set_storage_container_acl only succeeds if the container's lease is active and matches this ID.
Delete a container associated with the storage account
salt-cloud -f delete_storage_container my-azure name=mycontainer
Name of container to delete.
Specify whether to throw an exception when the container exists.
If specified, delete_storage_container only succeeds if the container's lease is active and matches this ID.
Lease a container associated with the storage account
salt-cloud -f lease_storage_container my-azure name=mycontainer
Name of container to lease.
Required. Possible values: acquire|renew|release|break|change
Required if the container has an active lease.
Specifies the duration of the lease, in seconds, or negative one (-1) for a lease that never expires. A non-infinite lease can be between 15 and 60 seconds. A lease duration cannot be changed using renew or change. For backwards compatibility, the default is 60, and the value is only used on an acquire operation.
Optional. For a break operation, this is the proposed duration of seconds that the lease should continue before it is broken, between 0 and 60 seconds. This break period is only used if it is shorter than the time remaining on the lease. If longer, the time remaining on the lease is used. A new lease will not be available before the break period has expired, but the lease may be held for longer than the break period. If this header does not appear with a break operation, a fixed-duration lease breaks after the remaining lease period elapses, and an infinite lease breaks immediately.
Optional for acquire, required for change. Proposed lease ID, in a GUID string format.
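A hypothetical invocation, assuming the action and duration parameters are named lease_action and lease_duration:
salt-cloud -f lease_storage_container my-azure name=mycontainer \
    lease_action=acquire lease_duration=60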
List blobs associated with the container
salt-cloud -f list_blobs my-azure container=mycontainer
The name of the storage container
Optional. Filters the results to return only blobs whose names begin with the specified prefix.
Optional. A string value that identifies the portion of the list to be returned with the next list operation. The operation returns a marker value within the response body if the list returned was not complete. The marker value may then be used in a subsequent call to request the next set of list items. The marker value is opaque to the client.
Optional. Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxresults or specifies a value greater than 5,000, the server will return up to 5,000 items. Setting maxresults to a value less than or equal to zero results in error response code 400 (Bad Request).
Optional. Specifies one or more datasets to include in the response. To specify more than one of these options on the URI, you must separate each option with a comma. Valid values are:
snapshots:
Specifies that snapshots should be included in the
enumeration. Snapshots are listed from oldest to newest in
the response.
metadata:
Specifies that blob metadata be returned in the response.
uncommittedblobs:
Specifies that blobs for which blocks have been uploaded,
but which have not been committed using Put Block List
(REST API), be included in the response.
copy:
Version 2012-02-12 and newer. Specifies that metadata
related to any current or previous Copy Blob operation
should be included in the response.
Optional. When the request includes this parameter, the operation returns a BlobPrefix element in the response body that acts as a placeholder for all blobs whose names begin with the same substring up to the appearance of the delimiter character. The delimiter may be a single character or a string.
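For example, a sketch combining several of these filters (the parameter names prefix, maxresults, and include follow the descriptions above but should be verified against your driver's documentation):
salt-cloud -f list_blobs my-azure container=mycontainer prefix=top maxresults=100 include=snapshots,metadata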
Show a blob's service properties
salt-cloud -f show_blob_service_properties my-azure
Sets the properties of a storage account's Blob service, including Windows Azure Storage Analytics. You can also use this operation to set the default request version for all incoming requests that do not have a version specified.
salt-cloud -f set_blob_service_properties my-azure
a StorageServiceProperties object.
Optional. The timeout parameter is expressed in seconds.
Returns all user-defined metadata, standard HTTP properties, and system properties for the blob.
salt-cloud -f show_blob_properties my-azure container=mycontainer blob=myblob
Name of existing container.
Name of existing blob.
Required if the blob has an active lease.
Set a blob's properties
salt-cloud -f set_blob_properties my-azure
Name of existing container.
Name of existing blob.
Optional. Modifies the cache control string for the blob.
Optional. Sets the blob's content type.
Optional. Sets the blob's MD5 hash.
Optional. Sets the blob's content encoding.
Optional. Sets the blob's content language.
Required if the blob has an active lease.
Optional. Sets the blob's Content-Disposition header. The Content-Disposition response header field conveys additional information about how to process the response payload, and also can be used to attach additional metadata. For example, if set to attachment, it indicates that the user-agent should not display the response, but instead show a Save As dialog with a filename other than the blob name specified.
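For example, a sketch that sets two of these properties (the parameter names blob_content_type and blob_cache_control are assumptions; check your driver's documentation for the exact spelling):
salt-cloud -f set_blob_properties my-azure container=mycontainer blob=myblob blob_content_type='text/plain' blob_cache_control='max-age=3600'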
Upload a blob
salt-cloud -f put_blob my-azure container=base name=top.sls blob_path=/srv/salt/top.sls
salt-cloud -f put_blob my-azure container=base name=content.txt blob_content='Some content'
Name of existing container.
Name of existing blob.
The path on the local machine of the file to upload as a blob. Either this or blob_content must be specified.
The actual content to be uploaded as a blob. Either this or blob_path must be specified.
Optional. The Blob service stores this value but does not use or modify it.
Optional. Specifies the natural languages used by this resource.
Optional. An MD5 hash of the blob content. This hash is used to verify the integrity of the blob during transport. When this header is specified, the storage service checks the hash that has arrived with the one that was sent. If the two hashes do not match, the operation will fail with error code 400 (Bad Request).
Optional. Set the blob's content type.
Optional. Set the blob's content encoding.
Optional. Set the blob's content language.
Optional. Set the blob's MD5 hash.
Optional. Sets the blob's cache control.
A dict containing name, value for metadata.
Required if the blob has an active lease.
Download a blob
salt-cloud -f get_blob my-azure container=base name=top.sls local_path=/srv/salt/top.sls
salt-cloud -f get_blob my-azure container=base name=content.txt return_content=True
Name of existing container.
Name of existing blob.
The path on the local machine to download the blob to. Either this or return_content must be specified.
Whether or not to return the content directly from the blob. If specified, must be True or False. Either this or the local_path must be specified.
Optional. The snapshot parameter is an opaque DateTime value that, when present, specifies the blob snapshot to retrieve.
Required if the blob has an active lease.
Callback for progress with signature function(current, total), where current is the number of bytes transferred so far, and total is the size of the blob.
Maximum number of parallel connections to use when the blob size exceeds 64MB. Set to 1 to download the blob chunks sequentially. Set to 2 or more to download the blob chunks in parallel. This uses more system resources but will download faster.
Number of times to retry download of blob chunk if an error occurs.
Sleep time in seconds between retries.
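For example, a sketch of a tuned download of a large blob (the parameter names max_connections, max_retries, and retry_wait are assumptions based on the descriptions above):
salt-cloud -f get_blob my-azure container=base name=bigfile.bin local_path=/tmp/bigfile.bin max_connections=4 max_retries=5 retry_wait=2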
DigitalOcean is a public cloud provider that specializes in Linux instances.
Using Salt for DigitalOcean requires a personal_access_token, an ssh_key_file, and at least one SSH key name in ssh_key_names. More ssh_key_names can be added by separating each key with a comma. The personal_access_token can be found in the DigitalOcean web interface in the "Apps & API" section. The SSH key name can be found under the "SSH Keys" section.
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.
my-digitalocean-config:
provider: digital_ocean
personal_access_token: xxx
ssh_key_file: /path/to/ssh/key/file
ssh_key_names: my-key-name,my-key-name-2
location: New York 1
Set up an initial profile at /etc/salt/cloud.profiles
or in the
/etc/salt/cloud.profiles.d/
directory:
digitalocean-ubuntu:
provider: my-digitalocean-config
image: Ubuntu 14.04 x32
size: 512MB
location: New York 1
private_networking: True
backups_enabled: True
ipv6: True
Locations can be obtained using the --list-locations
option for the salt-cloud
command:
# salt-cloud --list-locations my-digitalocean-config
my-digitalocean-config:
----------
digital_ocean:
----------
Amsterdam 1:
----------
available:
False
features:
[u'backups']
name:
Amsterdam 1
sizes:
[]
slug:
ams1
...SNIP...
Sizes can be obtained using the --list-sizes
option for the salt-cloud
command:
# salt-cloud --list-sizes my-digitalocean-config
my-digitalocean-config:
----------
digital_ocean:
----------
512MB:
----------
cost_per_hour:
0.00744
cost_per_month:
5.0
cpu:
1
disk:
20
id:
66
memory:
512
name:
512MB
slug:
None
...SNIP...
Images can be obtained using the --list-images
option for the salt-cloud
command:
# salt-cloud --list-images my-digitalocean-config
my-digitalocean-config:
----------
digital_ocean:
----------
Arch Linux 2013.05 x64:
----------
distribution:
Arch Linux
id:
350424
name:
Arch Linux 2013.05 x64
public:
True
slug:
None
...SNIP...
Note
DigitalOcean's concept of Applications is nothing more than a pre-configured instance (same as a normal Droplet). You will find examples such as Docker 0.7 Ubuntu 13.04 x64 and Wordpress on Ubuntu 12.10 when using the --list-images option. These names can be used just like the rest of the standard instances when specifying an image in the cloud profile configuration.
Note
If your domain's DNS is managed with DigitalOcean, you can automatically create A-records for newly created droplets. Use create_dns_record: True in your config to enable this. Add delete_dns_record: True to also delete records when a droplet is destroyed.
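For example, extending the provider configuration shown earlier with both options (a sketch):
my-digitalocean-config:
  provider: digital_ocean
  personal_access_token: xxx
  ssh_key_file: /path/to/ssh/key/file
  ssh_key_names: my-key-name
  create_dns_record: True
  delete_dns_record: True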
Note
Additional documentation is available from DigitalOcean.
Amazon EC2 is a very widely used public cloud platform and one of the core platforms Salt Cloud has been built to support.
Previously, the suggested provider for AWS EC2 was the aws provider. This has been deprecated in favor of the ec2 provider. Configuration using the old aws provider will still function, but that driver is no longer in active development.
This driver requires the Python requests library to be installed.
The following example illustrates some of the options that can be set. These parameters are discussed in more detail below.
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.
my-ec2-southeast-public-ips:
# Set up the location of the salt master
#
minion:
master: saltmaster.example.com
# Set up grains information, which will be common for all nodes
# using this provider
grains:
node_type: broker
release: 1.0.1
# Specify whether to use public or private IP for deploy script.
#
# Valid options are:
# private_ips - The salt-cloud command is run inside the EC2
# public_ips - The salt-cloud command is run outside of EC2
#
ssh_interface: public_ips
# Optionally configure the Windows credential validation number of
# retries and delay between retries. This defaults to 10 retries
# with a one second delay between retries
win_deploy_auth_retries: 10
win_deploy_auth_retry_delay: 1
# Set the EC2 access credentials (see below)
#
id: 'use-instance-role-credentials'
key: 'use-instance-role-credentials'
# Make sure this key is owned by root with permissions 0400.
#
private_key: /etc/salt/my_test_key.pem
keyname: my_test_key
securitygroup: default
# Optionally configure default region
# Use salt-cloud --list-locations <provider> to obtain valid regions
#
location: ap-southeast-1
availability_zone: ap-southeast-1b
# Configure which user to use to run the deploy script. This setting is
# dependent upon the AMI that is used to deploy. It is usually safer to
# configure this individually in a profile, than globally. Typical users
# are:
#
# Amazon Linux -> ec2-user
# RHEL -> ec2-user
# CentOS -> ec2-user
# Ubuntu -> ubuntu
#
ssh_username: ec2-user
# Optionally add an IAM profile
iam_profile: 'arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfile'
provider: ec2
my-ec2-southeast-private-ips:
# Set up the location of the salt master
#
minion:
master: saltmaster.example.com
# Specify whether to use public or private IP for deploy script.
#
# Valid options are:
# private_ips - The salt-master is also hosted with EC2
# public_ips - The salt-master is hosted outside of EC2
#
ssh_interface: private_ips
# Optionally configure the Windows credential validation number of
# retries and delay between retries. This defaults to 10 retries
# with a one second delay between retries
win_deploy_auth_retries: 10
win_deploy_auth_retry_delay: 1
# Set the EC2 access credentials (see below)
#
id: 'use-instance-role-credentials'
key: 'use-instance-role-credentials'
# Make sure this key is owned by root with permissions 0400.
#
private_key: /etc/salt/my_test_key.pem
keyname: my_test_key
# This one should NOT be specified if VPC was not configured in AWS to be
# the default. It might cause an error message saying that network
# interfaces and instance-level security groups may not be specified
# on the same request.
#
securitygroup: default
# Optionally configure default region
#
location: ap-southeast-1
availability_zone: ap-southeast-1b
# Configure which user to use to run the deploy script. This setting is
# dependent upon the AMI that is used to deploy. It is usually safer to
# configure this individually in a profile, than globally. Typical users
# are:
#
# Amazon Linux -> ec2-user
# RHEL -> ec2-user
# CentOS -> ec2-user
# Ubuntu -> ubuntu
#
ssh_username: ec2-user
# Optionally add an IAM profile
iam_profile: 'my other profile name'
provider: ec2
The id and key settings may be found in the Security Credentials area of the AWS Account page:
https://portal.aws.amazon.com/gp/aws/securityCredentials
Both are located in the Access Credentials area of the page, under the Access Keys tab. The id setting is labeled Access Key ID, and the key setting is labeled Secret Access Key.
Note: if either id or key is set to 'use-instance-role-credentials', it is assumed that Salt is running on an AWS instance, and the instance role credentials will be retrieved and used. Since both the id and key are required parameters for the AWS ec2 provider, it is recommended to set both to 'use-instance-role-credentials' for this functionality.
A "static" and "permanent" Access Key ID and Secret Key can be specified, but this is not recommended. Instance role keys are rotated on a regular basis, and are the recommended method of specifying AWS credentials.
For Windows instances, it may take longer than normal for the instance to be ready. In these circumstances, the provider configuration can be configured with a win_deploy_auth_retries and/or a win_deploy_auth_retry_delay setting, which default to 10 retries and a one second delay between retries. These retries and timeouts relate to validating the Administrator password once AWS provides the credentials via the AWS API.
In order to create an instance with Salt installed and configured, a key pair will need to be created. This can be done in the EC2 Management Console, in the Key Pairs area. These key pairs are unique to a specific region. Keys in the us-east-1 region can be configured at:
https://console.aws.amazon.com/ec2/home?region=us-east-1#s=KeyPairs
Keys in the us-west-1 region can be configured at
https://console.aws.amazon.com/ec2/home?region=us-west-1#s=KeyPairs
...and so on. When creating a key pair, the browser will prompt to download a pem file. This file must be placed in a directory accessible by Salt Cloud, with permissions set to either 0400 or 0600.
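For example, assuming the key was saved to the path used in the provider examples above:
chmod 0400 /etc/salt/my_test_key.pem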
An instance on EC2 needs to belong to a security group. Like key pairs, these are unique to a specific region. These are also configured in the EC2 Management Console. Security groups for the us-east-1 region can be configured at:
https://console.aws.amazon.com/ec2/home?region=us-east-1#s=SecurityGroups
...and so on.
A security group defines firewall rules which an instance will adhere to. If the salt-master is configured outside of EC2, the security group must open the SSH port (usually port 22) in order for Salt Cloud to install Salt.
Amazon EC2 instances support the concept of an instance profile, which is a logical container for the IAM role. At the time that you launch an EC2 instance, you can associate the instance with an instance profile, which in turn corresponds to the IAM role. Any software that runs on the EC2 instance is able to access AWS using the permissions associated with the IAM role.
Scaffolding the profile is a 2-step configuration process:
Configure an IAM Role from the IAM Management Console.
Attach this role to a new profile. It can be done with the AWS CLI:
> aws iam create-instance-profile --instance-profile-name PROFILE_NAME
> aws iam add-role-to-instance-profile --instance-profile-name PROFILE_NAME --role-name ROLE_NAME
Once the profile is created, you can use the PROFILE_NAME to configure your cloud profiles.
Set up an initial profile at /etc/salt/cloud.profiles
:
base_ec2_private:
provider: my-ec2-southeast-private-ips
image: ami-e565ba8c
size: t1.micro
ssh_username: ec2-user
base_ec2_public:
provider: my-ec2-southeast-public-ips
image: ami-e565ba8c
size: t1.micro
ssh_username: ec2-user
base_ec2_db:
provider: my-ec2-southeast-public-ips
image: ami-e565ba8c
size: m1.xlarge
ssh_username: ec2-user
volumes:
- { size: 10, device: /dev/sdf }
- { size: 10, device: /dev/sdg, type: io1, iops: 1000 }
- { size: 10, device: /dev/sdh, type: io1, iops: 1000 }
# optionally add tags to profile:
tag: {'Environment': 'production', 'Role': 'database'}
# force grains to sync after install
sync_after_install: grains
base_ec2_vpc:
provider: my-ec2-southeast-public-ips
image: ami-a73264ce
size: m1.xlarge
ssh_username: ec2-user
script: /etc/salt/cloud.deploy.d/user_data.sh
network_interfaces:
- DeviceIndex: 0
PrivateIpAddresses:
- Primary: True
#auto assign public ip (not EIP)
AssociatePublicIpAddress: True
SubnetId: subnet-813d4bbf
SecurityGroupId:
- sg-750af413
volumes:
- { size: 10, device: /dev/sdf }
- { size: 10, device: /dev/sdg, type: io1, iops: 1000 }
- { size: 10, device: /dev/sdh, type: io1, iops: 1000 }
del_root_vol_on_destroy: True
del_all_vol_on_destroy: True
tag: {'Environment': 'production', 'Role': 'database'}
sync_after_install: grains
The profile can now be realized with a salt command:
# salt-cloud -p base_ec2 ami.example.com
# salt-cloud -p base_ec2_public ami.example.com
# salt-cloud -p base_ec2_private ami.example.com
This will create an instance named ami.example.com in EC2. The minion that is installed on this instance will have an id of ami.example.com. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
# salt 'ami.example.com' test.ping
The following settings are always required for EC2:
# Set the EC2 login data
my-ec2-config:
id: HJGRYCILJLKJYG
key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
keyname: test
securitygroup: quick-start
private_key: /root/test.pem
provider: ec2
EC2 allows a location to be set for servers to be deployed in. Availability zones exist inside regions, and may be added to increase specificity.
my-ec2-config:
# Optionally configure default region
location: ap-southeast-1
availability_zone: ap-southeast-1b
EC2 instances can have a public or private IP, or both. When an instance is deployed, Salt Cloud needs to log into it via SSH to run the deploy script. By default, the public IP will be used for this. If the salt-cloud command is run from another EC2 instance, the private IP should be used.
my-ec2-config:
# Specify whether to use public or private IP for deploy script
# private_ips or public_ips
ssh_interface: public_ips
Many EC2 instances do not allow remote access to the root user by default. Instead, another user must be used to run the deploy script using sudo. Some common usernames include ec2-user (for Amazon Linux), ubuntu (for Ubuntu instances), admin (official Debian) and bitnami (for images provided by Bitnami).
my-ec2-config:
# Configure which user to use to run the deploy script
ssh_username: ec2-user
Multiple usernames can be provided, in which case Salt Cloud will attempt to guess the correct username. This is mostly useful in the main configuration file:
my-ec2-config:
ssh_username:
- ec2-user
- ubuntu
- admin
- bitnami
Multiple security groups can also be specified in the same fashion:
my-ec2-config:
securitygroup:
- default
- extra
Your instances may optionally make use of EC2 Spot Instances. The following example will request that spot instances be used and your maximum bid will be $0.10. Keep in mind that different spot prices may be needed based on the current value of the various EC2 instance sizes. You can check current and past spot instance pricing via the EC2 API or AWS Console.
my-ec2-config:
spot_config:
spot_price: 0.10
By default, the spot instance type is set to 'one-time', meaning it will be launched and, if it's ever terminated for whatever reason, it will not be recreated. If you would like your spot instances to be relaunched after a termination (by you or AWS), set the type to 'persistent', as shown in the sketch below.
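A sketch of a persistent spot request (this assumes type sits alongside spot_price under spot_config, as described above):
my-ec2-config:
  spot_config:
    spot_price: 0.10
    type: persistent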
NOTE: Spot instances are a great way to save a bit of money, but you do run the risk of losing your spot instances if the current price for the instance size goes above your maximum bid.
The following parameters may be set in the cloud configuration file to control various aspects of the spot instance launching:
- wait_for_spot_timeout: seconds to wait before giving up on spot instance launch (default=600)
- wait_for_spot_interval: seconds to wait in between polling requests to determine if a spot instance is available (default=30)
- wait_for_spot_interval_multiplier: a multiplier to add to the interval in between requests, which is useful if AWS is throttling your requests (default=1)
- wait_for_spot_max_failures: maximum number of failures before giving up on launching your spot instance (default=10)
If you find that you're being throttled by AWS while polling for spot instances, you can set the following in your core cloud configuration file to double the polling interval after each request to AWS:
wait_for_spot_interval: 1
wait_for_spot_interval_multiplier: 2
See the AWS Spot Instances documentation for more information.
Block device mappings enable you to specify additional EBS volumes or instance store volumes when the instance is launched. This setting is also available on each cloud profile. Note that the number of instance stores varies by instance type. If more mappings are provided than are supported by the instance type, mappings will be created in the order provided and additional mappings will be ignored. Consult the AWS documentation for a listing of the available instance stores, and device names.
my-ec2-config:
block_device_mappings:
- DeviceName: /dev/sdb
VirtualName: ephemeral0
- DeviceName: /dev/sdc
VirtualName: ephemeral1
You can also use block device mappings to change the size of the root device at the provisioning time. For example, assuming the root device is '/dev/sda', you can set its size to 100G by using the following configuration.
my-ec2-config:
block_device_mappings:
- DeviceName: /dev/sda
Ebs.VolumeSize: 100
Ebs.VolumeType: gp2
Ebs.SnapshotId: dummy0
Existing EBS volumes may also be attached (not created) to your instances, or you can create new EBS volumes based on EBS snapshots. To simply attach an existing volume, use the volume_id parameter.
device: /dev/xvdj
volume_id: vol-12345abcd
Or, to create a volume from an EBS snapshot, use the snapshot parameter.
device: /dev/xvdj
snapshot: snap-abcd12345
Note that volume_id will take precedence over the snapshot parameter.
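For context, these parameters are entries in a profile's volumes list, alongside the size and type entries shown in the profiles earlier. A sketch with placeholder device names and IDs:
my-ec2-profile-with-volumes:
  provider: my-ec2-southeast-public-ips
  image: ami-e565ba8c
  size: m1.xlarge
  volumes:
    - { device: /dev/xvdj, volume_id: vol-12345abcd }
    - { device: /dev/xvdk, snapshot: snap-abcd12345 }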
Tags can be set once an instance has been launched.
my-ec2-config:
tag:
tag0: value
tag1: value
One of the features of EC2 is the ability to tag resources. In fact, under the hood, the names given to EC2 instances by salt-cloud are actually just stored as a tag called Name. Salt Cloud has the ability to manage these tags:
salt-cloud -a get_tags mymachine
salt-cloud -a set_tags mymachine tag1=somestuff tag2='Other stuff'
salt-cloud -a del_tags mymachine tag1,tag2,tag3
It is possible to manage tags on any resource in EC2 with a Resource ID, not just instances:
salt-cloud -f get_tags my_ec2 resource_id=af5467ba
salt-cloud -f set_tags my_ec2 resource_id=af5467ba tag1=somestuff
salt-cloud -f del_tags my_ec2 resource_id=af5467ba tag1,tag2,tag3
As mentioned above, EC2 instances are named via a tag. However, renaming an instance by renaming its tag will cause the salt keys to mismatch. A rename function exists which renames both the instance, and the salt keys.
salt-cloud -a rename mymachine newname=yourmachine
EC2 allows the user to enable and disable termination protection on a specific instance. An instance with this protection enabled cannot be destroyed.
salt-cloud -a enable_term_protect mymachine
salt-cloud -a disable_term_protect mymachine
When instances on EC2 are destroyed, there will be a lag between the time that the action is sent and the time that Amazon cleans up the instance. During this time, the instance still retains a Name tag, which will cause a collision if the creation of an instance with the same name is attempted before the cleanup occurs. In order to avoid such collisions, Salt Cloud can be configured to rename instances when they are destroyed. The new name will look something like:
myinstance-DEL20f5b8ad4eb64ed88f2c428df80a1a0c
In order to enable this, add a rename_on_destroy line to the main configuration file:
my-ec2-config:
rename_on_destroy: True
Normally, images can be queried on a cloud provider by passing the --list-images argument to Salt Cloud. This still holds true for EC2:
salt-cloud --list-images my-ec2-config
However, the full list of images on EC2 is extremely large, and querying all of the available images may cause Salt Cloud to behave as if frozen. Therefore, the default behavior of this option may be modified, by adding an owner argument to the provider configuration:
owner: aws-marketplace
The possible values for this setting are amazon, aws-marketplace, self, <AWS account ID> or all. The default setting is amazon. Take note that all and aws-marketplace may cause Salt Cloud to appear as if it is freezing, as it tries to handle the large amount of data.
It is also possible to perform this query using different settings without modifying the configuration files. To do this, call the avail_images function directly:
salt-cloud -f avail_images my-ec2-config owner=aws-marketplace
The following are lists of available AMI images, generally sorted by OS. These lists are on third-party websites and are not managed by Salt Stack in any way. They are provided here as a reference for those who are interested, and contain no warranty (express or implied) from anyone affiliated with Salt Stack. Most of them have never been used, much less tested, by the Salt Stack team.
This is a function that describes an AMI on EC2. This will give insight as to the defaults that will be applied to an instance using a particular AMI.
$ salt-cloud -f show_image ec2 image=ami-fd20ad94
This action is a thin wrapper around --full-query
, which displays details on a
single instance only. In an environment with several machines, this will save a
user from having to sort through all instance data, just to examine a single
instance.
$ salt-cloud -a show_instance myinstance
This argument enables switching of the EbsOptimized setting, which defaults to 'false'. It indicates whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal Amazon EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS-optimized instance.
This setting can be added to the profile or map file for an instance.
If set to True, this setting will enable an instance to be EbsOptimized:
ebs_optimized: True
This can also be set as a cloud provider setting in the EC2 cloud configuration:
my-ec2-config:
ebs_optimized: True
This argument overrides the default DeleteOnTermination setting in the AMI for the EBS root volumes for an instance. Many AMIs contain 'false' as a default, resulting in orphaned volumes in the EC2 account, which may unknowingly be charged to the account. This setting can be added to the profile or map file for an instance.
If set, this setting will apply to the root EBS volume
del_root_vol_on_destroy: True
This can also be set as a cloud provider setting in the EC2 cloud configuration:
my-ec2-config:
del_root_vol_on_destroy: True
This argument overrides the default DeleteOnTermination setting in the AMI for the non-root EBS volumes for an instance. Many AMIs contain 'false' as a default, resulting in orphaned volumes in the EC2 account, which may unknowingly be charged to the account. This setting can be added to the profile or map file for an instance.
If set, this setting will apply to any (non-root) volumes that were created by salt-cloud using the 'volumes' setting.
The volumes will not be deleted under the following conditions:
- If a volume is detached before terminating the instance
- If a volume is created without this setting and attached to the instance
del_all_vols_on_destroy: True
This can also be set as a cloud provider setting in the EC2 cloud configuration:
my-ec2-config:
del_all_vols_on_destroy: True
The setting for this may be changed on all volumes of an existing instance using one of the following commands:
salt-cloud -a delvol_on_destroy myinstance
salt-cloud -a keepvol_on_destroy myinstance
salt-cloud -a show_delvol_on_destroy myinstance
The setting for this may be changed on a volume on an existing instance using one of the following commands:
salt-cloud -a delvol_on_destroy myinstance device=/dev/sda1
salt-cloud -a delvol_on_destroy myinstance volume_id=vol-1a2b3c4d
salt-cloud -a keepvol_on_destroy myinstance device=/dev/sda1
salt-cloud -a keepvol_on_destroy myinstance volume_id=vol-1a2b3c4d
salt-cloud -a show_delvol_on_destroy myinstance device=/dev/sda1
salt-cloud -a show_delvol_on_destroy myinstance volume_id=vol-1a2b3c4d
EC2 allows the user to enable and disable termination protection on a specific instance. An instance with this protection enabled cannot be destroyed. The EC2 driver adds a show_term_protect action to the regular EC2 functionality.
salt-cloud -a show_term_protect mymachine
salt-cloud -a enable_term_protect mymachine
salt-cloud -a disable_term_protect mymachine
Normally, EC2 endpoints are built using the region and the service_url. The resulting endpoint would follow this pattern:
ec2.<region>.<service_url>
This results in an endpoint that looks like:
ec2.us-east-1.amazonaws.com
There are other projects that support an EC2 compatibility layer, which this scheme does not account for. This can be overridden by specifying the endpoint directly in the main cloud configuration file:
my-ec2-config:
endpoint: myendpoint.example.com:1138/services/Cloud
The EC2 driver has several functions and actions for management of EBS volumes.
A volume may be created, independent of an instance. A zone must be specified. A size (in GiB) or a snapshot may be specified. If neither is given, a default size of 10 GiB will be used. If a snapshot is given, the size of the snapshot will be used.
salt-cloud -f create_volume ec2 zone=us-east-1b
salt-cloud -f create_volume ec2 zone=us-east-1b size=10
salt-cloud -f create_volume ec2 zone=us-east-1b snapshot=snap12345678
salt-cloud -f create_volume ec2 size=10 type=standard
salt-cloud -f create_volume ec2 size=10 type=io1 iops=1000
Unattached volumes may be attached to an instance. The following values are required: name or instance_id, volume_id, and device.
salt-cloud -a attach_volume myinstance volume_id=vol-12345 device=/dev/sdb1
The details about an existing volume may be retrieved.
salt-cloud -a show_volume myinstance volume_id=vol-12345
salt-cloud -f show_volume ec2 volume_id=vol-12345
An existing volume may be detached from an instance.
salt-cloud -a detach_volume myinstance volume_id=vol-12345
A volume that is not attached to an instance may be deleted.
salt-cloud -f delete_volume ec2 volume_id=vol-12345
The EC2 driver has the ability to manage key pairs.
A key pair is required in order to create an instance. When creating a key pair with this function, the return data will contain a copy of the private key. This private key is not stored by Amazon, will not be obtainable past this point, and should be stored immediately.
salt-cloud -f create_keypair ec2 keyname=mykeypair
This function will show the details related to a key pair, not including the private key itself (which is not stored by Amazon).
salt-cloud -f show_keypair ec2 keyname=mykeypair
This function removes the key pair from Amazon.
salt-cloud -f delete_keypair ec2 keyname=mykeypair
In the Amazon web interface, identify the ID of the subnet into which your image should be created. Then, edit your cloud.profiles file like so:
profile-id:
provider: provider-name
subnetid: subnet-XXXXXXXX
image: ami-XXXXXXXX
size: m1.medium
ssh_username: ubuntu
securitygroupid:
- sg-XXXXXXXX
New in version 2014.7.0.
Launching into a VPC allows you to specify more complex configurations for the network interfaces of your virtual machines, for example:
profile-id:
provider: provider-name
image: ami-XXXXXXXX
size: m1.medium
ssh_username: ubuntu
# Do not include either 'subnetid' or 'securitygroupid' here if you are
# going to manually specify interface configuration
#
network_interfaces:
- DeviceIndex: 0
SubnetId: subnet-XXXXXXXX
SecurityGroupId:
- sg-XXXXXXXX
# Uncomment this line if you would like to set an explicit private
# IP address for the ec2 instance
#
# PrivateIpAddress: 192.168.1.66
# Uncomment this to associate an existing Elastic IP Address with
# this network interface:
#
# associate_eip: eni-XXXXXXXX
# You can allocate more than one IP address to an interface. Use the
# 'ip addr list' command to see them.
#
# SecondaryPrivateIpAddressCount: 2
# Uncomment this to allocate a new Elastic IP Address to this
# interface (will be associated with the primary private ip address
# of the interface)
#
# allocate_new_eip: True
# Uncomment this instead to allocate a new Elastic IP Address to
# both the primary private ip address and each of the secondary ones
#
# allocate_new_eips: True
Note that it is an error to assign a 'subnetid' or 'securitygroupid' to a profile where the interfaces are manually configured like this. These are both really properties of each network interface, not of the machine itself.
GoGrid is a public cloud provider supporting Linux and Windows.
To use Salt Cloud with GoGrid log into the GoGrid web interface and create an API key. Do this by clicking on "My Account" and then going to the API Keys tab.
The apikey and the sharedsecret configuration parameters need to be set in the configuration file to enable interfacing with GoGrid:
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.
my-gogrid-config:
provider: gogrid
apikey: asdff7896asdh789
sharedsecret: saltybacon
Set up an initial profile at /etc/salt/cloud.profiles
or in the
/etc/salt/cloud.profiles.d/
directory:
gogrid_512:
provider: my-gogrid-config
size: 512MB
image: CentOS 6.2 (64-bit) w/ None
Sizes can be obtained using the --list-sizes
option for the salt-cloud
command:
# salt-cloud --list-sizes my-gogrid-config
my-gogrid-config:
----------
gogrid:
----------
512MB:
----------
bandwidth:
None
disk:
30
driver:
get_uuid:
id:
512MB
name:
512MB
price:
0.095
ram:
512
uuid:
bde1e4d7c3a643536e42a35142c7caac34b060e9
...SNIP...
Images can be obtained using the --list-images
option for the salt-cloud
command:
# salt-cloud --list-images my-gogrid-config
my-gogrid-config:
----------
gogrid:
----------
CentOS 6.4 (64-bit) w/ None:
----------
driver:
extra:
----------
get_uuid:
id:
18094
name:
CentOS 6.4 (64-bit) w/ None
uuid:
bfd4055389919e01aa6261828a96cf54c8dcc2c4
...SNIP...
Google Compute Engine (GCE) is Google's infrastructure-as-a-service offering that lets you run your large-scale computing workloads on virtual machines. This document covers how to use Salt Cloud to provision and manage your virtual machines hosted within Google's infrastructure.
You can find out more about GCE and other Google Cloud Platform services at https://cloud.google.com.
Sign up for Google Cloud Platform
Go to https://cloud.google.com and use your Google account to sign up for Google Cloud Platform and complete the guided instructions.
Create a Project
Next, go to the console at https://cloud.google.com/console and create a new Project. Make sure to select your new Project if you are not automatically directed to the Project.
Projects are a way of grouping together related users, services, and billing. You may opt to create multiple Projects and the remaining instructions will need to be completed for each Project if you wish to use GCE and Salt Cloud to manage your virtual machines.
Enable the Google Compute Engine service
In your Project, either just click Compute Engine to the left, or go to the APIs & auth section and APIs link and enable the Google Compute Engine service.
Create a Service Account
To set up authorization, navigate to APIs & auth section and then the Credentials link and click the CREATE NEW CLIENT ID button. Select Service Account and click the Create Client ID button. This will automatically download a .json file, which should be ignored. Look for a new Service Account section in the page and record the generated email address for the matching key/fingerprint. The email address will be used in the service_account_email_address of the /etc/salt/cloud file.
Key Format
In the new Service Account section, click Generate new P12 key, which will automatically download a .p12 private key file. The .p12 private key needs to be converted to a format compatible with libcloud. This new Google-generated private key was encrypted using notasecret as a passphrase. Use the following command to convert the key, and record the location of the converted private key for use in the service_account_private_key of the /etc/salt/cloud file:
openssl pkcs12 -in ORIG.p12 -passin pass:notasecret \
-nodes -nocerts | openssl rsa -out NEW.pem
Set up the cloud config at /etc/salt/cloud
:
# Note: This example is for /etc/salt/cloud
providers:
gce-config:
# Set up the Project name and Service Account authorization
#
project: "your-project-id"
service_account_email_address: "123-a5gt@developer.gserviceaccount.com"
service_account_private_key: "/path/to/your/NEW.pem"
# Set up the location of the salt master
#
minion:
master: saltmaster.example.com
# Set up grains information, which will be common for all nodes
# using this provider
grains:
node_type: broker
release: 1.0.1
provider: gce
Note
The value provided for project must not contain underscores or spaces and is labeled as "Project ID" on the Google Developers Console.
Set up an initial profile at /etc/salt/cloud.profiles
:
all_settings:
image: centos-6
size: n1-standard-1
location: europe-west1-b
network: default
tags: '["one", "two", "three"]'
metadata: '{"one": "1", "2": "two"}'
use_persistent_disk: True
delete_boot_pd: False
deploy: True
make_master: False
provider: gce-config
The profile can now be realized with a salt command:
salt-cloud -p all_settings gce-instance
This will create a Salt minion instance named gce-instance in GCE. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
salt 'gce-instance' test.ping
Consult the sample profile below for more information about GCE specific settings. Some of them are mandatory and are properly labeled below but typically also include a hard-coded default.
all_settings:
# Image is used to define what Operating System image should be used
# for the instance. Examples are Debian 7 (wheezy) and CentOS 6.
#
# MANDATORY
#
image: centos-6
# A 'size', in GCE terms, refers to the instance's 'machine type'. See
# the on-line documentation for a complete list of GCE machine types.
#
# MANDATORY
#
size: n1-standard-1
# A 'location', in GCE terms, refers to the instance's 'zone'. GCE
# has the notion of both Regions (e.g. us-central1, europe-west1, etc)
# and Zones (e.g. us-central1-a, us-central1-b, etc).
#
# MANDATORY
#
location: europe-west1-b
# Use this setting to define the network resource for the instance.
# All GCE projects contain a network named 'default' but it's possible
# to use this setting to create instances belonging to a different
# network resource.
#
network: default
# GCE supports instance/network tags and this setting allows you to
# set custom tags. It should be a list of strings and must be
# parse-able by the python ast.literal_eval() function to convert it
# to a python list.
#
tags: '["one", "two", "three"]'
# GCE supports instance metadata and this setting allows you to
# set custom metadata. It should be a hash of key/value strings and
# parse-able by the python ast.literal_eval() function to convert it
# to a python dictionary.
#
metadata: '{"one": "1", "2": "two"}'
# Use this setting to ensure that when new instances are created,
# they will use a persistent disk to preserve data between instance
# terminations and re-creations.
#
use_persistent_disk: True
# In the event that you wish the boot persistent disk to be permanently
# deleted when you destroy an instance, set delete_boot_pd to True.
#
delete_boot_pd: False
# Specify whether to use public or private IP for deploy script.
# Valid options are:
# private_ips - The salt-master is also hosted with GCE
# public_ips - The salt-master is hosted outside of GCE
ssh_interface: public_ips
# Per instance setting: Use a named fixed IP address for this host.
# Valid options are:
# ephemeral - The host will use a GCE ephemeral IP
# None - No external IP will be configured on this host.
# Optionally, pass the name of a GCE address to use a fixed IP address.
# If the address does not already exist, it will be created.
external_ip: "ephemeral"
GCE instances do not allow remote access to the root user by default.
Instead, another user must be used to run the deploy script using sudo.
Append something like this to /etc/salt/cloud.profiles
:
all_settings:
...
# SSH to GCE instances as gceuser
ssh_username: gceuser
# Use the local private SSH key file located here
ssh_keyfile: /etc/cloud/google_compute_engine
If you have not already used this SSH key to log in to instances in this GCE project, you will also need to add the public key to your project's metadata at https://cloud.google.com/console. You could also add it via the metadata setting:
all_settings:
...
metadata: '{"one": "1", "2": "two",
"sshKeys": "gceuser:ssh-rsa <Your SSH Public Key> gceuser@host"}'
This action is a thin wrapper around --full-query
, which displays details on a
single instance only. In an environment with several machines, this will save a
user from having to sort through all instance data, just to examine a single
instance.
salt-cloud -a show_instance myinstance
As noted in the provider configuration, it's possible to force the boot persistent disk to be deleted when you destroy the instance. The way that this has been implemented is to use the instance metadata to record the cloud profile used when creating the instance. When destroy is called, if the instance contains a salt-cloud-profile key, its value is used to reference the matching profile to determine if delete_boot_pd is set to True.
Be aware that any GCE instances created with salt cloud will contain this
custom salt-cloud-profile
metadata entry.
It's also possible to list several GCE resources similar to what can be done with other providers. The following commands can be used to list GCE zones (locations), machine types (sizes), and images.
salt-cloud --list-locations gce
salt-cloud --list-sizes gce
salt-cloud --list-images gce
The Compute Engine provider provides functions via salt-cloud to manage your Persistent Disks. You can create and destroy disks as well as attach and detach them from running instances.
When creating a disk, you can create an empty disk and specify its size (in GB), or specify either an 'image' or 'snapshot'.
salt-cloud -f create_disk gce disk_name=pd location=us-central1-b size=200
Deleting a disk only requires the name of the disk to delete:
salt-cloud -f delete_disk gce disk_name=old-backup
Attaching a disk to an existing instance is really an 'action' and requires both an instance name and disk name. It's possible to use this action to create bootable persistent disks if necessary. Compute Engine also supports attaching a persistent disk in READ_ONLY mode to multiple instances at the same time (but then it cannot be attached in READ_WRITE to any instance).
salt-cloud -a attach_disk myinstance disk_name=pd mode=READ_WRITE boot=yes
Detaching a disk is also an action against an instance and only requires the name of the disk. Note that this does not safely sync and unmount the disk from the instance. To ensure no data loss, you must first make sure the disk is unmounted from the instance.
salt-cloud -a detach_disk myinstance disk_name=pd
It's also possible to look up the details for an existing disk with either a function or an action.
salt-cloud -a show_disk myinstance disk_name=pd
salt-cloud -f show_disk gce disk_name=pd
You can take a snapshot of an existing disk's content. The snapshot can then in turn be used to create other persistent disks. Note that to prevent data corruption, it is strongly suggested that you unmount the disk prior to taking a snapshot. You must name the snapshot and provide the name of the disk.
salt-cloud -f create_snapshot gce name=backup-20140226 disk_name=pd
You can delete a snapshot when it's no longer needed by specifying the name of the snapshot.
salt-cloud -f delete_snapshot gce name=backup-20140226
Use this function to look up information about the snapshot.
salt-cloud -f show_snapshot gce name=backup-20140226
Compute Engine supports multiple private networks per project. Instances within a private network can easily communicate with each other via an internal DNS service that resolves instance names. Instances within a private network can also communicate with each other directly without needing special routing or firewall rules, even if they span different regions/zones.
Networks also support custom firewall rules. By default, traffic between instances on the same private network is open to all ports and protocols. Inbound SSH traffic (port 22) is also allowed but all other inbound traffic is blocked.
New networks require a name and CIDR range. New instances can be created and added to this network by setting the network name during create. It is not possible to add existing instances to, or remove them from, a network.
salt-cloud -f create_network gce name=mynet cidr=10.10.10.0/24
Destroy a network by specifying the name. Make sure that there are no instances associated with the network prior to deleting it or you'll have a bad day.
salt-cloud -f delete_network gce name=mynet
Specify the network name to view information about the network.
salt-cloud -f show_network gce name=mynet
Create a new named static IP address in a region.
salt-cloud -f create_address gce name=my-fixed-ip region=us-central1
Delete an existing named fixed IP address.
salt-cloud -f delete_address gce name=my-fixed-ip region=us-central1
View details on a named address.
salt-cloud -f show_address gce name=my-fixed-ip region=us-central1
You'll need to create custom firewall rules if you want to allow other traffic than what is described above. For instance, if you run a web service on your instances, you'll need to explicitly allow HTTP and/or SSL traffic. The firewall rule must have a name and it will use the 'default' network unless otherwise specified with a 'network' attribute. Firewalls also support instance tags for source/destination.
salt-cloud -f create_fwrule gce name=web allow=tcp:80,tcp:443,icmp
Deleting a firewall rule will prevent any previously allowed traffic for the named firewall rule.
salt-cloud -f delete_fwrule gce name=web
Use this function to review an existing firewall rule's information.
salt-cloud -f show_fwrule gce name=web
Compute Engine possesses a load-balancer feature for splitting traffic across multiple instances. Please reference the documentation for a more complete description.
The load-balancer functionality is slightly different than that described in Google's documentation. The concept of TargetPool and ForwardingRule are consolidated in salt-cloud/libcloud. HTTP Health Checks are optional.
HTTP Health Checks can be used as a means to toggle load-balancing across instance members, or to detect if an HTTP site is functioning. A common use-case is to set up a health check URL and, if you want to toggle traffic on/off to an instance, temporarily have it return a non-200 response. A non-200 response to the load-balancer's health check will keep the LB from sending any new traffic to the "down" instance. Once the instance's health check URL begins returning 200-responses, the LB will again start to send traffic to it. Review Compute Engine's documentation for allowable parameters. You can use the following salt-cloud functions to manage your HTTP health checks.
salt-cloud -f create_hc gce name=myhc path=/ port=80
salt-cloud -f delete_hc gce name=myhc
salt-cloud -f show_hc gce name=myhc
When creating a new load-balancer, it requires a name, region, port range, and list of members. There are other optional parameters for protocol, and list of health checks. Deleting or showing details about the LB only requires the name.
salt-cloud -f create_lb gce name=lb region=... ports=80 members=w1,w2,w3
salt-cloud -f delete_lb gce name=lb
salt-cloud -f show_lb gce name=lb
You can also create a load balancer using a named fixed IP address by specifying the name of the address. If the address does not exist yet, it will be created.
salt-cloud -f create_lb gce name=my-lb region=us-central1 ports=234 members=s1,s2,s3 address=my-lb-ip
It is possible to attach or detach an instance from an existing load-balancer. Both the instance and load-balancer must exist before using these functions.
salt-cloud -f attach_lb gce name=lb member=w4
salt-cloud -f detach_lb gce name=lb member=oops
HP Cloud is a major public cloud platform and uses the libcloud openstack driver. The current version of OpenStack that HP Cloud uses is Havana. When an instance is booted, it must have a floating IP added to it in order to connect to it, and further below you will see an example that adds context to this statement.
To use the openstack driver for HP Cloud, set up the cloud provider configuration file as in the example shown below:
/etc/salt/cloud.providers.d/hpcloud.conf
:
hpcloud-config:
# Set the location of the salt-master
#
minion:
master: saltmaster.example.com
# Configure HP Cloud using the OpenStack plugin
#
identity_url: https://region-b.geo-1.identity.hpcloudsvc.com:35357/v2.0/tokens
compute_name: Compute
protocol: ipv4
# Set the compute region:
#
compute_region: region-b.geo-1
# Configure HP Cloud authentication credentials
#
user: myname
tenant: myname-project1
password: xxxxxxxxx
# keys to allow connection to the instance launched
#
ssh_key_name: yourkey
ssh_key_file: /path/to/key/yourkey.priv
provider: openstack
The example that follows uses the openstack driver.
Originally, HP Cloud, in its OpenStack Essex version (1.0), had 3 availability zones in one region, US West (region-a.geo-1), each of which behaved as a region.
This has since changed, and the current OpenStack Havana version of HP Cloud (1.1) has simplified this; there are now two regions to choose from:
region-a.geo-1 -> US West
region-b.geo-1 -> US East
The user is the same user as is used to log into the HP Cloud management UI. The tenant can be found in the upper left under "Project/Region/Scope". It is often named the same as user albeit with a -project1 appended. The password is of course what you created your account with. The management UI also has other information such as being able to select US East or US West.
The profile shown below is a known working profile for an Ubuntu instance. The profile configuration file is stored in the following location:
/etc/salt/cloud.profiles.d/hp_ae1_ubuntu.conf
:
hp_ae1_ubuntu:
provider: hp_ae1
image: 9302692b-b787-4b52-a3a6-daebb79cb498
ignore_cidr: 10.0.0.1/24
networks:
- floating: Ext-Net
size: standard.small
ssh_key_file: /root/keys/test.key
ssh_key_name: test
ssh_username: ubuntu
Some important things about the example above:
- The image parameter can use either the image name or the image ID, which you can obtain by running the following (in this case, US East):
# salt-cloud --list-images hp_ae1
- ignore_cidr specifies a range of addresses to ignore when trying to connect to the instance. In this case, it's the range of IP addresses used for the private IP of the instance.
- networks is very important to include. In previous versions of Salt Cloud, this is what made it possible for salt-cloud to attach a floating IP to the instance in order to connect to the instance and set up the minion. The current version of salt-cloud doesn't require it, though having it does no harm either. Newer versions of salt-cloud will use this, and without it, will attempt to find a list of floating IP addresses to use regardless.
- ssh_key_file and ssh_key_name are the keys that will make it possible to connect to the instance to set up the minion.
- The ssh_username parameter, in this case ubuntu to match the image used, will make it possible not only to log in but to install the minion.
To instantiate a machine based on this profile (example):
# salt-cloud -p hp_ae1_ubuntu ubuntu_instance_1
After several minutes, this will create an instance named ubuntu_instance_1 running in HP Cloud in the US East region, set up the minion, and then return information about the instance once completed.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
# salt ubuntu_instance_1 test.ping
Additionally, the instance can be accessed via SSH using the floating IP assigned to it:
# ssh ubuntu@<floating ip>
Alternatively, in the cloud profile, using the private IP to log into the instance to set up the minion is another option, particularly if salt-cloud is running within the cloud on an instance that is on the same network as all the other instances (minions).
The example below is a modified version of the previous example. Note the use of ssh_interface
:
hp_ae1_ubuntu:
provider: hp_ae1
image: 9302692b-b787-4b52-a3a6-daebb79cb498
size: standard.small
ssh_key_file: /root/keys/test.key
ssh_key_name: test
ssh_username: ubuntu
ssh_interface: private_ips
With this setup, salt-cloud will use the private IP address to SSH into the instance and set up the salt-minion.
Joyent is a public cloud provider supporting SmartOS, Linux, FreeBSD, and Windows.
This driver requires the Python requests
library to be installed.
The Joyent cloud requires three configuration parameters: the user name and password that are used to log into the Joyent system, and the location of the private SSH key associated with the Joyent account. The SSH key is needed to send the provisioning commands up to the freshly created virtual machine.
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.
my-joyent-config:
provider: joyent
user: fred
password: saltybacon
private_key: /root/mykey.pem
keyname: mykey
Set up an initial profile at /etc/salt/cloud.profiles
or in the
/etc/salt/cloud.profiles.d/
directory:
joyent_512:
provider: my-joyent-config
size: Extra Small 512 MB
image: Arch Linux 2013.06
Sizes can be obtained using the --list-sizes
option for the salt-cloud
command:
# salt-cloud --list-sizes my-joyent-config
my-joyent-config:
----------
joyent:
----------
Extra Small 512 MB:
----------
default:
false
disk:
15360
id:
Extra Small 512 MB
memory:
512
name:
Extra Small 512 MB
swap:
1024
vcpus:
1
...SNIP...
Images can be obtained using the --list-images
option for the salt-cloud
command:
# salt-cloud --list-images my-joyent-config
my-joyent-config:
----------
joyent:
----------
base:
----------
description:
A 32-bit SmartOS image with just essential packages
installed. Ideal for users who are comfortable with setting
up their own environment and tools.
disabled:
False
files:
----------
- compression:
bzip2
- sha1:
40cdc6457c237cf6306103c74b5f45f5bf2d9bbe
- size:
82492182
name:
base
os:
smartos
owner:
352971aa-31ba-496c-9ade-a379feaecd52
public:
True
...SNIP...
This driver can also be used with the Joyent SmartDataCenter project. More details can be found at:
Using SDC requires that an api_host_suffix is set. The default value for this is .api.joyentcloud.com. All characters, including the leading ., should be included:
api_host_suffix: .api.myhostname.com
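For example, a sketch of a provider entry pointing at an SDC installation, reusing the example credentials from earlier (the hostname is a placeholder):
my-sdc-config:
  provider: joyent
  user: fred
  password: saltybacon
  private_key: /root/mykey.pem
  keyname: mykey
  api_host_suffix: .api.myhostname.com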
The following configuration items can be set in either provider or profile configuration files.
When set to True
(the default), attach https://
to any URL that does not
already have http://
or https://
included at the beginning. The best
practice is to leave the protocol out of the URL, and use this setting to manage
it.
When set to True
(the default), the underlying web library will verify the
SSL certificate. This should only be set to False
for debugging.
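The setting names for these two options were lost in the text above; assuming they are use_ssl and verify_ssl (names assumed here, so verify them against your driver's documentation), a provider entry might look like this sketch:
my-joyent-config:
  use_ssl: True      # assumed name: prepend https:// when no protocol is given
  verify_ssl: False  # assumed name: disable certificate checks only for debugging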
The LXC module is designed to install Salt in an LXC container on a controlled and possibly remote minion.
In other words, Salt will connect to a minion, then from that minion:
Provision and configure a container for networking access
Use Salt's LXC modules to deploy salt and re-attach the new minion to the master.
Warning
On versions earlier than 2015.5.2, you need to specify the network bridge explicitly.
Salt's LXC support does use lxc.init via the lxc.cloud_init_interface and seeds the minion via seed.mkconfig.
You can provide those LXC VMs with a profile and a network profile, just as if you were directly using the minion module.
Here is a simple provider configuration:
# Note: This example goes in /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.
devhost10-lxc:
target: devhost10
provider: lxc
Please read LXC Management with Salt before anything else, especially the section on Profiles.
Here are the options to configure your containers:
- target
- Host minion id to install the lxc Container into
- lxc_profile
- Name of the profile or inline options for the LXC vm creation/cloning, please see Container Profiles.
- network_profile
- Name of the profile or inline options for the LXC vm network settings, please see Network Profiles.
- nic_opts
Entirely optional. Per-interface new-style configuration options mapping which will override any profile default option:
eth0: {'mac': '00:16:3e:01:29:40',
       'gateway': None, (default)
       'link': 'br0', (default)
       'netmask': '', (default)
       'ip': '22.1.4.25'}
- password
- password for root and sysadmin users
- dnsservers
- List of DNS servers to use. This is optional.
- minion
- minion configuration (see Minion Configuration in Salt Cloud)
- bootstrap_shell
- shell for the bootstrapping script (default: /bin/sh)
- script
- defaults to salt-bootstrap
- script_args
Arguments which are given to the bootstrap script. The {0} placeholder will be replaced by the path which contains the minion config and key files, e.g.:
script_args="-c {0}"
Using profiles:
# Note: This example would go in /etc/salt/cloud.profiles or any file in the
# /etc/salt/cloud.profiles.d/ directory.
devhost10-lxc:
provider: devhost10-lxc
lxc_profile: foo
network_profile: bar
minion:
master: 10.5.0.1
master_port: 4506
Using inline profiles (e.g. to override the network bridge):
devhost11-lxc:
provider: devhost10-lxc
lxc_profile:
clone_from: foo
network_profile:
eth0:
link: lxcbr0
minion:
master: 10.5.0.1
master_port: 4506
Template instead of a clone:
devhost11-lxc:
provider: devhost10-lxc
lxc_profile:
template: ubuntu
network_profile:
eth0:
link: lxcbr0
minion:
master: 10.5.0.1
master_port: 4506
Static IP:
# Note: This example would go in /etc/salt/cloud.profiles or any file in the
# /etc/salt/cloud.profiles.d/ directory.
devhost10-lxc:
provider: devhost10-lxc
nic_opts:
eth0:
ipv4: 10.0.3.9
minion:
master: 10.5.0.1
master_port: 4506
DHCP:
# Note: This example would go in /etc/salt/cloud.profiles or any file in the
# /etc/salt/cloud.profiles.d/ directory.
devhost10-lxc:
provider: devhost10-lxc
minion:
master: 10.5.0.1
master_port: 4506
Linode is a public cloud provider with a focus on Linux instances.
This driver supports accessing Linode via linode-python or Apache Libcloud. Linode-python is recommended, as it is more full-featured than Libcloud; in particular, using linode-python enables stopping, starting, and cloning machines.
Driver selection is automatic. If linode-python is present it will be used. If it is absent, salt-cloud will fall back to Libcloud. If neither is present, salt-cloud will abort.
NOTE: linode-python 1.1.1 or later is recommended. Earlier versions of linode-python should work but leak sensitive information into the debug logs.
Linode-python can be downloaded from https://github.com/tjfontaine/linode-python or installed via pip.
Linode requires a single API key, but the default root password for new instances also needs to be set:
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.
my-linode-config:
apikey: asldkgfakl;sdfjsjaslfjaklsdjf;askldjfaaklsjdfhasldsadfghdkf
password: F00barbaz
ssh_pubkey: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKHEOLLbeXgaqRQT9NBAopVz366SdYc0KKX33vAnq+2R user@host
ssh_key_file: ~/.ssh/id_ed25519
provider: linode
The password needs to be at least 8 characters and contain lowercase, uppercase, and numbers.
Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:
linode_1024:
provider: my-linode-config
size: Linode 1024
image: Arch Linux 2013.06
Sizes can be obtained using the --list-sizes option for the salt-cloud command:
# salt-cloud --list-sizes my-linode-config
my-linode-config:
----------
linode:
----------
Linode 1024:
----------
bandwidth:
2000
disk:
49152
driver:
get_uuid:
id:
1
name:
Linode 1024
price:
20.0
ram:
1024
uuid:
03e18728ce4629e2ac07c9cbb48afffb8cb499c4
...SNIP...
Images can be obtained using the --list-images option for the salt-cloud command:
# salt-cloud --list-images my-linode-config
my-linode-config:
----------
linode:
----------
Arch Linux 2013.06:
----------
driver:
extra:
----------
64bit:
1
pvops:
1
get_uuid:
id:
112
name:
Arch Linux 2013.06
uuid:
8457f92eaffc92b7666b6734a96ad7abe1a8a6dd
...SNIP...
When salt-cloud accesses Linode via linode-python it can clone machines.
It is safest to clone a stopped machine. To stop a machine, run:
salt-cloud -a stop machine_to_clone
To create a new machine based on another machine, add an entry to your linode cloud profile that looks like this:
li-clone:
provider: linode
clonefrom: machine_to_clone
script_args: -C
Then run salt-cloud as normal, specifying -p li-clone. The profile name can be anything; it doesn't have to be li-clone.
clonefrom is the name of an existing machine in Linode from which to clone. script_args: -C is necessary to avoid re-deploying Salt via salt-bootstrap. -C will just re-deploy keys so the new minion will not have a duplicate key or minion_id on the master.
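For example, assuming the li-clone profile above and an arbitrary name for the new machine:
# salt-cloud -p li-clone my-cloned-machine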
OpenStack is one of the most popular cloud projects. It's an open source project to build public and/or private clouds. You can use Salt Cloud to launch OpenStack instances.
Set up a cloud provider configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/openstack.conf:
my-openstack-config:
# Set the location of the salt-master
#
minion:
master: saltmaster.example.com
# Configure the OpenStack driver
#
identity_url: http://identity.youopenstack.com/v2.0/tokens
compute_name: nova
protocol: ipv4
compute_region: RegionOne
# Configure Openstack authentication credentials
#
user: myname
password: 123456
# tenant is the project name
tenant: myproject
provider: openstack
# skip SSL certificate validation (default false)
insecure: false
One of the best ways to get information about OpenStack is using the novaclient python package (available in pypi as python-novaclient). The client configuration is a set of environment variables that you can get from the Dashboard. Log in and then go to Project -> Access & security -> API Access and download the "OpenStack RC file". Then:
source /path/to/your/rcfile
nova credentials
nova endpoints
In the nova endpoints output you can see the information about compute_region and compute_name.
It depends on the OpenStack cluster that you are using. Please have a look at the previous sections.
The user and password are the same credentials that are used to log into the OpenStack Dashboard.
Here is an example of a profile:
openstack_512:
provider: my-openstack-config
size: m1.tiny
image: cirros-0.3.1-x86_64-uec
ssh_key_file: /tmp/test.pem
ssh_key_name: test
ssh_interface: private_ips
The following list explains some of the important properties: available sizes can be listed with nova flavor-list, and available images with nova image-list. For more information concerning cloud profiles, see here.
If no ssh_key_file is provided, and the server already exists, change_password will use the API to change the root password of the server so that it can be bootstrapped.
change_password: True
Use userdata_file to specify the userdata file to upload for use with cloud-init if available.
userdata_file: /etc/salt/cloud-init/packages.yml
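The contents of that file are passed to cloud-init on the instance. As a minimal sketch, a hypothetical packages.yml that installs a couple of packages on first boot might look like:
#cloud-config
packages:
  - vim
  - git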
Parallels Cloud Server is a product by Parallels that delivers a cloud hosting solution. The PARALLELS module for Salt Cloud enables you to manage instances hosted by a provider using PCS. Further information can be found at:
http://www.parallels.com/products/pcs/
Set up the cloud configuration at /etc/salt/cloud:
# Set up the location of the salt master
#
minion:
master: saltmaster.example.com
# Set the PARALLELS access credentials (see below)
#
PARALLELS.user: myuser
PARALLELS.password: badpass
# Set the access URL for your PARALLELS provider
#
PARALLELS.url: https://api.cloud.xmission.com:4465/paci/v1.0/
Or set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/parallels.conf:
my-parallels-config:
# Set up the location of the salt master
#
minion:
master: saltmaster.example.com
# Set the PARALLELS access credentials (see below)
#
user: myuser
password: badpass
# Set the access URL for your PARALLELS provider
#
url: https://api.cloud.xmission.com:4465/paci/v1.0/
provider: parallels
The user, password, and url will be provided to you by your cloud provider. These are all required in order for the PARALLELS driver to work.
Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/parallels.conf:
parallels-ubuntu:
provider: my-parallels-config
image: ubuntu-12.04-x86_64
The profile can be realized now with a salt command:
# salt-cloud -p parallels-ubuntu myubuntu
This will create an instance named myubuntu on the cloud provider. The minion that is installed on this instance will have an id of myubuntu. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
# salt myubuntu test.ping
The following settings are always required for PARALLELS. Using the old cloud configuration format:
PARALLELS.user: myuser
PARALLELS.password: badpass
PARALLELS.url: https://api.cloud.xmission.com:4465/paci/v1.0/
Using the new cloud provider configuration format:
my-parallels-config:
user: myuser
password: badpass
url: https://api.cloud.xmission.com:4465/paci/v1.0/
provider: parallels
Unlike other cloud providers in Salt Cloud, Parallels does not utilize a size setting. This is because Parallels allows the end-user to specify a more detailed configuration for their instances than is allowed by many other cloud providers. The following options are available to be used in a profile, with their default settings listed.
# Description of the instance. Defaults to the instance name.
desc: <instance_name>
# How many CPU cores, and how fast they are (in MHz)
cpu_number: 1
cpu_power: 1000
# How many megabytes of RAM
ram: 256
# Bandwidth available, in kbps
bandwidth: 100
# How many public IPs will be assigned to this instance
ip_num: 1
# Size of the instance disk (in GiB)
disk_size: 10
# Username and password
ssh_username: root
password: <value from PARALLELS.password>
# The name of the image, from ``salt-cloud --list-images parallels``
image: ubuntu-12.04-x86_64
Proxmox Virtual Environment is a complete server virtualization management solution, based on KVM virtualization and OpenVZ containers.
Please note: This module allows you to create both OpenVZ containers and KVM virtual machines, but Salt will only be installed when the instance is an OpenVZ container rather than a KVM virtual machine.
Set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/proxmox.conf:
my-proxmox-config:
# Set up the location of the salt master
#
minion:
master: saltmaster.example.com
# Set the PROXMOX access credentials (see below)
#
user: myuser@pve
password: badpass
# Set the access URL for your PROXMOX provider
#
url: your.proxmox.host
provider: proxmox
The user, password, and url will be provided to you by your cloud provider. These are all required in order for the PROXMOX driver to work.
Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/proxmox.conf:
proxmox-ubuntu:
provider: proxmox
image: local:vztmpl/ubuntu-12.04-standard_12.04-1_amd64.tar.gz
technology: openvz
host: myvmhost
ip_address: 192.168.100.155
password: topsecret
The profile can be realized now with a salt command:
# salt-cloud -p proxmox-ubuntu myubuntu
This will create an instance named myubuntu on the cloud provider. The minion that is installed on this instance will have a hostname of myubuntu. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
# salt myubuntu test.ping
The following settings are always required for PROXMOX:
my-proxmox-config:
provider: proxmox
user: saltcloud@pve
password: xyzzy
url: your.proxmox.host
Unlike other cloud providers in Salt Cloud, Proxmox does not utilize a size setting. This is because Proxmox allows the end-user to specify a more detailed configuration for their instances than is allowed by many other cloud providers. The following options are available to be used in a profile, with their default settings listed.
# Description of the instance.
desc: <instance_name>
# How many CPU cores, and how fast they are (in MHz)
cpus: 1
cpuunits: 1000
# How many megabytes of RAM
memory: 256
# How much swap space in MB
swap: 256
# Whether to auto boot the vm after the host reboots
onboot: 1
# Size of the instance disk (in GiB)
disk: 10
# Host to create this vm on
host: myvmhost
# Nameservers. Defaults to host
nameserver: 8.8.8.8 8.8.4.4
# Username and password
ssh_username: root
password: <value from PROXMOX.password>
# The name of the image, from ``salt-cloud --list-images proxmox``
image: local:vztmpl/ubuntu-12.04-standard_12.04-1_amd64.tar.gz
Rackspace is a major public cloud platform which may be configured using either the rackspace or the openstack driver, depending on your needs.
Please note that the rackspace driver is only intended for 1st gen instances, aka, "the old cloud" at Rackspace. It is required for 1st gen instances, but will not work with OpenStack-based instances. Unless you explicitly have a reason to use it, it is highly recommended that you use the openstack driver instead.
Using the openstack driver, set up the cloud provider configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/rackspace.conf:
my-rackspace-config:
# Set the location of the salt-master
#
minion:
master: saltmaster.example.com
# Configure Rackspace using the OpenStack plugin
#
identity_url: 'https://identity.api.rackspacecloud.com/v2.0/tokens'
compute_name: cloudServersOpenStack
protocol: ipv4
# Set the compute region:
#
compute_region: DFW
# Configure Rackspace authentication credentials
#
user: myname
tenant: 123456
apikey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
provider: openstack
Alternatively, using the rackspace driver (required for first-gen instances), set up the cloud provider configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/rackspace.conf:
my-rackspace-config:
provider: rackspace
# The Rackspace login user
user: fred
# The Rackspace user's apikey
apikey: 901d3f579h23c8v73q9
The settings that follow are for using Rackspace with the openstack driver, and will not work with the rackspace driver.
Rackspace currently has six compute regions which may be used:
DFW -> Dallas/Fort Worth
ORD -> Chicago
SYD -> Sydney
LON -> London
IAD -> Northern Virginia
HKG -> Hong Kong
Note: Currently the LON region is only available with a UK account, and UK accounts cannot access other regions.
The user is the same user as is used to log into the Rackspace Control Panel. The tenant and apikey can be found in the API Keys area of the Control Panel. The apikey will be labeled as API Key (and may need to be generated), and tenant will be labeled as Cloud Account Number.
An initial profile can be configured in /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/rackspace.conf:
openstack_512:
provider: my-rackspace-config
size: 512 MB Standard
image: Ubuntu 12.04 LTS (Precise Pangolin)
To instantiate a machine based on this profile:
# salt-cloud -p openstack_512 myinstance
This will create a virtual machine at Rackspace with the name myinstance. This operation may take several minutes to complete, depending on the current load at the Rackspace data center.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
# salt myinstance test.ping
Rackspace offers a hybrid hosting configuration option called RackConnect that allows you to use a physical firewall appliance with your cloud servers. When this service is in use the public_ip assigned by nova will be replaced by a NAT ip on the firewall. For salt-cloud to work properly it must use the newly assigned "access ip" instead of the Nova assigned public ip. You can enable that capability by adding this to your profiles:
openstack_512:
provider: my-openstack-config
size: 512 MB Standard
image: Ubuntu 12.04 LTS (Precise Pangolin)
rackconnect: True
Rackspace offers a managed service level of hosting. As part of the managed service level you have the ability to choose from base or LAMP installations on cloud server images. The post-build process for both the base and the LAMP installations uses Chef to install things such as the cloud monitoring agent and the cloud backup agent. It also takes care of installing the LAMP stack if selected. In order to prevent the post-installation process from interfering with the bootstrapping, you can add the below to your profiles.
openstack_512:
provider: my-rackspace-config
size: 512 MB Standard
image: Ubuntu 12.04 LTS (Precise Pangolin)
managedcloud: True
Rackspace provides two sets of virtual machine images: first and next generation. As of 0.8.9, salt-cloud will default to using the next generation images. To force the use of first generation images, add the following to the profile configuration:
FreeBSD-9.0-512:
provider: my-rackspace-config
size: 512 MB Standard
image: FreeBSD 9.0
force_first_gen: True
By default salt-cloud will not add Rackspace private networks to new servers. To enable a private network on a server instantiated by salt cloud, add the following section to the provider file (typically /etc/salt/cloud.providers.d/rackspace.conf):
networks:
- fixed:
# This is the private network
- private-network-id
# This is Rackspace's "PublicNet"
- 00000000-0000-0000-0000-000000000000
# This is Rackspace's "ServiceNet"
- 11111111-1111-1111-1111-111111111111
To get the Rackspace private network ID, go to Networking, Networks and hover over the private network name.
The order of the networks in the above code block does not map to the order of the ethernet devices on newly created servers. The public IP will always be first (eth0), followed by ServiceNet (eth1), and then private networks.
Enabling the private network as described above gives the option of using the private subnet for all master-minion communication, including the bootstrap install of salt-minion. To enable the minion to use the private subnet, update the master: line in the minion: section of the providers file. To configure the master to only listen on the private subnet IP, update the interface: line in the /etc/salt/master file to be the private subnet IP of the salt master.
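As a sketch, assuming a master whose private (ServiceNet) address is 10.208.0.5 (a placeholder), the two changes might look like:
# In the providers file: have new minions connect to the private IP
minion:
  master: 10.208.0.5
# In /etc/salt/master: listen only on the private subnet IP
interface: 10.208.0.5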
Scaleway is the first IaaS provider worldwide to offer an ARM-based cloud. It's the ideal platform for horizontal scaling with BareMetal SSD servers. The solution provides on-demand resources: it comes with on-demand SSD storage, movable IPs, images, security groups, and an Object Storage solution. https://scaleway.com
Using Salt for Scaleway requires an access key and an API token. API tokens are unique identifiers associated with your Scaleway account. To retrieve your access key and API token, log in to the Scaleway control panel, open the pull-down menu on your account name, and click on the "My Credentials" link.
If you do not have an API token, you can create one by clicking the "Create New Token" button in the right corner.
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.
my-scaleway-config:
access_key: 15cf404d-4560-41b1-9a0c-21c3d5c4ff1f
token: a7347ec8-5de1-4024-a5e3-24b77d1ba91d
provider: scaleway
Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:
scaleway-ubuntu:
provider: my-scaleway-config
image: Ubuntu Trusty (14.04 LTS)
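The profile can then be realized with a salt command; the instance name here is arbitrary:
# salt-cloud -p scaleway-ubuntu my-scaleway-instance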
Images can be obtained using the --list-images option for the salt-cloud command:
# salt-cloud --list-images my-scaleway-config
my-scaleway-config:
----------
scaleway:
----------
069fd876-eb04-44ab-a9cd-47e2fa3e5309:
----------
arch:
arm
creation_date:
2015-03-12T09:35:45.764477+00:00
default_bootscript:
{u'kernel': {u'dtb': u'', u'title': u'Pimouss 3.2.34-30-std', u'id': u'cfda4308-cd6f-4e51-9744-905fc0da370f', u'path': u'kernel/pimouss-uImage-3.2.34-30-std'}, u'title': u'3.2.34-std #30 (stable)', u'id': u'c5af0215-2516-4316-befc-5da1cfad609c', u'initrd': {u'path': u'initrd/c1-uInitrd', u'id': u'1be14b1b-e24c-48e5-b0b6-7ba452e42b92', u'title': u'C1 initrd'}, u'bootcmdargs': {u'id': u'd22c4dde-e5a4-47ad-abb9-d23b54d542ff', u'value': u'ip=dhcp boot=local root=/dev/nbd0 USE_XNBD=1 nbd.max_parts=8'}, u'organization': u'11111111-1111-4111-8111-111111111111', u'public': True}
extra_volumes:
[]
id:
069fd876-eb04-44ab-a9cd-47e2fa3e5309
modification_date:
2015-04-24T12:02:16.820256+00:00
name:
Ubuntu Vivid (15.04)
organization:
a283af0b-d13e-42e1-a43f-855ffbf281ab
public:
True
root_volume:
{u'name': u'distrib-ubuntu-vivid-2015-03-12_10:32-snapshot', u'id': u'a6d02e63-8dee-4bce-b627-b21730f35a05', u'volume_type': u'l_ssd', u'size': 50000000000L}
...
Execute a query and return all information about the nodes running on configured cloud providers using the -F option for the salt-cloud command:
# salt-cloud -F
[INFO ] salt-cloud starting
[INFO ] Starting new HTTPS connection (1): api.scaleway.com
my-scaleway-config:
----------
scaleway:
----------
salt-manager:
----------
creation_date:
2015-06-03T08:17:38.818068+00:00
hostname:
salt-manager
...
Note
Additional documentation about Scaleway can be found at https://www.scaleway.com/docs.
SoftLayer is a public cloud and bare-metal hardware hosting provider.
The SoftLayer driver for Salt Cloud requires the softlayer package, which is available at PyPI:
https://pypi.python.org/pypi/SoftLayer
This package can be installed using pip or easy_install:
# pip install softlayer
# easy_install softlayer
Set up the cloud config at /etc/salt/cloud.providers:
# Note: These examples are for /etc/salt/cloud.providers
my-softlayer:
# Set up the location of the salt master
minion:
master: saltmaster.example.com
# Set the SoftLayer access credentials (see below)
user: MYUSER1138
apikey: 'e3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9'
provider: softlayer
my-softlayer-hw:
# Set up the location of the salt master
minion:
master: saltmaster.example.com
# Set the SoftLayer access credentials (see below)
user: MYUSER1138
apikey: 'e3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9'
provider: softlayer_hw
The user setting is the same user as is used to log into the SoftLayer Administration area. The apikey setting is found inside the Admin area after logging in.
Set up an initial profile at /etc/salt/cloud.profiles
:
base_softlayer_ubuntu:
provider: my-softlayer
image: UBUNTU_LATEST
cpu_number: 1
ram: 1024
disk_size: 100
local_disk: True
hourly_billing: True
domain: example.com
location: sjc01
# Optional
max_net_speed: 1000
private_vlan: 396
private_network: True
private_ssh: True
# May be used _instead_of_ image
global_identifier: 320d8be5-46c0-dead-cafe-13e3c51
Most of the above items are required; optional items are specified below.
Images to build an instance can be found using the --list-images option:
# salt-cloud --list-images my-softlayer
The setting used will be labeled as template.
This is the number of CPU cores that will be used for this instance. This number may be dependent upon the image that is used. For instance:
Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (1 - 4 Core):
----------
name:
Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (1 - 4 Core)
template:
REDHAT_6_64
Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (5 - 100 Core):
----------
name:
Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (5 - 100 Core)
template:
REDHAT_6_64
Note that the template (meaning, the image option) for both of these is the same, but the names suggest how many CPU cores are supported.
This is the amount of memory, in megabytes, that will be allocated to this instance.
The amount of disk space that will be allocated to this image, in megabytes.
When true, the disks for the computing instance will be provisioned on the host on which it runs; otherwise SAN disks will be provisioned.
When true, the computing instance will be billed on hourly usage; otherwise it will be billed on a monthly basis.
The domain name that will be used in the FQDN (Fully Qualified Domain Name) for this instance. The domain setting will be used in conjunction with the instance name to form the FQDN.
Locations to build an instance can be found using the --list-locations option:
# salt-cloud --list-locations my-softlayer
Specifies the connection speed for the instance's network components. This setting is optional. By default, this is set to 10.
If it is necessary for an instance to be created within a specific frontend VLAN, the ID for that VLAN can be specified in either the provider or profile configuration.
This ID can be queried using the list_vlans function, as described below. This setting is optional.
If it is necessary for an instance to be created within a specific backend VLAN, the ID for that VLAN can be specified in either the provider or profile configuration.
This ID can be queried using the list_vlans function, as described below. This setting is optional.
If a server is to only be used internally, meaning it does not have a public VLAN associated with it, this value would be set to True. This setting is optional. The default is False.
Whether to run the deploy script on the server using the public IP address or the private IP address. If set to True, Salt Cloud will attempt to SSH into the new server using the private IP address. The default is False. This setting is optional.
When creating an instance using a custom template, this option is set to the corresponding value obtained using the list_custom_images function. This option will not be used if an image is set, and if an image is not set, it is required.
The profile can be realized now with a salt command:
# salt-cloud -p base_softlayer_ubuntu myserver
Using the above configuration, this will create myserver.example.com.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
# salt 'myserver.example.com' test.ping
Set up an initial profile at /etc/salt/cloud.profiles:
base_softlayer_hw_centos:
provider: my-softlayer-hw
# CentOS 6.0 - Minimal Install (64 bit)
image: 13963
# 2 x 2.0 GHz Core Bare Metal Instance - 2 GB Ram
size: 1921
# 250GB SATA II
hdd: 19
# San Jose 01
location: 168642
domain: example.com
# Optional
vlan: 396
port_speed: 273
bandwidth: 248
Most of the above items are required; optional items are specified below.
Images to build an instance can be found using the --list-images option:
# salt-cloud --list-images my-softlayer-hw
A list of ids and names will be provided. The name will describe the operating system and architecture. The id will be the setting to be used in the profile.
Sizes to build an instance can be found using the --list-sizes option:
# salt-cloud --list-sizes my-softlayer-hw
A list of ids and names will be provided. The name will describe the speed and quantity of CPU cores, and the amount of memory that the hardware will contain. The id will be the setting to be used in the profile.
There are currently two sizes of hard disk drive (HDD) that are available for hardware instances on SoftLayer:
19: 250GB SATA II
1267: 500GB SATA II
The hdd setting in the profile will be either 19 or 1267. Other sizes may be added in the future.
Locations to build an instance can be found using the --list-locations option:
# salt-cloud --list-locations my-softlayer-hw
A list of ids and names will be provided. The name will describe the location in human terms. The id will be the setting to be used in the profile.
The domain name that will be used in the FQDN (Fully Qualified Domain Name) for this instance. The domain setting will be used in conjunction with the instance name to form the FQDN.
If it is necessary for an instance to be created within a specific VLAN, the ID for that VLAN can be specified in either the provider or profile configuration.
This ID can be queried using the list_vlans function, as described below.
Specifies the speed for the instance's network port. This setting refers to an ID within the SoftLayer API, which sets the port speed. This setting is optional. The default is 273, or 100 Mbps Public & Private Networks.
Specifies the network bandwidth available for the instance. This setting refers to an ID within the SoftLayer API, which sets the bandwidth. This setting is optional. The default is 248, or 5000 GB Bandwidth.
The following actions are currently supported by the SoftLayer Salt Cloud driver.
This action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance.
$ salt-cloud -a show_instance myinstance
The following functions are currently supported by the SoftLayer Salt Cloud driver.
This function lists all VLANs associated with the account, and all known data from the SoftLayer API concerning those VLANs.
$ salt-cloud -f list_vlans my-softlayer
$ salt-cloud -f list_vlans my-softlayer-hw
The id returned in this list is necessary for the vlan option when creating an instance.
This function lists any custom templates associated with the account, that can be used to create a new instance.
$ salt-cloud -f list_custom_images my-softlayer
The globalIdentifier returned in this list is necessary for the global_identifier option when creating an image using a custom template.
The softlayer_hw provider supports the ability to add optional products, which are supported by SoftLayer's API. These products each have an ID associated with them, that can be passed into Salt Cloud with the optional_products option:
softlayer_hw_test:
provider: my-softlayer-hw
# CentOS 6.0 - Minimal Install (64 bit)
image: 13963
# 2 x 2.0 GHz Core Bare Metal Instance - 2 GB Ram
size: 1921
# 250GB SATA II
hdd: 19
# San Jose 01
location: 168642
domain: example.com
optional_products:
# MySQL for Linux
- id: 28
# Business Continuance Insurance
- id: 104
These values can be manually obtained by looking at the source of an order page on the SoftLayer web interface.
VEXXHOST is a Canadian cloud computing provider, based in Montreal, which uses the libcloud OpenStack driver. VEXXHOST currently runs the Havana release of OpenStack. When provisioning new instances, they automatically get a public IP and a private IP address. Therefore, you do not need to assign a floating IP to access your instance once it's booted.
To use the openstack driver for the VEXXHOST public cloud, you will need to set up the cloud provider configuration file as in the example below:
/etc/salt/cloud.providers.d/vexxhost.conf:
vexxhost:
# Set the location of the salt-master
#
minion:
master: saltmaster.example.com
# Configure VEXXHOST using the OpenStack plugin
#
identity_url: http://auth.api.thenebulacloud.com:5000/v2.0/tokens
compute_name: nova
# Set the compute region:
#
compute_region: na-yul-nhs1
# Configure VEXXHOST authentication credentials
#
user: your-tenant-id
password: your-api-key
tenant: your-tenant-name
# keys to allow connection to the instance launched
#
ssh_key_name: yourkey
ssh_key_file: /path/to/key/yourkey.priv
provider: openstack
All of the authentication fields that you need can be found by logging into your VEXXHOST customer center. Once you've logged in, you will need to click on "CloudConsole" and then click on "API Credentials".
In order to get the correct image UUID and the instance type to use in the cloud profile, you can run the following commands, respectively:
# salt-cloud --list-images=vexxhost-config
# salt-cloud --list-sizes=vexxhost-config
Once you have that, you can go ahead and create a new cloud profile. This profile will build an Ubuntu 12.04 LTS nb.2G instance.
/etc/salt/cloud.profiles.d/vh_ubuntu1204_2G.conf
:
vh_ubuntu1204_2G:
provider: vexxhost
image: 4051139f-750d-4d72-8ef0-074f2ccc7e5a
size: nb.2G
To create an instance based on the sample profile that we created above, you can run the following salt-cloud command.
# salt-cloud -p vh_ubuntu1204_2G vh_instance1
Typically, instances are provisioned in under 30 seconds on the VEXXHOST public cloud. After the instance provisions, it will be set up as a minion and then return all the instance information once it's complete.
Once the instance has been set up, you can test connectivity to it by running the following command:
# salt vh_instance1 test.ping
You can now continue to provision new instances and they will all automatically be set up as minions of the master you've defined in the configuration file.
New in version Beryllium.
Author: Nitin Madhok <nmadhok@clemson.edu>
The VMware cloud module allows you to manage VMware ESX, ESXi, and vCenter.
The vmware module for Salt Cloud requires the pyVmomi package, which is available at PyPI:
https://pypi.python.org/pypi/pyvmomi
This package can be installed using pip or easy_install:
pip install pyvmomi
easy_install pyvmomi
The VMware cloud module needs the vCenter URL, username, and password to be set up in the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/vmware.conf:
my-vmware-config:
provider: vmware
user: "DOMAIN\user"
password: "verybadpass"
url: "vcenter01.domain.com"
vmware-vcenter02:
provider: vmware
user: "DOMAIN\user"
password: "verybadpass"
url: "vcenter02.domain.com"
vmware-vcenter03:
provider: vmware
user: "DOMAIN\user"
password: "verybadpass"
url: "vcenter03.domain.com"
protocol: "http"
port: 80
Note
Optionally, protocol and port can be specified if the vCenter server is not using the defaults. Default is protocol: https and port: 443.
Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/vmware.conf:
vmware-centos6.5:
provider: my-vmware-config
clonefrom: test-vm
## Optional arguments
num_cpus: 4
memory: 8192
devices:
cd:
CD/DVD drive 1:
device_type: datastore_iso_file
iso_path: "[nap004-1] vmimages/tools-isoimages/linux.iso"
CD/DVD drive 2:
device_type: client_device
mode: atapi
CD/DVD drive 3:
device_type: client_device
mode: passthrough
disk:
Hard disk 1:
size: 30
Hard disk 2:
size: 20
Hard disk 3:
size: 5
network:
Network adapter 1:
name: 10.20.30-400-Test
switch_type: standard
ip: 10.20.30.123
gateway: [10.20.30.110]
subnet_mask: 255.255.255.128
domain: mycompany.com
Network adapter 2:
name: 10.30.40-500-Dev-DHCP
adapter_type: e1000
switch_type: distributed
Network adapter 3:
name: 10.40.50-600-Prod
adapter_type: vmxnet3
switch_type: distributed
ip: 10.40.50.123
gateway: [10.40.50.110]
subnet_mask: 255.255.255.128
domain: mycompany.com
scsi:
SCSI controller 1:
type: lsilogic
SCSI controller 2:
type: lsilogic_sas
bus_sharing: virtual
SCSI controller 3:
type: paravirtual
bus_sharing: physical
domain: mycompany.com
dns_servers:
- 123.127.255.240
- 123.127.255.241
- 123.127.255.242
# If cloning from template, either resourcepool or cluster MUST be specified!
resourcepool: Resources
cluster: Prod
datastore: HUGE-DATASTORE-Cluster
folder: Development
datacenter: DC1
host: c4212n-002.domain.com
template: False
power_on: True
extra_config:
mem.hotadd: 'yes'
guestinfo.foo: bar
guestinfo.domain: foobar.com
guestinfo.customVariable: customValue
deploy: True
private_key: /root/.ssh/mykey.pem
ssh_username: cloud-user
password: veryVeryBadPassword
minion:
master: 123.127.193.105
file_map:
/path/to/local/custom/script: /path/to/remote/script
/path/to/local/file: /path/to/remote/file
/srv/salt/yum/epel.repo: /etc/yum.repos.d/epel.repo
- provider
- Enter the name of the provider configuration to use (one of the VMware provider configurations defined above).
- clonefrom
- Enter the name of the VM or template to clone from.
- num_cpus
- Enter the number of vCPUs the new VM should have. If not specified, the value from the VM or template being cloned is used.
- memory
- Enter the amount of memory (in MB) the new VM should have. If not specified, the value from the VM or template being cloned is used.
- devices
Enter the device specifications here. Currently, the following devices can be created or reconfigured:
Enter the CD/DVD drive specification here. If the CD/DVD drive doesn't exist, it will be created with the specified configuration. If the CD/DVD drive already exists, it will be reconfigured with the specifications. The following options can be specified per CD/DVD drive:
- device_type
- The type of CD/DVD device. Currently supported types are client_device and datastore_iso_file. Default is device_type: client_device.
- iso_path
- The path to the ISO file on the datastore, used when device_type: datastore_iso_file. The syntax to specify this is iso_path: "[datastoreName] vmimages/tools-isoimages/linux.iso". This field is ignored if device_type: client_device.
- mode
- The connection mode, used when device_type: client_device. Currently supported modes are passthrough and atapi. This field is ignored if device_type: datastore_iso_file. Default is mode: passthrough.
Enter the network adapter specification here. If the network adapter doesn't exist, a new network adapter will be created with the specified network name, type and other configuration. If the network adapter already exists, it will be reconfigured with the specifications. The following additional options can be specified per network adapter (See example above):
- adapter_type
- The network adapter type. Currently supported types are vmxnet, vmxnet2, vmxnet3, e1000, and e1000e. If no type is specified, vmxnet3 will be used by default.
- switch_type
- The type of virtual switch to connect to. Specify standard for standard portgroups and distributed for distributed virtual portgroups.
Enter the SCSI adapter specification here. If the SCSI adapter doesn't exist, a new SCSI adapter will be created of the specified type. If the SCSI adapter already exists, it will be reconfigured with the specifications. The following additional options can be specified per SCSI adapter:
- type
- The SCSI controller type. Currently supported types are lsilogic, lsilogic_sas, and paravirtual. The type must be specified when creating a new SCSI adapter.
- bus_sharing
- Specify this if sharing of virtual disks between virtual machines is desired. Specify virtual to allow sharing between virtual machines on the same host, or physical to allow sharing between virtual machines on any host.
- domain
- Enter the global domain name to be used for DNS. If not specified and if the VM name is a FQDN, domain is set to the domain from the VM name. Default is local.
- dns_servers
- Enter the list of DNS servers to use, in order of priority.
- resourcepool
- Enter the name of the resourcepool to which the new virtual machine should be attached. This determines what compute resources will be available to the clone.
- cluster
- Enter the name of the cluster whose resource pool the new virtual machine should be attached to.
- datastore
- Enter the name of the datastore or the datastore cluster where the virtual machine should be located on physical storage. If not specified, the current datastore is used.
- folder
- Enter the name of the folder that will contain the new virtual machine.
- datacenter
- Enter the name of the datacenter that will contain the new virtual machine.
- host
- Enter the name of the target host where the virtual machine should be registered. If not specified, a host is selected automatically.
- template
- Specifies whether the new virtual machine should be marked as a template. Default is template: False.
- power_on
- Specifies whether the new virtual machine should be powered on. If template: True is set, this field is ignored. Default is power_on: True.
- extra_config
- Specifies additional configuration information (key/value pairs) for the virtual machine, as in the example above.
- deploy
- Specifies whether Salt should be installed on the newly created VM. Default is True, so Salt will be installed using the bootstrap script. If template: True or power_on: False is set, this field is ignored and Salt will not be installed.
- private_key
- Specify the path to the private key to use to be able to ssh to the VM.
- ssh_username
- Specify the username to use in order to ssh to the VM. Default is root.
- password
- Specify a password to use in order to ssh to the VM. If private_key is specified, you do not need to specify this.
- minion
- Specify custom minion configuration, most commonly the master as the IP/DNS name of the master.
- file_map
- Specify a map of local files to be copied to the new VM (local path: remote path), as in the example above.
This page describes various miscellaneous options available in Salt Cloud.
Custom deploy scripts are unlikely to need custom arguments to be passed to them, but salt-bootstrap has been extended quite a bit, and this may be necessary. script_args can be specified in either the profile or the map file, to pass arguments to the deploy script:
ec2-amazon:
provider: ec2
image: ami-1624987f
size: t1.micro
ssh_username: ec2-user
script: bootstrap-salt
script_args: -c /tmp/
This has also been tested to work with pipes, if needed:
script_args: | head
By default, Salt Cloud uses SFTP to transfer files to Linux hosts. However, if SFTP is not available, or specific SCP functionality is needed, Salt Cloud can be configured to use SCP instead.
file_transport: sftp
file_transport: scp
Salt allows users to create custom modules, grains, and states which can be synchronised to minions to extend Salt with further functionality.
This option will inform Salt Cloud to synchronise your custom modules, grains, states or all these to the minion just after it has been created. For this to happen, the following line needs to be added to the main cloud configuration file:
sync_after_install: all
The available options for this setting are:
modules
grains
states
all
It has become increasingly common for users to set up multi-hierarchical infrastructures using Salt Cloud. This sometimes involves setting up an instance to be a master in addition to a minion. With that in mind, you can now lay down master configuration on a machine by specifying master options in the profile or map file.
make_master: True
This will cause Salt Cloud to generate master keys for the instance, and tell salt-bootstrap to install the salt-master package, in addition to the salt-minion package.
The default master configuration is usually appropriate for most users, and will not be changed unless specific master configuration has been added to the profile or map:
master:
user: root
interface: 0.0.0.0
When Salt Cloud deploys an instance, the SSH public key for the instance is added to the known_hosts file for the user that ran the salt-cloud command. When an instance is deployed, a cloud provider generally assigns it a recycled IP address. When Salt Cloud attempts to deploy an instance using a recycled IP address that has previously been accessed from the same machine, the old key in the known_hosts file will cause a conflict.
In order to mitigate this issue, Salt Cloud can be configured to remove old keys from the known_hosts file when destroying the node. In order to do this, the following line needs to be added to the main cloud configuration file:
delete_sshkeys: True
When Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt-bootstrap to put in place. After the script has run, they are deleted. To keep these files around (mostly for debugging purposes), the --keep-tmp option can be added:
salt-cloud -p myprofile mymachine --keep-tmp
For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable).
By default Salt Cloud will stream the output from the minion deploy script directly to STDOUT. Although this can be very useful, in certain cases you may wish to switch this off. The following config option is there to enable or disable this output:
display_ssh_output: False
There are several stages when deploying Salt where Salt Cloud needs to wait for something to happen: the VM getting its IP address, the VM's SSH port becoming available, etc.
If you find that the Salt Cloud defaults are not enough and your deployment fails because Salt Cloud did not wait long enough, there are some settings you can tweak.
Note
All values should be provided in seconds
You can tweak these settings globally, per cloud provider, or even per profile definition.
The amount of time Salt Cloud should wait for a VM to start and get an IP back from the cloud provider. Default: varies by cloud provider (between 5 and 25 minutes).
The amount of time Salt Cloud should sleep while querying for the VM's IP. Default: varies by cloud provider (between .5 and 10 seconds).
The amount of time Salt Cloud should wait for a successful SSH connection to the VM. Default: varies by cloud provider (between 5 and 15 minutes).
The amount of time until an SSH connection can be established via password or SSH key. Default: varies by cloud provider (mostly 15 seconds).
The number of attempts to connect to the VM before giving up. Default: 15 attempts.
Some cloud drivers (namely SoftLayer and SoftLayer-HW) check for an available IP or a successful SSH connection using a function. This is the amount of time Salt Cloud should retry such functions before failing. Default: 15 minutes.
The amount of time Salt Cloud should wait before an EC2 Spot instance is available. This setting is only available for the EC2 cloud driver. Default: 10 minutes.
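As a sketch, these timeouts are plain keys in the main cloud configuration, a provider, or a profile. The option names below (wait_for_ip_timeout, wait_for_ip_interval, and ssh_connect_timeout) are assumed to correspond to the first three settings described above; values are in seconds:
# Hypothetical tuning in /etc/salt/cloud or a provider/profile file
wait_for_ip_timeout: 1800    # wait up to 30 minutes for an IP
wait_for_ip_interval: 10     # poll for the IP every 10 seconds
ssh_connect_timeout: 900     # wait up to 15 minutes for SSH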
Salt Cloud can maintain a cache of node data, for supported providers. The following options manage this functionality.
On supported cloud providers, whether or not to maintain a cache of nodes returned from a --full-query. The data will be stored in msgpack format under <SALT_CACHEDIR>/cloud/active/<DRIVER>/<PROVIDER>/<NODE_NAME>.p. This setting can be True or False.
When the cloud cachedir is being managed, if differences are encountered between the data that is returned live from the cloud provider and the data in the cache, fire events which describe the changes. This setting can be True or False.
Some of these events will contain data which describe a node. Because some of the fields returned may contain sensitive data, the cache_event_strip_fields configuration option exists to strip those fields from the event return.
cache_event_strip_fields:
- password
- priv_key
The following are events that can be fired based on this data.
A new node was found on the cloud provider which was not listed in the cloud cachedir. A dict describing the new node will be contained in the event.
A node that was previously listed in the cloud cachedir is no longer available on the cloud provider.
One or more pieces of data in the cloud cachedir has changed on the cloud provider. A dict containing both the old and the new data will be contained in the event.
Normally when bootstrapping a VM, salt-cloud will ignore the SSH host key. This is because it does not know what the host key is before starting (because it doesn't exist yet). If strict host key checking is turned on without the key in the known_hosts file, then the host will never be available, and cannot be bootstrapped.
If a provider is able to determine the host key before trying to bootstrap it, that provider's driver can add it to the known_hosts file, and then turn on strict host key checking. This can be set up in the main cloud configuration file (normally /etc/salt/cloud) or in the provider-specific configuration file:
known_hosts_file: /path/to/.ssh/known_hosts
If this is not set, it will default to /dev/null, and strict host key checking will be turned off.
It is highly recommended that this option is not set, unless the user has verified that the provider supports this functionality, and that the image being used is capable of providing the necessary information. At this time, only the EC2 driver supports this functionality.
New in version 2015.5.0.
If the ssh key is not stored on the server salt-cloud is being run on, set ssh_agent, and salt-cloud will use the forwarded ssh-agent to authenticate.
ssh_agent: True
New in version 2014.7.0.
The file_map option allows an arbitrary group of files to be uploaded to the target system before running the deploy script. This functionality requires that a provider use salt.utils.cloud.bootstrap(), which is currently limited to the ec2, gce, openstack, and nova drivers.
The file_map can be configured globally in /etc/salt/cloud, or in any cloud provider or profile file. For example, to upload an extra package or a custom deploy script, a cloud profile using file_map might look like:
ubuntu14:
provider: ec2-config
image: ami-98aa1cf0
size: t1.micro
ssh_username: root
securitygroup: default
file_map:
/local/path/to/custom/script: /remote/path/to/use/custom/script
/local/path/to/package: /remote/path/to/store/package
This page describes various steps for troubleshooting problems that may arise while using Salt Cloud.
Are TCP ports 4505 and 4506 open on the master? This is easy to overlook on new masters. Information on how to open firewall ports on various platforms can be found here.
This section describes a set of instructions that are useful to a large number of situations, and are likely to solve most issues that arise.
Version Compatibility
One of the most common issues that Salt Cloud users run into is import errors. These are often caused by version compatibility issues with Salt.
Salt 0.16.x works with Salt Cloud 0.8.9 or greater.
Salt 0.17.x requires Salt Cloud 0.8.11.
Releases after 0.17.x (0.18 or greater) should not encounter issues as Salt Cloud has been merged into Salt itself.
Frequently, running Salt Cloud in debug mode will reveal information about a deployment which would otherwise not be obvious:
salt-cloud -p myprofile myinstance -l debug
Keep in mind that a number of messages will appear that look at first like errors, but are in fact intended to give developers factual information to assist in debugging. A number of messages that appear will be for cloud providers that you do not have configured; in these cases, the message usually is intended to confirm that they are not configured.
By default, Salt Cloud uses the Salt Bootstrap script to provision instances.
This script is packaged with Salt Cloud, but may be updated without updating the Salt package:
salt-cloud -u
If the default deploy script was used, there should be a file in the /tmp/ directory called bootstrap-salt.log. This file contains the full output from the deployment, including any errors that may have occurred.
Salt Cloud uploads minion-specific files to instances once they are available via SSH, and then executes a deploy script to put them into the correct place and install Salt. The --keep-tmp option will instruct Salt Cloud not to remove those files when finished with them, so that the user may inspect them for problems:
salt-cloud -p myprofile myinstance --keep-tmp
By default, Salt Cloud will create a directory on the target instance called /tmp/.saltcloud/. This directory should be owned by the user that is to execute the deploy script, and should have permissions of 0700.
Most cloud providers are configured to use root as the default initial user for deployment, and as such, this directory and all files in it should be owned by the root user.
The /tmp/.saltcloud/ directory should contain the following files:
A deploy.sh script. This script should have permissions of 0755.
A .pem and .pub key named after the minion. The .pem file should have permissions of 0600. Ensure that the .pem and .pub files have been properly copied to the /etc/salt/pki/minion/ directory.
A file called minion. This file should have been copied to the /etc/salt/ directory.
A file called grains. This file, if present, should have been copied to the /etc/salt/ directory.
Some providers, most notably EC2, are configured with a different primary user.
Some common examples are ec2-user, ubuntu, fedora, and bitnami. In these cases, the /tmp/.saltcloud/ directory and all files in it should be owned by this user.
Some providers, such as EC2, are configured to not require these users to provide a password when using the sudo command. Because it is more secure to require sudo users to provide a password, other providers are configured that way.
If this instance is required to provide a password, it needs to be configured in Salt Cloud. A password for sudo to use may be added to either the provider configuration or the profile configuration:
sudo_password: mypassword
/tmp/ is Mounted as noexec¶
It is more secure to mount the /tmp/ directory with a noexec option. This is uncommon on most cloud providers, but very common in private environments. To see if the /tmp/ directory is mounted this way, run the following command:
mount | grep tmp
If the output of this command includes a line that looks like this, then the /tmp/ directory is mounted as noexec:
tmpfs on /tmp type tmpfs (rw,noexec)
If this is the case, then the deploy_command will need to be changed in order to run the deploy script through the sh command, rather than trying to execute it directly. This may be specified in either the provider or the profile config:
deploy_command: sh /tmp/.saltcloud/deploy.sh
Please note that by default, Salt Cloud will place its files in a directory called /tmp/.saltcloud/. This may also be changed in the provider or profile configuration:
tmp_dir: /tmp/.saltcloud/
If this directory is changed, then the deploy_command will need to be changed in order to reflect the tmp_dir configuration.
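For example, a sketch pairing a hypothetical alternate directory with a matching deploy_command:
tmp_dir: /var/tmp/.saltcloud/
deploy_command: sh /var/tmp/.saltcloud/deploy.sh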
If all of the files needed for deployment were successfully uploaded to the correct locations, and contain the correct permissions and ownerships, the deploy script may be executed manually in order to check for other issues:
cd /tmp/.saltcloud/
./deploy.sh
Salt Cloud runs on a module system similar to the main Salt project. The modules inside saltcloud exist in the salt/cloud/clouds directory of the salt source.
There are two basic types of cloud modules. If a cloud provider is supported by libcloud, then using it is the fastest route to getting a module written. The Apache Libcloud project is located at:
https://libcloud.apache.org/
Not every cloud provider is supported by libcloud. Additionally, not every feature in a supported cloud provider is necessarily supported by libcloud. In either of these cases, a module can be created which does not rely on libcloud.
The following functions are required by all modules, whether or not they are based on libcloud.
This function determines whether or not to make this cloud module available upon execution. Most often, it uses get_configured_provider() to determine if the necessary configuration has been set up. It may also check for necessary imports, to decide whether to load the module. In most cases, it will return a True or False value. If the name of the driver used does not match the filename, then that name should be returned instead of True. An example of this may be seen in the Azure module:
https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/msazure.py
This function uses config.is_provider_configured() to determine whether all required information for this driver has been configured. The last value in the list of required settings should be followed by a comma.
Writing a cloud module based on libcloud has two major advantages. First of all, much of the work has already been done by the libcloud project. Second, most of the functions necessary to Salt have already been added to the Salt Cloud project.
The most important function that does need to be manually written is the create() function. This is what is used to request a virtual machine to be created by the cloud provider, wait for it to become available, and then (optionally) log in and install Salt on it.
A good example to follow for writing a cloud provider module based on libcloud is the module provided for Linode:
https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/linode.py
The basic flow of a create() function is as follows:
At various points throughout this function, events may be fired on the Salt event bus. Four of these events, which are described below, are required. Other events may be added by the user, where appropriate.
When the create() function is called, it is passed a data structure called vm_. This dict contains a composite of information describing the virtual machine to be created. A dict called __opts__ is also provided by Salt, which contains the options used to run Salt Cloud, as well as a set of configuration and environment variables.
The first thing the create() function must do is fire an event stating that it has started the create process. This event is tagged salt/cloud/<vm name>/creating. The payload contains the names of the VM, profile, and provider.
A set of kwargs is then usually created, to describe the parameters required by the cloud provider to request the virtual machine.
An event is then fired to state that a virtual machine is about to be requested.
It is tagged as salt/cloud/<vm name>/requesting. The payload contains most or all of the parameters that will be sent to the cloud provider. Any private information (such as passwords) should not be sent in the event.
After a request is made, a set of deploy kwargs will be generated. These will be used to install Salt on the target machine. Windows options are supported at this point, and should be generated, even if the cloud provider does not currently support Windows. This will save time in the future if the provider does eventually decide to support Windows.
An event is then fired to state that the deploy process is about to begin. This event is tagged salt/cloud/<vm name>/deploying. The payload for the event will contain a set of deploy kwargs, useful for debugging purposes. Any private data, including passwords and keys (including public keys), should be stripped from the deploy kwargs before the event is fired.
If any Windows options have been passed in, the salt.utils.cloud.deploy_windows() function will be called. Otherwise, it will be assumed that the target is a Linux or Unix machine, and salt.utils.cloud.deploy_script() will be called.
Both of these functions will wait for the target machine to become available, then the necessary port to log in, then a successful login that can be used to install Salt. Minion configuration and keys will then be uploaded to a temporary directory on the target by the appropriate function. On a Windows target, the Windows Minion Installer will be run in silent mode. On a Linux/Unix target, a deploy script (bootstrap-salt.sh, by default) will be run, which will auto-detect the operating system, and install Salt using its native package manager. These do not need to be handled by the developer in the cloud module.
The salt.utils.cloud.validate_windows_cred() function has been extended to take the number of retries and retry_delay parameters, in case a specific cloud provider has a delay between providing the Windows credentials and the credentials being available for use. In their create() function, or in a sub-function called during the creation process, developers should use the win_deploy_auth_retries and win_deploy_auth_retry_delay parameters from the provider configuration to allow the end-user the ability to customize the number of tries and delay between tries for their particular provider.
After the appropriate deploy function completes, a final event is fired which describes the virtual machine that has just been created. This event is tagged salt/cloud/<vm name>/created. The payload contains the names of the VM, profile, and provider.
Finally, a dict (queried from the provider) which describes the new virtual machine is returned to the user. Because this data is not fired on the event bus it can, and should, return any passwords that were returned by the cloud provider. In some cases (for example, Rackspace), this is the only time that the password can be queried by the user; post-creation queries may not contain password information (depending upon the provider).
A number of other functions are required for all cloud providers. However, with libcloud-based modules, these are all provided for free by the libcloudfuncs library. The following two lines set up the imports:
from salt.cloud.libcloudfuncs import * # pylint: disable=W0614,W0401
from salt.utils import namespaced_function
And then a series of declarations will make the necessary functions available within the cloud module.
get_size = namespaced_function(get_size, globals())
get_image = namespaced_function(get_image, globals())
avail_locations = namespaced_function(avail_locations, globals())
avail_images = namespaced_function(avail_images, globals())
avail_sizes = namespaced_function(avail_sizes, globals())
script = namespaced_function(script, globals())
destroy = namespaced_function(destroy, globals())
list_nodes = namespaced_function(list_nodes, globals())
list_nodes_full = namespaced_function(list_nodes_full, globals())
list_nodes_select = namespaced_function(list_nodes_select, globals())
show_instance = namespaced_function(show_instance, globals())
If necessary, these functions may be replaced by removing the appropriate declaration line, and then adding the function as normal.
These functions are required for all cloud modules, and are described in detail in the next section.
In some cases, using libcloud is not an option. This may be because libcloud has not yet included the necessary driver itself, or it may be that the driver that is included with libcloud does not contain all of the necessary features required by the developer. When this is the case, some or all of the functions in libcloudfuncs may be replaced. If they are all replaced, the libcloud imports should be absent from the Salt Cloud module.
A good example of a non-libcloud provider is the DigitalOcean module:
https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/digital_ocean.py
The create() Function¶
The create() function must be created as described in the libcloud-based module documentation.
This function is only necessary for libcloud-based modules, and does not need to exist otherwise.
This function returns a list of locations available, if the cloud provider uses multiple data centers. It is not necessary if the cloud provider only uses one data center. It is normally called using the --list-locations option.
salt-cloud --list-locations my-cloud-provider
This function returns a list of images available for this cloud provider. There are not currently any known cloud providers that do not provide this functionality, though they may refer to images by a different name (for example, "templates"). It is normally called using the --list-images option.
salt-cloud --list-images my-cloud-provider
This function returns a list of sizes available for this cloud provider. Generally, this refers to a combination of RAM, CPU, and/or disk space. This functionality may not be present on some cloud providers. For example, the Parallels module breaks down RAM, CPU, and disk space into separate options, whereas in other providers, these options are baked into the image. It is normally called using the --list-sizes option.
salt-cloud --list-sizes my-cloud-provider
This function builds the deploy script to be used on the remote machine. It is likely to be moved into the salt.utils.cloud library in the near future, as it is very generic and can usually be copied wholesale from another module. An excellent example is in the Azure driver.
This function irreversibly destroys a virtual machine on the cloud provider. Before doing so, it should fire an event on the Salt event bus. The tag for this event is salt/cloud/<vm name>/destroying. Once the virtual machine has been destroyed, another event is fired. The tag for that event is salt/cloud/<vm name>/destroyed.
This function is normally called with the -d option:
salt-cloud -d myinstance
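A skeleton of such a destroy() function, with the two required events wrapped around a provider-specific teardown call (_destroy_node() is hypothetical, standing in for the real provider API call), might look like:
def destroy(name, call=None):
    '''
    Destroy a node; fires the destroying and destroyed events around the
    provider-specific teardown call. Assumes salt.utils.cloud is imported.
    '''
    salt.utils.cloud.fire_event(
        'event',
        'destroying instance',
        'salt/cloud/{0}/destroying'.format(name),
        {'name': name},
    )

    _destroy_node(name)  # hypothetical call to the provider's API

    salt.utils.cloud.fire_event(
        'event',
        'destroyed instance',
        'salt/cloud/{0}/destroyed'.format(name),
        {'name': name},
    )
    return True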
This function returns a list of nodes available on this cloud provider, using the following fields:
id
image
size
state
private_ips
public_ips
No other fields should be returned in this function, and all of these fields should be returned, even if empty. The private_ips and public_ips fields should always be of a list type, even if empty, and the other fields should always be of a str type. This function is normally called with the -Q option:
salt-cloud -Q
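To illustrate, a list_nodes() return for a single node might look like the following (all values hypothetical):
{'my-instance': {'id': 'i-1234abcd',
                 'image': 'ami-1624987f',
                 'size': 't1.micro',
                 'state': 'running',
                 'private_ips': ['10.0.0.5'],
                 'public_ips': ['203.0.113.10']}}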
All information available about all nodes should be returned in this function. The fields in the list_nodes() function should also be returned, even if they would not normally be provided by the cloud provider. This is because some functions both within Salt and 3rd party will break if an expected field is not present. This function is normally called with the -F option:
salt-cloud -F
This function returns only the fields specified in the query.selection option in /etc/salt/cloud. Because this function is so generic, all of the heavy lifting has been moved into the salt.utils.cloud library.
A function to call list_nodes_select() still needs to be present. In general, the following code can be used as-is:
def list_nodes_select(call=None):
    '''
    Return a list of the VMs that are on the provider, with select fields
    '''
    return salt.utils.cloud.list_nodes_select(
        list_nodes_full('function'), __opts__['query.selection'], call,
    )
However, depending on the cloud provider, additional variables may be required. For instance, some modules use a conn object, or may need to pass other options into list_nodes_full(). In this case, be sure to update the function appropriately:
def list_nodes_select(conn=None, call=None):
    '''
    Return a list of the VMs that are on the provider, with select fields
    '''
    if not conn:
        conn = get_conn()  # pylint: disable=E0602
    return salt.utils.cloud.list_nodes_select(
        list_nodes_full(conn, 'function'),
        __opts__['query.selection'],
        call,
    )
This function is normally called with the -S option:
salt-cloud -S
This function is used to display all of the information about a single node that is available from the cloud provider. The simplest way to provide this is usually to call list_nodes_full(), and return just the data for the requested node. It is normally called as an action:
salt-cloud -a show_instance myinstance
Extra functionality may be added to a cloud provider in the form of an --action or a --function. Actions are performed against a cloud instance/virtual machine, and functions are performed against a cloud provider.
Actions are calls that are performed against a specific instance or virtual machine. The show_instance action should be available in all cloud modules. Actions are normally called with the -a option:
salt-cloud -a show_instance myinstance
Actions must accept a name as a first argument, may optionally support any number of kwargs as appropriate, and must accept an argument of call, with a default of None.
Before performing any other work, an action should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user. A basic action looks like:
def show_instance(name, call=None):
    '''
    Show the details from the provider concerning a single instance
    '''
    if call != 'action':
        raise SaltCloudSystemExit(
            'The show_instance action must be called with -a or --action.'
        )

    return _get_node(name)
Please note that generic kwargs, if used, are passed through to actions as kwargs and not **kwargs. An example of this is seen in the Functions section.
Functions are calls that are performed against a specific cloud provider. An optional function that is often useful is show_image, which describes an image in detail. Functions are normally called with the -f option:
salt-cloud -f show_image my-cloud-provider image='Ubuntu 13.10 64-bit'
A function may accept any number of kwargs as appropriate, and must accept an argument of call with a default of None.
Before performing any other work, a function should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user. A basic function looks like:
def show_image(kwargs, call=None):
    '''
    Show the details from EC2 concerning an AMI
    '''
    if call != 'function':
        raise SaltCloudSystemExit(
            'The show_image function must be called with -f or --function.'
        )

    params = {'ImageId.1': kwargs['image'],
              'Action': 'DescribeImages'}
    result = query(params)
    log.info(result)

    return result
Take note that generic kwargs are passed through to functions as kwargs and not **kwargs.
Salt Cloud works primarily by executing a script on the virtual machines as soon as they become available. The script that is executed is referenced in the cloud profile as the script. In older versions, this was the os argument. This was changed in 0.8.2.
A number of legacy scripts exist in the deploy directory in the saltcloud source tree. The preferred method is currently to use the salt-bootstrap script. A stable version is included with each release tarball starting with 0.8.4. The most updated version can be found at:
https://github.com/saltstack/salt-bootstrap
If you do not specify a script argument, this script will be used as the default.
If the Salt Bootstrap script does not meet your needs, you may write your own. The script should be written in bash and is a Jinja template. Deploy scripts need to execute a number of functions to do a complete Salt setup, such as installing the Salt minion, placing the minion's keys and configuration before the minion is started, and starting the salt-minion service.
A good, well commented, example of this process is the Fedora deployment script:
https://github.com/saltstack/salt-cloud/blob/master/saltcloud/deploy/Fedora.sh
A number of legacy deploy scripts are included with the release tarball. None of them is as functional or complete as Salt Bootstrap; they are still included for academic purposes.
If you want to be assured of always using the latest Salt Bootstrap script, there are a few generic templates available in the deploy directory of your saltcloud source tree:
curl-bootstrap
curl-bootstrap-git
python-bootstrap
wget-bootstrap
wget-bootstrap-git
These are example scripts which were designed to be customized, adapted, and refit to meet your needs. One important use of them is to pass options to the salt-bootstrap script, such as updating to specific git tags.
Once a minion has been deployed, it has the option to run a salt command. Normally, this would be the state.highstate command, which would finish provisioning the VM. Another common option is state.sls, or for just testing, test.ping. This is configured in the main cloud config file:
start_action: state.highstate
This is currently considered to be experimental functionality, and may not work well with all providers. If you experience problems with Salt Cloud hanging after Salt is deployed, consider using Startup States instead.
For whatever reason, you may want to skip the deploy script altogether. This results in a VM being spun up much faster, with absolutely no configuration. This can be set from the command line:
salt-cloud --no-deploy -p micro_aws my_instance
Or it can be set from the main cloud config file:
deploy: False
Or it can be set from the provider's configuration:
RACKSPACE.user: example_user
RACKSPACE.apikey: 123984bjjas87034
RACKSPACE.deploy: False
Or even on the VM's profile settings:
ubuntu_aws:
  provider: aws
  image: ami-7e2da54e
  size: t1.micro
  deploy: False
The default for deploy is True.
In the profile, you may also set the script option to None:
script: None
This is the slowest option, since it still uploads the None deploy script and executes it.
Salt Bootstrap can be updated automatically with salt-cloud:
salt-cloud -u
salt-cloud --update-bootstrap
Bear in mind that this updates to the latest (unstable) version, so use with caution.
When Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt-bootstrap to put in place. After the script has run, they are deleted. To keep these files around (mostly for debugging purposes), the --keep-tmp option can be added:
salt-cloud -p myprofile mymachine --keep-tmp
For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable).
Custom deploy scripts are unlikely to need custom arguments to be passed to them, but salt-bootstrap has been extended quite a bit, and this may be necessary. script_args can be specified in either the profile or the map file, to pass arguments to the deploy script:
aws-amazon:
  provider: aws
  image: ami-1624987f
  size: t1.micro
  ssh_username: ec2-user
  script: bootstrap-salt
  script_args: -c /tmp/
This has also been tested to work with pipes, if needed:
script_args: | head
In addition to the salt-cloud command, Salt Cloud can be called from Salt, in a variety of different ways. Most users will be interested in either the execution module or the state module, but it is also possible to call Salt Cloud as a runner.
Because the actual work will be performed on a remote minion, the normal Salt Cloud configuration must exist on any target minion that needs to execute a Salt Cloud command. Because Salt Cloud now supports breaking out configuration into individual files, the configuration is easily managed using Salt's own file.managed state function. For example, the following directories allow this configuration to be managed easily:
/etc/salt/cloud.providers.d/
/etc/salt/cloud.profiles.d/
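As a minimal sketch, assuming the provider and profile files are kept in the master fileserver under salt://cloud/ (a hypothetical path), an sls file to distribute them might look like:
cloud_providers:
  file.recurse:
    - name: /etc/salt/cloud.providers.d/
    - source: salt://cloud/providers.d

cloud_profiles:
  file.recurse:
    - name: /etc/salt/cloud.profiles.d/
    - source: salt://cloud/profiles.d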
Keep in mind that when creating minions, Salt Cloud will create public and private minion keys, upload them to the minion, and place the public key on the machine that created the minion. It will not attempt to place any public minion keys on the master, unless the minion which was used to create the instance is also the Salt Master. This is because granting arbitrary minions access to modify keys on the master is a serious security risk, and must be avoided.
The cloud module is available to use from the command line. At the moment, almost every standard Salt Cloud feature is available to use. The following commands are available:
This command is designed to show images that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). Listing images requires a provider to be configured, and specified:
salt myminion cloud.list_images my-cloud-provider
This command is designed to show sizes that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). This command is not available for all cloud providers; see the provider-specific documentation for details. Listing sizes requires a provider to be configured, and specified:
salt myminion cloud.list_sizes my-cloud-provider
This command is designed to show locations that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). This command is not available for all cloud providers; see the provider-specific documentation for details. Listing locations requires a provider to be configured, and specified:
salt myminion cloud.list_locations my-cloud-provider
This command is used to query all configured cloud providers, and display all instances associated with those accounts. By default, it will run a standard query, returning the following fields:
id
image
private_ips
public_ips
size
state (running, stopped, pending, etc.; this state is dependent upon the provider)
This command may also be used to perform a full query or a select query, as described below. The following usages are available:
salt myminion cloud.query
salt myminion cloud.query list_nodes
salt myminion cloud.query list_nodes_full
This command behaves like the query command, but lists all information concerning each instance as provided by the cloud provider, in addition to the fields returned by the query command.
salt myminion cloud.full_query
This command behaves like the query command, but only returns select fields as defined in the /etc/salt/cloud configuration file. A sample configuration for this section of the file might look like:
query.selection:
  - id
  - key_name
This configuration would only return the id and key_name fields, for those cloud providers that support those two fields. This would be called using the following command:
salt myminion cloud.select_query
This command is used to create an instance using a profile that is configured on the target minion. Please note that the profile must be configured before this command can be used with it.
salt myminion cloud.profile ec2-centos64-x64 my-new-instance
Please note that the execution module does not run in parallel mode. Using multiple minions to create instances can effectively perform parallel instance creation.
This command is similar to the profile command, in that it is used to create a new instance. However, it does not require a profile to be pre-configured. Instead, all of the options that are normally configured in a profile are passed directly to Salt Cloud to create the instance:
salt myminion cloud.create my-ec2-config my-new-instance \
    image=ami-1624987f size='t1.micro' ssh_username=ec2-user \
    securitygroup=default delvol_on_destroy=True
Please note that the execution module does not run in parallel mode. Using multiple minions to create instances can effectively perform parallel instance creation.
This command is used to destroy an instance or instances. This command will search all configured providers and remove any instance(s) which match the name(s) passed in here. The results of this command are non-reversible and should be used with caution.
salt myminion cloud.destroy myinstance
salt myminion cloud.destroy myinstance1,myinstance2
This command implements both the action and the function commands used in the standard salt-cloud command. If one of the standard action commands is used, an instance name must be provided. If one of the standard function commands is used, a provider configuration must be named.
salt myminion cloud.action start instance=myinstance
salt myminion cloud.action show_image provider=my-ec2-config \
image=ami-1624987f
The actions available are largely dependent upon the module for the specific cloud provider. The following actions are available for all cloud providers:
list_nodes: behaves like the query function as described above, but is only performed against a single cloud provider. A provider configuration must be included.
list_nodes_full: behaves like the full_query function as described above, but is only performed against a single cloud provider. A provider configuration must be included.
list_nodes_select: behaves like the select_query function as described above, but is only performed against a single cloud provider. A provider configuration must be included.
show_instance: a thin wrapper around list_nodes, which returns the full information about a single instance. An instance name must be provided.
A subset of the execution module is available through the cloud state module. Not all functions are currently included, because there is insufficient code for them to perform statefully. For example, a command to create an instance may be issued with a series of options, but those options cannot currently be statefully managed. Additional states to manage these options will be released at a later time.
This state will ensure that an instance is present inside a particular cloud provider. Any option that is normally specified in the cloud.create execution module and function may be declared here, but only the actual presence of the instance will be managed statefully.
my-instance-name:
  cloud.present:
    - provider: my-ec2-config
    - image: ami-1624987f
    - size: 't1.micro'
    - ssh_username: ec2-user
    - securitygroup: default
    - delvol_on_destroy: True
This state will ensure that an instance is present inside a particular cloud provider. This function calls the cloud.profile execution module and function, but as with cloud.present, only the actual presence of the instance will be managed statefully.
my-instance-name:
  cloud.profile:
    - profile: ec2-centos64-x64
This state will ensure that an instance (identified by name) does not exist in any of the cloud providers configured on the target minion. Please note that this state is non-reversible and may be considered especially destructive when issued as a cloud state.
my-instance-name:
  cloud.absent
The cloud runner module is executed on the master, and performs actions using the configuration and Salt modules on the master itself. This means that any public minion keys will also be properly accepted by the master.
Using the functions in the runner module is no different than using those in the execution module, outside of the behavior described in the above paragraph. The functions available inside the runner mirror those in the execution module.
Outside of the standard usage of salt-run itself, commands are executed as usual:
salt-run cloud.profile ec2-centos64-x86_64 my-instance-name
The execution, state, and runner modules ultimately all use the CloudClient library that ships with Salt. To use the CloudClient library locally (either on the master or a minion), create a client object and issue a command against it:
import salt.cloud
import pprint
client = salt.cloud.CloudClient('/etc/salt/cloud')
nodes = client.query()
pprint.pprint(nodes)
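Other CloudClient methods follow the same pattern. For instance, an instance could be created from a pre-configured profile (the profile and instance names here are hypothetical):
import pprint
import salt.cloud

client = salt.cloud.CloudClient('/etc/salt/cloud')
new_vm = client.profile('ec2-centos64-x64', names=['my-new-instance'])
pprint.pprint(new_vm)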
A number of features are available in most cloud providers, but not all are available everywhere. This may be because the feature isn't supported by the cloud provider itself, or it may only be that the feature has not yet been added to Salt Cloud. In a handful of cases, it is because the feature does not make sense for a particular cloud provider (Saltify, for instance).
This matrix shows which features are available in which cloud providers, as far as Salt Cloud is concerned. This is not a comprehensive list of all features available in all cloud providers, and should not be used to make business decisions concerning choosing a cloud provider. In most cases, adding support for a feature to Salt Cloud requires only a little effort.
Both AWS and Rackspace are listed as "Legacy". This is because those drivers have been replaced by other drivers, which are generally the preferred method for working with those providers.
The EC2 driver should be used instead of the AWS driver, when possible. The OpenStack driver should be used instead of the Rackspace driver, unless the user is dealing with instances in "the old cloud" in Rackspace.
When adding new features to a particular cloud provider, please make sure to add the feature to this table. Additionally, if you notice a feature that is not properly listed here, pull requests to fix it are appreciated.
These are features that are available for almost every provider.
Feature | AWS (Legacy) | CloudStack | Digital Ocean | EC2 | GoGrid | JoyEnt | Linode | OpenStack | Parallels | Rackspace (Legacy) | Saltify | Softlayer | Softlayer Hardware | Aliyun
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Query | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | |
Full Query | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | |
Selective Query | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | |
List Sizes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | |
List Images | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | |
List Locations | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | |
create | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
destroy | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
These are features that are performed on a specific instance, and require an instance name to be passed in. For example:
# salt-cloud -a attach_volume ami.example.com
Actions | AWS (Legacy) | CloudStack | Digital Ocean | EC2 | GoGrid | JoyEnt | Linode | OpenStack | Parallels | Rackspace (Legacy) | Saltify | Softlayer | Softlayer Hardware | Aliyun |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
attach_volume | Yes | |||||||||||||
create_attach_volumes | Yes | Yes | ||||||||||||
del_tags | Yes | Yes | ||||||||||||
delvol_on_destroy | Yes | |||||||||||||
detach_volume | Yes | |||||||||||||
disable_term_protect | Yes | Yes | ||||||||||||
enable_term_protect | Yes | Yes | ||||||||||||
get_tags | Yes | Yes | ||||||||||||
keepvol_on_destroy | Yes | |||||||||||||
list_keypairs | Yes | |||||||||||||
rename | Yes | Yes | ||||||||||||
set_tags | Yes | Yes | ||||||||||||
show_delvol_on_destroy | Yes | |||||||||||||
show_instance | Yes | Yes | Yes | Yes | Yes | Yes | ||||||||
show_term_protect | Yes | |||||||||||||
start | Yes | Yes | Yes | Yes | Yes | |||||||||
stop | Yes | Yes | Yes | Yes | Yes | |||||||||
take_action | Yes |
These are features that are performed against a specific cloud provider, and require the name of the provider to be passed in. For example:
# salt-cloud -f list_images my_digitalocean
Functions | AWS (Legacy) | CloudStack | Digital Ocean | EC2 | GoGrid | JoyEnt | Linode | OpenStack | Parallels | Rackspace (Legacy) | Saltify | Softlayer | Softlayer Hardware | Aliyun |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
block_device_mappings | Yes | |||||||||||||
create_keypair | Yes | |||||||||||||
create_volume | Yes | |||||||||||||
delete_key | Yes | |||||||||||||
delete_keypair | Yes | |||||||||||||
delete_volume | Yes | |||||||||||||
get_image | Yes | Yes | Yes | Yes | ||||||||||
get_ip | Yes | |||||||||||||
get_key | Yes | |||||||||||||
get_keyid | Yes | |||||||||||||
get_keypair | Yes | |||||||||||||
get_networkid | Yes | |||||||||||||
get_node | Yes | |||||||||||||
get_password | Yes | |||||||||||||
get_size | Yes | Yes | Yes | |||||||||||
get_spot_config | Yes | |||||||||||||
get_subnetid | Yes | |||||||||||||
iam_profile | Yes | Yes | Yes | |||||||||||
import_key | Yes | |||||||||||||
key_list | Yes | |||||||||||||
keyname | Yes | Yes | ||||||||||||
list_availability_zones | Yes | Yes | ||||||||||||
list_custom_images | Yes | |||||||||||||
list_keys | Yes | |||||||||||||
list_nodes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
list_nodes_full | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
list_nodes_select | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
list_vlans | Yes | Yes | ||||||||||||
rackconnect | Yes | |||||||||||||
reboot | Yes | Yes | Yes | |||||||||||
reformat_node | Yes | |||||||||||||
securitygroup | Yes | Yes | ||||||||||||
securitygroupid | Yes | Yes | ||||||||||||
show_image | Yes | Yes | Yes | |||||||||||
show_key | Yes | |||||||||||||
show_keypair | Yes | Yes | ||||||||||||
show_volume | Yes | Yes |
One of the most powerful features of the Salt framework is the Event Reactor. As the Reactor was in development, Salt Cloud was regularly updated to take advantage of the Reactor upon completion. As such, various aspects of both the creation and destruction of instances with Salt Cloud fire events to the Salt Master, which can be used by the Event Reactor.
As of this writing, all events in Salt Cloud have a tag, which includes the ID of the instance being managed, and a payload which describes the task that is currently being handled. A Salt Cloud tag looks like:
salt/cloud/<minion_id>/<task>
For instance, the first event fired when creating an instance named web1 would look like:
salt/cloud/web1/creating
Assuming this instance is using the ec2-centos profile, which is in turn using the ec2-config provider, the payload for this tag would look like:
{'name': 'web1',
 'profile': 'ec2-centos',
 'provider': 'ec2-config'}
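To watch these tags and payloads arrive in real time (useful while developing reactors), the state.event runner can be used on the master; note that this runner is only present in newer Salt releases:
salt-run state.event pretty=True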
When an instance is created in Salt Cloud, whether by map, profile, or directly through an API, a minimum of five events are normally fired. More may be available, depending upon the cloud provider being used. Some of the common events are described below.
This event states simply that the process to create an instance has begun. At this point in time, no actual work has begun. The payload for this event includes:
name
profile
provider
Salt Cloud is about to make a request to the cloud provider to create an instance. At this point, all of the variables required to make the request have been gathered, and the payload of the event will reflect those variables which do not normally pose a security risk. What is returned here is dependent upon the cloud provider. Some common variables are:
name
image
size
location
The instance has been successfully requested, but the necessary information to log into the instance (such as IP address) is not yet available. This event marks the beginning of the process to wait for this information.
The payload for this event normally only includes the instance_id.
The information required to log into the instance has been retrieved, but the instance is not necessarily ready to be accessed. Following this event, Salt Cloud will wait for the IP address to respond to a ping, then wait for the specified port (usually 22) to respond to a connection, and on Linux systems, for SSH to become available. Salt Cloud will attempt to issue the date command on the remote system, as a means to check for availability. If no ssh_username has been specified, a list of usernames (starting with root) will be attempted. If one or more usernames was configured for ssh_username, they will be added to the beginning of the list, in order.
The payload for this event normally only includes the ip_address.
The necessary port has been detected as available, and now Salt Cloud can log into the instance, upload any files used for deployment, and run the deploy script. Once the script has completed, Salt Cloud will log back into the instance and remove any remaining files.
A number of variables are used to deploy instances, and the majority of these will be available in the payload. Any keys, passwords or other sensitive data will be scraped from the payload. Most of the variables returned will be related to the profile or provider config, and any default values that could have been changed in the profile or provider, but weren't.
The deploy sequence has completed, and the instance is now available, Salted, and ready for use. This event is the final task for Salt Cloud, before returning instance information to the user and exiting.
The payload for this event contains little more than the initial creating event. This event is required in all cloud providers.
The Event Reactor is built into the Salt Master process, and as such is configured via the master configuration file. Normally this will be a YAML file located at /etc/salt/master. Additionally, master configuration items can be stored, in YAML format, inside the /etc/salt/master.d/ directory.
These configuration items may be stored in either location; however, they may only be stored in one location. For organizational and security purposes, it may be best to create a single configuration file, which contains only Event Reactor configuration, at /etc/salt/master.d/reactor.
The Event Reactor uses a top-level configuration item called reactor. This block contains a list of tags to be watched for, each of which also includes a list of sls files. For instance:
reactor:
  - 'salt/minion/*/start':
    - '/srv/reactor/custom-reactor.sls'
  - 'salt/cloud/*/created':
    - '/srv/reactor/cloud-alert.sls'
  - 'salt/cloud/*/destroyed':
    - '/srv/reactor/cloud-destroy-alert.sls'
The above configuration configures reactors for three different tags: one which is fired when a minion process has started and is available to receive commands, one which is fired when a cloud instance has been created, and one which is fired when a cloud instance is destroyed.
Note that each tag contains a wildcard (*) in it. For each of these tags, this will normally refer to a minion_id. This is not required of event tags, but is very common.
Reactor sls files should be placed in the /srv/reactor/ directory for consistency between environments, but this is not currently enforced by Salt.
Reactor sls files follow a similar format to other sls files in Salt. By default they are written in YAML and can be templated using Jinja, but since they are processed through Salt's rendering system, any available renderer (JSON, Mako, Cheetah, etc.) can be used.
As with other sls files, each stanza will start with a declaration ID, followed by the function to run, and then any arguments for that function. For example:
# /srv/reactor/cloud-alert.sls
new_instance_alert:
  cmd.pagerduty.create_event:
    - tgt: alertserver
    - kwarg:
        description: "New instance: {{ data['name'] }}"
        details: "New cloud instance created on {{ data['provider'] }}"
        service_key: 1626dead5ecafe46231e968eb1be29c4
        profile: my-pagerduty-account
When the Event Reactor receives an event notifying it that a new instance has been created, this sls will create a new incident in PagerDuty, using the configured PagerDuty account.
The declaration ID in this example is new_instance_alert. The function called is cmd.pagerduty.create_event. The cmd portion of this function specifies that an execution module and function will be called, in this case, the pagerduty.create_event function.
Because an execution module is specified, a target (tgt) must be specified on which to call the function. In this case, a minion called alertserver has been used. Any arguments passed through to the function are declared in the kwarg block.
When Salt Cloud creates an instance, by default it will install the Salt Minion onto the instance, along with any specified minion configuration, and automatically accept that minion's keys on the master. One of the configuration options that can be specified is startup_states, which is commonly set to highstate. This will tell the minion to immediately apply a highstate, as soon as it is able to do so.
This can present a problem with some system images on some cloud providers. For instance, Salt Cloud can be configured to log in as either the root user, or a user with sudo access. While some providers commonly use images that lock out remote root access and require a user with sudo privileges to log in (notably EC2, with their ec2-user login), most cloud providers fall back to root as the default login on all images, including for operating systems (such as Ubuntu) which normally disallow remote root login.
For users of these operating systems, it is understandable that a highstate would include configuration to block remote root logins again. However, Salt Cloud may not have finished cleaning up its deployment files by the time the minion process has started, and kicked off a highstate run. Users have reported errors from Salt Cloud getting locked out while trying to clean up after itself.
The goal of a startup state may be achieved using the Event Reactor. Because a minion fires an event when it is able to receive commands, this event can effectively be used inside the reactor system instead. The following will point the reactor system to the right sls file:
reactor:
  - 'salt/cloud/*/created':
    - '/srv/reactor/startup_highstate.sls'
And the following sls file will start a highstate run on the target minion:
# /srv/reactor/startup_highstate.sls
reactor_highstate:
  cmd.state.highstate:
    - tgt: {{ data['name'] }}
Because this event will not be fired until Salt Cloud has cleaned up after itself, the highstate run will not step on Salt Cloud's toes. And because every file on the minion is configurable, including /etc/salt/minion, the startup_states can still be configured for future minion restarts, if desired.
netapi modules¶
netapi modules, put simply, bind a port and start a service. They are purposefully open-ended and can be used to present a variety of external interfaces to Salt, and even present multiple interfaces at once.
All netapi configuration is done in the Salt master config and takes a form similar to the following:
rest_cherrypy:
  port: 8000
  debug: True
  ssl_crt: /etc/pki/tls/certs/localhost.crt
  ssl_key: /etc/pki/tls/certs/localhost.key
The __virtual__ function¶
Like all module types in Salt, netapi modules go through Salt's loader interface to determine if they should be loaded into memory and then executed.
The __virtual__ function in the module makes this determination and should return False or a string that will serve as the name of the module. If the module raises an ImportError or any other errors, it will not be loaded.
The start function¶
The start() function will be called for each netapi module that is loaded. This function should contain the server loop that actually starts the service. Each module's service is started in its own process.
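A bare-bones netapi module might therefore look like the following sketch; the module name rest_example and its config key are hypothetical, and a real module would run an actual server inside start():
# A hedged sketch of a netapi module; 'rest_example' is a hypothetical name.
# __opts__ is injected by the Salt loader.
import time

__virtualname__ = 'rest_example'


def __virtual__():
    # Only load if a 'rest_example' section exists in the master config
    if __virtualname__ in __opts__:
        return __virtualname__
    return False


def start():
    '''
    Server loop; Salt starts this function in its own process
    '''
    port = __opts__[__virtualname__].get('port', 8000)  # hypothetical option
    while True:
        # A real module would bind the port and serve requests here
        time.sleep(10)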
As with the rest of Salt, it is a best practice to include liberal inline documentation in the form of a module docstring and docstrings on any classes, methods, and functions in your netapi module.
The loader makes the __opts__ data structure available to any function in a netapi module.
netapi modules provide API-centric access to Salt, usually via externally facing services such as REST, WebSockets, XMPP, or XML-RPC.
In general netapi modules bind to a port and start a service. They are purposefully open-ended. A single module can be configured to run, or multiple modules can run simultaneously.
netapi modules are enabled by adding configuration to your Salt Master config file and then starting the salt-api daemon. Check the docs for each module to see external requirements and configuration settings.
Communication with Salt and Salt satellite projects is done using Salt's own Python API. A list of available client interfaces is below.
salt-api
Prior to Salt's 2014.7.0 release, netapi modules lived in the separate sister project, salt-api. That project has been merged into the main Salt project.
Salt's client interfaces expose executing functions by crafting a dictionary of values that are mapped to function arguments. This allows calling functions simply by creating a data structure. (And this is exactly how much of Salt's own internals work!)
salt.netapi.NetapiClient(opts)¶
Provide a uniform method of accessing the various client interfaces in Salt in the form of low-data data structures. For example:
>>> client = NetapiClient(__opts__)
>>> lowstate = {'client': 'local', 'tgt': '*', 'fun': 'test.ping', 'arg': ''}
>>> client.run(lowstate)
local(*args, **kwargs)¶
Run execution modules synchronously.
See salt.client.LocalClient.cmd() for all available parameters.
Sends a command from the master to the targeted minions. This is the same interface that Salt's own CLI uses. Note the arg and kwarg parameters are sent down to the minion(s) and the given function, fun, is called with those parameters.
Returns: the result from the execution module
local_async(*args, **kwargs)¶
Run execution modules asynchronously.
Wraps salt.client.LocalClient.run_job().
Returns: the job ID
local_batch(*args, **kwargs)¶
Run execution modules against batches of minions.
New in version 0.8.4.
Wraps salt.client.LocalClient.cmd_batch().
Returns: the result from the execution module for each batch of returns
runner(fun, timeout=None, **kwargs)¶
Run runner modules synchronously.
Wraps salt.runner.RunnerClient.cmd_sync().
Note that runner functions must be called using keyword arguments. Positional arguments are not supported.
Returns: the result from the runner module
wheel(fun, **kwargs)¶
Run wheel modules synchronously.
Wraps salt.wheel.WheelClient.master_call().
Note that wheel functions must be called using keyword arguments. Positional arguments are not supported.
Returns: the result from the wheel module
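For example, the runner and wheel interfaces can be exercised directly; manage.up and key.list_all are standard runner and wheel functions:
client = NetapiClient(__opts__)
minions_up = client.runner('manage.up')  # list of minions that respond
all_keys = client.wheel('key.list_all')  # minion keys known to the master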
The Salt Virt cloud controller capability was initially added to Salt in version 0.14.0 as an alpha technology.
The initial Salt Virt system supports core cloud operations.
Many features are currently under development to enhance the capabilities of the Salt Virt systems.
Note
It is noteworthy that Salt was originally developed with the intent of using the Salt communication system as the backbone to a cloud controller. This means that the Salt Virt system is not an afterthought, simply a system that took the back seat to other development. The original attempt to develop the cloud control aspects of Salt was a project called butter. This project never took off, but it was functional and proved the early viability of Salt as a cloud controller.
A tutorial about how to get Salt Virt up and running has been added to the tutorial section:
The point of interaction with the cloud controller is the virt runner. The virt runner comes with routines to execute specific virtual machine routines.
Reference documentation for the virt runner is available with the runner module documentation:
The Salt Virt system is based on using Salt to query live data about hypervisors and then using the data gathered to make decisions about cloud operations. This means that no external resources are required to run Salt Virt, and that the information gathered about the cloud is live and accurate.
Salt Virt allows for the disks created for deployed virtual machines to be finely configured. The configuration is a simple data structure which is read from the config.option function, meaning that the configuration can be stored in the minion config file, the master config file, or the minion's pillar.
This configuration option is called virt.disk. The default virt.disk data structure looks like this:
virt.disk:
  default:
    - system:
        size: 8192
        format: qcow2
        model: virtio
Note
The format and model do not need to be defined; Salt will default to the optimal format used by the underlying hypervisor. In the case of kvm, this is qcow2 and virtio.
This configuration sets up a disk profile called default. The default profile creates a single system disk on the virtual machine.
Many environments will require more complex disk profiles and may require more than one profile, this can be easily accomplished:
virt.disk:
  default:
    - system:
        size: 8192
  database:
    - system:
        size: 8192
    - data:
        size: 30720
  web:
    - system:
        size: 1024
    - logs:
        size: 5120
This configuration allows for one of three profiles to be selected, allowing virtual machines to be created with storage that matches the needs of the deployed vm.
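Because the profiles are read through config.option, a configured profile can be inspected from the command line; for example (the hypervisor1 minion ID is hypothetical):
salt 'hypervisor1' config.option virt.disk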
Salt Virt allows for the network devices created for deployed virtual machines to be finely configured. The configuration is a simple data structure which is read from the config.option function, meaning that the configuration can be stored in the minion config file, the master config file, or the minion's pillar.
This configuration option is called virt.nic. By default the virt.nic option is empty, but defaults to a data structure which looks like this:
virt.nic:
  default:
    eth0:
      bridge: br0
      model: virtio
Note
The model does not need to be defined; Salt will default to the optimal model used by the underlying hypervisor. In the case of kvm, this model is virtio.
This configuration sets up a network profile called default. The default profile creates a single Ethernet device on the virtual machine that is bridged to the hypervisor's br0 interface. This default setup does not require setting up the virt.nic configuration, and is the reason why a default install only requires setting up the br0 bridge device on the hypervisor.
Many environments will require more complex network profiles and may require more than one profile, this can be easily accomplished:
virt.nic:
  dual:
    eth0:
      bridge: service_br
    eth1:
      bridge: storage_br
  single:
    eth0:
      bridge: service_br
  triple:
    eth0:
      bridge: service_br
    eth1:
      bridge: storage_br
    eth2:
      bridge: dmz_br
  all:
    eth0:
      bridge: service_br
    eth1:
      bridge: storage_br
    eth2:
      bridge: dmz_br
    eth3:
      bridge: database_br
  dmz:
    eth0:
      bridge: service_br
    eth1:
      bridge: dmz_br
  database:
    eth0:
      bridge: service_br
    eth1:
      bridge: database_br
This configuration allows for one of six profiles to be selected, allowing virtual machines to be created which attach to different networks depending on the needs of the deployed vm.
The default renderer for SLS files is the YAML renderer. YAML is a markup language with many powerful features. However, Salt uses a small subset of YAML that maps over very commonly used data structures, like lists and dictionaries. It is the job of the YAML renderer to take the YAML data structure and compile it into a Python data structure for use by Salt.
Though YAML syntax may seem daunting and terse at first, there are only three very simple rules to remember when writing YAML for SLS files.
YAML uses a fixed indentation scheme to represent relationships between data layers. Salt requires that the indentation for each level consists of exactly two spaces. Do not use tabs.
Python dictionaries are, of course, simply key-value pairs. Users from other languages may recognize this data type as hashes or associative arrays.
Dictionary keys are represented in YAML as strings terminated by a trailing colon. Values are represented by a string following the colon, separated by a space:
my_key: my_value
In Python, the above maps to:
{'my_key': 'my_value'}
Alternatively, a value can be associated with a key through indentation.
my_key:
  my_value
Note
The above syntax is valid YAML but is uncommon in SLS files because most often, the value for a key is not singular but instead is a list of values.
In Python, the above maps to:
{'my_key': 'my_value'}
Dictionaries can be nested:
first_level_dict_key:
  second_level_dict_key: value_in_second_level_dict
And in Python:
{
    'first_level_dict_key': {
        'second_level_dict_key': 'value_in_second_level_dict'
    }
}
To represent lists of items, a single dash followed by a space is used. Multiple items are a part of the same list as a function of their having the same level of indentation.
- list_value_one
- list_value_two
- list_value_three
Lists can be the value of a key-value pair. This is quite common in Salt:
my_dictionary:
  - list_value_one
  - list_value_two
  - list_value_three
In Python, the above maps to:
{'my_dictionary': ['list_value_one', 'list_value_two', 'list_value_three']}
One easy way to learn more about how YAML gets rendered into Python data structures is to use an online YAML parser to see the Python output.
One excellent choice for experimenting with YAML parsing is: http://yaml-online-parser.appspot.com/
In 0.10.4 the external_nodes system was upgraded to allow for modular subsystems to be used to generate the top file data for a highstate run on the master.
The old external_nodes option has been removed. The master tops system contains a number of subsystems that are loaded via the Salt loader interfaces like modules, states, returners, runners, etc.
Using the new master_tops option is simple:
master_tops:
  ext_nodes: cobbler-external-nodes
for Cobbler or:
master_tops:
  reclass:
    inventory_base_uri: /etc/reclass
    classes_uri: roles
for Reclass.
It's also possible to create custom master_tops modules. These modules must go in a subdirectory called tops in the extension_modules directory. The extension_modules directory is not defined by default (the default /srv/salt/_modules will NOT work as of this release)
Custom tops modules are written like any other execution module, see the source for the two modules above for examples of fully functional ones. Below is a degenerate example:
/etc/salt/master:
extension_modules: /srv/salt/modules
master_tops:
  customtop: True
/srv/salt/modules/tops/customtop.py:
import logging
import sys

# Define the module's virtual name
__virtualname__ = 'customtop'

log = logging.getLogger(__name__)


def __virtual__():
    return __virtualname__


def top(**kwargs):
    log.debug('Calling top in customtop')
    return {'base': ['test']}
salt minion state.show_top should then display something like:
$ salt minion state.show_top
minion
----------
base:
  - test
Note
Salt ssh is considered production ready in version 2014.7.0
Note
On many systems, the salt-ssh
executable will be in its own package, usually named
salt-ssh
.
In version 0.17.0 of Salt a new transport system was introduced, the ability to use SSH for Salt communication. This addition allows for Salt routines to be executed on remote systems entirely through ssh, bypassing the need for a Salt Minion to be running on the remote systems and the need for a Salt Master.
Note
The Salt SSH system does not supersede the standard Salt communication systems, it simply offers an SSH based alternative that does not require ZeroMQ and a remote agent. Be aware that since all communication with Salt SSH is executed via SSH it is substantially slower than standard Salt with ZeroMQ.
Salt SSH is very easy to use, simply set up a basic roster file of the systems to connect to and run salt-ssh commands in a similar way as standard salt commands.
Note
Salt SSH is eventually supposed to support the same set of commands and functionality as the standard salt command.
At the moment fileserver operations must be wrapped to ensure that the relevant files are delivered with the salt-ssh commands. The state module is an exception, which compiles the state run on the master, and in the process finds all the references to salt:// paths and copies those files down in the same tarball as the state run. However, needed fileserver wrappers are still under development.
The roster system in Salt allows for remote minions to be easily defined.
Note
See the Roster documentation for more details.
Simply create the roster file, the default location is /etc/salt/roster:
web1: 192.168.42.1
This is a very basic roster file where a Salt ID is being assigned to an IP address. A more elaborate roster can be created:
web1:
  host: 192.168.42.1   # The IP addr or DNS hostname
  user: fred           # Remote executions will be executed as user fred
  passwd: foobarbaz    # The password to use for login, if omitted, keys are used
  sudo: True           # Whether to sudo to root, not enabled by default
web2:
  host: 192.168.42.2
Note
sudo works only if NOPASSWD is set for user in /etc/sudoers:
fred ALL=(ALL) NOPASSWD: ALL
The salt-ssh command can be easily executed in the same way as a salt command:
salt-ssh '*' test.ping
Commands with salt-ssh follow the same syntax as the salt command.
The standard salt functions are available! The output is the same as salt and many of the same flags are available. Please see http://docs.saltstack.com/ref/cli/salt-ssh.html for all of the available options.
By default salt-ssh runs Salt execution modules on the remote system, but salt-ssh can also execute raw shell commands:
salt-ssh '*' -r 'ifconfig'
The Salt State system can also be used with salt-ssh. The state system abstracts the same interface to the user in salt-ssh as it does when using standard salt. The intent is that Salt Formulas defined for standard salt will work seamlessly with salt-ssh and vice-versa.
The standard Salt States walkthroughs function by simply replacing salt commands with salt-ssh.
Because the targeting approach differs in salt-ssh, only glob and regex targets are supported as of this writing; the remaining target systems still need to be implemented.
Note
By default, Grains are settable through salt-ssh. These grains will not be persisted across reboots.
See the "thin_dir" setting in the Roster documentation for more details.
Salt SSH takes its configuration from a master configuration file. Normally, this file is in /etc/salt/master. If one wishes to use a customized configuration file, the -c option to Salt SSH facilitates passing in a directory to look inside for a configuration file named master.
New in version 2015.5.1.
Minion config options can be defined globally using the master configuration option ssh_minion_opts. It can also be defined on a per-minion basis with the minion_opts entry in the roster.
By default, Salt reads all the configuration from /etc/salt/. If you are running Salt SSH as a regular user you have to modify some paths or you will get "Permission denied" messages. You have to modify two parameters: pki_dir and cachedir. Those should point to a full path writable for the user.
It is recommended not to modify /etc/salt for this purpose. Create a private copy of /etc/salt for the user and run the command with -c /new/config/path.
If you are commonly passing in CLI options to salt-ssh, you can create a Saltfile to automatically use these options. This is common if you're managing several different salt projects on the same server.
So you can cd into a directory that has a Saltfile with the following YAML contents:
salt-ssh:
  config_dir: path/to/config/dir
  max_procs: 30
  wipe_ssh: True
Instead of having to call salt-ssh --config-dir=path/to/config/dir --max-procs=30 --wipe \* test.ping you can call salt-ssh \* test.ping.
Boolean-style options should be specified in their YAML representation.
Note
The option keys specified must match the destination attributes for the options specified in the parser salt.utils.parsers.SaltSSHOptionParser. For example, in the case of the --wipe command line option, its dest is configured to be wipe_ssh and thus this is what should be configured in the Saltfile. Using the names of flags for this option, such as wipe: True or w: True, will not work.
Salt rosters are pluggable systems added in Salt 0.17.0 to facilitate the salt-ssh system. The roster system was created because salt-ssh needs a means to identify which systems need to be targeted for execution.
Note
The Roster System is not needed or used in standard Salt because the master does not need to be initially aware of target systems, since the Salt Minion checks itself into the master.
Since the roster system is pluggable, it can be easily augmented to attach to any existing systems to gather information about what servers are presently available and should be attached to by salt-ssh. By default the roster file is located at /etc/salt/roster.
The roster system compiles a data structure internally referred to as targets. The targets is a list of target systems and attributes about how to connect to said systems. The only requirement for a roster module in Salt is to return the targets data structure.
The information which can be stored in a roster target is the following:
<Salt ID>: # The id to reference the target system with
host: # The IP address or DNS name of the remote host
user: # The user to log in as
passwd: # The password to log in with
# Optional parameters
port: # The target system's ssh port number
sudo: # Boolean to run command via sudo
priv: # File path to ssh private key, defaults to salt-ssh.rsa
timeout: # Number of seconds to wait for response when establishing
# an SSH connection
minion_opts: # Dictionary of minion opts
thin_dir: # The target system's storage directory for Salt
# components. Defaults to /tmp/salt-<hash>.
Salt needs to upload a standalone environment to the target system; this defaults to /tmp/salt-<hash>. This directory will be cleaned up as part of normal system operation.
If you need a persistent Salt environment, for instance to set persistent grains, this value will need to be changed.
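Because the roster system is pluggable, a custom roster is simply a Python module that returns the targets structure shown above. A minimal sketch, with a hard-coded host pool standing in for a real inventory source (all host data here is hypothetical):
import fnmatch

def targets(tgt, tgt_type='glob', **kwargs):
    '''
    Return the targets data structure for salt-ssh.
    '''
    # In a real roster this data would be gathered from an external
    # inventory (CMDB, cloud API, etc.); it is hard-coded here for
    # illustration.
    pool = {
        'web1': {'host': '192.168.0.11', 'user': 'root', 'passwd': 'secret'},
        'db1': {'host': '192.168.0.21', 'user': 'admin', 'sudo': True},
    }
    # Only glob matching is handled in this sketch; other tgt_type
    # values would need their own handling.
    return dict((sid, attrs) for sid, attrs in pool.items()
                if fnmatch.fnmatch(sid, tgt))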
The following external authentication (eauth) modules are available:
auto | An "Always Approved" eauth interface to test against, not intended for production use
django | Provide authentication using Django Web Framework
keystone | Provide authentication using OpenStack Keystone
ldap | Provide authentication using simple LDAP binds
mysql | Provide authentication using MySQL
pam | Authenticate against PAM
pki | Authenticate via a PKI certificate
stormpath | Provide authentication using Stormpath
yubico | Provide authentication using YubiKey
Salt can be controlled by a command line client by the root user on the Salt master. The Salt command line client uses the Salt client API to communicate with the Salt master server. The Salt client is straightforward and simple to use.
Using the Salt client, commands can be easily sent to the minions.
Each of these commands accepts an explicit --config option to point to either the master or minion configuration file. If this option is not provided and the default configuration file does not exist then Salt falls back to use the environment variables SALT_MASTER_CONFIG and SALT_MINION_CONFIG.
The Salt command needs a few components to send information to the Salt minions: the target minions need to be defined, as well as the function to call and any arguments the function requires.
The first argument passed to salt defines the target minions; the target minions are accessed via their hostname. The default target type is a bash glob:
salt '*foo.com' sys.doc
Salt can also define the target minions with regular expressions:
salt -E '.*' cmd.run 'ls -l | grep foo'
Or to explicitly list hosts, salt can take a list:
salt -L foo.bar.baz,quo.qux cmd.run 'ps aux | grep foo'
The simple target specifications, glob, regex, and list will cover many use cases, and for some will cover all use cases, but more powerful options exist.
The Grains interface was built into Salt to allow minions to be targeted by system properties. For example, minions running a particular operating system or a specific kernel can be called to execute a function.
Calling via a grain is done by passing the -G option to salt, specifying a grain and a glob expression to match the value of the grain. The syntax for the target is the grain key followed by a glob expression: "os:Arch*".
salt -G 'os:Fedora' test.ping
This will return True from all of the minions running Fedora.
To discover what grains are available and what their values are, execute the grains.items salt function:
salt '*' grains.items
More info on using targeting with grains can be found here.
As of 0.8.8 targeting with executions is still under heavy development and this documentation is written to reference the behavior of execution matching in the future.
Execution matching allows for a primary function to be executed, and then, based on the return of the primary function, the main function is executed.
Execution matching allows for matching minions based on any arbitrary running data on the minions.
New in version 0.9.5.
Multiple target interfaces can be used in conjunction to determine the command targets. These targets can then be combined using and or or statements. This is well defined with an example:
salt -C 'G@os:Debian and webser* or E@db.*' test.ping
In this example any minion whose id starts with webser and is running Debian, or any minion whose id starts with db, will be matched.
The type of matcher defaults to glob, but can be specified with the corresponding letter followed by the @ symbol. In the above example a grain is used with G@ as well as a regular expression with E@. The webser* target does not need to be prefaced with a target type specifier because it is a glob.
More info on using compound targeting can be found here.
New in version 0.9.5.
For certain cases, it can be convenient to have a predefined group of minions on which to execute commands. This can be accomplished using what are called nodegroups. Nodegroups allow for predefined compound targets to be declared in the master configuration file, as a sort of shorthand for having to type out complicated compound expressions.
nodegroups:
group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
group2: 'G@os:Debian and foo.domain.com'
group3: 'G@os:Debian and N@group1'
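A nodegroup is then referenced on the command line with the -N option described below; for example:
salt -N group1 test.ping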
The function to call on the specified target is placed after the target specification.
New in version 0.9.8.
Functions may also accept arguments, space-delimited:
salt '*' cmd.exec_code python 'import sys; print sys.version'
Optional keyword arguments are also supported:
salt '*' pip.install salt timeout=5 upgrade=True
They are always in the form of kwarg=argument.
Arguments are formatted as YAML:
salt '*' cmd.run 'echo "Hello: $FIRST_NAME"' env='{FIRST_NAME: "Joe"}'
Note: dictionaries must have curly braces around them (like the env
keyword argument above). This was changed in 0.15.1: in the above example,
the first argument used to be parsed as the dictionary
{'echo "Hello': '$FIRST_NAME"'}
. This was generally not the expected
behavior.
If you want to test what parameters are actually passed to a module, use the
test.arg_repr
command:
salt '*' test.arg_repr 'echo "Hello: $FIRST_NAME"' env='{FIRST_NAME: "Joe"}'
The Salt functions are self documenting; all of the function documentation can be retrieved from the minions via the sys.doc() function:
salt '*' sys.doc
If a series of commands needs to be sent to a single target specification then the commands can be sent in a single publish. This can make gathering groups of information faster, and lowers the stress on the network for repeated commands.
Compound command execution works by sending a list of functions and arguments instead of sending a single function and argument. The functions are executed on the minion in the order they are defined on the command line, and then the data from all of the commands are returned in a dictionary. This means that the set of commands are called in a predictable way, and the returned data can be easily interpreted.
Executing compound commands is done by passing a comma-delimited list of functions, followed by a comma-delimited list of arguments:
salt '*' cmd.run,test.ping,test.echo 'cat /proc/cpuinfo',,foo
The trick to look out for here is that if a function is being passed no arguments, then there needs to be a placeholder for the absent arguments. This is why, in the above example, there are two commas right next to each other. test.ping takes no arguments, so we need to add another comma; otherwise Salt would attempt to pass "foo" to test.ping.
If you need to pass arguments that include commas, then make sure you add spaces around the commas that separate arguments. For example:
salt '*' cmd.run,test.ping,test.echo 'echo "1,2,3"' , , foo
You may change the arguments separator using the --args-separator option:
salt --args-separator=:: '*' some.fun,test.echo params with , comma :: foo
salt-call
¶salt-call [options]
The salt-call command is used to run module functions locally on a minion instead of executing them from the master.
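For example, to run a function on the minion itself without contacting a master (see the --local option below):
salt-call --local test.ping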
--version
¶Print the version of Salt that is running.
--versions-report
¶Show program's dependencies and version number, and then exit
-h
,
--help
¶Show the help message and exit
-c
CONFIG_DIR
,
--config-dir
=CONFIG_DIR
¶The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt
.
--hard-crash
¶Raise any original exception rather than exiting gracefully. Default: False
-g
,
--grains
¶Return the information generated by the Salt grains
-m
MODULE_DIRS
,
--module-dirs
=MODULE_DIRS
¶Specify an additional directory to pull modules from. Multiple directories can be provided by passing -m /--module-dirs multiple times.
-d
,
--doc
,
--documentation
¶Return the documentation for the specified module or for all modules if none are specified
--master
=MASTER
¶Specify the master to use. The minion must be authenticated with the master. If this option is omitted, the master options from the minion config will be used. If multiple masters are set up, the first listed master that responds will be used.
--return
RETURNER
¶Set salt-call to pass the return data to one or many returner interfaces. To use many returner interfaces specify a comma delimited list of returners.
--local
¶Run salt-call locally, as if there was no master running.
--file-root
=FILE_ROOT
¶Set this directory as the base file root.
--pillar-root
=PILLAR_ROOT
¶Set this directory as the base pillar root.
--retcode-passthrough
¶Exit with the salt call retcode and not the salt binary retcode
--metadata
¶Print out the execution metadata as well as the return. This will print out the outputter data, the return code, etc.
--id
=ID
¶Specify the minion id to use. If this option is omitted, the id option from the minion config will be used.
--skip-grains
¶Do not load grains.
--refresh-grains-cache
¶Force a refresh of the grains cache
Logging options which override any settings defined on the configuration files.
-l
LOG_LEVEL
,
--log-level
=LOG_LEVEL
¶Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: info.
--log-file
=LOG_FILE
¶Log file path. Default: /var/log/salt/minion.
--log-file-level
=LOG_LEVEL_LOGFILE
¶Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: info.
--out
¶Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters:
grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific
functions; for instance, the grains
outputter will not work for non-grains
data.
If an outputter is used that does not support the data passed into it, then
Salt will fall back on the pprint
outputter and display the return data
using the Python pprint
standard library module.
Note
If using --out=json
, you will probably want --static
as well.
Without the static option, you will get a JSON string for each minion.
This is due to using an iterative outputter. So if you want to feed it
to a JSON parser, use --static
as well.
--out-indent
OUTPUT_INDENT
,
--output-indent
OUTPUT_INDENT
¶Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.
--out-file
=OUTPUT_FILE
,
--output-file
=OUTPUT_FILE
¶Write the output to the specified file.
--no-color
¶Disable all colored output
--force-color
¶Force colored output
Note
When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
salt(1) salt-master(1) salt-minion(1)
salt
¶Salt allows for commands to be executed across a swath of remote systems in parallel. This means that remote systems can be both controlled and queried with ease.
--version
¶Print the version of Salt that is running.
--versions-report
¶Show program's dependencies and version number, and then exit
-h
,
--help
¶Show the help message and exit
-c
CONFIG_DIR
,
--config-dir
=CONFIG_DIR
¶The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt
.
-t
TIMEOUT
,
--timeout
=TIMEOUT
¶The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 5
-s
,
--static
¶By default as of version 0.9.8 the salt command returns data to the console as it is received from minions, but previous releases would return data only after all data was received. Use the static option to only return the data with a hard timeout and after all minions have returned.
--async
¶Instead of waiting for the job to run on minions, only print the job id of the started execution and complete.
--state-output
=STATE_OUTPUT
¶New in version 0.17.
Override the configured state_output value for minion output. One of full, terse, mixed, changes or filter. Default: full.
--subset
=SUBSET
¶Execute the routine on a random subset of the targeted minions. The minions will be verified to have the named function before executing.
-v
VERBOSE
,
--verbose
¶Turn on verbosity for the salt call; this will cause the salt command to print out extra data like the job id.
--hide-timeout
¶Instead of showing the return data for all minions, this option prints only the online minions which could be reached.
-b
BATCH
,
--batch-size
=BATCH
¶Instead of executing on all targeted minions at once, execute on a progressive set of minions. This option takes an argument in the form of an explicit number of minions to execute at once, or a percentage of minions to execute on.
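For example, to execute on ten minions at a time, or on ten percent of the targeted minions at a time (target and function illustrative):
salt '*' -b 10 state.highstate
salt '*' -b 10% state.highstate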
-a
EAUTH
,
--auth
=EAUTH
¶Pass in an external authentication medium to validate against. The credentials will be prompted for. The options are auto, keystone, ldap, pam, and stormpath. Can be used with the -T option.
-T
,
--make-token
¶Used in conjunction with the -a option. This creates a token that allows for the authenticated user to send commands without needing to re-authenticate.
--return
=RETURNER
¶Choose an alternative returner to call on the minion, if an alternative returner is used then the return will not come back to the command line but will be sent to the specified return system. The options are carbon, cassandra, couchbase, couchdb, elasticsearch, etcd, hipchat, local, local_cache, memcache, mongo, mysql, odbc, postgres, redis, sentry, slack, sms, smtp, sqlite3, syslog, and xmpp.
-d
,
--doc
,
--documentation
¶Return the documentation for the module functions available on the minions
--args-separator
=ARGS_SEPARATOR
¶Set the special argument used as a delimiter between command arguments of compound commands. This is useful when one wants to pass commas as arguments to some of the commands in a compound command.
Logging options which override any settings defined on the configuration files.
-l
LOG_LEVEL
,
--log-level
=LOG_LEVEL
¶Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
--log-file
=LOG_FILE
¶Log file path. Default: /var/log/salt/master.
--log-file-level
=LOG_LEVEL_LOGFILE
¶Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
-E
,
--pcre
¶The target expression will be interpreted as a PCRE regular expression rather than a shell glob.
-L
,
--list
¶The target expression will be interpreted as a comma-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux
-G
,
--grain
¶The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<glob expression>'; example: 'os:Arch*'
This was changed in version 0.9.8 to accept glob expressions instead of regular expressions. To use regular expression matching with grains, use the --grain-pcre option.
--grain-pcre
¶The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<regular expression>'; example: 'os:Arch.*'
-N
,
--nodegroup
¶Use a predefined compound target defined in the Salt master configuration file.
-R
,
--range
¶Instead of using shell globs to evaluate the target, use a range expression to identify targets. Range expressions look like %cluster.
Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file.
-C
,
--compound
¶Utilize many target definitions to make the call very granular. This option takes a group of targets separated by and or or. The default matcher is a glob as usual. If something other than a glob is used, preface it with the letter denoting the type; example: 'webserv* and G@os:Debian or E@db*'. Make sure that the compound target is encapsulated in quotes.
-I
,
--pillar
¶Instead of using shell globs to evaluate the target, use a pillar value to identify targets. The syntax for the target is the pillar key followed by a glob expression: "role:production*"
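For example, using the pillar syntax given above:
salt -I 'role:production*' test.ping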
-S
,
--ipcidr
¶Match based on Subnet (CIDR notation) or IPv4 address.
--out
¶Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters:
grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific
functions; for instance, the grains
outputter will not work for non-grains
data.
If an outputter is used that does not support the data passed into it, then
Salt will fall back on the pprint
outputter and display the return data
using the Python pprint
standard library module.
Note
If using --out=json
, you will probably want --static
as well.
Without the static option, you will get a JSON string for each minion.
This is due to using an iterative outputter. So if you want to feed it
to a JSON parser, use --static
as well.
--out-indent
OUTPUT_INDENT
,
--output-indent
OUTPUT_INDENT
¶Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.
--out-file
=OUTPUT_FILE
,
--output-file
=OUTPUT_FILE
¶Write the output to the specified file.
--no-color
¶Disable all colored output
--force-color
¶Force colored output
Note
When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
salt(7) salt-master(1) salt-minion(1)
salt-cloud
¶Provision virtual machines in the cloud with Salt
salt-cloud -m /etc/salt/cloud.map
salt-cloud -m /etc/salt/cloud.map NAME
salt-cloud -m /etc/salt/cloud.map NAME1 NAME2
salt-cloud -p PROFILE NAME
salt-cloud -p PROFILE NAME1 NAME2 NAME3 NAME4 NAME5 NAME6
Salt Cloud is the system used to provision virtual machines on various public clouds via a cleanly controlled profile and mapping system.
--version
¶Print the version of Salt that is running.
--versions-report
¶Show program's dependencies and version number, and then exit
-h
,
--help
¶Show the help message and exit
-c
CONFIG_DIR
,
--config-dir
=CONFIG_DIR
¶The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt
.
-L
LOCATION
,
--location
=LOCATION
¶Specify which region to connect to.
-a
ACTION
,
--action
=ACTION
¶Perform an action that may be specific to this cloud provider. This argument requires one or more instance names to be specified.
-f
<FUNC-NAME> <PROVIDER>
,
--function
=<FUNC-NAME> <PROVIDER>
¶Perform a function that may be specific to this cloud provider, that does not apply to an instance. This argument requires a provider to be specified (i.e.: nova).
-p
PROFILE
,
--profile
=PROFILE
¶Select a single profile to build the named cloud VMs from. The profile must be defined in the specified profiles file.
-m
MAP
,
--map
=MAP
¶Specify a map file to use. If used without any other options, this option will ensure that all of the mapped VMs are created. If the named VM already exists then it will be skipped.
-H
,
--hard
¶When specifying a map file, the default behavior is to ensure that all of the VMs specified in the map file are created. If the --hard option is set, then any VMs that exist on configured cloud providers that are not specified in the map file will be destroyed. Be advised that this can be a destructive operation and should be used with care.
-d
,
--destroy
¶Pass in the name(s) of VMs to destroy, salt-cloud will search the configured cloud providers for the specified names and destroy the VMs. Be advised that this is a destructive operation and should be used with care. Can be used in conjunction with the -m option to specify a map of VMs to be deleted.
-P
,
--parallel
¶Normally when building many cloud VMs they are executed serially. The -P option will run each cloud VM build in a separate process, allowing for large groups of VMs to be built at once.
Be advised that some cloud providers' systems don't seem to be well suited for this influx of VM creation. When creating large groups of VMs, watch the cloud provider carefully.
-Q
,
--query
¶Execute a query and print out information about all cloud VMs. Can be used in conjunction with -m to display only information about the specified map.
-u
,
--update-bootstrap
¶Update salt-bootstrap to the latest develop version on GitHub.
-y
,
--assume-yes
¶Default yes in answer to all confirmation questions.
-k
,
--keep-tmp
¶Do not remove files from /tmp/ after deploy.sh finishes.
--show-deploy-args
¶Include the options used to deploy the minion in the data returned.
--script-args
=SCRIPT_ARGS
¶Script arguments to be fed to the bootstrap script when deploying the VM.
-Q
,
--query
¶Execute a query and return some information about the nodes running on configured cloud providers
-F
,
--full-query
¶Execute a query and print out all available information about all cloud VMs. Can be used in conjunction with -m to display only information about the specified map.
-S
,
--select-query
¶Execute a query and print out selected information about all cloud VMs. Can be used in conjunction with -m to display only information about the specified map.
--list-providers
¶Display a list of configured providers.
--list-profiles
¶New in version 2014.7.0.
Display a list of configured profiles. Pass in a cloud provider to view the provider's associated profiles, such as digital_ocean, or pass in all to list all the configured profiles.
--list-locations
=LIST_LOCATIONS
¶Display a list of locations available in configured cloud providers. Pass the cloud provider that available locations are desired on, e.g. "linode", or pass "all" to list locations for all configured cloud providers.
--list-images
=LIST_IMAGES
¶Display a list of images available in configured cloud providers. Pass the cloud provider that available images are desired on, e.g. "linode", or pass "all" to list images for all configured cloud providers.
--list-sizes
=LIST_SIZES
¶Display a list of sizes available in configured cloud providers. Pass the cloud provider that available sizes are desired on, e.g. "AWS", or pass "all" to list sizes for all configured cloud providers.
--set-password
=<USERNAME> <PROVIDER>
¶Configure password for a cloud provider and save it to the keyring. PROVIDER can be specified with or without a driver, for example: "--set-password bob rackspace" or more specific "--set-password bob rackspace:openstack" DEPRECATED!
--out
¶Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters:
grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific
functions; for instance, the grains
outputter will not work for non-grains
data.
If an outputter is used that does not support the data passed into it, then
Salt will fall back on the pprint
outputter and display the return data
using the Python pprint
standard library module.
Note
If using --out=json
, you will probably want --static
as well.
Without the static option, you will get a JSON string for each minion.
This is due to using an iterative outputter. So if you want to feed it
to a JSON parser, use --static
as well.
--out-indent
OUTPUT_INDENT
,
--output-indent
OUTPUT_INDENT
¶Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.
--out-file
=OUTPUT_FILE
,
--output-file
=OUTPUT_FILE
¶Write the output to the specified file.
--no-color
¶Disable all colored output
--force-color
¶Force colored output
Note
When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
To create 4 VMs named web1, web2, db1, and db2 from specified profiles:
salt-cloud -p fedora_rackspace web1 web2 db1 db2
To read in a map file and create all VMs specified therein:
salt-cloud -m /path/to/cloud.map
To read in a map file and create all VMs specified therein in parallel:
salt-cloud -m /path/to/cloud.map -P
To delete any VMs specified in the map file:
salt-cloud -m /path/to/cloud.map -d
To delete any VMs NOT specified in the map file:
salt-cloud -m /path/to/cloud.map -H
To display the status of all VMs specified in the map file:
salt-cloud -m /path/to/cloud.map -Q
salt-cloud(7) salt(7) salt-master(1) salt-minion(1)
salt-cp
¶Copy a file to a set of systems
salt-cp '*' [ options ] SOURCE DEST
salt-cp -E '.*' [ options ] SOURCE DEST
salt-cp -G 'os:Arch.*' [ options ] SOURCE DEST
Salt copy copies a local file out to all of the Salt minions matched by the given target.
Note: salt-cp uses salt's publishing mechanism. This means the privacy of the contents of the file on the wire is completely dependent upon the transport in use. In addition, if the salt-master is running with debug logging it is possible that the contents of the file will be logged to disk.
--version
¶Print the version of Salt that is running.
--versions-report
¶Show program's dependencies and version number, and then exit
-h
,
--help
¶Show the help message and exit
-c
CONFIG_DIR
,
--config-dir
=CONFIG_DIR
¶The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt
.
-t
TIMEOUT
,
--timeout
=TIMEOUT
¶The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 5
Logging options which override any settings defined on the configuration files.
-l
LOG_LEVEL
,
--log-level
=LOG_LEVEL
¶Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
--log-file
=LOG_FILE
¶Log file path. Default: /var/log/salt/master.
--log-file-level
=LOG_LEVEL_LOGFILE
¶Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
-E
,
--pcre
¶The target expression will be interpreted as a PCRE regular expression rather than a shell glob.
-L
,
--list
¶The target expression will be interpreted as a comma-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux
-G
,
--grain
¶The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<glob expression>'; example: 'os:Arch*'
This was changed in version 0.9.8 to accept glob expressions instead of regular expressions. To use regular expression matching with grains, use the --grain-pcre option.
--grain-pcre
¶The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<regular expression>'; example: 'os:Arch.*'
-N
,
--nodegroup
¶Use a predefined compound target defined in the Salt master configuration file.
-R
,
--range
¶Instead of using shell globs to evaluate the target, use a range expression to identify targets. Range expressions look like %cluster.
Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file.
salt(1) salt-master(1) salt-minion(1)
salt-key
¶salt-key [ options ]
salt-key performs simple management of the Salt server public keys used for authentication.
--version
¶Print the version of Salt that is running.
--versions-report
¶Show program's dependencies and version number, and then exit
-h
,
--help
¶Show the help message and exit
-c
CONFIG_DIR
,
--config-dir
=CONFIG_DIR
¶The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt
.
-u
USER
,
--user
=USER
¶Specify user to run salt-key
--hard-crash
¶Raise any original exception rather than exiting gracefully. Default is False.
-q
,
--quiet
¶Suppress output
-y
,
--yes
¶Answer 'Yes' to all questions presented, defaults to False
--rotate-aes-key
=ROTATE_AES_KEY
¶Setting this to False prevents the master from refreshing the key session when keys are deleted or rejected; this lowers the security of the key deletion/rejection operation. Default is True.
Logging options which override any settings defined on the configuration files.
--log-file
=LOG_FILE
¶Log file path. Default: /var/log/salt/minion.
--log-file-level
=LOG_LEVEL_LOGFILE
¶Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
--out
¶Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters:
grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific
functions; for instance, the grains
outputter will not work for non-grains
data.
If an outputter is used that does not support the data passed into it, then
Salt will fall back on the pprint
outputter and display the return data
using the Python pprint
standard library module.
Note
If using --out=json
, you will probably want --static
as well.
Without the static option, you will get a JSON string for each minion.
This is due to using an iterative outputter. So if you want to feed it
to a JSON parser, use --static
as well.
--out-indent
OUTPUT_INDENT
,
--output-indent
OUTPUT_INDENT
¶Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.
--out-file
=OUTPUT_FILE
,
--output-file
=OUTPUT_FILE
¶Write the output to the specified file.
--no-color
¶Disable all colored output
--force-color
¶Force colored output
Note
When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
-l
ARG
,
--list
=ARG
¶List the public keys. The args pre, un, and unaccepted will list unaccepted/unsigned keys. acc or accepted will list accepted/signed keys. rej or rejected will list rejected keys. Finally, all will list all keys.
-L
,
--list-all
¶List all public keys. (Deprecated: use --list all)
-a
ACCEPT
,
--accept
=ACCEPT
¶Accept the specified public key (use --include-all to match rejected keys in addition to pending keys). Globs are supported.
-A
,
--accept-all
¶Accepts all pending keys.
-r
REJECT
,
--reject
=REJECT
¶Reject the specified public key (use --include-all to match accepted keys in addition to pending keys). Globs are supported.
-R
,
--reject-all
¶Rejects all pending keys.
--include-all
¶Include non-pending keys when accepting/rejecting.
-p
PRINT
,
--print
=PRINT
¶Print the specified public key.
-P
,
--print-all
¶Print all public keys
-d
DELETE
,
--delete
=DELETE
¶Delete the specified key. Globs are supported.
-D
,
--delete-all
¶Delete all keys.
-f
FINGER
,
--finger
=FINGER
¶Print the specified key's fingerprint.
-F
,
--finger-all
¶Print all keys' fingerprints.
--gen-keys
=GEN_KEYS
¶Set a name to generate a keypair for use with salt
--gen-keys-dir
=GEN_KEYS_DIR
¶Set the directory to save the generated keypair. Only works with the '--gen-keys' option; default is the current directory.
--keysize
=KEYSIZE
¶Set the keysize for the generated key; only works with the '--gen-keys' option. The key size must be 2048 or higher, otherwise it will be rounded up to 2048. The default is 2048.
--gen-signature
¶Create a signature file of the master's public key, named master_pubkey_signature. The signature can be sent to a minion in the master's auth-reply and enables the minion to verify the master's public key cryptographically. This requires a new signing key pair which can be auto-created with the --auto-create parameter.
--priv
=PRIV
¶The private-key file to create a signature with
--signature-path
=SIGNATURE_PATH
¶The path where the signature file should be written
--pub
=PUB
¶The public-key file to create a signature for
--auto-create
¶Auto-create a signing key-pair if it does not yet exist
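For example, to generate the signature file while auto-creating the signing key pair in the same run:
salt-key --gen-signature --auto-create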
salt(7) salt-master(1) salt-minion(1)
salt-master
¶The Salt master daemon, used to control the Salt minions
salt-master [ options ]
The master daemon controls the Salt minions
--version
¶Print the version of Salt that is running.
--versions-report
¶Show program's dependencies and version number, and then exit
-h
,
--help
¶Show the help message and exit
-c
CONFIG_DIR
,
--config-dir
=CONFIG_DIR
¶The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt
.
-u
USER
,
--user
=USER
¶Specify user to run salt-master
-d
,
--daemon
¶Run salt-master as a daemon
--pid-file
PIDFILE
¶Specify the location of the pidfile. Default: /var/run/salt-master.pid
Logging options which override any settings defined on the configuration files.
-l
LOG_LEVEL
,
--log-level
=LOG_LEVEL
¶Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
--log-file
=LOG_FILE
¶Log file path. Default: /var/log/salt/master.
--log-file-level
=LOG_LEVEL_LOGFILE
¶Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
salt(1) salt(7) salt-minion(1)
salt-minion
¶The Salt minion daemon, receives commands from a remote Salt master.
salt-minion [ options ]
The Salt minion receives commands from the central Salt master and replies with the results of said commands.
--version
¶Print the version of Salt that is running.
--versions-report
¶Show program's dependencies and version number, and then exit
-h
,
--help
¶Show the help message and exit
-c
CONFIG_DIR
,
--config-dir
=CONFIG_DIR
¶The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt
.
-u
USER
,
--user
=USER
¶Specify user to run salt-minion
-d
,
--daemon
¶Run salt-minion as a daemon
--pid-file
PIDFILE
¶Specify the location of the pidfile. Default: /var/run/salt-minion.pid
Logging options which override any settings defined on the configuration files.
-l
LOG_LEVEL
,
--log-level
=LOG_LEVEL
¶Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
--log-file
=LOG_FILE
¶Log file path. Default: /var/log/salt/minion.
--log-file-level
=LOG_LEVEL_LOGFILE
¶Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
salt(1) salt(7) salt-master(1)
salt-run
¶Execute a Salt runner
salt-run RUNNER
salt-run is the frontend command for executing Salt Runners. Salt runners are simple modules used to execute convenience functions on the master.
--version
¶Print the version of Salt that is running.
--versions-report
¶Show program's dependencies and version number, and then exit
-h
,
--help
¶Show the help message and exit
-c
CONFIG_DIR
,
--config-dir
=CONFIG_DIR
¶The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt
.
-t
TIMEOUT
,
--timeout
=TIMEOUT
¶The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 1
--hard-crash
¶Raise any original exception rather than exiting gracefully. Default is False.
-d
,
--doc
,
--documentation
¶Display documentation for runners, pass a module or a runner to see documentation on only that module/runner.
Logging options which override any settings defined on the configuration files.
-l
LOG_LEVEL
,
--log-level
=LOG_LEVEL
¶Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
--log-file
=LOG_FILE
¶Log file path. Default: /var/log/salt/master.
--log-file-level
=LOG_LEVEL_LOGFILE
¶Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
salt(1) salt-master(1) salt-minion(1)
salt-ssh
¶salt-ssh '*' [ options ] sys.doc
salt-ssh -E '.*' [ options ] sys.doc cmd
Salt SSH allows for salt routines to be executed using only SSH for transport
-r
,
--raw
,
--raw-shell
¶Execute a raw shell command.
--priv
¶Specify the SSH private key file to be used for authentication.
--roster
¶Define which roster system to use; this defines whether a database backend, scanner, or custom roster system is used. Default is the flat file roster.
--roster-file
¶Define an alternative location for the default roster file location. The default roster file is called roster and is found in the same directory as the master config file.
New in version 2014.1.0.
--refresh
,
--refresh-cache
¶Force a refresh of the master side data cache of the target's data. This is needed if a target's grains have been changed and the auto refresh timeframe has not been reached.
--max-procs
¶Set the number of concurrent minions to communicate with. This value defines how many processes are opened up at a time to manage connections; the more running processes, the faster communication should be. Default is 25.
-i
,
--ignore-host-keys
¶Ignore the SSH host keys, which by default are honored and for which connections would ask for approval.
--passwd
¶Set the default password to attempt to use when authenticating.
--key-deploy
¶Set this flag to attempt to deploy the authorized ssh key with all minions. This combined with --passwd can make initial deployment of keys very fast and easy.
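For example, a first run that deploys the authorized key while authenticating by password (target and password hypothetical):
salt-ssh '*' --key-deploy --passwd letmein test.ping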
--version
¶Print the version of Salt that is running.
--versions-report
¶Show program's dependencies and version number, and then exit
-h
,
--help
¶Show the help message and exit
-c
CONFIG_DIR
,
--config-dir
=CONFIG_DIR
¶The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt
.
-E
,
--pcre
¶The target expression will be interpreted as a PCRE regular expression rather than a shell glob.
-L
,
--list
¶The target expression will be interpreted as a comma-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux
-G
,
--grain
¶The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<glob expression>'; example: 'os:Arch*'
This was changed in version 0.9.8 to accept glob expressions instead of regular expressions. To use regular expression matching with grains, use the --grain-pcre option.
--grain-pcre
¶The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<regular expression>'; example: 'os:Arch.*'
-N
,
--nodegroup
¶Use a predefined compound target defined in the Salt master configuration file.
-R
,
--range
¶Instead of using shell globs to evaluate the target, use a range expression to identify targets. Range expressions look like %cluster.
Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file.
Logging options which override any settings defined on the configuration files.
-l
LOG_LEVEL
,
--log-level
=LOG_LEVEL
¶Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
--log-file
=LOG_FILE
¶Log file path. Default: /var/log/salt/ssh.
--log-file-level
=LOG_LEVEL_LOGFILE
¶Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
--out
¶Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters:
grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific
functions; for instance, the grains
outputter will not work for non-grains
data.
If an outputter is used that does not support the data passed into it, then
Salt will fall back on the pprint
outputter and display the return data
using the Python pprint
standard library module.
Note
If using --out=json
, you will probably want --static
as well.
Without the static option, you will get a JSON string for each minion.
This is due to using an iterative outputter. So if you want to feed it
to a JSON parser, use --static
as well.
--out-indent
OUTPUT_INDENT
,
--output-indent
OUTPUT_INDENT
¶Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.
--out-file
=OUTPUT_FILE
,
--output-file
=OUTPUT_FILE
¶Write the output to the specified file.
--no-color
¶Disable all colored output
--force-color
¶Force colored output
Note
When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
salt(7) salt-master(1) salt-minion(1)
salt-syndic
¶The Salt syndic daemon, a special minion that passes through commands from a higher master
salt-syndic [ options ]
The Salt syndic daemon, a special minion that passes through commands from a higher master.
--version
¶Print the version of Salt that is running.
--versions-report
¶Show program's dependencies and version number, and then exit
-h
,
--help
¶Show the help message and exit
-c
CONFIG_DIR
,
--config-dir
=CONFIG_DIR
¶The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt
.
-u
USER
,
--user
=USER
¶Specify user to run salt-syndic
-d
,
--daemon
¶Run salt-syndic as a daemon
--pid-file
PIDFILE
¶Specify the location of the pidfile. Default: /var/run/salt-syndic.pid
Logging options which override any settings defined on the configuration files.
-l
LOG_LEVEL
,
--log-level
=LOG_LEVEL
¶Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
--log-file
=LOG_FILE
¶Log file path. Default: /var/log/salt/master.
--log-file-level
=LOG_LEVEL_LOGFILE
¶Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
salt(1) salt-master(1) salt-minion(1)
salt-api
¶Start interfaces used to remotely connect to the salt master
salt-api
The Salt API system manages network API connectors for the Salt Master.
--version
¶Print the version of Salt that is running.
--versions-report
¶Show program's dependencies and version number, and then exit
-h
,
--help
¶Show the help message and exit
-c
CONFIG_DIR
,
--config-dir
=CONFIG_DIR
¶The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt
.
-d
,
--daemon
¶Run the salt-api as a daemon
--pid-file
=PIDFILE
¶Specify the location of the pidfile. Default: /var/run/salt-api.pid
Logging options which override any settings defined on the configuration files.
-l
LOG_LEVEL
,
--log-level
=LOG_LEVEL
¶Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
--log-file
=LOG_FILE
¶Log file path. Default: /var/log/salt/api.
--log-file-level
=LOG_LEVEL_LOGFILE
¶Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
salt-api(7) salt(7) salt-master(1)
The salt client ACL system is a means to allow system users other than root to have access to execute select salt commands on minions from the master.
The client ACL system is configured in the master configuration file via the client_acl configuration option. Under the client_acl configuration option, the users allowed to send commands are specified, followed by a list of regular expressions which specify the minion functions that will be made available to each specified user. This configuration is much like the peer configuration:
# Allow thatch to execute anything and allow fred to use ping and pkg
client_acl:
thatch:
- .*
fred:
- test.*
- pkg.*
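With the example above in place, fred could run, for instance, salt '*' test.ping from the master as his own user, while functions outside test.* and pkg.* would be denied.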
Directories required for client_acl must be modified to be readable by the users specified:
chmod 755 /var/cache/salt /var/cache/salt/master /var/cache/salt/master/jobs /var/run/salt /var/run/salt/master
Note
In addition to the changes above you will also need to modify the permissions of /var/log/salt and the existing log file to be writable by the user(s) which will be running the commands. If you do not wish to do this then you must disable logging or Salt will generate errors as it cannot write to the logs as the system users.
If you are upgrading from earlier versions of salt you must also remove any existing user keys and re-start the Salt master:
rm /var/cache/salt/.*key
service salt-master restart
Salt provides several entry points for interfacing with Python applications.
These entry points are often referred to as *Client() APIs. Each client accesses different parts of Salt, either from the master or from a minion. Each client is detailed below.
See also
There are many ways to access Salt programmatically.
Salt can be used from CLI scripts as well as via a REST interface.
See Salt's outputter system to retrieve structured data from Salt as JSON, or as shell-friendly text, or many other formats.
See the state.event runner to utilize Salt's event bus from shell scripts.
Salt's netapi module provides access to Salt externally via a REST interface. Review the netapi module documentation for more information.
opts dictionary¶Some clients require access to Salt's opts dictionary. (The dictionary representation of the master or minion config files.)
A common pattern for fetching the opts dictionary is to defer to environment variables if they exist, or otherwise fetch the config from the default location.
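One way to express that pattern for master-side code, a minimal sketch using the client_config function documented below and the SALT_MASTER_CONFIG environment variable mentioned earlier:
import os
import salt.config

# Prefer the environment variable when it is set; otherwise fall back
# to the default master configuration path.
path = os.environ.get('SALT_MASTER_CONFIG', '/etc/salt/master')
master_opts = salt.config.client_config(path)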
salt.config.
client_config
(path, env_var='SALT_CLIENT_CONFIG', defaults=None)¶Load Master configuration data
Usage:
import salt.config
master_opts = salt.config.client_config('/etc/salt/master')
Returns a dictionary of the Salt Master configuration file with necessary options needed to communicate with a locally-running Salt Master daemon. This function searches for client specific configurations and adds them to the data from the master configuration.
This is useful for master-side operations like LocalClient.
salt.config.
minion_config
(path, env_var='SALT_MINION_CONFIG', defaults=None, cache_minion_id=False)¶Reads in the minion configuration file and sets up special options
This is useful for Minion-side operations, such as the Caller class, and manually running the loader interface.
import salt.config
minion_opts = salt.config.minion_config('/etc/salt/minion')
Modules in the Salt ecosystem are loaded into memory using a custom loader system. This allows modules to have conditional requirements (OS, OS version, installed libraries, etc.) and allows Salt to inject special variables (__salt__, __opts__, etc.).
Most modules can be manually loaded. This is often useful in third-party Python apps or when writing tests. However, some modules require and expect a full, running Salt system underneath, notably modules that facilitate master-to-minion communication such as the mine, publish, and peer execution modules. The error KeyError: 'master_uri' is a likely indicator of this situation. In those instances, use the Caller class to execute those modules instead.
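For instance, a sketch using the Caller class (documented below) to run one of these master-dependent modules the way the minion itself would; the mine function and its arguments here are illustrative:
import salt.client

# Caller runs the function exactly as the minion would, so modules that
# need master connectivity (mine, publish, peer) can resolve master_uri.
caller = salt.client.Caller()
caller.cmd('mine.get', '*', 'network.ip_addrs')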
Each module type has a corresponding loader function.
salt.loader.
minion_mods
(opts, context=None, utils=None, whitelist=None, include_errors=False, initial_load=False, loaded_base_name=None, notify=False)¶Load execution modules
Returns a dictionary of execution modules appropriate for the current system by evaluating the __virtual__() function in each module.
import salt.config
import salt.loader
__opts__ = salt.config.minion_config('/etc/salt/minion')
__grains__ = salt.loader.grains(__opts__)
__opts__['grains'] = __grains__
__salt__ = salt.loader.minion_mods(__opts__)
__salt__['test.ping']()
salt.loader.
raw_mod
(opts, name, functions, mod='modules')¶Returns a single module loaded raw and bypassing the __virtual__ function
import salt.config
import salt.loader
__opts__ = salt.config.minion_config('/etc/salt/minion')
testmod = salt.loader.raw_mod(__opts__, 'test', None)
testmod['test.ping']()
salt.loader.
states
(opts, functions, whitelist=None)¶Returns the state modules
import salt.config
import salt.loader
__opts__ = salt.config.minion_config('/etc/salt/minion')
statemods = salt.loader.states(__opts__, None)
salt.loader.
grains
(opts, force_refresh=False)¶Return the functions for the dynamic grains and the values for the static grains.
import salt.config
import salt.loader
__opts__ = salt.config.minion_config('/etc/salt/minion')
__grains__ = salt.loader.grains(__opts__)
print __grains__['id']
salt.loader.
grain_funcs
(opts)¶Returns the grain functions
import salt.config
import salt.loader
__opts__ = salt.config.minion_config('/etc/salt/minion')
grainfuncs = salt.loader.grain_funcs(__opts__)
salt.client.
LocalClient
(c_path='/etc/salt/master', mopts=None, skip_perm_errors=False)¶The interface used by the salt CLI tool on the Salt Master
LocalClient
is used to send a command to Salt minions to execute
execution modules and return the results to the
Salt Master.
Importing and using LocalClient must be done on the same machine as the Salt Master and it must be done using the same user that the Salt Master is running as (unless external_auth is configured and authentication credentials are included in the execution).
import salt.client
local = salt.client.LocalClient()
local.cmd('*', 'test.fib', [10])
cmd
(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', jid='', kwarg=None, **kwargs)¶Synchronously execute a command on targeted minions
The cmd method will execute and wait for the timeout period for all minions to reply, then it will return all minion data at once.
>>> import salt.client
>>> local = salt.client.LocalClient()
>>> local.cmd('*', 'cmd.run', ['whoami'])
{'jerry': 'root'}
With extra keyword arguments for the command function to be run:
local.cmd('*', 'test.arg', ['arg1', 'arg2'], kwarg={'foo': 'bar'})
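The expr_form parameter selects one of the targeting systems described earlier; for example, a sketch matching by grain rather than by glob (the grain value is illustrative):
local.cmd('os:Fedora', 'test.ping', expr_form='grain')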
Compound commands can be used for multiple executions in a single publish. Function names and function arguments are provided in separate lists but the index values must correlate and an empty list must be used if no arguments are required.
>>> local.cmd('*', [
'grains.items',
'sys.doc',
'cmd.run',
],
[
[],
[],
['uptime'],
])
Returns: A dictionary with the result of the execution, keyed by minion ID. A compound command will return a sub-dictionary keyed by function name.
cmd_async
(tgt, fun, arg=(), expr_form='glob', ret='', jid='', kwarg=None, **kwargs)¶Asynchronously send a command to connected minions
The function signature is the same as cmd() with the following exceptions. Returns: A job ID or 0 on failure.
>>> local.cmd_async('*', 'test.sleep', [300])
'20131219215921857715'
cmd_batch
(tgt, fun, arg=(), expr_form='glob', ret='', kwarg=None, batch='10%', **kwargs)¶Iteratively execute a command on subsets of minions at a time
The function signature is the same as cmd() with the following exceptions. Parameters: batch -- The batch identifier of systems to execute on. Returns: A generator of minion returns.
>>> returns = local.cmd_batch('*', 'state.highstate', batch='10%')
>>> for ret in returns:
... print(ret)
{'jerry': {...}}
{'dave': {...}}
{'stewart': {...}}
cmd_iter
(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', kwarg=None, **kwargs)¶Yields the individual minion returns as they come in
The function signature is the same as cmd() with the following exceptions. Returns: A generator yielding the individual minion returns.
>>> ret = local.cmd_iter('*', 'test.ping')
>>> for i in ret:
... print(i)
{'jerry': {'ret': True}}
{'dave': {'ret': True}}
{'stewart': {'ret': True}}
cmd_iter_no_block
(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', kwarg=None, **kwargs)¶The function signature is the same as cmd() with the following exceptions. Returns: A generator yielding the individual minion returns, or None when no returns are available. This allows for actions to be injected in between minion returns.
>>> ret = local.cmd_iter_no_block('*', 'test.ping')
>>> for i in ret:
... print(i)
None
{'jerry': {'ret': True}}
{'dave': {'ret': True}}
None
{'stewart': {'ret': True}}
cmd_subset
(tgt, fun, arg=(), expr_form='glob', ret='', kwarg=None, sub=3, cli=False, progress=False, **kwargs)¶Execute a command on a random subset of the targeted systems
The function signature is the same as cmd() with the following exceptions. Parameters: sub -- The number of systems to execute on.
>>> SLC.cmd_subset('*', 'test.ping', sub=1)
{'jerry': True}
get_cli_returns
(jid, minions, timeout=None, tgt='*', tgt_type='glob', verbose=False, show_jid=False, **kwargs)¶Starts a watcher looking at the return data for a specified JID
Returns: all of the information for the JID.
get_event_iter_returns
(jid, minions, timeout=None)¶Gather the return data from the event system, break hard when timeout is reached.
run_job
(tgt, fun, arg=(), expr_form='glob', ret='', timeout=None, jid='', kwarg=None, **kwargs)¶Asynchronously send a command to connected minions
Prep the job directory and publish a command to any targeted minions.
Returns: A dictionary of (validated) pub_data or an empty dictionary on failure. The pub_data contains the job ID and a list of all minions that are expected to return data.
>>> local.run_job('*', 'test.sleep', [300])
{'jid': '20131219215650131543', 'minions': ['jerry']}
salt.client.
Caller
(c_path='/etc/salt/minion', mopts=None)¶Caller is the same interface used by the salt-call command-line tool on the Salt Minion.
Changed in version Beryllium: Added the cmd method for consistency with the other Salt clients. The existing function and sminion.functions interfaces still exist but have been removed from the docs.
Importing and using Caller
must be done on the same machine as a
Salt Minion and it must be done using the same user that the Salt Minion is
running as.
Usage:
import salt.client
caller = salt.client.Caller()
caller.cmd('test.ping')
Note: a running master or minion daemon is not required to use this class. Running salt-call --local simply sets file_client to 'local'. The same can be achieved at the Python level by including that setting in a minion config file.
New in version 2014.7.0: Pass the minion config as the mopts dictionary.
import salt.client
import salt.config
__opts__ = salt.config.minion_config('/etc/salt/minion')
__opts__['file_client'] = 'local'
caller = salt.client.Caller(mopts=__opts__)
cmd
(fun, *args, **kwargs)¶Call an execution module with the given arguments and keyword arguments
Changed in version Beryllium: Added the cmd method for consistency with the other Salt clients. The existing function and sminion.functions interfaces still exist but have been removed from the docs.
caller.cmd('test.arg', 'Foo', 'Bar', baz='Baz')
caller.cmd('event.send', 'myco/myevent/something',
data={'foo': 'Foo'}, with_env=['GIT_COMMIT'], with_grains=True)
salt.runner.RunnerClient(opts)
The interface used by the salt-run CLI tool on the Salt Master
It executes runner modules which run on the Salt Master.
Importing and using RunnerClient must be done on the same machine as the Salt Master and it must be done using the same user that the Salt Master is running as.
Salt's external_auth can be used to authenticate calls. The eauth user must be authorized to execute runner modules (@runner). Only the master_call() below supports eauth.
async(fun, low, user='UNKNOWN')
Execute the function in a multiprocess and return the event tag to use to watch for the return
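A hedged sketch of firing a runner asynchronously and capturing the returned tag; the contents of the low dict here are illustrative assumptions, and getattr is used because async later became a reserved word in Python 3.7:
import salt.config
import salt.runner

opts = salt.config.master_config('/etc/salt/master')
runner = salt.runner.RunnerClient(opts)
# Fire the runner in a separate process; the returned dict carries the
# event tag to watch for the eventual return.
pub = getattr(runner, 'async')('jobs.list_jobs', {'fun': 'jobs.list_jobs'})
print(pub)  # e.g. {'jid': '...', 'tag': 'salt/run/...'}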
cmd(fun, arg=None, pub_data=None, kwarg=None)
Execute a function
>>> import salt.config
>>> import salt.runner
>>> opts = salt.config.master_config('/etc/salt/master')
>>> runner = salt.runner.RunnerClient(opts)
>>> runner.cmd('jobs.list_jobs', [])
{
'20131219215650131543': {
'Arguments': [300],
'Function': 'test.sleep',
'StartTime': '2013, Dec 19 21:56:50.131543',
'Target': '*',
'Target-type': 'glob',
'User': 'saltdev'
},
'20131219215921857715': {
'Arguments': [300],
'Function': 'test.sleep',
'StartTime': '2013, Dec 19 21:59:21.857715',
'Target': '*',
'Target-type': 'glob',
'User': 'saltdev'
},
}
cmd_async(low)
Execute a runner function asynchronously; eauth is respected
This function requires that external_auth is configured and the user is authorized to execute runner functions (@runner).
runner.cmd_async({
    'fun': 'jobs.list_jobs',
    'username': 'saltdev',
    'password': 'saltdev',
    'eauth': 'pam',
})
cmd_sync(low, timeout=None)
Execute a runner function synchronously; eauth is respected
This function requires that external_auth is configured and the user is authorized to execute runner functions (@runner).
runner.cmd_sync({
    'fun': 'jobs.list_jobs',
    'username': 'saltdev',
    'password': 'saltdev',
    'eauth': 'pam',
})
salt.wheel.WheelClient(opts=None)
An interface to Salt's wheel modules
Wheel modules interact with various parts of the Salt Master.
Importing and using WheelClient must be done on the same machine as the Salt Master and it must be done using the same user that the Salt Master is running as, unless external_auth is configured and the user is authorized to execute wheel functions (@wheel).
Usage:
import salt.config
import salt.wheel
opts = salt.config.master_config('/etc/salt/master')
wheel = salt.wheel.WheelClient(opts)
async(fun, low, user='UNKNOWN')
Execute the function in a multiprocess and return the event tag to use to watch for the return
cmd(fun, arg=None, pub_data=None, kwarg=None)
Execute a function
>>> wheel.cmd('key.finger', ['jerry'])
{'minions': {'jerry': '5d:f6:79:43:5e:d4:42:3f:57:b8:45:a8:7e:a4:6e:ca'}}
cmd_async(low)
Execute a function asynchronously; eauth is respected
This function requires that external_auth is configured and the user is authorized to execute wheel functions (@wheel).
>>> wheel.cmd_async({
...     'fun': 'key.finger',
...     'match': 'jerry',
...     'eauth': 'auto',
...     'username': 'saltdev',
...     'password': 'saltdev',
... })
{'jid': '20131219224744416681', 'tag': 'salt/wheel/20131219224744416681'}
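One way to consume that tag is to listen on the master event bus. This is only a sketch: it assumes salt.utils.event.MasterEvent takes the sock_dir, and the event API has varied between Salt releases.
import salt.config
import salt.utils.event

opts = salt.config.master_config('/etc/salt/master')
# Wait up to 30 seconds for the wheel return event published under the
# tag that cmd_async handed back.
event_bus = salt.utils.event.MasterEvent(opts['sock_dir'])
ret = event_bus.get_event(tag='salt/wheel/20131219224744416681', wait=30)
print(ret)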
cmd_sync(low, timeout=None)
Execute a wheel function synchronously; eauth is respected
This function requires that external_auth is configured and the user is authorized to execute wheel functions (@wheel).
>>> wheel.cmd_sync({
...     'fun': 'key.finger',
...     'match': 'jerry',
...     'eauth': 'auto',
...     'username': 'saltdev',
...     'password': 'saltdev',
... })
{'minions': {'jerry': '5d:f6:79:43:5e:d4:42:3f:57:b8:45:a8:7e:a4:6e:ca'}}
salt.cloud.CloudClient(path=None, opts=None, config_dir=None, pillars=None)
The client class to wrap cloud interactions
action(fun=None, cloudmap=None, names=None, provider=None, instance=None, kwargs=None)
Execute a single action via the cloud plugin backend
Examples:
client.action(fun='show_instance', names=['myinstance'])
client.action(fun='show_image', provider='my-ec2-config',
              kwargs={'image': 'ami-10314d79'})
create(provider, names, **kwargs)
Create the named VMs, without using a profile
Example:
client.create(names=['myinstance'], provider='my-ec2-config',
              kwargs={'image': 'ami-1624987f', 'size': 't1.micro',
                      'ssh_username': 'ec2-user', 'securitygroup': 'default',
                      'delvol_on_destroy': True})
destroy(names)
Destroy the named VMs
extra_action(names, provider, action, **kwargs)
Perform actions with block storage devices
Example:
client.extra_action(names=['myblock'], action='volume_create',
                    provider='my-nova', kwargs={'voltype': 'SSD', 'size': 1000})
client.extra_action(names=['salt-net'], action='network_create',
                    provider='my-nova', kwargs={'cidr': '192.168.100.0/24'})
full_query(query_type='list_nodes_full')
Query all instance information
list_images(provider=None)
List all available images in configured cloud systems
list_locations(provider=None)
List all available locations in configured cloud systems
list_sizes(provider=None)
List all available sizes in configured cloud systems
low(fun, low)
Pass the cloud function and low data structure to run
map_run(path, **kwargs)
Pass in a location for a map to execute
min_query(query_type='list_nodes_min')
Query select instance information
profile(profile, names, vm_overrides=None, **kwargs)
Pass in a profile to create; names is a list of VM names to allocate.
vm_overrides is a special dict of per-node option overrides.
Example:
>>> client = salt.cloud.CloudClient(path='/etc/salt/cloud')
>>> client.profile('do_512_git', names=['minion01',])
{'minion01': {u'backups_active': 'False',
u'created_at': '2014-09-04T18:10:15Z',
u'droplet': {u'event_id': 31000502,
u'id': 2530006,
u'image_id': 5140006,
u'name': u'minion01',
u'size_id': 66},
u'id': '2530006',
u'image_id': '5140006',
u'ip_address': '107.XXX.XXX.XXX',
u'locked': 'True',
u'name': 'minion01',
u'private_ip_address': None,
u'region_id': '4',
u'size_id': '66',
u'status': 'new'}}
query(query_type='list_nodes')
Query basic instance information
select_query(query_type='list_nodes_select')
Query select instance information
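As an illustration, the list and query helpers can be combined to take stock of a provider before creating anything; the provider name reuses the hypothetical 'my-ec2-config' from the examples above:
import salt.cloud

client = salt.cloud.CloudClient(path='/etc/salt/cloud')
# What the configured provider can offer...
images = client.list_images(provider='my-ec2-config')
sizes = client.list_sizes(provider='my-ec2-config')
# ...and what is already running.
nodes = client.query()          # basic info (list_nodes)
details = client.full_query()   # everything (list_nodes_full)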
salt.client.ssh.client.SSHClient(c_path='/etc/salt/master', mopts=None)
Create a client object for executing routines via the salt-ssh backend
New in version 2015.5.0.
cmd(tgt, fun, arg=(), timeout=None, expr_form='glob', kwarg=None, **kwargs)
Execute a single command via the salt-ssh subsystem and return all routines at once
New in version 2015.5.0.
cmd_iter(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', kwarg=None, **kwargs)
Execute a single command via the salt-ssh subsystem and return a generator
New in version 2015.5.0.
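A short usage sketch, assuming a roster describing the targeted hosts is already in place:
import salt.client.ssh.client

ssh = salt.client.ssh.client.SSHClient()
# Blocking form: gather every routine's return at once.
print(ssh.cmd('*', 'test.ping'))
# Generator form: handle each host's return as it arrives.
for ret in ssh.cmd_iter('*', 'test.ping'):
    print(ret)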
aliyun | AliYun ECS Cloud Module
botocore_aws | The AWS Cloud Module
cloudstack | CloudStack Cloud Module
digital_ocean | DigitalOcean Cloud Module
digital_ocean_v2 |
ec2 | The EC2 Cloud Module
gce | Google Compute Engine Cloud Module
gogrid | GoGrid Cloud Module
joyent | Joyent Cloud Module
libcloud_aws | The AWS Cloud Module
linode | Linode Cloud Module using Apache Libcloud OR linode-python bindings
lxc | Install Salt on an LXC Container
msazure | Azure Cloud Module
nova | OpenStack Nova Cloud Module
opennebula | OpenNebula Cloud Module
openstack | OpenStack Cloud Module
parallels | Parallels Cloud Module
proxmox | Proxmox Cloud Module
pyrax | Pyrax Cloud Module
rackspace | Rackspace Cloud Module
saltify | The Saltify module is designed to install Salt on a remote machine, virtual or bare metal, using SSH.
softlayer | SoftLayer Cloud Module
softlayer_hw | SoftLayer HW Cloud Module
vmware | VMware Cloud Module
vsphere | vSphere Cloud Module
##### Primary configuration settings #####
##########################################
# This configuration file is used to manage the behavior of the Salt Master.
# Values that are commented out but have an empty line after the comment are
# defaults that do not need to be set in the config. If there is no blank line
# after the comment then the value is presented as an example and is not the
# default.
# Per default, the master will automatically include all config files
# from master.d/*.conf (master.d is a directory in the same directory
# as the main master config file).
#default_include: master.d/*.conf
# The address of the interface to bind to:
#interface: 0.0.0.0
# Whether the master should listen for IPv6 connections. If this is set to True,
# the interface option must be adjusted, too. (For example: "interface: '::'")
#ipv6: False
# The tcp port used by the publisher:
#publish_port: 4505
# The user under which the salt master will run. Salt will update all
# permissions to allow the specified user to run the master. The exception is
# the job cache, which must be deleted if this user is changed. If the
# modified files cause conflicts, set verify_env to False.
#user: root
# Max open files
#
# Each minion connecting to the master uses AT LEAST one file descriptor, the
# master subscription connection. If enough minions connect you might start
# seeing on the console (and then salt-master crashes):
# Too many open files (tcp_listener.cpp:335)
# Aborted (core dumped)
#
# By default this value will be that of `ulimit -Hn`, i.e., the hard limit for
# max open files.
#
# If you wish to set a different value than the default one, uncomment and
# configure this setting. Remember that this value CANNOT be higher than the
# hard limit. Raising the hard limit depends on your OS and/or distribution,
# a good way to find the limit is to search the internet. For example:
# raise max open files hard limit debian
#
#max_open_files: 100000
# The number of worker threads to start. These threads are used to manage
# return calls made from minions to the master. If the master seems to be
# running slowly, increase the number of threads.
#worker_threads: 5
# The port used by the communication interface. The ret (return) port is the
# interface used for the file server, authentication, job returns, etc.
#ret_port: 4506
# Specify the location of the daemon process ID file:
#pidfile: /var/run/salt-master.pid
# The root directory prepended to these options: pki_dir, cachedir,
# sock_dir, log_file, autosign_file, autoreject_file, extension_modules,
# key_logfile, pidfile:
#root_dir: /
# Directory used to store public key data:
#pki_dir: /etc/salt/pki/master
# Directory to store job and cache data:
# This directory may contain sensitive data and should be protected accordingly.
#
#cachedir: /var/cache/salt/master
# Directory for custom modules. This directory can contain subdirectories for
# each of Salt's module types such as "runners", "output", "wheel", "modules",
# "states", "returners", etc.
#extension_modules: <no default>
# Like 'extension_modules' but can take an array of paths
#module_dirs: <no default>
# - /var/cache/salt/minion/extmods
# Verify and set permissions on configuration directories at startup:
#verify_env: True
# Set the number of hours to keep old job information in the job cache:
#keep_jobs: 24
# Set the default timeout for the salt command and api. The default is 5
# seconds.
#timeout: 5
# The loop_interval option controls the seconds for the master's maintenance
# process check cycle. This process updates file server backends, cleans the
# job cache and executes the scheduler.
#loop_interval: 60
# Set the default outputter used by the salt command. The default is "nested".
#output: nested
# Return minions that time out when running commands like test.ping
#show_timeout: True
# By default, output is colored. To disable colored output, set the color value
# to False.
#color: True
# Do not strip off the colored output from nested results and state outputs
# (true by default).
# strip_colors: False
# Set the directory used to hold unix sockets:
#sock_dir: /var/run/salt/master
# The master can take a while to start up when lspci and/or dmidecode is used
# to populate the grains for the master. Enable if you want to see GPU hardware
# data for your master.
# enable_gpu_grains: False
# The master maintains a job cache. While this is a great addition, it can be
# a burden on the master for larger deployments (over 5000 minions).
# Disabling the job cache will make previously executed jobs unavailable to
# the jobs system and is not generally recommended.
#job_cache: True
# Cache minion grains and pillar data in the cachedir.
#minion_data_cache: True
# Store all returns in the given returner.
# Setting this option requires that any returner-specific configuration also
# be set. See various returners in salt/returners for details on required
# configuration values. (See also, event_return_queue below.)
#
#event_return: mysql
# On busy systems, enabling event_returns can cause a considerable load on
# the storage system for returners. Events can be queued on the master and
# stored in a batched fashion using a single transaction for multiple events.
# By default, events are not queued.
#event_return_queue: 0
# Only store event returns matching tags in a whitelist
# event_return_whitelist:
# - salt/master/a_tag
# - salt/master/another_tag
# Store all event returns _except_ the tags in a blacklist
# event_return_blacklist:
# - salt/master/not_this_tag
# - salt/master/or_this_one
# Passing very large events can cause the minion to consume large amounts of
# memory. This value tunes the maximum size of a message allowed onto the
# master event bus. The value is expressed in bytes.
#max_event_size: 1048576
# By default, the master AES key rotates every 24 hours. The next command
# following a key rotation will trigger a key refresh from the minion which may
# result in minions which do not respond to the first command after a key refresh.
#
# To tell the master to ping all minions immediately after an AES key refresh, set
# ping_on_rotate to True. This should mitigate the issue where a minion does not
# appear to initially respond after a key is rotated.
#
# Note that ping_on_rotate may cause high load on the master immediately after
# the key rotation event as minions reconnect. Consider this carefully if this
# salt master is managing a large number of minions.
#
# If disabled, it is recommended to handle this event by listening for the
# 'aes_key_rotate' event with the 'key' tag and acting appropriately.
# ping_on_rotate: False
# By default, the master deletes its cache of minion data when the key for that
# minion is removed. To preserve the cache after key deletion, set
# 'preserve_minion_cache' to True.
#
# WARNING: This may have security implications if compromised minions auth with
# a previously deleted minion ID.
#preserve_minion_cache: False
# If max_minions is used in large installations, the master might experience
# high-load situations because of having to check the number of connected
# minions for every authentication. This cache provides the minion-ids of
# all connected minions to all MWorker-processes and greatly improves the
# performance of max_minions.
# con_cache: False
# The master can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main master configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option, then the master will log a warning message.
#
# Include a config file from some other path:
# include: /etc/salt/extra_config
#
# Include config from several files and directories:
# include:
# - /etc/salt/extra_config
##### Security settings #####
##########################################
# Enable "open mode", this mode still maintains encryption, but turns off
# authentication, this is only intended for highly secure environments or for
# the situation where your keys end up in a bad state. If you run in open mode
# you do so at your own risk!
#open_mode: False
# Enable auto_accept, this setting will automatically accept all incoming
# public keys from the minions. Note that this is insecure.
#auto_accept: False
# Time in minutes that an incoming public key with a matching name found in
# pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys
# are removed when the master checks the minion_autosign directory.
# 0 equals no timeout
# autosign_timeout: 120
# If the autosign_file is specified, incoming keys specified in the
# autosign_file will be automatically accepted. This is insecure. Regular
# expressions as well as globbing lines are supported.
#autosign_file: /etc/salt/autosign.conf
# Works like autosign_file, but instead allows you to specify minion IDs for
# which keys will automatically be rejected. Will override both membership in
# the autosign_file and the auto_accept setting.
#autoreject_file: /etc/salt/autoreject.conf
# Enable permissive access to the salt keys. This allows you to run the
# master or minion as root, but have a non-root group be given access to
# your pki_dir. To make the access explicit, root must belong to the group
# you've given access to. This is potentially quite insecure. If an autosign_file
# is specified, enabling permissive_pki_access will allow group access to that
# specific file.
#permissive_pki_access: False
# Allow users on the master access to execute specific commands on minions.
# This setting should be treated with care since it opens up execution
# capabilities to non root users. By default this capability is completely
# disabled.
#client_acl:
# larry:
# - test.ping
# - network.*
#
# Blacklist any of the following users or modules
#
# This example would blacklist all non-sudo users, including root, from
# running any commands. It would also blacklist any use of the "cmd"
# module. This is completely disabled by default.
#
#client_acl_blacklist:
# users:
# - root
# - '^(?!sudo_).*$' # all non sudo users
# modules:
# - cmd
# Enforce client_acl & client_acl_blacklist when users have sudo
# access to the salt command.
#
#sudo_acl: False
# The external auth system uses the Salt auth modules to authenticate and
# validate users to access areas of the Salt system.
#external_auth:
# pam:
# fred:
# - test.*
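#
# With external_auth configured, CLI users pick an eauth backend with the
# -a flag and are prompted for credentials, for example:
# salt -a pam '*' test.ping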
#
# Time (in seconds) for a newly generated token to live. Default: 12 hours
#token_expire: 43200
# Allow minions to push files to the master. This is disabled by default, for
# security purposes.
#file_recv: False
# Set a hard-limit on the size of the files that can be pushed to the master.
# It will be interpreted as megabytes. Default: 100
#file_recv_max_size: 100
# Signature verification on messages published from the master.
# This causes the master to cryptographically sign all messages published to its event
# bus, and minions then verify that signature before acting on the message.
#
# This is False by default.
#
# Note that to facilitate interoperability with masters and minions that are different
# versions, if sign_pub_messages is True but a message is received by a minion with
# no signature, it will still be accepted, and a warning message will be logged.
# Conversely, if sign_pub_messages is False, but a minion receives a signed
# message it will be accepted, the signature will not be checked, and a warning message
# will be logged. This behavior went away in Salt 2014.1.0; these two situations
# now cause the minion to throw an exception and drop the message.
# sign_pub_messages: False
##### Salt-SSH Configuration #####
##########################################
# Pass in an alternative location for the salt-ssh roster file
#roster_file: /etc/salt/roster
# Pass in minion option overrides that will be inserted into the SHIM for
# salt-ssh calls. The local minion config is not used for salt-ssh. Can be
# overridden on a per-minion basis in the roster (`minion_opts`)
#ssh_minion_opts:
# gpg_keydir: /root/gpg
##### Master Module Management #####
##########################################
# Manage how master side modules are loaded.
# Add any additional locations to look for master runners:
#runner_dirs: []
# Enable Cython for master side modules:
#cython_enable: False
##### State System settings #####
##########################################
# The state system uses a "top" file to tell the minions what environment to
# use and what modules to use. The state_top file is defined relative to the
# root of the base environment as defined in "File Server settings" below.
#state_top: top.sls
# The master_tops option replaces the external_nodes option by creating
# a pluggable system for the generation of external top data. The external_nodes
# option is deprecated by the master_tops option.
#
# To gain the capabilities of the classic external_nodes system, use the
# following configuration:
# master_tops:
# ext_nodes: <Shell command which returns yaml>
#
#master_tops: {}
# The external_nodes option allows Salt to gather data that would normally be
# placed in a top file. The external_nodes option is the executable that will
# return the ENC data. Remember that Salt will look for external nodes AND top
# files and combine the results if both are enabled!
#external_nodes: None
# The renderer to use on the minions to render the state data
#renderer: yaml_jinja
# The Jinja renderer can strip extra carriage returns and whitespace
# See http://jinja.pocoo.org/docs/api/#high-level-api
#
# If this is set to True the first newline after a Jinja block is removed
# (block, not variable tag!). Defaults to False, corresponds to the Jinja
# environment init variable "trim_blocks".
# jinja_trim_blocks: False
#
# If this is set to True leading spaces and tabs are stripped from the start
# of a line to a block. Defaults to False, corresponds to the Jinja
# environment init variable "lstrip_blocks".
# jinja_lstrip_blocks: False
# The failhard option tells the minions to stop immediately after the first
# failure detected in the state execution. Defaults to False.
#failhard: False
# The state_verbose and state_output settings can be used to change the way
# state system data is printed to the display. By default all data is printed.
# The state_verbose setting can be set to True or False, when set to False
# all data that has a result of True and no changes will be suppressed.
#state_verbose: True
# The state_output setting changes if the output is the full multi line
# output for each changed state if set to 'full', but if set to 'terse'
# the output will be shortened to a single line. If set to 'mixed', the output
# will be terse unless a state failed, in which case that output will be full.
# If set to 'changes', the output will be full unless the state didn't change.
#state_output: full
# Automatically aggregate all states that have support for mod_aggregate by
# setting to True. Or pass a list of state module names to automatically
# aggregate just those types.
#
# state_aggregate:
# - pkg
#
#state_aggregate: False
##### File Server settings #####
##########################################
# Salt runs a lightweight file server written in zeromq to deliver files to
# minions. This file server is built into the master daemon and does not
# require a dedicated port.
# The file server works on environments passed to the master, each environment
# can have multiple root directories, the subdirectories in the multiple file
# roots cannot match, otherwise the downloaded files will not be able to be
# reliably ensured. A base environment is required to house the top file.
# Example:
# file_roots:
# base:
# - /srv/salt/
# dev:
# - /srv/salt/dev/services
# - /srv/salt/dev/states
# prod:
# - /srv/salt/prod/services
# - /srv/salt/prod/states
#
#file_roots:
# base:
# - /srv/salt
# The hash_type is the hash to use when discovering the hash of a file on
# the master server. The default is md5, but sha1, sha224, sha256, sha384
# and sha512 are also supported.
#
# Prior to changing this value, the master should be stopped and all Salt
# caches should be cleared.
#hash_type: md5
# The buffer size in the file server can be adjusted here:
#file_buffer_size: 1048576
# A regular expression (or a list of expressions) that will be matched
# against the file path before syncing the modules and states to the minions.
# This includes files affected by the file.recurse state.
# For example, if you manage your custom modules and states in subversion
# and don't want all the '.svn' folders and content synced to your minions,
# you could set this to '/\.svn($|/)'. By default nothing is ignored.
#file_ignore_regex:
# - '/\.svn($|/)'
# - '/\.git($|/)'
# A file glob (or list of file globs) that will be matched against the file
# path before syncing the modules and states to the minions. This is similar
# to file_ignore_regex above, but works on globs instead of regex. By default
# nothing is ignored.
# file_ignore_glob:
# - '*.pyc'
# - '*/somefolder/*.bak'
# - '*.swp'
# File Server Backend
#
# Salt supports a modular fileserver backend system, this system allows
# the salt master to link directly to third party systems to gather and
# manage the files available to minions. Multiple backends can be
# configured and will be searched for the requested file in the order in which
# they are defined here. The default setting only enables the standard backend
# "roots" which uses the "file_roots" option.
#fileserver_backend:
# - roots
#
# To use multiple backends list them in the order they are searched:
#fileserver_backend:
# - git
# - roots
#
# Uncomment the line below if you do not want the file_server to follow
# symlinks when walking the filesystem tree. This is set to True
# by default. Currently this only applies to the default roots
# fileserver_backend.
#fileserver_followsymlinks: False
#
# Uncomment the line below if you do not want symlinks to be
# treated as the files they are pointing to. By default this is set to
# False. By uncommenting the line below, any detected symlink while listing
# files on the Master will not be returned to the Minion.
#fileserver_ignoresymlinks: True
#
# By default, the Salt fileserver recurses fully into all defined environments
# to attempt to find files. To limit this behavior so that the fileserver only
# traverses directories with SLS files and special Salt directories like _modules,
# enable the option below. This might be useful for installations where a file root
# has a very large number of files and performance is impacted. Default is False.
# fileserver_limit_traversal: False
#
# The fileserver can fire events off every time the fileserver is updated,
# these are disabled by default, but can be easily turned on by setting this
# flag to True
#fileserver_events: False
# Git File Server Backend Configuration
#
# Gitfs can be provided by one of two python modules: GitPython or pygit2. If
# using pygit2, both libgit2 and git must also be installed.
#gitfs_provider: gitpython
#
# When using the git fileserver backend at least one git remote needs to be
# defined. The user running the salt master will need read access to the repo.
#
# The repos will be searched in order to find the file requested by a client
# and the first repo to have the file will return it.
# When using the git backend branches and tags are translated into salt
# environments.
# Note: file:// repos will be treated as a remote, so refs you want used must
# exist in that repo as *local* refs.
#gitfs_remotes:
# - git://github.com/saltstack/salt-states.git
# - file:///var/git/saltmaster
#
# The gitfs_ssl_verify option specifies whether to ignore ssl certificate
# errors when contacting the gitfs backend. You might want to set this to
# false if you're using a git backend that uses a self-signed certificate but
# keep in mind that setting this flag to anything other than the default of True
# is a security concern, you may want to try using the ssh transport.
#gitfs_ssl_verify: True
#
# The gitfs_root option gives the ability to serve files from a subdirectory
# within the repository. The path is defined relative to the root of the
# repository and defaults to the repository root.
#gitfs_root: somefolder/otherfolder
#
#
##### Pillar settings #####
##########################################
# Salt Pillars allow for the building of global data that can be made selectively
# available to different minions based on minion grain filtering. The Salt
# Pillar is laid out in the same fashion as the file server, with environments,
# a top file and sls files. However, pillar data does not need to be in the
# highstate format, and is generally just key/value pairs.
#pillar_roots:
# base:
# - /srv/pillar
#
#ext_pillar:
# - hiera: /etc/hiera.yaml
# - cmd_yaml: cat /etc/salt/yaml
# The ext_pillar_first option allows for external pillar sources to populate
# before file system pillar. This allows for targeting file system pillar from
# ext_pillar.
#ext_pillar_first: False
# The pillar_gitfs_ssl_verify option specifies whether to ignore ssl certificate
# errors when contacting the pillar gitfs backend. You might want to set this to
# false if you're using a git backend that uses a self-signed certificate but
# keep in mind that setting this flag to anything other than the default of True
# is a security concern, you may want to try using the ssh transport.
#pillar_gitfs_ssl_verify: True
# The pillar_opts option adds the master configuration file data to a dict in
# the pillar called "master". This is used to set simple configurations in the
# master config file that can then be used on minions.
#pillar_opts: False
# The pillar_safe_render_error option prevents the master from passing pillar
# render errors to the minion. This is on by default because the error could
# contain templating data which would give that minion information it shouldn't
# have, like a password! When set to True, the error message will only show:
# Rendering SLS 'my.sls' failed. Please see master log for details.
#pillar_safe_render_error: True
# The pillar_source_merging_strategy option allows you to configure merging strategy
# between different sources. It accepts four values: recurse, aggregate, overwrite,
# or smart. Recurse will recursively merge mappings of data. Aggregate instructs
# aggregation of elements between sources that use the #!yamlex renderer. Overwrite
# will overwrite elements according to the order in which they are processed; this
# is the behavior of the 2014.1 branch and earlier. Smart guesses the best strategy based
# on the "renderer" setting and is the default value.
#pillar_source_merging_strategy: smart
##### Syndic settings #####
##########################################
# The Salt syndic is used to pass commands through a master from a higher
# master. Using the syndic is simple. If this is a master that will have
# syndic server(s) below it, then set the "order_masters" setting to True.
#
# If this is a master that will be running a syndic daemon for passthrough, then
# the "syndic_master" setting needs to be set to the location of the master server
# to receive commands from.
# Set the order_masters setting to True if this master will command lower
# masters' syndic interfaces.
#order_masters: False
# If this master will be running a salt syndic daemon, syndic_master tells
# this master where to receive commands from.
#syndic_master: masterofmaster
# This is the 'ret_port' of the MasterOfMaster:
#syndic_master_port: 4506
# PID file of the syndic daemon:
#syndic_pidfile: /var/run/salt-syndic.pid
# LOG file of the syndic daemon:
#syndic_log_file: syndic.log
##### Peer Publish settings #####
##########################################
# Salt minions can send commands to other minions, but only if the minion is
# allowed to. By default "Peer Publication" is disabled, and when enabled it
# is enabled for specific minions and specific commands. This allows secure
# compartmentalization of commands based on individual minions.
# The configuration uses regular expressions to match minions and then a list
# of regular expressions to match functions. The following will allow the
# minion authenticated as foo.example.com to execute functions from the test
# and pkg modules.
#peer:
# foo.example.com:
# - test.*
# - pkg.*
#
# This will allow all minions to execute all commands:
#peer:
# .*:
# - .*
#
# This is not recommended, since it would allow anyone who gets root on any
# single minion to instantly have root on all of the minions!
# Minions can also be allowed to execute runners from the salt master.
# Since executing a runner from the minion could be considered a security risk,
# it needs to be enabled. This setting functions just like the peer setting
# except that it opens up runners instead of module functions.
#
# All peer runner support is turned off by default and must be enabled before
# using. This will enable all peer runners for all minions:
#peer_run:
# .*:
# - .*
#
# To enable just the manage.up runner for the minion foo.example.com:
#peer_run:
# foo.example.com:
# - manage.up
#
#
##### Mine settings #####
##########################################
# Restrict mine.get access from minions. By default any minion has full access
# to get all mine data from the master cache. In the ACL definition below, only
# PCRE matches are allowed.
# mine_get:
# .*:
# - .*
#
# The example below allows minion foo.example.com to get 'network.interfaces' mine
# data only, allows minions web* to get all network.* and disk.* mine data, and
# gives all other minions no mine data.
# mine_get:
# foo.example.com:
# - network.interfaces
# web.*:
# - network.*
# - disk.*
##### Logging settings #####
##########################################
# The location of the master log file
# The master log can be sent to a regular file, local path name, or network
# location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
# ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
# format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
#log_file: /var/log/salt/master
#log_file: file:///dev/log
#log_file: udp://loghost:10514
#log_file: /var/log/salt/master
#key_logfile: /var/log/salt/key
# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
#log_level: warning
# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# If using 'log_granular_levels' this must be set to the highest desired level.
#log_level_logfile: warning
# The date and time format used in log messages. Allowed date/time formatting
# can be seen here: http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
# The format of the console logging messages. Allowed formatting options can
# be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
#
# Console log colors are specified by these additional formatters:
#
# %(colorlevel)s
# %(colorname)s
# %(colorprocess)s
# %(colormsg)s
#
# Since it is desirable to include the surrounding brackets, '[' and ']', in
# the coloring of the messages, these color formatters also include padding as
# well. Color LogRecord attributes are only available for console logging.
#
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'
# This can be used to control logging levels more specifically. This
# example sets the main salt library at the 'warning' level, but sets
# 'salt.modules' to log at the 'debug' level:
# log_granular_levels:
# 'salt': 'warning'
# 'salt.modules': 'debug'
#
#log_granular_levels: {}
##### Node Groups #####
##########################################
# Node groups allow for logical groupings of minion nodes. A group consists of a group
# name and a compound target.
#nodegroups:
# group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
# group2: 'G@os:Debian and foo.domain.com'
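#
# Nodegroups defined here can then be targeted from the CLI with the -N flag,
# for example:
# salt -N group1 test.ping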
##### Range Cluster settings #####
##########################################
# The range server (and optional port) that serves your cluster information
# https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec
#
#range_server: range:80
##### Windows Software Repo settings #####
##############################################
# Location of the repo on the master:
#win_repo: '/srv/salt/win/repo'
#
# Location of the master's repo cache file:
#win_repo_mastercachefile: '/srv/salt/win/repo/winrepo.p'
#
# List of git repositories to include with the local repo:
#win_gitrepos:
# - 'https://github.com/saltstack/salt-winrepo.git'
##### Returner settings ######
############################################
# Which returner(s) will be used for minion's result:
#return: mysql
##### Primary configuration settings #####
##########################################
# This configuration file is used to manage the behavior of the Salt Minion.
# With the exception of the location of the Salt Master Server, values that are
# commented out but have an empty line after the comment are defaults that need
# not be set in the config. If there is no blank line after the comment, the
# value is presented as an example and is not the default.
# Per default the minion will automatically include all config files
# from minion.d/*.conf (minion.d is a directory in the same directory
# as the main minion config file).
#default_include: minion.d/*.conf
# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start.
#master: salt
# If multiple masters are specified in the 'master' setting, the default behavior
# is to always try to connect to them in the order they are listed. If random_master is
# set to True, the order will be randomized instead. This can be helpful in distributing
# the load of many minions executing salt-call requests, for example, from a cron job.
# If only one master is listed, this setting is ignored and a warning will be logged.
#random_master: False
# Set whether the minion should connect to the master via IPv6:
#ipv6: False
# Set the number of seconds to wait before attempting to resolve
# the master hostname if name resolution fails. Defaults to 30 seconds.
# Set to zero if the minion should shutdown and not retry.
# retry_dns: 30
# Set the port used by the master reply and authentication server.
#master_port: 4506
# The user to run salt.
#user: root
# Specify the location of the daemon process ID file.
#pidfile: /var/run/salt-minion.pid
# The root directory prepended to these options: pki_dir, cachedir, log_file,
# sock_dir, pidfile.
#root_dir: /
# The directory to store the pki information in
#pki_dir: /etc/salt/pki/minion
# Explicitly declare the id for this minion to use, if left commented the id
# will be the hostname as returned by the python call: socket.getfqdn()
# Since salt uses detached ids, it is possible to run multiple minions on the
# same machine but with different ids; this can be useful for salt compute
# clusters.
#id:
# Append a domain to a hostname in the event that it does not exist. This is
# useful for systems where socket.getfqdn() does not actually result in a
# FQDN (for instance, Solaris).
#append_domain:
# Custom static grains for this minion can be specified here and used in SLS
# files just like all other grains. This example sets 4 custom grains, with
# the 'roles' grain having two values that can be matched against.
#grains:
# roles:
# - webserver
# - memcache
# deployment: datacenter4
# cabinet: 13
# cab_u: 14-15
#
# Where cache data goes.
# This data may contain sensitive data and should be protected accordingly.
#cachedir: /var/cache/salt/minion
# Verify and set permissions on configuration directories at startup.
#verify_env: True
# The minion can locally cache the return data from jobs sent to it, this
# can be a good way to keep track of jobs the minion has executed
# (on the minion side). By default this feature is disabled, to enable, set
# cache_jobs to True.
#cache_jobs: False
# Set the directory used to hold unix sockets.
#sock_dir: /var/run/salt/minion
# Set the default outputter used by the salt-call command. The default is
# "nested".
#output: nested
#
# By default output is colored. To disable colored output, set the color value
# to False.
#color: True
# Do not strip off the colored output from nested results and state outputs
# (true by default).
# strip_colors: False
# Backup files that are replaced by file.managed and file.recurse under
# 'cachedir'/file_backups relative to their original location and appended
# with a timestamp. The only valid setting is "minion". Disabled by default.
#
# Alternatively this can be specified for each file in state files:
# /etc/ssh/sshd_config:
# file.managed:
# - source: salt://ssh/sshd_config
# - backup: minion
#
#backup_mode: minion
# When waiting for a master to accept the minion's public key, salt will
# continuously attempt to reconnect until successful. This is the time, in
# seconds, between those reconnection attempts.
#acceptance_wait_time: 10
# If this is nonzero, the time between reconnection attempts will increase by
# acceptance_wait_time seconds per iteration, up to this maximum. If this is
# set to zero, the time between reconnection attempts will stay constant.
#acceptance_wait_time_max: 0
# If the master rejects the minion's public key, retry instead of exiting.
# Rejected keys will be handled the same as waiting on acceptance.
#rejected_retry: False
# When the master key changes, the minion will try to re-auth itself to receive
# the new master key. In larger environments this can cause a SYN flood on the
# master because all minions try to re-auth immediately. To prevent this and
# have a minion wait for a random amount of time, use this optional parameter.
# The wait-time will be a random number of seconds between 0 and the defined value.
#random_reauth_delay: 60
# When waiting for a master to accept the minion's public key, salt will
# continuously attempt to reconnect until successful. This is the timeout value,
# in seconds, for each individual attempt. After this timeout expires, the minion
# will wait for acceptance_wait_time seconds before trying again. Unless your master
# is under unusually heavy load, this should be left at the default.
#auth_timeout: 60
# Number of consecutive SaltReqTimeoutError that are acceptable when trying to
# authenticate.
#auth_tries: 7
# If authentication fails due to SaltReqTimeoutError during a ping_interval,
# cause the sub-minion process to restart.
#auth_safemode: False
# Ping Master to ensure connection is alive (minutes).
#ping_interval: 0
# To auto recover minions if master changes IP address (DDNS)
# auth_tries: 10
# auth_safemode: False
# ping_interval: 90
#
# Minions won't know the master is missing until a ping fails. After the ping
# fails, the minion will attempt to authenticate, will likely fail, and will
# then restart. When the minion restarts it will resolve the master's IP and
# attempt to reconnect.
# If you don't have any problems with syn-floods, don't bother with the
# three recon_* settings described below, just leave the defaults!
#
# The ZeroMQ pull-socket that binds to the master's publishing interface tries
# to reconnect immediately, if the socket is disconnected (for example if
# the master processes are restarted). In large setups this will have all
# minions reconnect immediately which might flood the master (the ZeroMQ-default
# is usually a 100ms delay). To prevent this, these three recon_* settings
# can be used.
# recon_default: the interval in milliseconds that the socket should wait before
# trying to reconnect to the master (1000ms = 1 second)
#
# recon_max: the maximum time a socket should wait. each interval the time to wait
# is calculated by doubling the previous time. if recon_max is reached,
# it starts again at recon_default. Short example:
#
# reconnect 1: the socket will wait 'recon_default' milliseconds
# reconnect 2: 'recon_default' * 2
# reconnect 3: ('recon_default' * 2) * 2
# reconnect 4: value from previous interval * 2
# reconnect 5: value from previous interval * 2
# reconnect x: if value >= recon_max, it starts again with recon_default
#
# recon_randomize: generate a random wait time on minion start. The wait time will
# be a random value between recon_default and recon_default +
# recon_max. Having all minions reconnect with the same recon_default
# and recon_max value kind of defeats the purpose of being able to
# change these settings. If all minions have the same values and your
# setup is quite large (several thousand minions), they will still
# flood the master. The desired behavior is to have a timeframe within
# which all minions try to reconnect.
#
# Example on how to use these settings. The goal: have all minions reconnect within a
# 60 second timeframe on a disconnect.
# recon_default: 1000
# recon_max: 59000
# recon_randomize: True
#
# Each minion will have a randomized reconnect value between 'recon_default'
# and 'recon_default + recon_max', which in this example means between 1000ms
# and 60000ms (or between 1 and 60 seconds). The generated random value will be
# doubled after each attempt to reconnect. Let's say the generated random
# value is 11 seconds (or 11000ms).
# reconnect 1: wait 11 seconds
# reconnect 2: wait 22 seconds
# reconnect 3: wait 33 seconds
# reconnect 4: wait 44 seconds
# reconnect 5: wait 55 seconds
# reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
# reconnect 7: wait 11 seconds
# reconnect 8: wait 22 seconds
# reconnect 9: wait 33 seconds
# reconnect x: etc.
#
# In a setup with ~6000 hosts these settings would average the reconnects
# to about 100 per second and all hosts would be reconnected within 60 seconds.
# recon_default: 100
# recon_max: 5000
# recon_randomize: False
#
#
# The loop_interval sets how long in seconds the minion will wait between
# evaluating the scheduler and running cleanup tasks. This defaults to a
# sane 60 seconds, but if the minion scheduler needs to be evaluated more
# often, lower this value.
#loop_interval: 60
# The grains_refresh_every setting allows for a minion to periodically check
# its grains to see if they have changed and, if so, to inform the master
# of the new grains. This operation is moderately expensive, therefore
# care should be taken not to set this value too low.
#
# Note: This value is expressed in __minutes__!
#
# A value of 10 minutes is a reasonable default.
#
# If the value is set to zero, this check is disabled.
#grains_refresh_every: 1
# Cache grains on the minion. Default is False.
#grains_cache: False
# Grains cache expiration, in seconds. If the cache file is older than this
# number of seconds then the grains cache will be dumped and fully re-populated
# with fresh data. Defaults to 5 minutes. Will have no effect if 'grains_cache'
# is not enabled.
# grains_cache_expiration: 300
# Windows platforms lack posix IPC and must rely on slower TCP based inter-
# process communications. Set ipc_mode to 'tcp' on such systems.
#ipc_mode: ipc
# Overwrite the default tcp ports used by the minion when in tcp mode
#tcp_pub_port: 4510
#tcp_pull_port: 4511
# Passing very large events can cause the minion to consume large amounts of
# memory. This value tunes the maximum size of a message allowed onto the
# minion event bus. The value is expressed in bytes.
#max_event_size: 1048576
# To detect failed master(s) and fire events on connect/disconnect, set
# master_alive_interval to the number of seconds to poll the masters for
# connection events.
#
#master_alive_interval: 30
# The minion can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main minion configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option then the minion will log a warning message.
#
# Include a config file from some other path:
# include: /etc/salt/extra_config
#
# Include config from several files and directories:
#include:
# - /etc/salt/extra_config
# - /etc/roles/webserver
#
#
#
##### Minion module management #####
##########################################
# Disable specific modules. This allows the admin to limit the level of
# access the master has to the minion.
#disable_modules: [cmd,test]
#disable_returners: []
#
# Modules can be loaded from arbitrary paths. This enables the easy deployment
# of third party modules. Modules for returners and minions can be loaded.
# Specify a list of extra directories to search for minion modules and
# returners. These paths must be fully qualified!
#module_dirs: []
#returner_dirs: []
#states_dirs: []
#render_dirs: []
#utils_dirs: []
#
# A module provider can be statically overwritten or extended for the minion
# via the providers option, in this case the default module will be
# overwritten by the specified module. In this example the pkg module will
# be provided by the yumpkg5 module instead of the system default.
#providers:
# pkg: yumpkg5
#
# Enable Cython modules searching and loading. (Default: False)
#cython_enable: False
#
# Specify a max size (in bytes) for modules on import. This feature is currently
# only supported on *nix operating systems and requires psutil.
# modules_max_memory: -1
##### State Management Settings #####
###########################################
# The state management system executes all of the state templates on the minion
# to enable more granular control of system state management. The type of
# template and serialization used for state management needs to be configured
# on the minion; the default renderer is yaml_jinja. This is a yaml file
# rendered from a jinja template. The available options are:
# yaml_jinja
# yaml_mako
# yaml_wempy
# json_jinja
# json_mako
# json_wempy
#
#renderer: yaml_jinja
#
# The failhard option tells the minions to stop immediately after the first
# failure detected in the state execution. Defaults to False.
#failhard: False
#
# Reload the modules prior to a highstate run.
#autoload_dynamic_modules: True
#
# clean_dynamic_modules keeps the dynamic modules on the minion in sync with
# the dynamic modules on the master, this means that if a dynamic module is
# not on the master it will be deleted from the minion. By default, this is
# enabled and can be disabled by changing this value to False.
#clean_dynamic_modules: True
#
# Normally, the minion is not isolated to any single environment on the master
# when running states, but the environment can be isolated on the minion side
# by statically setting it. Remember that the recommended way to manage
# environments is to isolate via the top file.
#environment: None
#
# If using the local file directory, then the state top file name needs to be
# defined, by default this is top.sls.
#state_top: top.sls
#
# Run states when the minion daemon starts. To enable, set startup_states to:
# 'highstate' -- Execute state.highstate
# 'sls' -- Read in the sls_list option and execute the named sls files
# 'top' -- Read top_file option and execute based on that file on the Master
#startup_states: ''
#
# List of states to run when the minion starts up if startup_states is 'sls':
#sls_list:
# - edit.vim
# - hyper
#
# Top file to execute if startup_states is 'top':
#top_file: ''
# Automatically aggregate all states that have support for mod_aggregate by
# setting to True. Or pass a list of state module names to automatically
# aggregate just those types.
#
# state_aggregate:
# - pkg
#
#state_aggregate: False
##### File Directory Settings #####
##########################################
# The Salt Minion can redirect all file server operations to a local directory,
# this allows for the same state tree that is on the master to be used if
# copied completely onto the minion. This is a literal copy of the settings on
# the master but used to reference a local directory on the minion.
# Set the file client. The client defaults to looking on the master server for
# files, but can be directed to look at the local file directory setting
# defined below by setting it to local.
#file_client: remote
# The file directory works on environments passed to the minion, each environment
# can have multiple root directories, the subdirectories in the multiple file
# roots cannot match, otherwise the downloaded files will not be able to be
# reliably ensured. A base environment is required to house the top file.
# Example:
# file_roots:
# base:
# - /srv/salt/
# dev:
# - /srv/salt/dev/services
# - /srv/salt/dev/states
# prod:
# - /srv/salt/prod/services
# - /srv/salt/prod/states
#
#file_roots:
# base:
# - /srv/salt
# By default, the Salt fileserver recurses fully into all defined environments
# to attempt to find files. To limit this behavior so that the fileserver only
# traverses directories with SLS files and special Salt directories like _modules,
# enable the option below. This might be useful for installations where a file root
# has a very large number of files and performance is negatively impacted. Default
# is False.
#fileserver_limit_traversal: False
# The hash_type is the hash to use when discovering the hash of a file in
# the local fileserver. The default is md5, but sha1, sha224, sha256, sha384
# and sha512 are also supported.
#
# Warning: Prior to changing this value, the minion should be stopped and all
# Salt caches should be cleared.
#hash_type: md5
# The Salt pillar is searched for locally if file_client is set to local. If
# this is the case, and pillar data is defined, then the pillar_roots need to
# also be configured on the minion:
#pillar_roots:
# base:
# - /srv/pillar
#
#
###### Security settings #####
###########################################
# Enable "open mode", this mode still maintains encryption, but turns off
# authentication, this is only intended for highly secure environments or for
# the situation where your keys end up in a bad state. If you run in open mode
# you do so at your own risk!
#open_mode: False
# Enable permissive access to the salt keys. This allows you to run the
# master or minion as root, but have a non-root group be given access to
# your pki_dir. To make the access explicit, root must belong to the group
# you've given access to. This is potentially quite insecure.
#permissive_pki_access: False
# The state_verbose and state_output settings can be used to change the way
# state system data is printed to the display. By default all data is printed.
# The state_verbose setting can be set to True or False, when set to False
# all data that has a result of True and no changes will be suppressed.
#state_verbose: True
# The state_output setting changes if the output is the full multi line
# output for each changed state if set to 'full', but if set to 'terse'
# the output will be shortened to a single line.
#state_output: full
# The state_output_diff setting changes whether or not the output from
# successful states is returned. Useful when even the terse output of these
# states is cluttering the logs. Set it to True to ignore them.
#state_output_diff: False
# The state_output_profile setting changes whether profile information
# will be shown for each state run.
#state_output_profile: True
# Fingerprint of the master public key to double verify the master is valid,
# the master fingerprint can be found by running "salt-key -f master.pub" on the
# salt master.
#master_finger: ''
###### Thread settings #####
###########################################
# Disable multiprocessing support. By default, when a minion receives a
# publication a new process is spawned and the command is executed therein.
#multiprocessing: True
##### Logging settings #####
##########################################
# The location of the minion log file
# The minion log can be sent to a regular file, local path name, or network
# location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
# ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
# format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
#log_file: /var/log/salt/minion
#log_file: file:///dev/log
#log_file: udp://loghost:10514
#
#log_file: /var/log/salt/minion
#key_logfile: /var/log/salt/key
# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# Default: 'warning'
#log_level: warning
# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# If using 'log_granular_levels' this must be set to the highest desired level.
# Default: 'warning'
#log_level_logfile:
# The date and time format used in log messages. Allowed date/time formatting
# can be seen here: http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
# The format of the console logging messages. Allowed formatting options can
# be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
#
# Console log colors are specified by these additional formatters:
#
# %(colorlevel)s
# %(colorname)s
# %(colorprocess)s
# %(colormsg)s
#
# Since it is desirable to include the surrounding brackets, '[' and ']', in
# the coloring of the messages, these color formatters also include padding as
# well. Color LogRecord attributes are only available for console logging.
#
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'
# This can be used to control logging levels more specifically. This
# example sets the main salt library at the 'warning' level, but sets
# 'salt.modules' to log at the 'debug' level:
# log_granular_levels:
# 'salt': 'warning'
# 'salt.modules': 'debug'
#
#log_granular_levels: {}
# To diagnose issues with minions disconnecting or missing returns, ZeroMQ
# supports the use of monitor sockets to log connection events. This
# feature requires ZeroMQ 4.0 or higher.
#
# To enable ZeroMQ monitor sockets, set 'zmq_monitor' to 'True' and log at a
# debug level or higher.
#
# A sample log event is as follows:
#
# [DEBUG ] ZeroMQ event: {'endpoint': 'tcp://127.0.0.1:4505', 'event': 512,
# 'value': 27, 'description': 'EVENT_DISCONNECTED'}
#
# All events logged will include the string 'ZeroMQ event'. A connection event
# should be logged as the minion starts up and initially connects to the
# master. If not, check for debug log level and that the necessary version of
# ZeroMQ is installed.
#
#zmq_monitor: False
###### Module configuration #####
###########################################
# Salt allows for modules to be passed arbitrary configuration data. Any data
# passed here in valid yaml format will be passed on to the salt minion modules
# for use. It is STRONGLY recommended that a naming convention be used in which
# the module name is followed by a . and then the value. Also, all top level
# data must be applied via the yaml dict construct, some examples:
#
# You can specify that all modules should run in test mode:
#test: True
#
# A simple value for the test module:
#test.foo: foo
#
# A list for the test module:
#test.bar: [baz,quo]
#
# A dict for the test module:
#test.baz: {spam: sausage, cheese: bread}
#
#
###### Update settings ######
###########################################
# Using the features in Esky, a salt minion can both run as a frozen app and
# be updated on the fly. These options control how the update process
# (saltutil.update()) behaves.
#
# The url for finding and downloading updates. Disabled by default.
#update_url: False
#
# The list of services to restart after a successful update. Empty by default.
#update_restart_services: []
###### Keepalive settings ######
############################################
# ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by
# the OS. If connections between the minion and the master pass through
# a state tracking device such as a firewall or VPN gateway, there is
# the risk that it could tear down the connection between the master and minion
# without informing either party that their connection has been taken away.
# Enabling TCP Keepalives prevents this from happening.
# Overall state of TCP Keepalives, enable (1 or True), disable (0 or False)
# or leave to the OS defaults (-1), on Linux, typically disabled. Default True, enabled.
#tcp_keepalive: True
# How long before the first keepalive should be sent in seconds. Default 300
# to send the first keepalive after 5 minutes, OS default (-1) is typically 7200 seconds
# on Linux see /proc/sys/net/ipv4/tcp_keepalive_time.
#tcp_keepalive_idle: 300
# How many lost probes are needed to consider the connection lost. Default -1
# to use OS defaults, typically 9 on Linux, see /proc/sys/net/ipv4/tcp_keepalive_probes.
#tcp_keepalive_cnt: -1
# How often, in seconds, to send keepalives after the first one. Default -1 to
# use OS defaults, typically 75 seconds on Linux, see
# /proc/sys/net/ipv4/tcp_keepalive_intvl.
#tcp_keepalive_intvl: -1
###### Windows Software settings ######
############################################
# Location of the repository cache file on the master:
#win_repo_cachefile: 'salt://win/repo/winrepo.p'
###### Returner settings ######
############################################
# Which returner(s) will be used for minion's result:
#return: mysql
Salt configuration is very simple. The default configuration for the master will work for most installations and the only requirement for setting up a minion is to set the location of the master in the minion configuration file.
The configuration files will be installed to /etc/salt and are named after the respective components: /etc/salt/master and /etc/salt/minion.
By default the Salt master listens on ports 4505 and 4506 on all interfaces (0.0.0.0). To bind Salt to a specific IP, redefine the "interface" directive in the master configuration file, typically /etc/salt/master, as follows:
- #interface: 0.0.0.0
+ interface: 10.0.0.1
After updating the configuration file, restart the Salt master. See the master configuration reference for more details about other configurable options.
Although there are many Salt Minion configuration options, configuring a Salt Minion is very simple. By default a Salt Minion will try to connect to the DNS name "salt"; if the Minion is able to resolve that name correctly, no configuration is needed.
If the DNS name "salt" does not resolve to point to the correct location of the Master, redefine the "master" directive in the minion configuration file, typically /etc/salt/minion, as follows:
- #master: salt
+ master: 10.0.0.1
After updating the configuration file, restart the Salt minion. See the minion configuration reference for more details about other configurable options.
Start the master in the foreground (to daemonize the process, pass the -d flag):
salt-master
Start the minion in the foreground (to daemonize the process, pass the -d flag):
salt-minion
Having trouble?
The simplest way to troubleshoot Salt is to run the master and minion in the foreground with the log level set to debug:
salt-master --log-level=debug
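The minion can be run the same way for troubleshooting:
salt-minion --log-level=debug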
For information on salt's logging system please see the logging document.
Run as an unprivileged (non-root) user
To run Salt as another user, set the user parameter in the master config file.
Additionally, ownership and permissions need to be set such that the desired user can read from and write to the following directories (and their subdirectories, where applicable):
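On a typical Linux installation these are usually the following directories (an assumption; exact paths vary by platform and packaging):
/etc/salt
/var/cache/salt
/var/log/salt
/var/run/salt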
More information about running salt as a non-privileged user can be found here.
There is also a full troubleshooting guide available.
Salt uses AES encryption for all communication between the Master and the Minion. This ensures that the commands sent to the Minions cannot be tampered with, and that communication between Master and Minion is authenticated through trusted, accepted keys.
Before commands can be sent to a Minion, its key must be accepted on the Master. Run the salt-key command to list the keys known to the Salt Master:
[root@master ~]# salt-key -L
Unaccepted Keys:
alpha
bravo
charlie
delta
Accepted Keys:
This example shows that the Salt Master is aware of four Minions, but none of the keys has been accepted. To accept the keys and allow the Minions to be controlled by the Master, again use the salt-key command:
[root@master ~]# salt-key -A
[root@master ~]# salt-key -L
Unaccepted Keys:
Accepted Keys:
alpha
bravo
charlie
delta
The salt-key command allows for signing keys individually or in bulk. The example above, using -A, bulk-accepts all pending keys. To accept keys individually use the lowercase of the same option, -a keyname.
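For example, accepting just the key of the minion alpha from the listing above:
[root@master ~]# salt-key -a alpha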
Communication between the Master and a Minion may be verified by running the test.ping command:
[root@master ~]# salt alpha test.ping
alpha:
True
Communication between the Master and all Minions may be tested in a similar way:
[root@master ~]# salt '*' test.ping
alpha:
True
bravo:
True
charlie:
True
delta:
True
Each of the Minions should send a True response as shown above.
Understanding targeting is important. From there, depending on the way you wish to use Salt, you should also proceed to learn about States and Execution Modules.
The Salt system is amazingly simple and easy to configure. The two components of the Salt system each have a respective configuration file. The salt-master is configured via the master configuration file, and the salt-minion is configured via the minion configuration file.
The configuration file for the salt-master is located at /etc/salt/master by default. A notable exception is FreeBSD, where the configuration file is located at /usr/local/etc/salt. The available options are as follows:
ipv6
¶Default: False
Whether the master should listen for IPv6 connections. If this is set to True, the interface option must be adjusted too (for example: "interface: '::'")
ipv6: True
publish_port
¶Default: 4505
The network port to set up the publication interface.
publish_port: 4505
master_id
¶Default: None
The id to be passed in the publish job to minions. This is used for MultiSyndics to return the job to the requesting master.
Note
This must be the same string as the syndic is configured with.
master_id: MasterOfMaster
max_open_files
¶Default: 100000
Each minion connecting to the master uses AT LEAST one file descriptor, the master subscription connection. If enough minions connect you might start seeing errors like the following on the console (and then the salt-master crashes):
Too many open files (tcp_listener.cpp:335)
Aborted (core dumped)
max_open_files: 100000
By default this value will be the value of ulimit -Hn, i.e., the hard limit for max open files.
To set a different value than the default one, uncomment and configure this setting. Remember that this value CANNOT be higher than the hard limit. Raising the hard limit depends on the OS and/or distribution; a good way to find the limit is to search the internet for something like this:
raise max open files hard limit debian
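For reference, on most Linux systems the current hard limit can be checked from a shell with:
ulimit -Hn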
worker_threads
¶Default: 5
The number of threads to start for receiving commands and replies from minions. If minions are stalling on replies because you have many minions, raise the worker_threads value.
Worker threads should not be put below 3 when using the peer system, but can drop down to 1 worker otherwise.
Note
When the master daemon starts, it is expected behaviour to see multiple salt-master processes, even if 'worker_threads' is set to '1'. At a minimum, a controlling process will start along with a Publisher, an EventPublisher, and a number of MWorker processes will be started. The number of MWorker processes is tuneable by the 'worker_threads' configuration value while the others are not.
worker_threads: 5
ret_port
¶Default: 4506
The port used by the return server, this is the server used by Salt to receive execution returns and command executions.
ret_port: 4506
pidfile
¶Default: /var/run/salt-master.pid
Specify the location of the master pidfile.
pidfile: /var/run/salt-master.pid
root_dir
¶Default: /
The system root directory to operate from, change this to make Salt run from an alternative root.
root_dir: /
Note
This directory is prepended to the following options: pki_dir, cachedir, sock_dir, log_file, autosign_file, autoreject_file, pidfile.
pki_dir
¶Default: /etc/salt/pki
The directory to store the pki authentication keys.
pki_dir: /etc/salt/pki
extension_modules
¶Directory for custom modules. This directory can contain subdirectories for each of Salt's module types such as "runners", "output", "wheel", "modules", "states", "returners", etc. This path is appended to root_dir.
extension_modules: srv/modules
module_dirs
¶Default: []
Like extension_modules, but a list of extra directories to search for Salt modules.
module_dirs:
- /var/cache/salt/minion/extmods
cachedir
¶Default: /var/cache/salt
The location used to store cache information, particularly the job information for executed salt commands.
This directory may contain sensitive data and should be protected accordingly.
cachedir: /var/cache/salt
verify_env
¶Default: True
Verify and set permissions on configuration directories at startup.
verify_env: True
loop_interval
¶Default: 60
The loop_interval option controls the seconds for the master's maintenance process check cycle. This process updates file server backends, cleans the job cache and executes the scheduler.
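For example, to lengthen the maintenance cycle to two minutes:
loop_interval: 120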
color
¶Default: True
By default output is colored, to disable colored output set the color value to False.
color: False
sock_dir
¶Default: /var/run/salt/master
Set the location to use for creating Unix sockets for master process communication.
sock_dir: /var/run/salt/master
enable_gpu_grains
¶Default: False
The master can take a while to start up when lspci and/or dmidecode is used to populate the grains for the master. Enable if you want to see GPU hardware data for your master.
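For example:
enable_gpu_grains: True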
job_cache
¶Default: True
The master maintains a job cache. While this is a great addition, it can be a burden on the master for larger deployments (over 5000 minions). Disabling the job cache will make previously executed jobs unavailable to the jobs system and is not generally recommended. Normally it is wise to make sure the master has access to a faster IO system, or that a tmpfs is mounted to the jobs dir.
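To disable the job cache anyway:
job_cache: False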
minion_data_cache
¶Default: True
The minion data cache is a cache of information about the minions stored on the master; this information is primarily the pillar and grains data. The data is cached in the Master cachedir under the name of the minion and used to predetermine what minions are expected to reply from executions.
minion_data_cache: True
ext_job_cache
¶Default: ''
Used to specify a default returner for all minions. When this option is set, the specified returner needs to be properly configured and the minions will always default to sending returns to this returner. This will also disable the local job cache on the master.
ext_job_cache: redis
event_return
¶New in version 2015.5.0.
Default: ''
Specify the returner to use to log events. A returner may have installation and configuration requirements. Read the returner's documentation.
Note
Not all returners support event returns. Verify that a returner has an event_return() function before configuring this option with a returner.
event_return: cassandra_cql
master_job_cache
¶New in version 2014.7.
Default: 'local_cache'
Specify the returner to use for the job cache. The job cache will only be interacted with from the salt master and therefore does not need to be accessible from the minions.
master_job_cache: redis
enforce_mine_cache
¶Default: False
By default, when the minion_data_cache is disabled the mine will stop working, since it is based on cached data. Enabling this option explicitly enables caching for the mine system only.
enforce_mine_cache: False
max_minions
¶Default: 0
The number of minions the master should allow to connect. Use this to accommodate the number of minions per master if you have different types of hardware serving your minions. The default of 0 means unlimited connections. Please note that this can slow down the authentication process a bit in large setups.
max_minions: 100
con_cache
¶Default: False
If max_minions is used in large installations, the master might experience high-load situations because of having to check the number of connected minions for every authentication. This cache provides the minion-ids of all connected minions to all MWorker-processes and greatly improves the performance of max_minions.
con_cache: True
presence_events
¶Default: False
Causes the master to periodically look for actively connected minions. Presence events are fired on the event bus on a regular interval with a list of connected minions, as well as events with lists of newly connected or disconnected minions. This is a master-only operation that does not send executions to minions. Note, this does not detect minions that connect to a master via localhost.
presence_events: False
roster_file
¶Default: '/etc/salt/roster'
Pass in an alternative location for the salt-ssh roster file.
roster_file: /root/roster
ssh_minion_opts
¶Default: None
Pass in minion option overrides that will be inserted into the SHIM for salt-ssh calls. The local minion config is not used for salt-ssh. Can be overridden on a per-minion basis in the roster (minion_opts):
ssh_minion_opts:
  gpg_keydir: /root/gpg
open_mode
¶Default: False
Open mode is a dangerous security feature. One problem encountered with pki authentication systems is that keys can become "mixed up" and authentication begins to fail. Open mode turns off authentication and tells the master to accept all authentication. This will clean up the pki keys received from the minions. Open mode should not be turned on for general use. Open mode should only be used for a short period of time to clean up pki keys. To turn on open mode set this value to True.
open_mode: False
auto_accept
¶Default: False
Enable auto_accept. This setting will automatically accept all incoming public keys from minions.
auto_accept: False
autosign_timeout
¶New in version 2014.7.0.
Default: 120
Time in minutes that an incoming public key with a matching name found in pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys are removed when the master checks the minion_autosign directory. This method to auto accept minions can be safer than an autosign_file because the keyid record can expire and is limited to being an exact name match. This should still be considered a less than secure option, due to the fact that trust is based on just the requesting minion id.
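For example:
autosign_timeout: 120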
autosign_file
¶Default: not defined
If the autosign_file is specified, incoming keys specified in the autosign_file will be automatically accepted. Matches will be searched for first by string comparison, then by globbing, then by full-string regex matching. This should still be considered a less than secure option, due to the fact that trust is based on just the requesting minion id.
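For example (the path shown is hypothetical):
autosign_file: /etc/salt/autosign.conf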
autoreject_file
¶New in version 2014.1.0.
Default: not defined
Works like autosign_file, but instead allows you to specify minion IDs for which keys will automatically be rejected. Will override both membership in the autosign_file and the auto_accept setting.
client_acl
¶Default: {}
Enable user accounts on the master to execute specific modules. These modules can be expressed as regular expressions.
client_acl:
fred:
- test.ping
- pkg.*
client_acl_blacklist
¶Default: {}
Blacklist users or modules
This example would blacklist all non-sudo users, including root, from running any commands. It would also blacklist any use of the "cmd" module.
This is completely disabled by default.
client_acl_blacklist:
users:
- root
- '^(?!sudo_).*$' # all non sudo users
modules:
- cmd
external_auth
¶Default: {}
The external auth system uses the Salt auth modules to authenticate and validate users to access areas of the Salt system.
external_auth:
pam:
fred:
- test.*
token_expire
¶Default: 43200 (12 hours)
Time (in seconds) for a newly generated token to live.
token_expire: 43200
file_recv
¶Default: False
Allow minions to push files to the master. This is disabled by default, for security purposes.
file_recv: False
master_sign_pubkey
¶Default: False
Sign the master auth-replies with a cryptographic signature of the master's public key. Please see the Multimaster-PKI with Failover Tutorial for how to use these settings.
master_sign_pubkey: True
master_sign_key_name
¶Default: master_sign
The customizable name of the signing-key-pair without suffix.
master_sign_key_name: <filename_without_suffix>
master_pubkey_signature
¶Default: master_pubkey_signature
The name of the file in the master's pki directory that holds the pre-calculated signature of the master's public key.
master_pubkey_signature: <filename>
master_use_pubkey_signature
¶Default: False
Instead of computing the signature for each auth-reply, use a pre-calculated signature. The master_pubkey_signature must also be set for this.
master_use_pubkey_signature: True
rotate_aes_key
¶Default: True
Rotate the salt-master's AES key when a minion public key is deleted with salt-key. This is a very important security setting. Disabling it will allow deleted minions to still listen in on the messages published by the salt-master. Do not disable this unless it is absolutely clear what this does.
rotate_aes_key: True
cython_enable
¶Default: False
Set to true to enable Cython modules (.pyx files) to be compiled on the fly on the Salt master.
cython_enable: False
state_top
¶Default: top.sls
The state system uses a "top" file to tell the minions what environment to use and what modules to use. The state_top file is defined relative to the root of the base environment.
state_top: top.sls
master_tops
¶Default: {}
The master_tops option replaces the deprecated external_nodes option with a pluggable system for the generation of external top data. To gain the capabilities of the classic external_nodes system, use the following configuration:
master_tops:
ext_nodes: <Shell command which returns yaml>
external_nodes
¶Default: None
The external_nodes option allows Salt to gather data that would normally be placed in a top file from an external node controller. The external_nodes option is the executable that will return the ENC data. Remember that Salt will look for external nodes AND top files and combine the results if both are enabled and available!
external_nodes: cobbler-ext-nodes
renderer
¶Default: yaml_jinja
The renderer to use on the minions to render the state data.
renderer: yaml_jinja
failhard
¶Default: False
Set the global failhard flag. This informs all states to stop running states at the moment a single state fails.
failhard: False
state_verbose
¶Default: True
Controls the verbosity of state runs. By default, the results of all states are returned, but setting this value to False will cause salt to only display output for states which either failed or made changes to the minion.
state_verbose: False
state_output
¶Default: full
The state_output setting controls whether the output for each changed state is the full multi-line output ('full') or shortened to a single line ('terse'). If set to 'mixed', the output will be terse unless a state failed, in which case that output will be full. If set to 'changes', the output will be full unless the state didn't change.
state_output: full
yaml_utf8
¶Default: False
Enable extra routines for the YAML renderer, used for states containing UTF characters.
yaml_utf8: False
test
¶Default: False
Set all state calls into test mode, so that they only report the changes that would be made rather than actually making them.
test: False
fileserver_backend
¶Default: ['roots']
Salt supports a modular fileserver backend system. This system allows the salt master to link directly to third party systems to gather and manage the files available to minions. Multiple backends can be configured and will be searched for the requested file in the order in which they are defined here. The default setting only enables the standard backend roots, which is configured using the file_roots option.
Example:
fileserver_backend:
- roots
- git
hash_type
¶Default: md5
The hash_type is the hash to use when discovering the hash of a file on the master server. The default is md5, but sha1, sha224, sha256, sha384, and sha512 are also supported.
hash_type: md5
file_buffer_size
¶Default: 1048576
The buffer size in the file server in bytes.
file_buffer_size: 1048576
file_ignore_regex
¶Default: ''
A regular expression (or a list of expressions) that will be matched against the file path before syncing the modules and states to the minions. This includes files affected by the file.recurse state. For example, if you manage your custom modules and states in subversion and don't want all the '.svn' folders and content synced to your minions, you could set this to '/\.svn($|/)'. By default nothing is ignored.
file_ignore_regex:
- '/\.svn($|/)'
- '/\.git($|/)'
file_ignore_glob
¶Default: ''
A file glob (or list of file globs) that will be matched against the file path before syncing the modules and states to the minions. This is similar to file_ignore_regex above, but works on globs instead of regex. By default nothing is ignored.
file_ignore_glob:
- '*.pyc'
- '*/somefolder/*.bak'
- '*.swp'
file_roots
¶Default:
base:
- /srv/salt
Salt runs a lightweight file server written in ZeroMQ to deliver files to minions. This file server is built into the master daemon and does not require a dedicated port.
The file server works on environments passed to the master. Each environment can have multiple root directories, but the subdirectories in the multiple file roots cannot match, otherwise the downloaded files cannot be reliably guaranteed. A base environment is required to house the top file.
Example:
file_roots:
base:
- /srv/salt
dev:
- /srv/salt/dev/services
- /srv/salt/dev/states
prod:
- /srv/salt/prod/services
- /srv/salt/prod/states
gitfs_remotes
¶Default: []
When using the git fileserver backend at least one git remote needs to be defined. The user running the salt master will need read access to the repo.
The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. Branches and tags are translated into salt environments.
gitfs_remotes:
- git://github.com/saltstack/salt-states.git
- file:///var/git/saltmaster
Note
file:// repos will be treated as a remote and copied into the master's gitfs cache, so only the local refs for those repos will be exposed as fileserver environments.
As of 2014.7.0, it is possible to have per-repo versions of several of the gitfs configuration parameters. For more information, see the GitFS Walkthrough.
gitfs_provider
¶New in version 2014.7.0.
Specify the provider to be used for gitfs. More information can be found in the GitFS Walkthrough.
Valid values are gitpython, pygit2, and dulwich.
gitfs_provider: dulwich
gitfs_ssl_verify
¶Default: True
The gitfs_ssl_verify option specifies whether to ignore SSL certificate errors when contacting the gitfs backend. You might want to set this to false if you're using a git backend that uses a self-signed certificate, but keep in mind that setting this flag to anything other than the default of True is a security concern; you may want to try using the ssh transport instead.
gitfs_ssl_verify: True
gitfs_mountpoint
¶New in version 2014.7.0.
Default: ''
Specifies a path on the salt fileserver from which gitfs remotes are served. Can be used in conjunction with gitfs_root. Can also be configured on a per-remote basis, see here for more info.
gitfs_mountpoint: salt://foo/bar
Note
The salt:// protocol designation can be left off (in other words, foo/bar and salt://foo/bar are equivalent).
gitfs_root
¶Default: ''
Serve files from a subdirectory within the repository, instead of the root. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with gitfs_mountpoint.
gitfs_root: somefolder/otherfolder
Changed in version 2014.7.0: Ability to specify gitfs roots on a per-remote basis was added. See here for more info.
gitfs_base
¶Default: master
Defines which branch/tag should be used as the base environment.
gitfs_base: salt
Changed in version 2014.7.0: Ability to specify the base on a per-remote basis was added. See here for more info.
gitfs_env_whitelist
¶New in version 2014.7.0.
Default: []
Used to restrict which environments are made available. Can speed up state runs if the repos in gitfs_remotes contain many branches/tags. More information can be found in the GitFS Walkthrough.
gitfs_env_whitelist:
- base
- v1.*
- 'mybranch\d+'
gitfs_env_blacklist
¶New in version 2014.7.0.
Default: []
Used to restrict which environments are made available. Can speed up state runs if the repos in gitfs_remotes contain many branches/tags. More information can be found in the GitFS Walkthrough.
gitfs_env_blacklist:
- base
- v1.*
- 'mybranch\d+'
These parameters only currently apply to the pygit2 gitfs provider. Examples of how to use these can be found in the GitFS Walkthrough.
gitfs_user
¶New in version 2014.7.0.
Default: ''
Along with gitfs_password, is used to authenticate to HTTPS remotes.
gitfs_user: git
gitfs_password
¶New in version 2014.7.0.
Default: ''
Along with gitfs_user, is used to authenticate to HTTPS remotes.
This parameter is not required if the repository does not use authentication.
gitfs_password: mypassword
gitfs_insecure_auth
¶New in version 2014.7.0.
Default: False
By default, Salt will not authenticate to an HTTP (non-HTTPS) remote. This parameter enables authentication over HTTP. Enable this at your own risk.
gitfs_insecure_auth: True
gitfs_pubkey
¶New in version 2014.7.0.
Default: ''
Along with gitfs_privkey (and optionally gitfs_passphrase), is used to authenticate to SSH remotes. This parameter (or its per-remote counterpart) is required for SSH remotes.
gitfs_pubkey: /path/to/key.pub
gitfs_privkey
¶New in version 2014.7.0.
Default: ''
Along with gitfs_pubkey (and optionally gitfs_passphrase), is used to authenticate to SSH remotes. This parameter (or its per-remote counterpart) is required for SSH remotes.
gitfs_privkey: /path/to/key
gitfs_passphrase
¶New in version 2014.7.0.
Default: ''
This parameter is optional, required only when the SSH key being used to authenticate is protected by a passphrase.
gitfs_passphrase: mypassphrase
hgfs_remotes
¶New in version 0.17.0.
Default: []
When using the hg fileserver backend at least one mercurial remote needs to be defined. The user running the salt master will need read access to the repo.
The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. Branches and/or bookmarks are translated into salt environments, as defined by the hgfs_branch_method parameter.
hgfs_remotes:
- https://username@bitbucket.org/username/reponame
Note
As of 2014.7.0, it is possible to have per-repo versions of the hgfs_root, hgfs_mountpoint, hgfs_base, and hgfs_branch_method parameters.
For example:
hgfs_remotes:
  - https://username@bitbucket.org/username/repo1:
    - base: saltstates
  - https://username@bitbucket.org/username/repo2:
    - root: salt
    - mountpoint: salt://foo/bar/baz
  - https://username@bitbucket.org/username/repo3:
    - root: salt/states
    - branch_method: mixed
hgfs_branch_method
¶New in version 0.17.0.
Default: branches
Defines the objects that will be used as fileserver environments.
branches - Only branches and tags will be used
bookmarks - Only bookmarks and tags will be used
mixed - Branches, bookmarks, and tags will be used
hgfs_branch_method: mixed
Note
Starting in version 2014.1.0, the value of the hgfs_base parameter defines which branch is used as the base environment, allowing for a base environment to be used with an hgfs_branch_method of bookmarks. Prior to this release, the default branch will be used as the base environment.
hgfs_mountpoint
¶New in version 2014.7.0.
Default: ''
Specifies a path on the salt fileserver from which hgfs remotes are served. Can be used in conjunction with hgfs_root. Can also be configured on a per-remote basis, see here for more info.
hgfs_mountpoint: salt://foo/bar
Note
The salt:// protocol designation can be left off (in other words, foo/bar and salt://foo/bar are equivalent).
hgfs_root
¶New in version 0.17.0.
Default: ''
Serve files from a subdirectory within the repository, instead of the root. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with hgfs_mountpoint.
hgfs_root: somefolder/otherfolder
Changed in version 2014.7.0: Ability to specify hgfs roots on a per-remote basis was added. See here for more info.
hgfs_base
¶New in version 2014.1.0.
Default: default
Defines which branch should be used as the base environment. Change this if hgfs_branch_method is set to bookmarks to specify which bookmark should be used as the base environment.
hgfs_base: salt
hgfs_env_whitelist
¶New in version 2014.7.0.
Default: []
Used to restrict which environments are made available. Can speed up state runs if your hgfs remotes contain many branches/bookmarks/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire environment name.
If used, only branches/bookmarks/tags which match one of the specified expressions will be exposed as fileserver environments.
If used in conjunction with hgfs_env_blacklist, then the subset of branches/bookmarks/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments.
hgfs_env_whitelist:
- base
- v1.*
- 'mybranch\d+'
hgfs_env_blacklist
¶New in version 2014.7.0.
Default: []
Used to restrict which environments are made available. Can speed up state runs if your hgfs remotes contain many branches/bookmarks/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire environment name.
If used, branches/bookmarks/tags which match one of the specified expressions will not be exposed as fileserver environments.
If used in conjunction with hgfs_env_whitelist, then the subset of branches/bookmarks/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments.
hgfs_env_blacklist:
- base
- v1.*
- 'mybranch\d+'
svnfs_remotes
¶New in version 0.17.0.
Default: []
When using the svn fileserver backend at least one subversion remote needs to be defined. The user running the salt master will need read access to the repo.
The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. The trunk, branches, and tags become environments, with the trunk being the base environment.
svnfs_remotes:
- svn://foo.com/svn/myproject
Note
As of 2014.7.0, it is possible to have per-repo versions of the svnfs_root, svnfs_mountpoint, svnfs_trunk, svnfs_branches, and svnfs_tags configuration parameters.
For example:
svnfs_remotes:
  - svn://foo.com/svn/project1
  - svn://foo.com/svn/project2:
    - root: salt
    - mountpoint: salt://foo/bar/baz
  - svn://foo.com/svn/project3:
    - root: salt/states
    - branches: branch
    - tags: tag
svnfs_mountpoint
¶New in version 2014.7.0.
Default: ''
Specifies a path on the salt fileserver from which svnfs remotes are served. Can be used in conjunction with svnfs_root. Can also be configured on a per-remote basis, see here for more info.
svnfs_mountpoint: salt://foo/bar
Note
The salt:// protocol designation can be left off (in other words, foo/bar and salt://foo/bar are equivalent).
svnfs_root
¶New in version 0.17.0.
Default: ''
Serve files from a subdirectory within the repository, instead of the root. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with svnfs_mountpoint.
svnfs_root: somefolder/otherfolder
Changed in version 2014.7.0: Ability to specify svnfs roots on a per-remote basis was added. See here for more info.
svnfs_trunk
¶New in version 2014.7.0.
Default: trunk
Path relative to the root of the repository where the trunk is located. Can also be configured on a per-remote basis, see here for more info.
svnfs_trunk: trunk
svnfs_branches
¶New in version 2014.7.0.
Default: branches
Path relative to the root of the repository where the branches are located. Can also be configured on a per-remote basis, see here for more info.
svnfs_branches: branches
svnfs_tags
¶New in version 2014.7.0.
Default: tags
Path relative to the root of the repository where the tags are located. Can also be configured on a per-remote basis, see here for more info.
svnfs_tags: tags
svnfs_env_whitelist
¶New in version 2014.7.0.
Default: []
Used to restrict which environments are made available. Can speed up state runs if your svnfs remotes contain many branches/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire environment name.
If used, only branches/tags which match one of the specified expressions will be exposed as fileserver environments.
If used in conjunction with svnfs_env_blacklist, then the subset of branches/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments.
svnfs_env_whitelist:
- base
- v1.*
- 'mybranch\d+'
svnfs_env_blacklist
¶New in version 2014.7.0.
Default: []
Used to restrict which environments are made available. Can speed up state runs if your svnfs remotes contain many branches/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire environment name.
If used, branches/tags which match one of the specified expressions will not be exposed as fileserver environments.
If used in conjunction with svnfs_env_whitelist, then the subset of branches/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments.
svnfs_env_blacklist:
- base
- v1.*
- 'mybranch\d+'
minionfs_env
¶New in version 2014.7.0.
Default: base
Environment from which MinionFS files are made available.
minionfs_env: minionfs
minionfs_mountpoint
¶New in version 2014.7.0.
Default: ''
Specifies a path on the salt fileserver from which minionfs files are served.
minionfs_mountpoint: salt://foo/bar
Note
The salt:// protocol designation can be left off (in other words, foo/bar and salt://foo/bar are equivalent).
minionfs_whitelist
¶New in version 2014.7.0.
Default: []
Used to restrict which minions' pushed files are exposed via minionfs. If using a regular expression, the expression must match the entire minion ID.
If used, only the pushed files from minions which match one of the specified expressions will be exposed.
If used in conjunction with minionfs_blacklist, then the subset of hosts which match the whitelist but do not match the blacklist will be exposed.
minionfs_whitelist:
- base
- v1.*
- 'mybranch\d+'
minionfs_blacklist
¶New in version 2014.7.0.
Default: []
Used to restrict which minions' pushed files are exposed via minionfs. If using a regular expression, the expression must match the entire minion ID.
If used, the pushed files from minions which match one of the specified expressions will not be exposed.
If used in conjunction with minionfs_whitelist, then the subset of hosts which match the whitelist but do not match the blacklist will be exposed.
minionfs_blacklist:
- base
- v1.*
- 'mybranch\d+'
pillar_roots
¶Default:
base:
- /srv/pillar
Set the environments and directories used to hold pillar sls data. This configuration is the same as file_roots:
pillar_roots:
base:
- /srv/pillar
dev:
- /srv/pillar/dev
prod:
- /srv/pillar/prod
ext_pillar
¶The ext_pillar option allows for any number of external pillar interfaces to be called when populating pillar data. The configuration is based on ext_pillar functions. The available ext_pillar functions can be found here:
https://github.com/saltstack/salt/blob/develop/salt/pillar
By default, the ext_pillar interface is not configured to run.
Default: None
ext_pillar:
- hiera: /etc/hiera.yaml
- cmd_yaml: cat /etc/salt/yaml
- reclass:
inventory_base_uri: /etc/reclass
There are additional details at Pillars.
ext_pillar_first
¶New in version 2015.5.0.
The ext_pillar_first option allows for external pillar sources to populate before file system pillar. This allows for targeting file system pillar from ext_pillar.
Default: False
ext_pillar_first: False
pillar_source_merging_strategy
¶New in version 2014.7.0.
Default: smart
The pillar_source_merging_strategy option allows you to configure the merging strategy between different sources. It accepts four values:
recurse:
Will recursively merge mappings of data. For example, these two sources:
foo: 42
bar:
  element1: True

bar:
  element2: True
baz: quux
will be merged as:
foo: 42
bar:
  element1: True
  element2: True
baz: quux
aggregate:
instructs aggregation of elements between sources that use the #!yamlex renderer.
For example, these two documents:
#!yamlex
foo: 42
bar: !aggregate {
  element1: True
}
baz: !aggregate quux

#!yamlex
bar: !aggregate {
  element2: True
}
baz: !aggregate quux2
will be merged as:
foo: 42
bar:
  element1: True
  element2: True
baz:
  - quux
  - quux2
overwrite:
Will use the behaviour of the 2014.1 branch and earlier. Overwrites elements according to the order in which they are processed.
First pillar processed:
A:
  first_key: blah
  second_key: blah
Second pillar processed:
A:
  third_key: blah
  fourth_key: blah
will be merged as:
A:
  third_key: blah
  fourth_key: blah
smart (default):
Guesses the best strategy based on the "renderer" setting.
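To select a strategy explicitly, set the option in the master config; for example:
pillar_source_merging_strategy: recurse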
A Salt syndic is a Salt master used to pass commands from a higher Salt master to minions below the syndic. Using the syndic is simple. If this is a master that will have syndic server(s) below it, set the "order_masters" setting to True.
If this is a master that will be running a syndic daemon for passthrough, the "syndic_master" setting needs to be set to the location of the master server.
Do not forget that the syndic, in other words, shares its ID and PKI_DIR with the local minion.
order_masters
¶Default: False
Extra data needs to be sent with publications if the master is controlling a lower level master via a syndic minion. If this is the case, the order_masters value must be set to True.
order_masters: False
syndic_master
¶Default: None
If this master will be running a salt-syndic to connect to a higher level master, specify the higher level master with this configuration value.
syndic_master: masterofmasters
syndic_master_port
¶Default: 4506
If this master will be running a salt-syndic to connect to a higher level master, specify the higher level master port with this configuration value.
syndic_master_port: 4506
syndic_pidfile
¶Default: salt-syndic.pid
If this master will be running a salt-syndic to connect to a higher level master, specify the pidfile of the syndic daemon.
syndic_pidfile: syndic.pid
syndic_log_file
¶Default: syndic.log
If this master will be running a salt-syndic to connect to a higher level master, specify the log_file of the syndic daemon.
syndic_log_file: salt-syndic.log
Salt minions can send commands to other minions, but only if the minion is allowed to. By default "Peer Publication" is disabled, and when enabled it is enabled for specific minions and specific commands. This allows secure compartmentalization of commands based on individual minions.
peer
¶Default: {}
The configuration uses regular expressions to match minions and then a list of regular expressions to match functions. The following will allow the minion authenticated as foo.example.com to execute functions from the test and pkg modules.
peer:
foo.example.com:
- test.*
- pkg.*
This will allow all minions to execute all commands:
peer:
.*:
- .*
This is not recommended, since it would allow anyone who gets root on any single minion to instantly have root on all of the minions!
By adding an additional layer you can limit the target hosts in addition to the accessible commands:
peer:
  foo.example.com:
    'db*':
      - test.*
      - pkg.*
peer_run
¶Default: {}
The peer_run option is used to open up runners on the master to access from the minions. The peer_run configuration matches the format of the peer configuration.
The following example would allow foo.example.com to execute the manage.up runner:
peer_run:
foo.example.com:
- manage.up
log_file
¶Default: /var/log/salt/master
The master log can be sent to a regular file, local path name, or network location. See also log_file.
Examples:
log_file: /var/log/salt/master
log_file: file:///dev/log
log_file: udp://loghost:10514
log_level
¶Default: warning
The level of messages to send to the console. See also log_level.
log_level: warning
log_level_logfile
¶Default: warning
The level of messages to send to the log file. See also log_level_logfile. When it is not set explicitly it will inherit the level set by the log_level option.
log_level_logfile: warning
log_datefmt
¶Default: %H:%M:%S
The date and time format used in console log messages. See also log_datefmt.
log_datefmt: '%H:%M:%S'
log_datefmt_logfile
¶Default: %Y-%m-%d %H:%M:%S
The date and time format used in log file messages. See also log_datefmt_logfile.
log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
log_fmt_console
¶Default: [%(levelname)-8s] %(message)s
The format of the console logging messages. See also log_fmt_console.
log_fmt_console: '[%(levelname)-8s] %(message)s'
log_fmt_logfile
¶Default: %(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s
The format of the log file logging messages. See also log_fmt_logfile.
log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'
log_granular_levels
¶Default: {}
This can be used to control logging levels more specifically. See also log_granular_levels.
Node groups allow for logical groupings of minion nodes. A group consists of a group name and a compound target.
nodegroups:
group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
group2: 'G@os:Debian and foo.domain.com'
group3: 'G@os:Debian and N@group1'
More information on using nodegroups can be found here.
range_server
¶Default: ''
The range server (and optional port) that serves your cluster information. See https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec
range_server: range:80
default_include
¶Default: master.d/*.conf
The master can include configuration from other files. By default the master will automatically include all config files from master.d/*.conf, where master.d is relative to the directory of the master configuration file.
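For example, to make the default explicit:
default_include: master.d/*.conf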
include
¶Default: not defined
The master can include configuration from other files. To enable this, pass a list of paths to this option. The paths can be either relative or absolute; if relative, they are considered to be relative to the directory the main configuration file lives in. Paths can make use of shell-style globbing. If no files are matched by a path passed to this option then the master will log a warning message.
# Include files from a master.d directory in the same
# directory as the master config file
include: master.d/*
# Include a single extra file into the configuration
include: /etc/roles/webserver
# Include several files and the master.d directory
include:
- extra_config
- master.d/*
- /etc/roles/webserver
win_repo
¶Default: /srv/salt/win/repo
Location of the repo on the master
win_repo: '/srv/salt/win/repo'
win_repo_mastercachefile
¶Default: /srv/salt/win/repo/winrepo.p
Location of the repo cachefile on the master.
win_repo_mastercachefile: '/srv/salt/win/repo/winrepo.p'
win_gitrepos
¶Default: ''
List of git repositories to include with the local repo.
win_gitrepos:
- 'https://github.com/saltstack/salt-winrepo.git'
The Salt system is amazingly simple and easy to configure. The two components of the Salt system each have a respective configuration file. The salt-master is configured via the master configuration file, and the salt-minion is configured via the minion configuration file.
The Salt Minion configuration is very simple. Typically, the only value that needs to be set is the master value so the minion knows where to locate its master.
By default, the salt-minion configuration will be in /etc/salt/minion. A notable exception is FreeBSD, where the configuration will be in /usr/local/etc/salt/minion.
master
¶Default: salt
The hostname or IPv4 address of the master.
master: salt
The option can also be set to a list of masters, enabling multi-master mode.
master:
- address1
- address2
Changed in version 2014.7.0: The master can be dynamically configured. The master value can be set to a module function which will be executed, and whose return value is assumed to be the IP or hostname of the desired master. If a function is being specified, then the master_type option must be set to func, to tell the minion that the value is a function to be run and not a fully-qualified domain name.
master: module.function
master_type: func
In addition, instead of using multi-master mode, the minion can be configured to use the list of master addresses as a failover list, trying the first address, then the second, etc. until the minion successfully connects. To enable this behavior, set master_type to failover:
master:
- address1
- address2
master_type: failover
master_type
¶New in version 2014.7.0.
Default: str
The type of the master variable. Can be either func or failover.
If the master needs to be dynamically assigned by executing a function instead of reading in the static master value, set this to func. This can be used to manage the minion's master setting from an execution module: by simply changing the algorithm in the module to return a new master IP or FQDN and restarting the minion, the minion will connect to the new master.
master_type: func
If this option is set to failover, master must be a list of master addresses. The minion will then try each master in the order specified in the list until it successfully connects.
master_type: failover
master_shuffle
¶New in version 2014.7.0.
Default: False
If master is a list of addresses, shuffle them before trying to connect to distribute the minions over all available masters. This uses Python's random.shuffle method.
master_shuffle: True
retry_dns
¶Default: 30
Set the number of seconds to wait before attempting to resolve the master hostname if name resolution fails. Defaults to 30 seconds. Set to zero if the minion should shut down and not retry.
retry_dns: 30
master_port
¶Default: 4506
The port of the master ret server, this needs to coincide with the ret_port option on the Salt master.
master_port: 4506
sudo_runas
¶Default: None
The user to run salt remote execution commands as via sudo. If this option is enabled then sudo will be used to change the active user executing the remote command. If enabled, the user will need to be allowed access via the sudoers file for the user that the salt minion is configured to run as. The most common option would be to use the root user. If this option is set the user option should also be set to a non-root user. If migrating from a root minion to a non-root minion, the minion cache should be cleared and the ownership of the minion pki directory will need to be changed to the new user.
sudo_runas: root
pidfile
¶Default: /var/run/salt-minion.pid
The location of the daemon's process ID file
pidfile: /var/run/salt-minion.pid
root_dir
¶Default: /
This directory is prepended to the following options: pki_dir, cachedir, log_file, sock_dir, and pidfile.
root_dir: /
pki_dir
¶Default: /etc/salt/pki
The directory used to store the minion's public and private keys.
pki_dir: /etc/salt/pki
id
¶Default: the system's hostname
See also
The Setting up a Salt Minion section contains detailed information on how the hostname is determined.
Explicitly declare the id for this minion to use. Since Salt uses detached ids it is possible to run multiple minions on the same machine but with different ids.
id: foo.bar.com
append_domain
¶Default: None
Append a domain to a hostname in the event that it does not exist. This is useful for systems where socket.getfqdn() does not actually result in a FQDN (for instance, Solaris).
append_domain: foo.org
cachedir
¶Default: /var/cache/salt
The location for minion cache data.
This directory may contain sensitive data and should be protected accordingly.
cachedir: /var/cache/salt
verify_env
¶Default: True
Verify and set permissions on configuration directories at startup.
verify_env: True
Note
When marked as True the verify_env option requires WRITE access to the configuration directory (/etc/salt/). In certain situations such as mounting /etc/salt/ as read-only for templating this will create a stack trace when state.highstate is called.
cache_jobs
¶Default: False
The minion can locally cache the return data from jobs sent to it. This can be a good way to keep track of the minion side of the jobs the minion has executed. By default this feature is disabled; to enable it, set cache_jobs to True.
cache_jobs: False
sock_dir
¶Default: /var/run/salt/minion
The directory where Unix sockets will be kept.
sock_dir: /var/run/salt/minion
backup_mode
¶Default: []
Backup files replaced by file.managed and file.recurse under cachedir.
backup_mode: minion
acceptance_wait_time
¶Default: 10
The number of seconds to wait until attempting to re-authenticate with the master.
acceptance_wait_time: 10
random_reauth_delay
¶When the master key changes, the minion will try to re-auth itself to receive the new master key. In larger environments this can cause a syn-flood on the master because all minions try to re-auth immediately. To prevent this and have a minion wait for a random amount of time, use this optional parameter. The wait-time will be a random number of seconds between 0 and the defined value.
random_reauth_delay: 60
acceptance_wait_time_max
¶Default: None
The maximum number of seconds to wait until attempting to re-authenticate with the master. If set, the wait will increase by acceptance_wait_time seconds each iteration.
acceptance_wait_time_max: None
recon_default
¶Default: 1000
The interval in milliseconds that the socket should wait before trying to reconnect to the master (1000ms = 1 second).
recon_default: 1000
recon_max
¶Default: 10000
The maximum time a socket should wait. Each interval the time to wait is calculated by doubling the previous time. If recon_max is reached, it starts again at the recon_default.
recon_max: 10000
recon_randomize
¶Default: True
Generate a random wait time on minion start. The wait time will be a random value between recon_default and recon_max. Having all minions reconnect with the same recon_default and recon_max value kind of defeats the purpose of being able to change these settings. If all minions have the same values and the setup is quite large (several thousand minions), they will still flood the master. The desired behavior is to have a time-frame within which all minions try to reconnect.
recon_randomize: True
dns_check
¶Default: True
When healing, a dns_check is run. This is to make sure that the originally resolved dns has not changed. If this is something that does not happen in your environment, set this value to False.
dns_check: True
cache_sreqs
¶Default: True
The connection to the master ret_port is kept open. When set to False, the minion creates a new connection for every return to the master.
cache_sreqs: True
ipc_mode
¶Default: ipc
Windows platforms lack POSIX IPC and must rely on slower TCP based inter-process communications. Set ipc_mode to tcp on such systems.
ipc_mode: ipc
disable_modules
¶Default: []
(all modules are enabled by default)
The event may occur in which the administrator desires that a minion should not be able to execute a certain module. The sys module is built into the minion and cannot be disabled.
This setting can also tune the minion: because all modules are loaded into RAM, disabling modules will lower the minion's RAM footprint.
disable_modules:
- test
- solr
disable_returners
¶Default: []
(all returners are enabled by default)
If certain returners should be disabled, this is the place:
disable_returners:
- mongo_return
module_dirs
¶Default: []
A list of extra directories to search for Salt modules
module_dirs:
- /var/lib/salt/modules
returner_dirs
¶Default: []
A list of extra directories to search for Salt returners
returner_dirs:
- /var/lib/salt/returners
states_dirs
¶Default: []
A list of extra directories to search for Salt states
states_dirs:
- /var/lib/salt/states
grains_dirs
¶Default: []
A list of extra directories to search for Salt grains
grains_dirs:
- /var/lib/salt/grains
render_dirs
¶Default: []
A list of extra directories to search for Salt renderers
render_dirs:
- /var/lib/salt/renderers
cython_enable
¶Default: False
Set this value to true to enable auto-loading and compiling of .pyx modules. This setting requires that gcc and cython are installed on the minion.
cython_enable: False
providers
¶Default: (empty)
A module provider can be statically overwritten or extended for the minion via the providers option. This can be done on an individual basis in an SLS file, or globally here in the minion config, like below.
providers:
service: systemd
renderer
¶Default: yaml_jinja
The default renderer used for local state executions
renderer: yaml_jinja
state_verbose
¶Default: False
state_verbose allows for the data returned from the minion to be more verbose. Normally only states that fail or states that have changes are returned, but setting state_verbose to True will return all states that were checked.
state_verbose: True
state_output
¶Default: full
The state_output setting controls how much of the state output is displayed: if set to 'full' the full multi-line output is shown for each changed state, while 'terse' shortens the output to a single line per state.
state_output: full
autoload_dynamic_modules
¶Default: True
autoload_dynamic_modules turns on automatic loading of modules found in the
environments on the master. This is turned on by default; to turn off
auto-loading of modules when states run, set this value to False.
autoload_dynamic_modules: True
clean_dynamic_modules
¶Default: True
clean_dynamic_modules keeps the dynamic modules on the minion in sync with
the dynamic modules on the master. This means that if a dynamic module is
not on the master it will be deleted from the minion. By default this is
enabled; it can be disabled by changing this value to False.
clean_dynamic_modules: True
environment
¶Default: None
Normally the minion is not isolated to any single environment on the master when running states, but the environment can be isolated on the minion side by statically setting it. Remember that the recommended way to manage environments is to isolate via the top file.
environment: None
file_client
¶Default: remote
The client defaults to looking on the master server for files, but can be
directed to look on the minion by setting this parameter to local.
file_client: remote
use_master_when_local
¶Default: False
When using a local file_client, this parameter is used to allow
the client to connect to a master for remote execution.
use_master_when_local: False
file_roots
¶Default:
base:
- /srv/salt
When using a local file_client, this parameter is used to set up
the fileserver's environments. This parameter operates identically to the
master config parameter of the same name.
file_roots:
base:
- /srv/salt
dev:
- /srv/salt/dev/services
- /srv/salt/dev/states
prod:
- /srv/salt/prod/services
- /srv/salt/prod/states
hash_type
¶Default: md5
The hash_type is the hash to use when discovering the hash of a file on the local fileserver. The default is md5, but sha1, sha224, sha256, sha384, and sha512 are also supported.
hash_type: md5
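For illustration, the discovery described above amounts to hashing the file with the configured algorithm. The following sketch (not Salt internals) uses Python's hashlib, which supports all of the algorithms listed:
import hashlib

def file_hash(path, algo='md5'):
    # Stream the file in chunks so large files do not exhaust memory.
    h = hashlib.new(algo)
    with open(path, 'rb') as fp:
        for chunk in iter(lambda: fp.read(65536), b''):
            h.update(chunk)
    return h.hexdigest()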
pillar_roots
¶Default:
base:
- /srv/pillar
When using a local file_client, this parameter is used to set up
the pillar environments.
pillar_roots:
base:
- /srv/pillar
dev:
- /srv/pillar/dev
prod:
- /srv/pillar/prod
open_mode
¶Default: False
Open mode can be used to clean out the PKI key received from the Salt master: turn on open mode, restart the minion, then turn off open mode and restart the minion to clean the keys.
open_mode: False
verify_master_pubkey_sign
¶Default: False
Enables verification of the master-public-signature returned by the master in auth-replies. Please see the Multimaster-PKI with Failover Tutorial on how to configure this properly.
New in version 2014.7.0.
verify_master_pubkey_sign: True
If this is set to True, master_sign_pubkey must also be set to True
in the master configuration file.
master_sign_key_name
¶Default: master_sign
The filename without the .pub suffix of the public key that should be used for verifying the signature from the master. The file must be located in the minion's pki directory.
New in version 2014.7.0.
master_sign_key_name: <filename_without_suffix>
always_verify_signature
¶Default: False
If verify_master_pubkey_sign is enabled, the signature is only verified
if the public key of the master changes. If the signature should always be
verified, this can be set to True.
New in version 2014.7.0.
always_verify_signature: True
multiprocessing
¶Default: True
Disable multiprocessing support. By default, when a minion receives a publication a new process is spawned and the command is executed therein.
multiprocessing: True
log_file
¶Default: /var/log/salt/minion
The minion log can be sent to a regular file, local path name, or network
location. See also log_file.
Examples:
log_file: /var/log/salt/minion
log_file: file:///dev/log
log_file: udp://loghost:10514
log_level
¶Default: warning
The level of messages to send to the console. See also log_level.
log_level: warning
log_level_logfile
¶Default: warning
The level of messages to send to the log file. See also
log_level_logfile. When it is not set explicitly it will inherit the
level set by the log_level option.
log_level_logfile: warning
log_datefmt
¶Default: %H:%M:%S
The date and time format used in console log messages. See also
log_datefmt.
log_datefmt: '%H:%M:%S'
log_datefmt_logfile
¶Default: %Y-%m-%d %H:%M:%S
The date and time format used in log file messages. See also
log_datefmt_logfile.
log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
log_fmt_console
¶Default: [%(levelname)-8s] %(message)s
The format of the console logging messages. See also
log_fmt_console.
log_fmt_console: '[%(levelname)-8s] %(message)s'
log_fmt_logfile
¶Default: %(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s
The format of the log file logging messages. See also
log_fmt_logfile.
log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'
log_granular_levels
¶Default: {}
This can be used to control logging levels more specifically. See also
log_granular_levels.
failhard
¶Default: False
Set the global failhard flag. This informs all states to stop running states at the moment a single state fails.
failhard: False
default_include
¶Default: minion.d/*.conf
The minion can include configuration from other files. By default the minion will automatically include all config files from minion.d/*.conf, where minion.d is relative to the directory of the minion configuration file.
include
¶Default: not defined
The minion can include configuration from other files. To enable this, pass a list of paths to this option. The paths can be either relative or absolute; if relative, they are considered to be relative to the directory the main minion configuration file lives in. Paths can make use of shell-style globbing. If no files are matched by a path passed to this option then the minion will log a warning message.
# Include files from a minion.d directory in the same
# directory as the minion config file
include: minion.d/*.conf
# Include a single extra file into the configuration
include: /etc/roles/webserver
# Include several files and the minion.d directory
include:
- extra_config
- minion.d/*
- /etc/roles/webserver
These options control how salt.modules.saltutil.update() works with esky
frozen apps. For more information look at https://github.com/cloudmatrix/esky/.
update_url
¶Default: False
(Update feature is disabled)
The url to use when looking for application updates. Esky depends on directory listings to search for new versions. A webserver running on your Master is a good starting point for most setups.
update_url: 'http://salt.example.com/minion-updates'
update_restart_services
¶Default: []
(service restarting on update is disabled)
A list of services to restart when the minion software is updated. This would typically just be a list containing the minion's service name, but you may have other services that need to go with it.
update_restart_services: ['salt-minion']
While the default setup runs the master and minion as the root user, some may consider it an extra measure of security to run the master as a non-root user. Keep in mind that doing so does not change the master's capability to access minions as the user they are running as. Due to this, many feel that running the master as a non-root user does not grant any real security advantage, which is why the master has remained root by default.
Note
Some of Salt's operations cannot execute correctly when the master is not running as root, specifically the pam external auth system, as this system needs root access to check authentication.
As of Salt 0.9.10 it is possible to run Salt as a non-root user. This can be
done by setting the user parameter in the master configuration file and
restarting the salt-master service.
The minion has its own user parameter as well, but running the
minion as an unprivileged user will keep it from making changes to things like
users, installed packages, etc. unless access controls (sudo, etc.) are set up
on the minion to permit the non-root user to make the needed changes.
In order to allow Salt to successfully run as a non-root user, ownership and permissions need to be set such that the desired user can read from and write to the following directories (and their subdirectories, where applicable): /etc/salt, /var/cache/salt, /var/log/salt, and /var/run/salt.
Ownership can be easily changed with chown, like so:
# chown -R user /etc/salt /var/cache/salt /var/log/salt /var/run/salt
The Salt project tries to make logging work for you and to help us solve any issues you might find along the way.
If you want more information on the nitty-gritty of Salt's logging system, please head over to the logging development document. If all you're after is Salt's logging configuration, please continue reading.
log_file
¶The log records can be sent to a regular file, local path name, or network location.
Remote logging works best when configured to use rsyslogd(8) (e.g.: file:///dev/log),
with rsyslogd(8) configured for network logging. The format for remote addresses is:
<file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>.
Default: Dependent on the binary being executed; for example, for salt-master,
/var/log/salt/master.
Examples:
log_file: /var/log/salt/master
log_file: /var/log/salt/minion
log_file: file:///dev/log
log_file: udp://loghost:10514
log_level
¶Default: warning
The level of log record messages to send to the console.
One of all, garbage, trace, debug, info, warning, error, critical, quiet.
log_level: warning
log_level_logfile
¶Default: warning
The level of messages to send to the log file.
One of all, garbage, trace, debug, info, warning, error, critical, quiet.
log_level_logfile: warning
log_datefmt
¶Default: %H:%M:%S
The date and time format used in console log messages. Allowed date/time
formatting can be seen on time.strftime.
log_datefmt: '%H:%M:%S'
log_datefmt_logfile
¶Default: %Y-%m-%d %H:%M:%S
The date and time format used in log file messages. Allowed date/time
formatting can be seen on time.strftime.
log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
log_fmt_console
¶Default: [%(levelname)-8s] %(message)s
The format of the console logging messages. All standard python logging LogRecord attributes can be used. Salt also provides these custom LogRecord attributes to colorize console log output:
'%(colorlevel)s' # log level name colorized by level
'%(colorname)s' # colorized module name
'%(colorprocess)s' # colorized process number
'%(colormsg)s' # log message colorized by level
Note
The %(colorlevel)s, %(colorname)s, and %(colorprocess)s LogRecord attributes
also include padding and enclosing brackets, [ and ], to match the default
values of their collateral non-colorized LogRecord attributes.
log_fmt_console: '[%(levelname)-8s] %(message)s'
log_fmt_logfile
¶Default: %(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s
The format of the log file logging messages. All standard python logging
LogRecord attributes can be used. Salt also provides these custom LogRecord
attributes that include padding and enclosing brackets [ and ]:
'%(bracketlevel)s' # equivalent to [%(levelname)-8s]
'%(bracketname)s' # equivalent to [%(name)-17s]
'%(bracketprocess)s' # equivalent to [%(process)5s]
log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'
log_granular_levels
¶Default: {}
This can be used to control logging levels more specifically. The example sets
the main salt library at the 'warning' level, but sets salt.modules to log at
the 'debug' level:
log_granular_levels:
'salt': 'warning'
'salt.modules': 'debug'
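For reference, the stdlib equivalent of the example above would look roughly like this (a sketch; Salt applies these levels itself when it reads the option):
import logging

# 'salt' stays at WARNING while 'salt.modules' logs at DEBUG.
logging.getLogger('salt').setLevel(logging.WARNING)
logging.getLogger('salt.modules').setLevel(logging.DEBUG)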
Besides the internal logging handlers used by Salt, there are some external handlers which can be used; see the external logging handlers document.
logstash_mod | Logstash Logging Handler
sentry_mod | Sentry Logging Handler
Salt comes with a simple file server suitable for distributing files to the Salt minions. The file server is a stateless ZeroMQ server that is built into the Salt master.
The main intent of the Salt file server is to present files for use in the Salt state system. With this said, the Salt file server can be used for any general file transfer from the master to the minions.
In Salt 0.12.0, the modular fileserver was introduced. This feature added the
ability for the Salt Master to integrate different file server backends. File
server backends allow the Salt file server to act as a transparent bridge to
external resources. A good example of this is the git
backend, which allows Salt to serve files sourced from
one or more git repositories, but there are several others as well. A full
list of Salt's fileserver backends appears below.
Fileserver backends can be enabled with the fileserver_backend
option.
fileserver_backend:
- git
See the documentation for each backend to find the
correct value to add to fileserver_backend
in order to enable
them.
If fileserver_backend
is not defined in the Master config file,
Salt will use the roots
backend, but the
fileserver_backend
option supports multiple backends. When more
than one backend is in use, the files from the enabled backends are merged into a
single virtual filesystem. When a file is requested, the backends will be
searched in order for that file, and the first backend to match will be the one
which returns the file.
fileserver_backend:
- roots
- git
With this configuration, the environments and files defined in the
file_roots
parameter will be searched first, and if the file is
not found then the git repositories defined in gitfs_remotes
will be searched.
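The first-match behavior can be sketched as a simple loop. The backend objects and their find_file() method below are hypothetical stand-ins, not Salt's real fileserver API:
def find_file(path, backends):
    # Backends are consulted in the order they are configured.
    for backend in backends:
        result = backend.find_file(path)
        if result is not None:
            # The first backend to match wins; later backends are
            # never consulted for this path.
            return result
    return None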
Just as the order of the values in fileserver_backend
matters,
so too does the order in which different sources are defined within a
fileserver environment. For example, given the below file_roots
configuration, if both /srv/salt/dev/foo.txt
and /srv/salt/prod/foo.txt
exist on the Master, then salt://foo.txt
would point to
/srv/salt/dev/foo.txt
in the dev
environment, but it would point to
/srv/salt/prod/foo.txt
in the base
environment.
file_roots:
base:
- /srv/salt/prod
qa:
- /srv/salt/qa
- /srv/salt/prod
dev:
- /srv/salt/dev
- /srv/salt/qa
- /srv/salt/prod
Similarly, when using the git
backend, if both
repositories defined below have a hotfix23
branch/tag, and both of them
also contain the file bar.txt
in the root of the repository at that
branch/tag, then salt://bar.txt
in the hotfix23
environment would be
served from the first
repository.
gitfs_remotes:
- https://mydomain.tld/repos/first.git
- https://mydomain.tld/repos/second.git
Note
Environments map differently based on the fileserver backend. For instance,
the mappings are explicitly defined in roots
backend, while in the VCS backends (git
,
hg
, svn
) the
environments are created from branches/tags/bookmarks/etc. For the
minion
backend, the files are all in a
single environment, which is specified by the minionfs_env
option.
See the documentation for each backend for a more detailed explanation of how environments are mapped.
New in version 0.9.5.
Salt Python modules can be distributed automatically via the Salt file server.
Under the root of any environment defined via the file_roots
option on the master server, directories corresponding to the type of module
can be used.
The directories are prepended with an underscore:
_modules
_grains
_renderers
_returners
_states
The contents of these directories need to be synced over to the minions after Python modules have been created in them. There are a number of ways to sync the modules.
The minion configuration contains an option autoload_dynamic_modules
which defaults to True. This option makes the state system refresh all
dynamic modules when states are run. To disable this behavior set
autoload_dynamic_modules
to False in the minion config.
When dynamic modules are autoloaded via states, modules only pertinent to the environments matched in the master's top file are downloaded.
This is important to remember: because modules can be manually loaded from any specific environment, environment-specific modules will be loaded when a state run is executed.
The saltutil module has a number of functions that can be used to sync all
or specific dynamic modules. The saltutil module function saltutil.sync_all
will sync all module types over to a minion. For more information see:
salt.modules.saltutil
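For example, a sync can be triggered from Python on the master via the LocalClient interface (a sketch; this is equivalent to running salt '*' saltutil.sync_all from the command line and must be executed on the master with sufficient privileges):
import salt.client

# Ask all minions to sync every dynamic module type.
client = salt.client.LocalClient()
ret = client.cmd('*', 'saltutil.sync_all')
print(ret)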
The Salt file server is a high performance file server written in ZeroMQ. It manages large files quickly and with little overhead, and has been optimized to handle small files in an extremely efficient manner.
The Salt file server is an environment aware file server. This means that files can be allocated within many root directories and accessed by specifying both the file path and the environment to search. The individual environments can span across multiple directory roots to create overlays and to allow for files to be organized in many flexible ways.
The Salt file server defaults to the mandatory base
environment. This
environment MUST be defined and is used to download files when no
environment is specified.
Environments allow for files and sls data to be logically separated, but environments are not isolated from each other. This allows for logical isolation of environments by the engineer using Salt, but also allows for information to be used in multiple environments.
The environment
setting is a list of directories to publish files from.
These directories are searched in order to find the specified file and the
first file found is returned.
This means that directory data is prioritized based on the order in which they
are listed. In the case of this file_roots
configuration:
file_roots:
base:
- /srv/salt/base
- /srv/salt/failover
If a file's URI is salt://httpd/httpd.conf
, it will first search for the
file at /srv/salt/base/httpd/httpd.conf
. If the file is found there it
will be returned. If the file is not found there, then
/srv/salt/failover/httpd/httpd.conf
will be used for the source.
This allows for directories to be overlaid and prioritized based on the order they are defined in the configuration.
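The resolution of the example above can be sketched in a few lines of Python (illustrative only, not Salt internals):
import os

def resolve(uri, roots):
    relpath = uri[len('salt://'):]
    # Roots are searched in the order they are listed; the first
    # directory containing the file wins.
    for root in roots:
        candidate = os.path.join(root, relpath)
        if os.path.exists(candidate):
            return candidate
    return None

resolve('salt://httpd/httpd.conf',
        ['/srv/salt/base', '/srv/salt/failover'])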
It is also possible to have file_roots
which supports multiple
environments:
file_roots:
base:
- /srv/salt/base
dev:
- /srv/salt/dev
- /srv/salt/base
prod:
- /srv/salt/prod
- /srv/salt/base
This example ensures that each environment will check the associated environment directory for files first. If a file is not found in the appropriate directory, the system will default to using the base directory.
New in version 0.9.8.
The file server can be rerouted to run from the minion. This is primarily to
enable running Salt states without a Salt master. To use the local file server
interface, copy the file server data to the minion and set the file_roots
option on the minion to point to the directories copied from the master.
Once the minion file_roots
option has been set, change the file_client
option to local to make sure that the local file server interface is used.
The cp module is the home of minion side file server operations. The cp module is used by the Salt state system, salt-cp, and can be used to distribute files presented by the Salt file server.
Since the file server is made to work with the Salt state system, it supports environments. The environments are defined in the master config file and when referencing an environment the file specified will be based on the root directory of the environment.
The cp.get_file function can be used on the minion to download a file from the master, the syntax looks like this:
# salt '*' cp.get_file salt://vimrc /etc/vimrc
This will instruct all Salt minions to download the vimrc file and copy it to /etc/vimrc
Template rendering can be enabled on both the source and destination file names like so:
# salt '*' cp.get_file "salt://{{grains.os}}/vimrc" /etc/vimrc template=jinja
This example would instruct all Salt minions to download the vimrc from a directory with the same name as their OS grain and copy it to /etc/vimrc
For larger files, the cp.get_file module also supports gzip compression. Because gzip is CPU-intensive, this should only be used in scenarios where the compression ratio is very high (e.g. pretty-printed JSON or YAML files).
To use compression, use the gzip
named argument. Valid values are integers
from 1 to 9, where 1 is the lightest compression and 9 the heaviest. In other
words, 1 uses the least CPU on the master (and minion), while 9 uses the most.
# salt '*' cp.get_file salt://vimrc /etc/vimrc gzip=5
Finally, note that by default cp.get_file does not create new destination
directories if they do not exist. To change this, use the makedirs
argument:
# salt '*' cp.get_file salt://vimrc /etc/vim/vimrc makedirs=True
In this example, /etc/vim/ would be created if it didn't already exist.
The cp.get_dir function can be used on the minion to download an entire directory from the master. The syntax is very similar to get_file:
# salt '*' cp.get_dir salt://etc/apache2 /etc
cp.get_dir supports template rendering and gzip compression arguments just like get_file:
# salt '*' cp.get_dir salt://etc/{{pillar.webserver}} /etc gzip=5 template=jinja
A client API is available which allows for modules and applications to be written which make use of the Salt file server.
The file server uses the same authentication and encryption used by the rest of the Salt system for network communication.
The FileClient class is used to set up the communication from the minion to
the master. When creating a FileClient object the minion configuration needs
to be passed in. When using the FileClient from within a minion module the
built in __opts__
data can be passed:
import salt.minion
def get_file(path, dest, env='base'):
'''
Used to get a single file from the Salt master
CLI Example:
salt '*' cp.get_file salt://vimrc /etc/vimrc
'''
# Create the FileClient object
client = salt.minion.FileClient(__opts__)
# Call get_file
return client.get_file(path, dest, False, env)
When using the FileClient class outside of a minion module, where the __opts__
data is not available, the opts dictionary needs to be generated:
import salt.minion
import salt.config
def get_file(path, dest, env='base'):
'''
Used to get a single file from the Salt master
'''
# Get the configuration data
opts = salt.config.minion_config('/etc/salt/minion')
# Create the FileClient object
client = salt.minion.FileClient(opts)
# Call get_file
return client.get_file(path, dest, False, env)
azurefs | The backend for serving files from the Azure blob storage service.
gitfs | Git Fileserver Backend
hgfs | Mercurial Fileserver Backend
minionfs | Fileserver backend which serves files pushed to the Master
roots | The default file server backend
s3fs | Amazon S3 Fileserver Backend
svnfs | Subversion Fileserver Backend
Reference documentation on Salt's internal code.
This library makes it possible to introspect a dataset and aggregate nodes when instructed to do so.
Note
The following examples will be expressed in YAML for convenience's sake:
This yaml document has duplicate keys:
foo: !aggr-scalar first
foo: !aggr-scalar second
bar: !aggr-map {first: foo}
bar: !aggr-map {second: bar}
baz: !aggr-scalar 42
but the tagged values instruct Salt that the overlapping values can be merged together:
foo: !aggr-seq [first, second]
bar: !aggr-map {first: foo, second: bar}
baz: !aggr-seq [42]
By contrast, this YAML document also has duplicate keys, but does not instruct aggregation:
foo: first
foo: second
bar: {first: foo}
bar: {second: bar}
baz: 42
So the values found last prevail:
foo: second
bar: {second: bar}
baz: 42
Aggregation is permitted between tagged objects that share the same type. If not, the default merge strategy prevails.
For example, these documents:
foo: {first: value}
foo: !aggr-map {second: value}
bar: !aggr-map {first: value}
bar: 42
baz: !aggr-seq [42]
baz: [fail]
qux: 42
qux: !aggr-scalar fail
are interpreted like this:
foo: !aggr-map {second: value}
bar: 42
baz: [fail]
qux: !aggr-seq [fail]
TODO: write this part
salt.utils.aggregation.aggregate(obj_a, obj_b, level=False, map_class=<class 'salt.utils.aggregation.Map'>, sequence_class=<class 'salt.utils.aggregation.Sequence'>)¶
Merge obj_b into obj_a.
>>> aggregate('first', 'second', True) == ['first', 'second']
True
salt.utils.aggregation.Aggregate¶
Aggregation base.
salt.utils.aggregation.Map(*args, **kwds)¶
Map aggregation.
salt.utils.aggregation.Scalar(obj)¶
Shortcut for Sequence creation
>>> Scalar('foo') == Sequence(['foo'])
True
salt.utils.aggregation.Sequence¶
Sequence aggregation.
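A minimal usage sketch, based only on the doctest above (the exact merge behavior for nested structures depends on the level argument and the tagged Map/Sequence types):
from salt.utils.aggregation import aggregate

# With aggregation enabled (level truthy), two plain scalars are
# merged into a list containing both values.
merged = aggregate('first', 'second', True)
assert merged == ['first', 'second']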
Salt-specific exceptions should be thrown as often as possible so the various interfaces to Salt (CLI, API, etc.) can handle those errors appropriately and display useful error messages.
salt.exceptions | This module is a central location for all salt exceptions
It is very common in the Salt codebase to see opts referred to in a number of contexts.
For example, it can be seen as __opts__ in certain cases, or simply as opts as an argument to a function in others.
Simply put, this data structure is a dictionary of Salt's runtime configuration information that's passed around in order for functions to know how Salt is configured.
When writing Python code to use specific parts of Salt, it may become necessary to initialize a copy of opts from scratch in order to have it available for a given function.
To do so, use the utility functions available in salt.config.
As an example, here is how one might generate and print an options dictionary for a minion instance:
import salt.config
opts = salt.config.minion_config('/etc/salt/minion')
print(opts)
To generate and display opts for a master, the process is similar:
import salt.config
opts = salt.config.master_config('/etc/salt/master')
print(opts)
This module is a central location for all salt exceptions
salt.exceptions.AuthenticationError(message='')¶
If sha256 signature fails during decryption
salt.exceptions.AuthorizationError(message='')¶
Thrown when runner or wheel execution fails due to permissions
salt.exceptions.CommandExecutionError(message='')¶
Used when a module runs a command which returns an error and wants to show the user the output gracefully instead of dying
salt.exceptions.CommandNotFoundError(message='')¶
Used in modules or grains when a required binary is not available
salt.exceptions.EauthAuthenticationError(message='')¶
Thrown when eauth authentication fails
salt.exceptions.FileserverConfigError(message='')¶
Used when invalid fileserver settings are detected
salt.exceptions.LoaderError(message='')¶
Problems loading the right renderer
salt.exceptions.MasterExit¶
Raised when the master exits
salt.exceptions.MinionError(message='')¶
Minion problems reading uris such as salt:// or http://
salt.exceptions.PkgParseError(message='')¶
Used when one of the pkg modules cannot correctly parse the output from the CLI tool (pacman, yum, apt, aptitude, etc.)
salt.exceptions.PublishError(message='')¶
Problems encountered when trying to publish a command
salt.exceptions.SaltClientError(message='')¶
Problem reading the master root key
salt.exceptions.SaltClientTimeout(msg, jid=None, *args, **kwargs)¶
Thrown when a job sent through one of the Client interfaces times out. Takes the jid as a parameter.
salt.exceptions.SaltCloudConfigError(message='')¶
Raised when a configuration setting is not found and should exist.
salt.exceptions.SaltCloudException(message='')¶
Generic Salt Cloud Exception
salt.exceptions.SaltCloudExecutionFailure(message='')¶
Raised when too many failures have occurred while querying/waiting for data.
salt.exceptions.SaltCloudExecutionTimeout(message='')¶
Raised when too much time has passed while querying/waiting for data.
salt.exceptions.SaltCloudNotFound(message='')¶
Raised when some cloud provider function cannot find what's being searched.
salt.exceptions.SaltCloudPasswordError(message='')¶
Raised when virtual terminal password input failed
salt.exceptions.SaltCloudSystemExit(message, exit_code=1)¶
This exception is raised when the execution should be stopped.
salt.exceptions.SaltDaemonNotRunning(message='')¶
Thrown when a master/minion/syndic is not running but is needed to perform the requested operation (e.g., eauth).
salt.exceptions.SaltException(message='')¶
Base exception class; all Salt-specific exceptions should subclass this
pack()¶
Pack this exception into a serializable dictionary that is safe for transport via msgpack
salt.exceptions.SaltInvocationError(message='')¶
Used when the wrong number of arguments are sent to modules or invalid arguments are specified on the command line
salt.exceptions.SaltMasterError(message='')¶
Problem reading the master root key
salt.exceptions.SaltNoMinionsFound(message='')¶
An attempt to retrieve a list of minions failed
salt.exceptions.SaltRenderError(message, line_num=None, buf='', marker=' <======================', trace=None)¶
Used when a renderer needs to raise an explicit error. If a line number and buffer string are passed, get_context will be invoked to get the location of the error.
salt.exceptions.SaltReqTimeoutError(message='')¶
Thrown when a salt master request call fails to return within the timeout
salt.exceptions.SaltRunnerError(message='')¶
Problem in runner
salt.exceptions.SaltSyndicMasterError(message='')¶
Problem while proxying a request in the syndication master
salt.exceptions.SaltSystemExit(code=0, msg=None)¶
This exception is raised when an unsolvable problem is found. There's nothing else to do; salt should just exit.
salt.exceptions.SaltWheelError(message='')¶
Problem in wheel
salt.exceptions.TimedProcTimeoutError(message='')¶
Thrown when a timed subprocess does not terminate within the timeout, or if the specified timeout is not an int or a float
salt.exceptions.TokenAuthenticationError(message='')¶
Thrown when token authentication fails
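As a sketch of how these exceptions are typically used, a custom execution module might wrap a hypothetical mytool binary like this (the __salt__ dictionary is only injected into modules loaded by Salt's loader):
import salt.utils
from salt.exceptions import CommandExecutionError, CommandNotFoundError

def run_mytool(arg):
    '''
    Wrap the hypothetical mytool binary.
    '''
    if salt.utils.which('mytool') is None:
        # Raised when a required binary is not available.
        raise CommandNotFoundError('mytool is not installed')
    ret = __salt__['cmd.run_all']('mytool {0}'.format(arg))
    if ret['retcode'] != 0:
        # Show the user the failure output gracefully instead of dying.
        raise CommandExecutionError(ret['stderr'])
    return ret['stdout']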
Virtual modules
pkg is a virtual module that is fulfilled by one of the platform-specific package modules (for example aptpkg, yumpkg, pacman, or zypper) in the execution module list below:
aliases |
Manage the information in the aliases file | ||
alternatives |
Support for Alternatives system | ||
apache |
Support for Apache | ||
aptpkg |
Support for APT (Advanced Packaging Tool) | ||
archive |
A module to wrap (non-Windows) archive calls | ||
artifactory |
Module for fetching artifacts from Artifactory | ||
at |
Wrapper module for at(1) | ||
augeas_cfg |
Manages configuration files via augeas | ||
aws_sqs |
Support for the Amazon Simple Queue Service. | ||
blockdev |
Module for managing block devices | ||
bluez |
Support for Bluetooth (using BlueZ in Linux). | ||
boto_asg |
Connection module for Amazon Autoscale Groups | ||
boto_cfn |
Connection module for Amazon Cloud Formation | ||
boto_cloudwatch |
Connection module for Amazon CloudWatch | ||
boto_dynamodb |
Connection module for Amazon DynamoDB | ||
boto_ec2 |
Connection module for Amazon EC2 | ||
boto_elasticache |
Connection module for Amazon Elasticache | ||
boto_elb |
Connection module for Amazon ELB | ||
boto_iam |
Connection module for Amazon IAM | ||
boto_kms |
Connection module for Amazon KMS | ||
boto_rds |
Connection module for Amazon RDS | ||
boto_route53 |
Connection module for Amazon Route53 | ||
boto_secgroup |
Connection module for Amazon Security Groups | ||
boto_sns |
Connection module for Amazon SNS | ||
boto_sqs |
Connection module for Amazon SQS | ||
boto_vpc |
Connection module for Amazon VPC | ||
bower |
Manage and query Bower packages | ||
brew |
Homebrew for Mac OS X | ||
bridge |
Module for gathering and managing bridging information | ||
bsd_shadow |
Manage the password database on BSD systems | ||
btrfs |
Module for managing BTRFS file systems. | ||
cabal |
Manage and query Cabal packages | ||
cassandra |
Cassandra NoSQL Database Module | ||
cassandra_cql |
Cassandra Database Module | ||
chef |
Execute chef in server or solo mode | ||
chocolatey |
A dead simple module wrapping calls to the Chocolatey package manager | ||
cloud |
Salt-specific interface for calling Salt Cloud directly | ||
cmdmod |
A module for shelling out. | ||
composer |
Use composer to install PHP dependencies for a directory | ||
config |
Return config information | ||
container_resource |
Common resources for LXC and systemd-nspawn containers | ||
cp |
Minion side functions for salt-cp | ||
cpan |
Manage Perl modules using CPAN | ||
cron |
Work with cron | ||
cyg |
Manage cygwin packages. | ||
daemontools |
daemontools service module. This module will create daemontools type | ||
darwin_pkgutil |
Installer support for OS X. | ||
darwin_sysctl |
Module for viewing and modifying sysctl parameters | ||
data |
Manage a local persistent data structure that can hold any arbitrary data | ||
ddns |
Support for RFC 2136 dynamic DNS updates. | ||
deb_apache |
Support for Apache | ||
deb_postgres |
Module to provide Postgres compatibility to salt for debian family specific tools. | ||
debconfmod |
Support for Debconf | ||
debian_ip |
The networking module for Debian based distros | ||
debian_service |
Service support for Debian systems (uses update-rc.d and /sbin/service) | ||
defaults |
|||
devmap |
Device-Mapper module | ||
dig |
Compendium of generic DNS utilities | ||
disk |
Module for gathering disk information | ||
djangomod |
Manage Django sites | ||
dnsmasq |
Module for managing dnsmasq | ||
dnsutil |
Compendium of generic DNS utilities | ||
dockerio |
Management of Docker Containers | ||
dockerng |
Management of Docker Containers | ||
dpkg |
Support for DEB packages | ||
drac |
Manage Dell DRAC | ||
drbd |
DRBD administration module | ||
ebuild |
Support for Portage | ||
eix |
Support for Eix | ||
elasticsearch |
Connection module for Elasticsearch | ||
environ |
Support for getting and setting the environment variables of the current salt process. | ||
eselect |
Support for eselect, Gentoo's configuration and management tool. | ||
etcd_mod |
Execution module to work with etcd | ||
event |
Use the Salt Event System to fire events from the master to the minion and vice-versa. | ||
extfs |
Module for managing ext2/3/4 file systems | ||
file |
Manage information about regular files, directories, | ||
firewalld |
Support for firewalld. | ||
freebsd_sysctl |
Module for viewing and modifying sysctl parameters | ||
freebsdjail |
The jail module for FreeBSD | ||
freebsdkmod |
Module to manage FreeBSD kernel modules | ||
freebsdpkg |
Remote package support using pkg_add(1) |
||
freebsdports |
Install software from the FreeBSD ports(7) system |
||
freebsdservice |
The service module for FreeBSD | ||
gem |
Manage ruby gems. | ||
genesis |
Module for managing container and VM images | ||
gentoo_service |
Top level package command wrapper, used to translate the os detected by grains | ||
gentoolkitmod |
Support for Gentoolkit | ||
git |
Support for the Git SCM | ||
glance |
Module for handling openstack glance calls. | ||
glusterfs |
Manage a glusterfs pool | ||
gnomedesktop |
GNOME implementations | ||
gpg |
Manage a GPG keychains, add keys, create keys, retrieve keys from keyservers. | ||
grains |
Return/control aspects of the grains data | ||
groupadd |
Manage groups on Linux, OpenBSD and NetBSD | ||
grub_legacy |
Support for GRUB Legacy | ||
guestfs |
Interact with virtual machine images via libguestfs | ||
hadoop |
Support for hadoop | ||
haproxyconn |
Support for haproxy | ||
hashutil |
A collection of hashing and encoding functions | ||
hg |
Support for the Mercurial SCM | ||
hipchat |
Module for sending messages to hipchat. | ||
hosts |
Manage the information in the hosts file | ||
htpasswd |
Support for htpasswd command | ||
http |
Module for making various web calls. | ||
ilo |
Manage HP ILO | ||
img |
Virtual machine image management tools | ||
incron |
Work with incron | ||
influx |
InfluxDB - A distributed time series database | ||
ini_manage |
Edit ini files | ||
introspect |
Functions to perform introspection on a minion, and return data in a format | ||
ipmi |
Support IPMI commands over LAN. | ||
ipset |
Support for ipset | ||
iptables |
Support for iptables | ||
jboss7 |
Module for managing JBoss AS 7 through the CLI interface. | ||
jboss7_cli |
Module for low-level interaction with JbossAS7 through CLI. | ||
junos |
Module for interfacing to Junos devices | ||
kerberos |
Manage Kerberos KDC | ||
key |
Functions to view the minion's public key information | ||
keyboard |
Module for managing keyboards on supported POSIX-like systems using systemd, or such as Redhat, Debian and Gentoo. | ||
keystone |
Module for handling openstack keystone calls. | ||
kmod |
Module to manage Linux kernel modules | ||
launchctl |
Module for the management of MacOS systems that use launchd/launchctl | ||
layman |
Support for Layman | ||
ldapmod |
Salt interface to LDAP commands | ||
linux_acl |
Support for Linux File Access Control Lists | ||
linux_lvm |
Support for Linux LVM2 | ||
linux_sysctl |
Module for viewing and modifying sysctl parameters | ||
localemod |
Module for managing locales on POSIX-like systems. | ||
locate |
Module for using the locate utilities | ||
logadm |
Module for managing Solaris logadm based log rotations. | ||
logrotate |
Module for managing logrotate. | ||
lvs |
Support for LVS (Linux Virtual Server) | ||
lxc |
Control Linux Containers via Salt | ||
mac_group |
Manage groups on Mac OS 10.7+ | ||
mac_user |
Manage users on Mac OS 10.7+ | ||
macports |
Support for MacPorts under Mac OSX. | ||
makeconf |
Support for modifying make.conf under Gentoo | ||
match |
The match module allows for match routines to be run and determine target specs | ||
mdadm |
Salt module to manage RAID arrays with mdadm | ||
memcached |
Module for Management of Memcached Keys | ||
mine |
The function cache system allows for data to be stored on the master so it can be easily read by other minions | ||
mod_random |
New in version 2014.7.0. |
||
modjk |
Control Modjk via the Apache Tomcat "Status" worker | ||
mongodb |
Module to provide MongoDB functionality to Salt | ||
monit |
Monit service module. | ||
moosefs |
Module for gathering and managing information about MooseFS | ||
mount |
Salt module to manage unix mounts and the fstab file | ||
mssql |
Module to provide MS SQL Server compatibility to salt. | ||
munin |
Run munin plugins/checks from salt and format the output as data. | ||
mysql |
Module to provide MySQL compatibility to salt. | ||
nacl |
|
||
nagios |
Run nagios plugins/checks from salt and get the return as data. | ||
nagios_rpc |
Check Host & Service status from Nagios via JSON RPC. | ||
netbsd_sysctl |
Module for viewing and modifying sysctl parameters | ||
netbsdservice |
The service module for NetBSD | ||
netscaler |
|||
network |
Module for gathering and managing network information | ||
neutron |
Module for handling OpenStack Neutron calls | ||
nfs3 |
Module for managing NFS version 3. | ||
nftables |
Support for nftables | ||
nginx |
Support for nginx | ||
nova |
Module for handling OpenStack Nova calls | ||
npm |
Manage and query NPM packages. | ||
nspawn |
Manage nspawn containers | ||
omapi |
This module interacts with an ISC DHCP Server via OMAPI. | ||
openbsd_sysctl |
Module for viewing and modifying OpenBSD sysctl parameters | ||
openbsdpkg |
Package support for OpenBSD | ||
openbsdrcctl |
The rcctl service module for OpenBSD | ||
openbsdservice |
The service module for OpenBSD | ||
openstack_config |
Modify, retrieve, or delete values from OpenStack configuration files. | ||
oracle |
Oracle DataBase connection module | ||
osquery |
Support for OSQuery - https://osquery.io | ||
osxdesktop |
Mac OS X implementations of various commands in the "desktop" interface | ||
pacman |
A module to wrap pacman calls, since Arch is the best | ||
pagerduty |
Module for Firing Events via PagerDuty | ||
pam |
Support for pam | ||
parted |
Module for managing partitions on POSIX-like systems. | ||
pecl |
Manage PHP pecl extensions. | ||
pillar |
Extract the pillar data for this minion | ||
pip |
Install Python packages with pip to either the system or a virtualenv | ||
pkg_resource |
Resources needed by pkg providers | ||
pkgin |
Package support for pkgin based systems, inspired from freebsdpkg module | ||
pkgng |
Support for pkgng , the new package manager for FreeBSD |
||
pkgutil |
Pkgutil support for Solaris | ||
portage_config |
Configure portage(5) |
||
postfix |
Support for Postfix | ||
postgres |
Module to provide Postgres compatibility to salt. | ||
poudriere |
Support for poudriere | ||
powerpath |
powerpath support. | ||
ps |
|||
publish |
Publish a command from a minion to a target | ||
puppet |
Execute puppet routines | ||
pushover_notify |
Module for sending messages to Pushover (https://www.pushover.net) | ||
pw_group |
Manage groups on FreeBSD | ||
pw_user |
Manage users with the useradd command | ||
pyenv |
Manage python installations with pyenv. | ||
qemu_img |
Qemu-img Command Wrapper | ||
qemu_nbd |
Qemu Command Wrapper | ||
quota |
Module for managing quotas on POSIX-like systems. | ||
rabbitmq |
Module to provide RabbitMQ compatibility to Salt. | ||
raet_publish |
Publish a command from a minion to a target | ||
random_org |
Module for retrieving random information from Random.org | ||
rbenv |
Manage ruby installations with rbenv. | ||
rdp |
Manage RDP Service on Windows servers | ||
redismod |
Module to provide redis functionality to Salt | ||
reg |
Manage the registry on Windows | ||
rest_package |
Service support for the REST example | ||
rest_sample |
Module for interfacing to the REST example | ||
rest_service |
Service support for the REST example | ||
ret |
Module to integrate with the returner system and retrieve data sent to a salt returner | ||
rh_ip |
The networking module for RHEL/Fedora based distros | ||
rh_service |
Service support for RHEL-based systems, including support for both upstart and sysvinit | ||
riak |
Riak Salt Module | ||
rpm |
Support for rpm | ||
rsync |
Wrapper for rsync | ||
runit |
runit service module | ||
rvm |
Manage ruby installations and gemsets with RVM, the Ruby Version Manager. | ||
s3 |
Connection module for Amazon S3 | ||
saltcloudmod |
Control a salt cloud system | ||
saltutil |
The Saltutil module is used to manage the state of the salt minion itself. | ||
schedule |
Module for managing the Salt schedule on a minion | ||
scsi |
SCSI administration module | ||
sdb |
Module for Manipulating Data via the Salt DB API | ||
seed |
Virtual machine image management tools | ||
selinux |
Execute calls on selinux | ||
sensors |
Read lm-sensors | ||
serverdensity_device |
Wrapper around Server Density API | ||
service |
The default service module, if not otherwise specified salt will fall back | ||
shadow |
Manage the shadow file | ||
slack_notify |
Module for sending messages to Slack | ||
smartos_imgadm |
Module for running imgadm command on SmartOS | ||
smartos_vmadm |
Module for managing VMs on SmartOS | ||
smbios |
Interface to SMBIOS/DMI | ||
smf |
Service support for Solaris 10 and 11, should work with other systems that use SMF also. | ||
smtp |
Module for Sending Messages via SMTP | ||
softwareupdate |
Support for the softwareupdate command on MacOS. | ||
solaris_group |
Manage groups on Solaris | ||
solaris_shadow |
Manage the password database on Solaris systems | ||
solaris_user |
Manage users with the useradd command | ||
solarisips |
IPS pkg support for Solaris | ||
solarispkg |
Package support for Solaris | ||
solr |
Apache Solr Salt Module | ||
splunk_search |
Module for interop with the Splunk API | ||
sqlite3 |
Support for SQLite3 | ||
ssh |
Manage client ssh components | ||
state |
Control the state system on the minion | ||
status |
Module for returning various status data about a minion. | ||
sudo |
Allow for the calling of execution modules via sudo | ||
supervisord |
Provide the service module for system supervisord or supervisord in a | ||
svn |
Subversion SCM | ||
swift |
Module for handling OpenStack Swift calls | ||
sysbench |
The 'sysbench' module is used to analyze the performance of the minions, right from the master! It measures various system parameters such as CPU, Memory, File I/O, Threads and Mutex. | ||
syslog_ng |
Module for getting information about syslog-ng | ||
sysmod |
The sys module provides information about the available functions on the minion | ||
sysrc |
sysrc module for FreeBSD | ||
system |
Support for reboot, shutdown, etc | ||
systemd |
Provide the service module for systemd | ||
test |
Module for running arbitrary tests | ||
test_virtual |
Module for running arbitrary tests with a __virtual__ function | ||
timezone |
Module for managing timezone on POSIX-like systems. | ||
tls |
A salt module for SSL/TLS. | ||
tomcat |
Support for Tomcat | ||
tuned |
|
||
twilio_notify |
Module for notifications via Twilio | ||
upstart |
Module for the management of upstart systems. | ||
uptime |
Wrapper around uptime API | ||
useradd |
Manage users with the useradd command | ||
uwsgi |
uWSGI stats server http://uwsgi-docs.readthedocs.org/en/latest/StatsServer.html | ||
varnish |
Support for Varnish | ||
vbox_guest |
VirtualBox Guest Additions installer | ||
virt |
Work with virtual machines managed by libvirt | ||
virtualenv_mod |
Create virtualenv environments | ||
win_autoruns |
Module for listing programs that automatically run on startup | ||
win_dacl |
Manage DACLs on Windows | ||
win_disk |
Module for gathering disk information on Windows | ||
win_dns_client |
Module for configuring DNS Client on Windows systems | ||
win_file |
Manage information about files on the minion, set/read user, group | ||
win_firewall |
Module for configuring Windows Firewall | ||
win_groupadd |
Manage groups on Windows | ||
win_ip |
The networking module for Windows based systems | ||
win_network |
Module for gathering and managing network information | ||
win_ntp |
Management of NTP servers on Windows | ||
win_path |
Manage the Windows System PATH | ||
win_pkg |
A module to manage software on Windows | ||
win_repo |
Module to manage Windows software repo on a Standalone Minion | ||
win_servermanager |
Manage Windows features via the ServerManager powershell module | ||
win_service |
Windows Service module. | ||
win_shadow |
Manage the shadow file | ||
win_status |
Module for returning various status data about a minion. | ||
win_system |
Support for reboot, shutdown, etc | ||
win_timezone |
Module for managing timezone on Windows systems. | ||
win_update |
Module for running windows updates. | ||
win_useradd |
Manage Windows users with the net user command | ||
x509 |
Manage X509 certificates | ||
xapi |
This module (mostly) uses the XenAPI to manage Xen virtual machines. | ||
xfs |
Module for managing XFS file systems. | ||
xmpp |
Module for Sending Messages via XMPP (a.k.a. | ||
yumpkg |
Support for YUM | ||
zcbuildout |
Management of zc.buildout | ||
zfs |
Salt interface to ZFS commands | ||
zk_concurrency |
Concurrency controls in zookeeper | ||
znc |
znc - An advanced IRC bouncer | ||
zpool |
Module for running ZFS zpool command | ||
zypper |
Package support for openSUSE via the zypper package manager |
New in version 2014.7.0.
depends: the CherryPy Python module (a version without the known SSL error introduced in version 3.2.5 is recommended; the issue was reportedly resolved with CherryPy milestone 3.3, but the patch was committed for version 3.6.1).
configuration: All authentication is done through Salt's external auth system, which requires additional configuration not described here. Example production-ready configuration; add to the Salt master config file and restart the salt-master and salt-api daemons:
rest_cherrypy:
  port: 8000
  ssl_crt: /etc/pki/tls/certs/localhost.crt
  ssl_key: /etc/pki/tls/certs/localhost.key
Using only a secure HTTPS connection is strongly recommended since Salt authentication credentials will be sent over the wire. A self-signed certificate can be generated using the tls.create_self_signed_cert execution function:
salt-call --local tls.create_self_signed_cert
All available configuration options are detailed below. These settings configure the CherryPy HTTP server and do not apply when using an external server such as Apache or Nginx.
Authentication is performed by passing a session token with each request.
Tokens are generated via the Login
URL.
The token may be sent in one of two ways:
Include a custom header named X-Auth-Token.
For example, using curl:
curl -sSk https://localhost:8000/login -H 'Accept: application/x-yaml' -d username=saltdev -d password=saltdev -d eauth=auto
Copy the token
value from the output and include it in subsequent
requests:
curl -sSk https://localhost:8000 -H 'Accept: application/x-yaml' -H 'X-Auth-Token: 697adbdc8fe971d09ae4c2a3add7248859c87079' -d client=local -d tgt='*' -d fun=test.ping
Sent via a cookie. This option is a convenience for HTTP clients that automatically handle cookie support (such as browsers).
For example, using curl:
# Write the cookie file:
curl -sSk https://localhost:8000/login -c ~/cookies.txt -H 'Accept: application/x-yaml' -d username=saltdev -d password=saltdev -d eauth=auto
# Read the cookie file:
curl -sSk https://localhost:8000 -b ~/cookies.txt -H 'Accept: application/x-yaml' -d client=local -d tgt='*' -d fun=test.ping
See also
You can bypass the session handling via the Run
URL.
Commands are sent to a running Salt master via this module by sending HTTP requests to the URLs detailed below.
Content negotiation
This REST interface is flexible in what data formats it will accept as well as what formats it will return (e.g., JSON, YAML, x-www-form-urlencoded).
Data sent in POST and PUT requests must be in the format of a list of lowstate dictionaries. This allows multiple commands to be executed in a single HTTP request. The order of commands in the request corresponds to the return for each command in the response.
Lowstate, broadly, is a dictionary of values that are mapped to a function call. This pattern is used pervasively throughout Salt. The functions called from netapi modules are described in Client Interfaces.
The following example (in JSON format) causes Salt to execute two commands, a command sent to minions as well as a runner function on the master:
[{
"client": "local",
"tgt": "*",
"fun": "test.fib",
"arg": ["10"]
},
{
"client": "runner",
"fun": "jobs.lookup_jid",
"jid": "20130603122505459265"
}]
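The same two-step flow (log in, then post lowstate) can be sketched with the third-party requests library, mirroring the curl examples above (assumes rest_cherrypy on localhost:8000 with the saltdev credentials and a self-signed certificate, hence verify=False):
import requests

session = requests.Session()
# Log in; the auth token is stored in the session cookie.
session.post('https://localhost:8000/login', verify=False,
             data={'username': 'saltdev',
                   'password': 'saltdev',
                   'eauth': 'auto'})
# POST a list of lowstate dictionaries, as in the JSON example above.
resp = session.post('https://localhost:8000', verify=False,
                    headers={'Accept': 'application/json'},
                    json=[{'client': 'local',
                           'tgt': '*',
                           'fun': 'test.ping'}])
print(resp.json())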
x-www-form-urlencoded
Sending JSON or YAML in the request body is simple and the most flexible; however, sending data in urlencoded format is also supported, with the caveats below. It is the default format for HTML forms, many JavaScript libraries, and the curl command.
For example, the equivalent to running salt '*' test.ping
is sending
fun=test.ping&arg&client=local&tgt=*
in the HTTP request body.
Caveats:
Only a single command may be sent per HTTP request.
Repeating the arg
parameter multiple times will cause those
parameters to be combined into a single list.
Note, some popular frameworks and languages (notably jQuery, PHP, and
Ruby on Rails) will automatically append empty brackets onto repeated
parameters. E.g., arg=one
, arg=two
will be sent as arg[]=one
,
arg[]=two
. This is not supported; send JSON or YAML instead.
The rest_cherrypy
netapi module is a standard Python WSGI app. It can be
deployed one of two ways.
The default configuration is to run this module using salt-api to start the Python-based CherryPy server. This server is lightweight, multi-threaded, encrypted with SSL, and should be considered production-ready.
This module may be deployed on any WSGI-compliant server such as Apache with mod_wsgi or Nginx with FastCGI, to name just two (there are many).
Note, external WSGI servers handle URLs, paths, and SSL certs directly. The
rest_cherrypy
configuration options are ignored and the salt-api
daemon
does not need to be running at all. Remember Salt authentication credentials
are sent in the clear unless SSL is being enforced!
An example Apache virtual host configuration:
<VirtualHost *:80>
ServerName example.com
ServerAlias *.example.com
ServerAdmin webmaster@example.com
LogLevel warn
ErrorLog /var/www/example.com/logs/error.log
CustomLog /var/www/example.com/logs/access.log combined
DocumentRoot /var/www/example.com/htdocs
WSGIScriptAlias / /path/to/salt/netapi/rest_cherrypy/wsgi.py
</VirtualHost>
/
¶salt.netapi.rest_cherrypy.app.
LowDataAdapter
¶The primary entry point to Salt's REST API
GET
()¶An explanation of the API with links of where to go next
GET
/
Example request:
curl -i localhost:8000
GET / HTTP/1.1
Host: localhost:8000
Accept: application/json
Example response:
HTTP/1.1 200 OK
Content-Type: application/json
POST
¶Mock out specified imports
This allows autodoc to do its thing without having oodles of req'd
installed libs. This doesn't work with import *
imports.
/login
¶salt.netapi.rest_cherrypy.app.
Login
(*args, **kwargs)¶Log in to receive a session token
GET
()¶Present the login interface
GET
/login
¶An explanation of how to log in.
Status Codes: |
|
---|
Example request:
curl -i localhost:8000/login
GET /login HTTP/1.1
Host: localhost:8000
Accept: text/html
Example response:
HTTP/1.1 200 OK
Content-Type: text/html
POST
(**kwargs)¶Authenticate against Salt's eauth system
POST
/login
Example request:
curl -si localhost:8000/login \
-H "Accept: application/json" \
-d username='saltuser' \
-d password='saltpass' \
-d eauth='pam'
POST / HTTP/1.1
Host: localhost:8000
Content-Length: 42
Content-Type: application/x-www-form-urlencoded
Accept: application/json
username=saltuser&password=saltpass&eauth=pam
Example response:
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 206
X-Auth-Token: 6d1b722e
Set-Cookie: session_id=6d1b722e; expires=Sat, 17 Nov 2012 03:23:52 GMT; Path=/
{"return": {
"token": "6d1b722e",
"start": 1363805943.776223,
"expire": 1363849143.776224,
"user": "saltuser",
"eauth": "pam",
"perms": [
"grains.*",
"status.*",
"sys.*",
"test.*"
]
}}
/minions
¶salt.netapi.rest_cherrypy.app.
Minions
¶Convenience URLs for working with minions
GET
(mid=None)¶A convenience URL for getting lists of minions or getting minion details
GET
/minions/
(mid)¶
Example request:
curl -i localhost:8000/minions/ms-3
GET /minions/ms-3 HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml
Example response:
HTTP/1.1 200 OK
Content-Length: 129005
Content-Type: application/x-yaml
return:
- ms-3:
grains.items:
...
POST
(**kwargs)¶Start an execution command and immediately return the job id
POST
/minions
lowstate data describing Salt commands must be sent in the request body. The
client option will be set to local_async().
Example request:
curl -sSi localhost:8000/minions \
-H "Accept: application/x-yaml" \
-d tgt='*' \
-d fun='status.diskusage'
POST /minions HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml
Content-Length: 26
Content-Type: application/x-www-form-urlencoded
tgt=*&fun=status.diskusage
Example response:
HTTP/1.1 202 Accepted
Content-Length: 86
Content-Type: application/x-yaml
return:
- jid: '20130603122505459265'
minions: [ms-4, ms-3, ms-2, ms-1, ms-0]
_links:
jobs:
- href: /jobs/20130603122505459265
/jobs
¶salt.netapi.rest_cherrypy.app.
Jobs
¶GET
(jid=None)¶A convenience URL for getting lists of previously run jobs or getting the return from a single job
GET
/jobs/
(jid)¶List jobs or show a single job from the job cache.
Example request:
curl -i localhost:8000/jobs
GET /jobs HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml
Example response:
HTTP/1.1 200 OK
Content-Length: 165
Content-Type: application/x-yaml
return:
- '20121130104633606931':
Arguments:
- '3'
Function: test.fib
Start Time: 2012, Nov 30 10:46:33.606931
Target: jerry
Target-type: glob
Example request:
curl -i localhost:8000/jobs/20121130104633606931
GET /jobs/20121130104633606931 HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml
Example response:
HTTP/1.1 200 OK
Content-Length: 73
Content-Type: application/x-yaml
info:
- Arguments:
- '3'
Function: test.fib
Minions:
- jerry
Start Time: 2012, Nov 30 10:46:33.606931
Target: '*'
Target-type: glob
User: saltdev
jid: '20121130104633606931'
return:
- jerry:
- - 0
- 1
- 1
- 2
- 6.9141387939453125e-06
/run
¶salt.netapi.rest_cherrypy.app.
Run
¶Class to run commands without normal session handling
POST
(**kwargs)¶Run commands bypassing the normal session handling
POST
/run
¶This entry point is primarily for "one-off" commands. Each request
must pass full Salt authentication credentials. Otherwise this URL
is identical to the root URL (/)
.
lowstate data describing Salt commands must be sent in the request body.
Example request:
curl -sS localhost:8000/run \
-H 'Accept: application/x-yaml' \
-d client='local' \
-d tgt='*' \
-d fun='test.ping' \
-d username='saltdev' \
-d password='saltdev' \
-d eauth='pam'
POST /run HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml
Content-Length: 75
Content-Type: application/x-www-form-urlencoded
client=local&tgt=*&fun=test.ping&username=saltdev&password=saltdev&eauth=pam
Example response:
HTTP/1.1 200 OK
Content-Length: 73
Content-Type: application/x-yaml
return:
- ms-0: true
ms-1: true
ms-2: true
ms-3: true
ms-4: true
The /run endpoint can also be used to issue commands using the salt-ssh subsystem.
When using salt-ssh, eauth credentials should not be supplied. Instead, authentication should be handled by the SSH layer itself. The use of the salt-ssh client does not require a salt master to be running. Instead, only a roster file must be present in the salt configuration directory.
All SSH client requests are synchronous.
Example SSH client request:
curl -sS localhost:8000/run \
-H 'Accept: application/x-yaml' \
-d client='ssh' \
-d tgt='*' \
-d fun='test.ping'
POST /run HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml
Content-Length: 30
Content-Type: application/x-www-form-urlencoded
client=ssh&tgt=*&fun=test.ping
Example SSH response:
return:
- silver:
fun: test.ping
fun_args: []
id: silver
jid: '20141203103525666185'
retcode: 0
return: true
success: true
/events (salt.netapi.rest_cherrypy.app.Events)
Expose the Salt event bus
The event bus on the Salt master exposes a large variety of things, notably when executions are started on the master and also when minions ultimately return their results. This URL provides a real-time window into a running Salt infrastructure.
See also
events
GET (token=None, salt_token=None)
An HTTP stream of the Salt master event bus. This stream is formatted per the Server Sent Events (SSE) spec. Each event is formatted as JSON.
GET /events
Example request:
curl -NsS localhost:8000/events
GET /events HTTP/1.1
Host: localhost:8000
Example response:
Note, the tag field is not part of the SSE spec. SSE-compliant clients should ignore unknown fields. This addition allows non-compliant clients to watch only for certain tags without having to deserialize the JSON object each time.
HTTP/1.1 200 OK
Connection: keep-alive
Cache-Control: no-cache
Content-Type: text/event-stream;charset=utf-8
retry: 400
tag: salt/job/20130802115730568475/new
data: {'tag': 'salt/job/20130802115730568475/new', 'data': {'minions': ['ms-4', 'ms-3', 'ms-2', 'ms-1', 'ms-0']}}
tag: salt/job/20130802115730568475/ret/jerry
data: {'tag': 'salt/job/20130802115730568475/ret/jerry', 'data': {'jid': '20130802115730568475', 'return': True, 'retcode': 0, 'success': True, 'cmd': '_return', 'fun': 'test.ping', 'id': 'ms-1'}}
The event stream can be easily consumed via JavaScript:
var source = new EventSource('/events');
source.onopen = function() { console.debug('opening') };
source.onerror = function(e) { console.debug('error!', e) };
source.onmessage = function(e) {
    // e.data is a JSON string; parse it before accessing fields
    var data = JSON.parse(e.data);
    console.debug('Tag: ', data.tag);
    console.debug('Data: ', data.data);
};
Or using CORS:
var source = new EventSource('/events?token=ecd589e4e01912cf3c4035afad73426dbb8dba75', {withCredentials: true});
It is also possible to consume the stream via the shell. Records are separated by blank lines; the data: and tag: prefixes will need to be removed manually before attempting to unserialize the JSON. curl's -N flag turns off input buffering, which is required to process the stream incrementally.
Here is a basic example of printing each event as it comes in:
curl -NsS localhost:8000/events |\
while IFS= read -r line ; do
echo $line
done
Here is an example of using awk to filter events based on tag:
curl -NsS localhost:8000/events |\
awk '
BEGIN { RS=""; FS="\\n" }
$1 ~ /^tag: salt\/job\/[0-9]+\/new$/ { print $0 }
'
tag: salt/job/20140112010149808995/new
data: {"tag": "salt/job/20140112010149808995/new", "data": {"tgt_type": "glob", "jid": "20140112010149808995", "tgt": "jerry", "_stamp": "2014-01-12_01:01:49.809617", "user": "shouse", "arg": [], "fun": "test.ping", "minions": ["jerry"]}}
tag: 20140112010149808995
data: {"tag": "20140112010149808995", "data": {"fun_args": [], "jid": "20140112010149808995", "return": true, "retcode": 0, "success": true, "cmd": "_return", "_stamp": "2014-01-12_01:01:49.819316", "fun": "test.ping", "id": "jerry"}}
/hook (salt.netapi.rest_cherrypy.app.Webhook)
A generic web hook entry point that fires an event on Salt's event bus
External services can POST data to this URL to trigger an event in Salt. For example, Amazon SNS, Jenkins-CI or Travis-CI, or GitHub web hooks.
Note
Be mindful of security
Salt's Reactor can run any code. A Reactor SLS that responds to a hook event is responsible for validating that the event came from a trusted source and contains valid data.
This is a generic interface and securing it is up to you!
This URL requires authentication; however, not all external services can be configured to authenticate. For this reason, authentication can be selectively disabled for this URL. Follow best practices -- always use SSL, pass a secret key, configure the firewall to only allow traffic from a known source, etc.
The event data is taken from the request body. The Content-Type header is respected for the payload.
The event tag is prefixed with salt/netapi/hook and the URL path is appended to the end. For example, a POST request sent to /hook/mycompany/myapp/mydata will produce a Salt event with the tag salt/netapi/hook/mycompany/myapp/mydata.
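Because the Content-Type header is respected, JSON can be posted to the hook directly. A minimal sketch (the path and payload are illustrative, matching the practical example later in this section):
curl -sS localhost:8000/hook/mycompany/myapp/mydata \
    -H 'Content-Type: application/json' \
    -d '{"revision": "aa22a3c4b2e7", "result": true}'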
The following is an example .travis.yml file to send notifications to Salt of successful test runs:
language: python
script: python -m unittest tests
after_success:
- |
curl -sSk https://saltapi-url.example.com:8000/hook/travis/build/success -d branch="${TRAVIS_BRANCH}" -d commit="${TRAVIS_COMMIT}"
See also
events, reactor
POST (*args, **kwargs)
Fire an event in Salt with a custom event tag and data.
POST /hook
Example request:
curl -sS localhost:8000/hook -d foo='Foo!' -d bar='Bar!'
POST /hook HTTP/1.1
Host: localhost:8000
Content-Length: 17
Content-Type: application/x-www-form-urlencoded
foo=Foo!&bar=Bar!
Example response:
HTTP/1.1 200 OK
Content-Length: 17
Content-Type: application/json
{"success": true}
As a practical example, an internal continuous-integration build server could send an HTTP POST request to the URL https://localhost:8000/hook/mycompany/build/success which contains the result of a build and the SHA of the version that was built as JSON. That would then produce the following event in Salt that could be used to kick off a deployment via Salt's Reactor:
Event fired at Fri Feb 14 17:40:11 2014
*************************
Tag: salt/netapi/hook/mycompany/build/success
Data:
{'_stamp': '2014-02-14_17:40:11.440996',
'headers': {
'X-My-Secret-Key': 'F0fAgoQjIT@W',
'Content-Length': '37',
'Content-Type': 'application/json',
'Host': 'localhost:8000',
'Remote-Addr': '127.0.0.1'},
'post': {'revision': 'aa22a3c4b2e7', 'result': True}}
Salt's Reactor could listen for the event:
reactor:
- 'salt/netapi/hook/mycompany/build/*':
- /srv/reactor/react_ci_builds.sls
And finally deploy the new build:
{% set secret_key = data.get('headers', {}).get('X-My-Secret-Key') %}
{% set build = data.get('post', {}) %}
{% if secret_key == 'F0fAgoQjIT@W' and build.result == True %}
deploy_my_app:
cmd.state.sls:
- tgt: 'application*'
- arg:
- myapp.deploy
- kwarg:
pillar:
revision: {{ build.revision }}
{% endif %}
/keys (salt.netapi.rest_cherrypy.app.Keys)
Convenience URLs for working with minion keys
New in version 2014.7.0.
These URLs wrap the functionality provided by the key wheel module functions.
GET (mid=None)
Show the list of minion keys or detail on a specific key.
New in version 2014.7.0.
GET /keys/(mid)
List all keys or show a specific key.
Example request:
curl -i localhost:8000/keys
GET /keys HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml
Example response:
HTTP/1.1 200 OK
Content-Length: 165
Content-Type: application/x-yaml
return:
local:
- master.pem
- master.pub
minions:
- jerry
minions_pre: []
minions_rejected: []
Example request:
curl -i localhost:8000/keys/jerry
GET /keys/jerry HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml
Example response:
HTTP/1.1 200 OK
Content-Length: 73
Content-Type: application/x-yaml
return:
minions:
jerry: 51:93:b3:d0:9f:3a:6d:e5:28:67:c2:4b:27:d6:cd:2b
POST (mid, keysize=None, force=None, **kwargs)
Easily generate keys for a minion and auto-accept the new key.
New in version 2014.7.0.
Example partial kickstart script to bootstrap a new minion:
%post
mkdir -p /etc/salt/pki/minion
curl -sSk https://localhost:8000/keys \
-d mid=jerry \
-d username=kickstart \
-d password=kickstart \
-d eauth=pam \
| tar -C /etc/salt/pki/minion -xf -
mkdir -p /etc/salt/minion.d
printf 'master: 10.0.0.5\nid: jerry' > /etc/salt/minion.d/id.conf
%end
POST /keys
Generate a public and private key and return both as a tarball.
Authentication credentials must be passed in the request.
Example request:
curl -sSk https://localhost:8000/keys \
-d mid=jerry \
-d username=kickstart \
-d password=kickstart \
-d eauth=pam \
-o jerry-salt-keys.tar
POST /keys HTTP/1.1
Host: localhost:8000
Example response:
HTTP/1.1 200 OK
Content-Length: 10240
Content-Disposition: attachment; filename="saltkeys-jerry.tar"
Content-Type: application/x-tar
jerry.pub0000644000000000000000000000070300000000000010730 0ustar 00000000000000
/ws (salt.netapi.rest_cherrypy.app.WebsocketEndpoint)
Open a WebSocket connection to Salt's event bus
The event bus on the Salt master exposes a large variety of things, notably when executions are started on the master and also when minions ultimately return their results. This URL provides a real-time window into a running Salt infrastructure. Uses websocket as the transport mechanism.
See also
events
GET (token=None, **kwargs)
Return a websocket connection of Salt's event stream.
GET /ws/(token)
Example request:
curl -NsSk \
    -H 'X-Auth-Token: ffedf49d' \
    -H 'Host: localhost:8000' \
    -H 'Connection: Upgrade' \
    -H 'Upgrade: websocket' \
    -H 'Origin: https://localhost:8000' \
    -H 'Sec-WebSocket-Version: 13' \
    -H 'Sec-WebSocket-Key: '"$(echo -n $RANDOM | base64)" \
    localhost:8000/ws
GET /ws HTTP/1.1
Connection: Upgrade
Upgrade: websocket
Host: localhost:8000
Origin: https://localhost:8000
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: s65VsgHigh7v/Jcf4nXHnA==
X-Auth-Token: ffedf49d
Example response:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: mWZjBV9FCglzn1rIKJAxrTFlnJE=
Sec-WebSocket-Version: 13
An authentication token may optionally be passed as part of the URL for browsers that cannot be configured to send the authentication header or cookie:
curl -NsS <...snip...> localhost:8000/ws/ffedf49d
The event stream can be easily consumed via JavaScript:
// Note, you must be authenticated!
var source = new WebSocket('ws://localhost:8000/ws/d0ce6c1a');
source.onerror = function(e) { console.debug('error!', e); };
source.onmessage = function(e) { console.debug(e.data); };
// Wait for the connection to open before signaling the server
source.onopen = function() { source.send('websocket client ready'); };
// Call source.close() to terminate the connection when finished.
Or via Python, using the websocket-client module, for example.
# Note, you must be authenticated!
from websocket import create_connection
ws = create_connection('ws://localhost:8000/ws/d0ce6c1a')
ws.send('websocket client ready')
# Look at https://pypi.python.org/pypi/websocket-client/ for more
# examples.
listening_to_events = True  # flip to False elsewhere to stop listening
while listening_to_events:
    print ws.recv()
ws.close()
The examples above show how to establish a websocket connection to Salt and activate real-time updates from Salt's event stream by signaling websocket client ready.
/stats (salt.netapi.rest_cherrypy.app.Stats)
Expose statistics on the running CherryPy server
GET ()
Return a dump of statistics collected from the CherryPy server.
GET /stats

configuration: All authentication is done through Salt's external auth system which requires additional configuration not described here.
In order to run rest_tornado with the salt-master, add the following to the Salt master config file.
rest_tornado:
# can be any port
port: 8000
# address to bind to (defaults to 0.0.0.0)
address: 0.0.0.0
# socket backlog
backlog: 128
ssl_crt: /etc/pki/api/certs/server.crt
# no need to specify ssl_key if cert and key
# are in one single file
ssl_key: /etc/pki/api/certs/server.key
debug: False
disable_ssl: False
webhook_disable_auth: False
Authentication is performed by passing a session token with each request. Tokens are generated via the SaltAuthHandler URL.
The token may be sent in one of two ways: as a custom header named X-Auth-Token, or via a session cookie.
See also
You can bypass the session handling via the RunSaltAPIHandler URL.
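For example, a minimal sketch (reusing the token value from the rest_cherrypy examples above) that passes the token in the X-Auth-Token header:
curl -sS localhost:8000/minions \
    -H 'Accept: application/x-yaml' \
    -H 'X-Auth-Token: 6d1b722e'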
Commands are sent to a running Salt master via this module by sending HTTP requests to the URLs detailed below.
Content negotiation
This REST interface is flexible in what data formats it will accept as well as what formats it will return (e.g., JSON, YAML, x-www-form-urlencoded).
Data sent in POST and PUT requests must be in the format of a list of lowstate dictionaries. This allows multiple commands to be executed in a single HTTP request.
Lowstate: A dictionary containing various keys that instruct Salt which command to run, where that command lives, any parameters for that command, any authentication credentials, what returner to use, etc.
Salt uses the lowstate data format internally in many places to pass command data between functions. Salt also uses lowstate for the LocalClient() Python API interface.
The following example (in JSON format) causes Salt to execute two commands:
[{
"client": "local",
"tgt": "*",
"fun": "test.fib",
"arg": ["10"]
},
{
"client": "runner",
"fun": "jobs.lookup_jid",
"jid": "20130603122505459265"
}]
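Because Salt also uses lowstate for the LocalClient() Python API, the first lowstate above maps directly onto a Python call. A minimal sketch, assuming it is run on the master with sufficient permissions:
import salt.client

# tgt, fun, and arg mirror the lowstate fields in the JSON example above
client = salt.client.LocalClient()
ret = client.cmd('*', 'test.fib', ['10'])
print(ret)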
Multiple commands in a Salt API request will be executed in serial, and there is no guarantee that all commands will run. This means that if test.fib (from the example above) had an exception, the API would still execute "jobs.lookup_jid".
Responses to these lowstates are an in-order list of dicts containing the return data; a YAML response could look like:
- ms-1: true
ms-2: true
- ms-1: foo
ms-2: bar
In the event of an exception while executing a command, the return for that lowstate will be a string. For example, if no minions matched the first lowstate we would get a return like:
- No minions matched the target. No command was sent, no jid was assigned.
- ms-1: true
ms-2: true
x-www-form-urlencoded
Sending JSON or YAML in the request body is simple and most flexible; however, sending data in urlencoded format is also supported with the caveats below. It is the default format for HTML forms, many JavaScript libraries, and the curl command.
For example, the equivalent to running salt '*' test.ping is sending fun=test.ping&arg&client=local&tgt=* in the HTTP request body.
Caveats:
Only a single command may be sent per HTTP request.
Repeating the arg parameter multiple times will cause those parameters to be combined into a single list (see the sketch below).
Note, some popular frameworks and languages (notably jQuery, PHP, and Ruby on Rails) will automatically append empty brackets onto repeated parameters. E.g., arg=one, arg=two will be sent as arg[]=one, arg[]=two. This is not supported; send JSON or YAML instead.
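As a sketch of the arg-combining caveat (assuming a running rest_cherrypy instance and a valid token, as in the examples above), repeating -d arg=... produces a single two-element list on the Salt side:
curl -sS localhost:8000 \
    -H 'Accept: application/x-yaml' \
    -H 'X-Auth-Token: 6d1b722e' \
    -d client='local' \
    -d tgt='*' \
    -d fun='test.arg' \
    -d arg='one' \
    -d arg='two'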
In order to enable saltnado_websockets you must add websockets: True to your saltnado config block.
rest_tornado:
# can be any port
port: 8000
ssl_crt: /etc/pki/api/certs/server.crt
# no need to specify ssl_key if cert and key
# are in one single file
ssl_key: /etc/pki/api/certs/server.key
debug: False
disable_ssl: False
websockets: True
Exposes all "real-time" events from Salt's event bus on a websocket connection. It should be noted that "real-time" here means these events are made available to the server as soon as any salt related action (changes to minions, new jobs etc) happens. Clients are however assumed to be able to tolerate any network transport related latencies. Functionality provided by this endpoint is similar to the /events endpoint.
The event bus on the Salt master exposes a large variety of things, notably when executions are started on the master and also when minions ultimately return their results. This URL provides a real-time window into a running Salt infrastructure. Uses websocket as the transport mechanism.
Exposes a GET method to return websocket connections. All requests should include an auth token. A way to obtain authentication tokens is shown below.
% curl -si localhost:8000/login \
-H "Accept: application/json" \
-d username='salt' \
-d password='salt' \
-d eauth='pam'
Which results in the response
{
"return": [{
"perms": [".*", "@runner", "@wheel"],
"start": 1400556492.277421,
"token": "d0ce6c1a37e99dcc0374392f272fe19c0090cca7",
"expire": 1400599692.277422,
"user": "salt",
"eauth": "pam"
}]
}
In this example the token returned is d0ce6c1a37e99dcc0374392f272fe19c0090cca7 and can be included in subsequent websocket requests (as part of the URL).
The event stream can be easily consumed via JavaScript:
// Note, you must be authenticated!
// Get the Websocket connection to Salt
var source = new WebSocket('wss://localhost:8000/all_events/d0ce6c1a37e99dcc0374392f272fe19c0090cca7');
// Get Salt's "real time" event stream.
source.onopen = function() { source.send('websocket client ready'); };
// Other handlers
source.onerror = function(e) { console.debug('error!', e); };
// e.data represents Salt's "real time" event data as serialized JSON.
source.onmessage = function(e) { console.debug(e.data); };
// Terminates websocket connection and Salt's "real time" event stream on the server.
source.close();
Or via Python, using the websocket-client module or the Tornado client, for example.
# Note, you must be authenticated!
from websocket import create_connection
# Get the Websocket connection to Salt
ws = create_connection('wss://localhost:8000/all_events/d0ce6c1a37e99dcc0374392f272fe19c0090cca7')
# Get Salt's "real time" event stream.
ws.send('websocket client ready')
# Simple listener to print results of Salt's "real time" event stream.
# Look at https://pypi.python.org/pypi/websocket-client/ for more examples.
listening_to_events = True  # flip to False elsewhere to stop listening
while listening_to_events:
    print ws.recv()  # Salt's "real time" event data as serialized JSON.
# Terminates websocket connection and Salt's "real time" event stream on the server.
ws.close()
# Please refer to https://github.com/liris/websocket-client/issues/81 when using a self signed cert
The examples above show how to establish a websocket connection to Salt and activate real-time updates from Salt's event stream by signaling websocket client ready.
Exposes formatted "real-time" events from Salt's event bus on a websocket connection. It should be noted that "real-time" here means these events are made available to the server as soon as any salt related action (changes to minions, new jobs etc) happens. Clients are however assumed to be able to tolerate any network transport related latencies. Functionality provided by this endpoint is similar to the /events endpoint.
Formatted events parses the raw "real time" event stream and maintains a current view of minions and jobs. A change to the minions (such as addition, removal of keys, or connection drops) or jobs is processed and clients are updated.
Since we use salt's presence events to track minions, please enable presence_events and set a small value for the loop_interval in the salt master config file.
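A minimal sketch of those master config settings (the loop_interval value is illustrative):
presence_events: True
loop_interval: 5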
Exposes a GET method to return websocket connections. All requests should include an auth token. A way to obtain authentication tokens is shown below.
% curl -si localhost:8000/login \
-H "Accept: application/json" \
-d username='salt' \
-d password='salt' \
-d eauth='pam'
Which results in the response
{
"return": [{
"perms": [".*", "@runner", "@wheel"],
"start": 1400556492.277421,
"token": "d0ce6c1a37e99dcc0374392f272fe19c0090cca7",
"expire": 1400599692.277422,
"user": "salt",
"eauth": "pam"
}]
}
In this example the token returned is d0ce6c1a37e99dcc0374392f272fe19c0090cca7 and can be included in subsequent websocket requests (as part of the URL).
The event stream can be easily consumed via JavaScript:
// Note, you must be authenticated!
// Get the Websocket connection to Salt
var source = new WebSocket('wss://localhost:8000/formatted_events/d0ce6c1a37e99dcc0374392f272fe19c0090cca7');
// Get Salt's "real time" event stream.
source.onopen = function() { source.send('websocket client ready'); };
// Other handlers
source.onerror = function(e) { console.debug('error!', e); };
// e.data represents Salt's "real time" event data as serialized JSON.
source.onmessage = function(e) { console.debug(e.data); };
// Terminates websocket connection and Salt's "real time" event stream on the server.
source.close();
Or via Python, using the websocket-client module or the Tornado client, for example.
# Note, you must be authenticated!
from websocket import create_connection
# Get the Websocket connection to Salt
ws = create_connection('wss://localhost:8000/formatted_events/d0ce6c1a37e99dcc0374392f272fe19c0090cca7')
# Get Salt's "real time" event stream.
ws.send('websocket client ready')
# Simple listener to print results of Salt's "real time" event stream.
# Look at https://pypi.python.org/pypi/websocket-client/ for more examples.
listening_to_events = True  # flip to False elsewhere to stop listening
while listening_to_events:
    print ws.recv()  # Salt's "real time" event data as serialized JSON.
# Terminates websocket connection and Salt's "real time" event stream on the server.
ws.close()
# Please refer to https://github.com/liris/websocket-client/issues/81 when using a self signed cert
The examples above show how to establish a websocket connection to Salt and activate real-time updates from Salt's event stream by signaling websocket client ready.
Minion information is a dictionary keyed by each connected minion's id (mid); grains information for each minion is also included.
Minion information is sent in response to the following minion events:
manage.present periodically every loop_interval seconds
minion addition
minion removal
# Not all grains are shown
data: {
"minions": {
"minion1": {
"id": "minion1",
"grains": {
"kernel": "Darwin",
"domain": "local",
"zmqversion": "4.0.3",
"kernelrelease": "13.2.0"
}
}
}
}
Job information is also tracked and delivered. Job information is also a dictionary in which each job's information is keyed by salt's jid.
data: {
"jobs": {
"20140609153646699137": {
"tgt_type": "glob",
"jid": "20140609153646699137",
"tgt": "*",
"start_time": "2014-06-09T15:36:46.700315",
"state": "complete",
"fun": "test.ping",
"minions": {
"minion1": {
"return": true,
"retcode": 0,
"success": true
}
}
}
}
}
/minions (salt.netapi.rest_tornado.saltnado.MinionSaltAPIHandler)
/jobs (salt.netapi.rest_tornado.saltnado.JobsSaltAPIHandler)
This rest_wsgi module provides a no-frills REST interface for sending commands to the Salt master. There are no dependencies.
Extra care must be taken when deploying this module into production. Please read this documentation in its entirety.
All authentication is done through Salt's external auth system.
All requests must be sent to the root URL (/).
See also
The rest_cherrypy module is more full-featured, production-ready, and has builtin security features.
The rest_wsgi netapi module is a standard Python WSGI app. It can be deployed one of two ways.
This module may be run via any WSGI-compliant production server such as Apache with mod_wsgi or Nginx with FastCGI.
It is strongly recommended that this app be used with a server that supports HTTPS encryption since raw Salt authentication credentials must be sent with every request. Any apps that access Salt through this interface will need to manually manage authentication credentials (either username and password or a Salt token). Tread carefully.
If run directly via the salt-api daemon it uses the wsgiref.simple_server() that ships in the Python standard library. This is a single-threaded server that is intended for testing and development. This server does not use encryption; please note that raw Salt authentication credentials must be sent with every HTTP request.
Running this module via salt-api is not recommended!
In order to start this module via the salt-api daemon, the following must be put into the Salt master config:
rest_wsgi:
port: 8001
POST /
Example request for a basic test.ping:
% curl -sS -i \
-H 'Content-Type: application/json' \
-d '[{"eauth":"pam","username":"saltdev","password":"saltdev","client":"local","tgt":"*","fun":"test.ping"}]' localhost:8001
Example response:
HTTP/1.0 200 OK
Content-Length: 89
Content-Type: application/json
{"return": [{"ms--4": true, "ms--3": true, "ms--2": true, "ms--1": true, "ms--0": true}]}
Example request for an asynchronous test.ping:
% curl -sS -i \
-H 'Content-Type: application/json' \
-d '[{"eauth":"pam","username":"saltdev","password":"saltdev","client":"local_async","tgt":"*","fun":"test.ping"}]' localhost:8001
Example response:
HTTP/1.0 200 OK
Content-Length: 103
Content-Type: application/json
{"return": [{"jid": "20130412192112593739", "minions": ["ms--4", "ms--3", "ms--2", "ms--1", "ms--0"]}]}
Example request for looking up a job ID:
% curl -sS -i \
-H 'Content-Type: application/json' \
-d '[{"eauth":"pam","username":"saltdev","password":"saltdev","client":"runner","fun":"jobs.lookup_jid","jid":"20130412192112593739"}]' localhost:8001
Example response:
HTTP/1.0 200 OK
Content-Length: 89
Content-Type: application/json
{"return": [{"ms--4": true, "ms--3": true, "ms--2": true, "ms--1": true, "ms--0": true}]}
Form lowstate: A list of lowstate data appropriate for the client interface you are calling.
Status 200: success
Status 401: authentication required
Follow one of the links below for further information and examples:
compact | Display compact output data structure
highstate | Outputter for displaying results of state runs
json_out | Display return data in JSON format
key | Display salt-key output
nested | Recursively display nested data
newline_values_only | Display values only, separated by newlines
no_out | Display no output
no_return | Display output for minions that did not return
overstatestage | Display clean output of an overstate stage
pprint_out | Python pretty-print (pprint)
progress | Display return data as a progress bar
raw | Display raw output data structure
txt | Simple text outputter
virt_query | virt.query outputter
yaml_out | Display return data in YAML format
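An outputter can be selected on the command line with the --out flag; for example, to display return data in JSON format using the json_out outputter listed above:
salt '*' test.ping --out=json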
Salt 0.9.0 introduced the capability for Salt minions to publish commands. The intent of this feature is not for Salt minions to act as independent brokers with one another, but to allow Salt minions to pass commands to each other.
In Salt 0.10.0 the ability to execute runners from the master was added. This allows for the master to return collective data from runners back to the minions via the peer interface.
The peer interface is configured through two options in the master configuration file. For minions to send commands from the master the peer configuration is used. To allow for minions to execute runners from the master the peer_run configuration is used.
Since this presents a viable security risk by allowing minions access to the master publisher, the capability is turned off by default. The minions can be allowed access to the master publisher on a per minion basis based on regular expressions. Minions with specific ids can be allowed access to certain Salt modules and functions.
The configuration is done under the peer setting in the Salt master configuration file; here are a number of configuration possibilities.
The simplest approach is to enable all communication for all minions; this is only recommended for very secure environments.
peer:
.*:
- .*
This configuration will allow minions with IDs ending in example.com access to the test, ps, and pkg module functions.
peer:
.*example.com:
- test.*
- ps.*
- pkg.*
The configuration logic is simple: a regular expression is passed for matching minion ids, and then a list of expressions matching minion functions is associated with the named minion. For instance, this configuration will also allow minions ending with foo.org access to the publisher.
peer:
.*example.com:
- test.*
- ps.*
- pkg.*
.*foo.org:
- test.*
- ps.*
- pkg.*
Configuration to allow minions to execute runners from the master is done via the peer_run option on the master. The peer_run configuration follows the same logic as the peer option. The only difference is that access is granted to runner modules.
To open up access to all minions to all runners:
peer_run:
.*:
- .*
This configuration will allow minions with IDs ending in example.com access to the manage and jobs runner functions.
peer_run:
.*example.com:
- manage.*
- jobs.*
The publish module was created to manage peer communication. The publish module comes with a number of functions to execute peer communication in different ways. Currently there are three functions in the publish module. These examples will show how to test the peer system via the salt-call command.
To execute test.ping on all minions:
# salt-call publish.publish \* test.ping
To execute the manage.up runner:
# salt-call publish.runner manage.up
To match minions using other matchers, use expr_form:
# salt-call publish.publish 'webserv* and not G@os:Ubuntu' test.ping expr_form='compound'
Salt includes a number of built-in external pillars, listed at Full list of builtin pillar modules.
You may also wish to look at the standard pillar documentation, at Pillar Configuration
The source for the built-in Salt pillars can be found here: https://github.com/saltstack/salt/blob/develop/salt/pillar
cmd_json | Execute a command and read the output as JSON.
cmd_yaml | Execute a command and read the output as YAML.
cmd_yamlex | Execute a command and read the output as YAMLEX.
cobbler | A module to pull data from Cobbler via its API into the Pillar dictionary
django_orm | Generate Pillar data from Django models through the Django ORM
ec2_pillar | Retrieve EC2 instance data for minions.
etcd_pillar | Use etcd data as a Pillar source
file_tree | Recursively iterate over directories and add all files as Pillar data.
foreman | A module to pull data from Foreman via its API into the Pillar dictionary
git_pillar | Clone a remote git repository and use the filesystem as a Pillar source
hg_pillar | Use remote Mercurial repository as a Pillar source.
hiera | Use hiera data as a Pillar source
libvirt | Load up the libvirt keys into Pillar for a given minion if said keys have been generated using the libvirt key runner
mongo | Read Pillar data from a mongodb collection
mysql | Retrieve Pillar data by doing a MySQL query
pepa | Pepa
pillar_ldap | Use LDAP data as a Pillar source
puppet | Execute an unmodified puppet_node_classifier and read the output as YAML.
reclass_adapter | Use the "reclass" database as a Pillar source
redismod | Read pillar data from a Redis backend
s3 | Copy pillar data from a bucket in Amazon S3
svn_pillar | Clone a remote SVN repository and use the filesystem as a Pillar source
varstack_pillar | Use Varstack data as a Pillar source
virtkey | Accept a key from a hypervisor if the virt runner has already submitted an authorization request
The Salt state system operates by gathering information from common data types such as lists, dictionaries, and strings that would be familiar to any developer.
SLS files are translated from whatever data templating format they are written in back into Python data types to be consumed by Salt.
By default SLS files are rendered as Jinja templates and then parsed as YAML documents. But since the only thing the state system cares about is raw data, the SLS files can be any structured format that can be dreamed up.
Currently there is support for Jinja + YAML, Mako + YAML, Wempy + YAML, Jinja + json, Mako + json, and Wempy + json.
Renderers can be written to support any template type. This means that the Salt states could be managed by XML files, HTML files, Puppet files, or any format that can be translated into the Pythonic data structure used by the state system.
A default renderer is selected in the master configuration file by providing a value to the renderer key.
When evaluating an SLS, more than one renderer can be used.
When rendering SLS files, Salt checks for the presence of a Salt-specific shebang line. The shebang line directly calls the name of the renderer as it is specified within Salt. One of the most common reasons to use multiple renderers is to use the Python or py renderer. Below, the first line is a shebang that references the py renderer.
#!py
def run():
'''
Install the python-mako package
'''
return {'include': ['python'],
'python-mako': {'pkg': ['installed']}}
A renderer can be composed from other renderers by connecting them in a series of pipes (|). In fact, the default Jinja + YAML renderer is implemented by connecting a YAML renderer to a Jinja renderer. Such a renderer configuration is specified as: jinja | yaml.
Other renderer combinations are possible:
yaml - i.e., just YAML, no templating.
mako | yaml - pass the input to the mako renderer, whose output is then fed into the yaml renderer.
jinja | mako | yaml - this one allows you to use both jinja and mako templating syntax in the input and then parse the final rendered output as YAML.
The following is a contrived example SLS file using the jinja | mako | yaml renderer:
#!jinja|mako|yaml
An_Example:
cmd.run:
- name: |
echo "Using Salt ${grains['saltversion']}" \
"from path {{grains['saltpath']}}."
- cwd: /
<%doc> ${...} is Mako's notation, and so is this comment. </%doc>
{# Similarly, {{...}} is Jinja's notation, and so is this comment. #}
For backward compatibility, jinja | yaml can also be written as yaml_jinja, and similarly, the yaml_mako, yaml_wempy, json_jinja, json_mako, and json_wempy renderers are all supported.
Keep in mind that not all renderers can be used alone or with any other renderers. For example, the template renderers shouldn't be used alone as their outputs are just strings, which still need to be parsed by another renderer to turn them into highstate data structures.
For example, it doesn't make sense to specify yaml | jinja because the output of the YAML renderer is a highstate data structure (a dict in Python), which cannot be used as the input to a template renderer. Therefore, when combining renderers, you should know what each renderer accepts as input and what it returns as output.
A custom renderer must be a Python module placed in the renderers directory, and the module must implement the render function.
The render function will be passed the path of the SLS file as an argument. The purpose of the render function is to parse the passed file and to return the Python data structure derived from the file.
Custom renderers must be placed in a _renderers directory within the file_roots specified by the master config file.
Any custom renderers which have been synced to a minion that are named the same as one of Salt's default set of renderers will take the place of the default renderer with the same name.
The best place to find examples of renderers is in the Salt source code.
Documentation for renderers included with Salt can be found here:
https://github.com/saltstack/salt/blob/develop/salt/renderers
Here is a simple YAML renderer example:
import yaml
def render(yaml_data, env='', sls='', **kws):
if not isinstance(yaml_data, basestring):
yaml_data = yaml_data.read()
data = yaml.load(yaml_data)
return data if data else {}
cheetah | Cheetah Renderer for Salt
genshi | Genshi Renderer for Salt
gpg | Renderer that will decrypt GPG ciphers
hjson | Hjson Renderer for Salt
jinja | Jinja loading utils to enable a more powerful backend for jinja templates
json | JSON Renderer for Salt
mako | Mako Renderer for Salt
msgpack |
py | Pure python state renderer
pydsl | A Python-based DSL
pyobjects | Python renderer that includes a Pythonic Object based interface
stateconf | A flexible renderer that takes a templating engine and a data format
wempy |
yaml | YAML Renderer for Salt
yamlex |
By default the return values of the commands sent to the Salt minions are returned to the Salt master; however, anything at all can be done with the results data.
By using a Salt returner, results data can be redirected to external data-stores for analysis and archival.
Returners pull their configuration values from the Salt minions. Returners are only configured once, which is generally at load time.
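As a sketch of what such minion-side configuration can look like (these option names follow the redis returner; check the documentation of the returner you use):
redis.db: '0'
redis.host: redis-serv.example.com
redis.port: 6379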
The returner interface allows the return data to be sent to any system that can receive data. This means that return data can be sent to a Redis server, a MongoDB server, a MySQL server, or any system.
See also
All Salt commands will return the command data back to the master. Specifying returners will ensure that the data is also sent to the specified returner interfaces.
Specifying what returners to use is done when the command is invoked:
salt '*' test.ping --return redis_return
This command will ensure that the redis_return returner is used.
It is also possible to specify multiple returners:
salt '*' test.ping --return mongo_return,redis_return,cassandra_return
In this scenario all three returners will be called and the data from the test.ping command will be sent out to the three named returners.
A returner is a Python module containing at minimum a returner function. Other optional functions can be included to add support for Master Job Cache, External Job Cache, and Event Returners.
returner
The returner function must accept a single argument. The argument contains return data from the called minion function. If the minion function test.ping is called, the value of the argument will be a dictionary. Run the following command from a Salt master to get a sample of the dictionary:
salt-call --local --metadata test.ping --out=pprint
import redis
import json
def returner(ret):
'''
Return information to a redis server
'''
# Get a redis connection
serv = redis.Redis(
host='redis-serv.example.com',
port=6379,
db='0')
serv.sadd("%(id)s:jobs" % ret, ret['jid'])
serv.set("%(jid)s:%(id)s" % ret, json.dumps(ret['return']))
serv.sadd('jobs', ret['jid'])
serv.sadd(ret['jid'], ret['id'])
The above example of a returner, set to send the data to a Redis server, serializes the data as JSON and sets it in redis.
Master Job Cache, External Job Cache, and Event Returners
Salt's Master Job Cache allows returners to be used as a pluggable replacement for the Default Job Cache. In order to do so, a returner must implement the following functions:
Note
The code samples contained in this section were taken from the cassandra_cql returner.
prep_jid
Ensures that job ids (jid) don't collide, unless passed_jid is provided. nocache is an optional boolean that indicates if return data should be cached. passed_jid is a caller-provided jid which should be returned unconditionally.
def prep_jid(nocache, passed_jid=None): # pylint: disable=unused-argument
'''
Do any work necessary to prepare a JID, including sending a custom id
'''
return passed_jid if passed_jid is not None else salt.utils.jid.gen_jid()
save_load
The jid is generated by prep_jid and should be considered a unique identifier for the job. The jid, for example, could be used as the primary/unique key in a database. The load is what is returned to a Salt master by a minion. The following code example stores the load as a JSON string in the salt.jids table.
def save_load(jid, load):
'''
Save the load to the specified jid id
'''
query = '''INSERT INTO salt.jids (
jid, load
) VALUES (
'{0}', '{1}'
);'''.format(jid, json.dumps(load))
# cassandra_cql.cql_query may raise a CommandExecutionError
try:
__salt__['cassandra_cql.cql_query'](query)
except CommandExecutionError:
log.critical('Could not save load in jids table.')
raise
except Exception as e:
log.critical('''Unexpected error while inserting into
jids: {0}'''.format(str(e)))
raise
get_load
Returns the load data stored by save_load for a given jid, or an empty dictionary when not found.
def get_load(jid):
'''
Return the load data that marks a specified jid
'''
query = '''SELECT load FROM salt.jids WHERE jid = '{0}';'''.format(jid)
ret = {}
# cassandra_cql.cql_query may raise a CommandExecutionError
try:
data = __salt__['cassandra_cql.cql_query'](query)
if data:
load = data[0].get('load')
if load:
ret = json.loads(load)
except CommandExecutionError:
log.critical('Could not get load from jids table.')
raise
except Exception as e:
log.critical('''Unexpected error while getting load from
jids: {0}'''.format(str(e)))
raise
return ret
Salt's External Job Cache extends the Master Job Cache. External Job Cache support requires the following functions in addition to what is required for Master Job Cache support:
get_jid
Sample:
{
"local": {
"master_minion": {
"fun_args": [],
"jid": "20150330121011408195",
"return": true,
"retcode": 0,
"success": true,
"cmd": "_return",
"_stamp": "2015-03-30T12:10:12.708663",
"fun": "test.ping",
"id": "master_minion"
}
}
}
get_fun
Sample:
{
"local": {
"minion1": "test.ping",
"minion3": "test.ping",
"minion2": "test.ping"
}
}
get_jids
Sample:
{
"local": [
"20150330121011408195",
"20150330195922139916"
]
}
get_minions
Sample:
{
"local": [
"minion3",
"minion2",
"minion1",
"master_minion"
]
}
Please refer to one or more of the existing returners (e.g. mysql, cassandra_cql) if you need further clarification.
An event_return function must be added to the returner module to allow events to be logged from a master via the returner. A list of events is passed to the function by the master.
The following example was taken from the MySQL returner. In this example, each event is inserted into the salt_events table keyed on the event tag. The tag contains the jid and therefore is guaranteed to be unique.
def event_return(events):
'''
Return event to mysql server
Requires that configuration be enabled via 'event_return'
option in master config.
'''
with _get_serv(events, commit=True) as cur:
for event in events:
tag = event.get('tag', '')
data = event.get('data', '')
sql = '''INSERT INTO `salt_events` (`tag`, `data`, `master_id` )
VALUES (%s, %s, %s)'''
cur.execute(sql, (tag, json.dumps(data), __opts__['id']))
Place custom returners in a _returners directory within the file_roots specified by the master config file.
Any custom returners which have been synced to a minion that are named the same as one of Salt's default set of returners will take the place of the default returner with the same name.
Note that a returner's default name is its filename (i.e., foo.py becomes returner foo), but that its name can be overridden by using a __virtual__ function. A good example of this can be found in the redis returner, which is named redis_return.py but is loaded as simply redis:
try:
import redis
HAS_REDIS = True
except ImportError:
HAS_REDIS = False
__virtualname__ = 'redis'
def __virtual__():
if not HAS_REDIS:
return False
return __virtualname__
The returner, prep_jid, save_load, get_load, and event_return functions can be tested by configuring the Master Job Cache and Event Returners in the master config file and submitting a job to test.ping each minion from the master.
Once you have successfully exercised the Master Job Cache functions, test the External Job Cache functions using the ret execution module.
salt-call ret.get_jids cassandra_cql --output=json
salt-call ret.get_fun cassandra_cql test.ping --output=json
salt-call ret.get_minions cassandra_cql --output=json
salt-call ret.get_jid cassandra_cql 20150330121011408195 --output=json
For maximum visibility into the history of events across a Salt infrastructure, all events seen by a salt master may be logged to a returner.
To enable event logging, set the event_return configuration option in the master config to the returner which should be designated as the handler for event returns.
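For example, a minimal sketch using the mysql returner described above:
event_return: mysql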
Note
Not all returners support event returns. Verify that a returner has an event_return() function before using it.
Note
On larger installations, many hundreds of events may be generated on a busy master every second. Be certain to closely monitor the storage of a given returner as Salt can easily overwhelm an underpowered server with thousands of returns.
carbon_return | Take data from salt and "return" it into a carbon receiver
cassandra_cql_return | Return data to a cassandra server
cassandra_return | Return data to a Cassandra ColumnFamily
couchbase_return | Simple returner for Couchbase.
couchdb_return | Simple returner for CouchDB.
django_return | A returner that will inform a Django system that returns are available using Django's signal system.
elasticsearch_return | Return data to an elasticsearch server for indexing.
etcd_return | Return data to an etcd server or cluster
hipchat_return | Return salt data via hipchat.
kafka_return | Return data to a Kafka topic
local | The local returner is used to test the returner interface; it just prints the return data.
local_cache | Return data to local job cache
memcache_return | Return data to a memcache server
mongo_future_return | Return data to a mongodb server
mongo_return | Return data to a mongodb server
multi_returner | Read/Write multiple returners
mysql | Return data to a mysql server
nagios_return | Return salt data to Nagios
odbc | Return data to an ODBC compliant server.
postgres | Return data to a postgresql server
postgres_local_cache | Use a postgresql server for the master job cache.
pushover_returner | Return salt data via pushover (http://www.pushover.net)
redis_return | Return data to a redis server
sentry_return | Salt returner that reports execution results back to sentry.
slack_returner | Return salt data via slack
sms_return | Return data by SMS.
smtp_return | Return salt data via email
sqlite3_return | Insert minion return data into a sqlite3 database
syslog_return | Return data to the host operating system's syslog facility
xmpp_return | Return salt data via xmpp
ansible | Read in an Ansible inventory file or script
cache | Use the minion cache on the master to derive IP addresses based on minion ID.
cloud | Use the cloud cache on the master to derive IPv4 addresses based on minion ID.
clustershell | This roster resolves hostname in a pdsh/clustershell style.
flat | Read in the roster from a flat file using the renderer system
scan | Scan a netmask or ipaddr for open ssh ports
Salt runners are convenience applications executed with the salt-run command.
Salt runners work similarly to Salt execution modules; however, they execute on the Salt master itself instead of on remote Salt minions.
A Salt runner can be a simple client call or a complex application.
See also
cache | Return cached data from minions
cloud | The Salt Cloud Runner
doc | A runner module to collect and display the inline documentation from the various module types
drac | Manage Dell DRAC from the Master
error | Error generator to enable integration testing of salt runner error handling
f5 | Runner to provide F5 Load Balancer functionality
fileserver | Directly manage the Salt fileserver plugins
git_pillar | Directly manage the salt git_pillar plugin
http | Module for making various web calls.
jobs | A convenience system to manage jobs, both active and already run
launchd | Manage launchd plist files
lxc | Control Linux Containers via Salt
manage | General management functions for salt, tools like seeing what hosts are up
mine | A runner to access data from the salt mine
nacl | This runner helps create encrypted passwords that can be included in pillars.
network | Network tools to run from the Master
pagerduty | Runner Module for Firing Events via PagerDuty
pillar | Functions to interact with the pillar compiler on the master
pkg | Package helper functions using salt.modules.pkg
queue | General management and processing of queues.
sdb | Runner for setting and querying data via the sdb API on the master
search | Runner frontend to search system
state | Execute overstate functions
survey | A general map/reduce style salt runner for aggregating results returned by several different minions.
test | This runner is used only for test purposes and serves no production purpose
thin | The thin runner is used to manage the salt thin systems.
virt | Control virtual machines via Salt
winrepo | Runner to manage Windows software repo
A Salt runner is written in a similar manner to a Salt execution module. Both are Python modules which contain functions and each public function is a runner which may be executed via the salt-run command.
For example, if a Python module named test.py is created in the runners directory and contains a function called foo, the test runner could be invoked with the following command:
# salt-run test.foo
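A minimal sketch of such a module (the name test.py and the function foo come from the example above; the return value is illustrative):
# test.py -- placed in the runners directory
def foo():
    '''
    A trivial runner function, invoked as: salt-run test.foo
    '''
    return 'bar'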
Runners have several options for controlling output.
Any print statement in a runner is automatically also fired onto the master event bus. For example:
def a_runner(outputter=None, display_progress=False):
print('Hello world')
...
The above would result in an event fired as follows:
Event fired at Tue Jan 13 15:26:45 2015
*************************
Tag: salt/run/20150113152644070246/print
Data:
{'_stamp': '2015-01-13T15:26:45.078707',
'data': 'Hello world',
'outputter': 'pprint'}
A runner may also send a progress event, which is displayed to the user during runner execution and is also passed across the event bus if the display_progress argument to a runner is set to True.
A custom runner may send its own progress event by using the __jid_event__.fire_event() method as shown here:
if display_progress:
    __jid_event__.fire_event({'message': 'A progress message'}, 'progress')
The above would produce output on the console reading: A progress message as well as an event on the event bus similar to:
Event fired at Tue Jan 13 15:21:20 2015
*************************
Tag: salt/run/20150113152118341421/progress
Data:
{'_stamp': '2015-01-13T15:21:20.390053',
'message': "A progress message"}
A runner could use the same approach to send an event with a customized tag onto the event bus by replacing the second argument (progress) with whatever tag is desired. However, this will not be shown on the command-line and will only be fired onto the event bus.
A runner may be fired asynchronously, which will immediately return control. In this case, no output will be displayed to the user if salt-run is being used from the command-line. If used programmatically, no results will be returned. If results are desired, they must be gathered either by firing events on the bus from the runner and then watching for them or by some other means.
Note
When running a runner in asynchronous mode, the --progress flag will not deliver output to the salt-run CLI. However, progress events will still be fired on the bus.
In synchronous mode, which is the default, control will not be returned until the runner has finished executing.
To add custom runners, put them in a directory and add it to runner_dirs in the master configuration file, as sketched below.
Examples of runners can be found in the Salt distribution:
https://github.com/saltstack/salt/blob/develop/salt/runners
A simple runner that returns a well-formatted list of the minions that are responding to Salt calls could look like this:
# Import salt modules
import salt.client
def up():
'''
Print a list of all of the minions that are up
'''
client = salt.client.LocalClient(__opts__['conf_file'])
minions = client.cmd('*', 'test.ping', timeout=1)
for minion in sorted(minions):
print minion
Salt offers an optional interface to manage the configuration or "state" of the Salt minions. This interface is a fully capable mechanism used to enforce the state of systems from a central manager.
New in version 2014.7.0.
The mod_aggregate system was added in the 2014.7.0 release of Salt and allows for runtime modification of the executing state data. Simply put, it allows for the data used by Salt's state system to be changed on the fly at runtime, kind of like a configuration management JIT compiler or a runtime import system. All in all, it makes Salt much more dynamic.
The best example is the pkg state. One of the major requests in Salt has long been adding the ability to install all packages defined at the same time. The mod_aggregate system makes this a reality. While executing Salt's state system, when a pkg state is reached the mod_aggregate function in the state module is called. For pkg this function scans all of the other states that are slated to run, and picks up the references to name and pkgs, then adds them to pkgs in the first state. The result is a single call to yum, apt-get, pacman, etc. as part of the first package install.
Note
Since this option changes the basic behavior of the state runtime, after it is enabled states should be executed using test=True to ensure that the desired behavior is preserved.
The first way to enable aggregation is with a configuration option in either the master or minion configuration files. Salt will invoke mod_aggregate the first time it encounters a state module that has aggregate support.
If this option is set in the master config it will apply to all state runs on all minions; if set in the minion config it will only apply to said minion.
Enable for all states:
state_aggregate: True
Enable for only specific state modules:
state_aggregate:
- pkg
The second way to enable aggregation is with the state-level aggregate keyword. In this configuration, Salt will invoke the mod_aggregate function the first time it encounters this keyword. Any additional occurrences of the keyword will be ignored as the aggregation has already taken place.
The following example will trigger mod_aggregate when the lamp_stack state is processed, resulting in a single call to the underlying package manager.
lamp_stack:
pkg.installed:
- pkgs:
- php
- mysql-client
- aggregate: True
memcached:
pkg.installed:
- name: memcached
Adding a mod_aggregate routine to an existing state module only requires adding an additional function to the state module called mod_aggregate.
The mod_aggregate function just needs to accept three parameters and return the low data to use. Since mod_aggregate is working at the state runtime level, it does need to manipulate low data.
The three parameters are low, chunks, and running. The low option is the low data for the state execution which is about to be called. The chunks is the list of all of the low data dictionaries which are being executed by the runtime, and the running dictionary is the return data from all of the state executions which have already been executed.
This example, simplified from the pkg state, shows how to create mod_aggregate functions:
# Import salt libs (salt.utils provides gen_state_tag, used below)
import salt.utils

def mod_aggregate(low, chunks, running):
'''
The mod_aggregate function which looks up all packages in the available
low chunks and merges them into a single pkgs ref in the present low data
'''
pkgs = []
# What functions should we aggregate?
agg_enabled = [
'installed',
'latest',
'removed',
'purged',
]
# The `low` data is just a dict with the state, function (fun) and
# arguments passed in from the sls
if low.get('fun') not in agg_enabled:
return low
# Now look into what other things are set to execute
for chunk in chunks:
# The state runtime uses "tags" to track completed jobs; the tag
# format uses the _|- delimiter
tag = salt.utils.gen_state_tag(chunk)
if tag in running:
# Already ran the pkg state, skip aggregation
continue
if chunk.get('state') == 'pkg':
if '__agg__' in chunk:
continue
# Check for the same function
if chunk.get('fun') != low.get('fun'):
continue
# Pull out the pkg names!
if 'pkgs' in chunk:
pkgs.extend(chunk['pkgs'])
chunk['__agg__'] = True
elif 'name' in chunk:
pkgs.append(chunk['name'])
chunk['__agg__'] = True
if pkgs:
if 'pkgs' in low:
low['pkgs'].extend(pkgs)
else:
low['pkgs'] = pkgs
# The low has been modified and needs to be returned to the state
# runtime for execution
return low
In 0.10.2 a new feature was added for backing up files that are replaced by the file.managed and file.recurse states. The new feature is called the backup mode. Setting the backup mode is easy, but it can be set in a number of places.
The backup_mode can be set in the minion config file:
backup_mode: minion
Or it can be set for each file:
/etc/ssh/sshd_config:
file.managed:
- source: salt://ssh/sshd_config
- backup: minion
The files will be saved in the minion cachedir under the directory named file_backup. The files will be in the location relative to where they were under the root filesystem and be appended with a timestamp. This should make them easy to browse.
Starting with version 0.17.0, it will be possible to list, restore, and delete previously-created backups.
The backups for a given file can be listed using file.list_backups:
# salt foo.bar.com file.list_backups /tmp/foo.txt
foo.bar.com:
----------
0:
----------
Backup Time:
Sat Jul 27 2013 17:48:41.738027
Location:
/var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:41_738027_2013
Size:
13
1:
----------
Backup Time:
Sat Jul 27 2013 17:48:28.369804
Location:
/var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:28_369804_2013
Size:
35
Restoring is easy using file.restore_backup; just pass the path and the numeric id found with file.list_backups:
# salt foo.bar.com file.restore_backup /tmp/foo.txt 1
foo.bar.com:
----------
comment:
Successfully restored /var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:28_369804_2013 to /tmp/foo.txt
result:
True
The existing file will be backed up, just in case, as can be seen if file.list_backups is run again:
# salt foo.bar.com file.list_backups /tmp/foo.txt
foo.bar.com:
----------
0:
----------
Backup Time:
Sat Jul 27 2013 18:00:19.822550
Location:
/var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_18:00:19_822550_2013
Size:
53
1:
----------
Backup Time:
Sat Jul 27 2013 17:48:41.738027
Location:
/var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:41_738027_2013
Size:
13
2:
----------
Backup Time:
Sat Jul 27 2013 17:48:28.369804
Location:
/var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:28_369804_2013
Size:
35
Note
Since no state is being run, restoring a file will not trigger any watches for the file. So, if you are restoring a config file for a service, it will likely still be necessary to run a service.restart.
Deleting backups can be done using file.delete_backup:
# salt foo.bar.com file.delete_backup /tmp/foo.txt 0
foo.bar.com:
----------
comment:
Successfully removed /var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_18:00:19_822550_2013
result:
True
Note
This tutorial is an intermediate level tutorial. Some basic understanding of the state system and writing Salt Formulas is assumed.
Salt's state system is built to deliver all of the power of configuration management systems without sacrificing simplicity. This tutorial is made to help users understand in detail just how the order is defined for state executions in Salt.
This tutorial is written to represent the behavior of Salt as of version 0.17.0.
To understand ordering in depth some very basic knowledge about the state compiler is very helpful. No need to worry though, this is very high level!
When defining Salt Formulas in YAML, the data that is being represented is referred to by the compiler as High Data. When the data is initially loaded into the compiler it is a single large Python dictionary. This dictionary can be viewed raw by running:
salt '*' state.show_highstate
This "High Data" structure is then compiled down to "Low Data". The Low Data is what is matched up to create individual executions in Salt's configuration management system. The low data is an ordered list of single state calls to execute. Once the low data is compiled the evaluation order can be seen.
The low data can be viewed by running:
salt '*' state.show_lowstate
Note
The state execution module contains MANY functions for evaluating the state system and is well worth a read! These routines can be very useful when debugging states or to help deepen one's understanding of Salt's state system.
As an example, a state written thusly:
apache:
pkg.installed:
- name: httpd
service.running:
- name: httpd
- watch:
- file: apache_conf
- pkg: apache
apache_conf:
file.managed:
- name: /etc/httpd/conf.d/httpd.conf
- source: salt://apache/httpd.conf
Will have High Data which, represented in JSON, looks like this:
{
"apache": {
"pkg": [
{
"name": "httpd"
},
"installed",
{
"order": 10000
}
],
"service": [
{
"name": "httpd"
},
{
"watch": [
{
"file": "apache_conf"
},
{
"pkg": "apache"
}
]
},
"running",
{
"order": 10001
}
],
"__sls__": "blah",
"__env__": "base"
},
"apache_conf": {
"file": [
{
"name": "/etc/httpd/conf.d/httpd.conf"
},
{
"source": "salt://apache/httpd.conf"
},
"managed",
{
"order": 10002
}
],
"__sls__": "blah",
"__env__": "base"
}
}
The subsequent Low Data will look like this:
[
{
"name": "httpd",
"state": "pkg",
"__id__": "apache",
"fun": "installed",
"__env__": "base",
"__sls__": "blah",
"order": 10000
},
{
"name": "httpd",
"watch": [
{
"file": "apache_conf"
},
{
"pkg": "apache"
}
],
"state": "service",
"__id__": "apache",
"fun": "running",
"__env__": "base",
"__sls__": "blah",
"order": 10001
},
{
"name": "/etc/httpd/conf.d/httpd.conf",
"source": "salt://apache/httpd.conf",
"state": "file",
"__id__": "apache_conf",
"fun": "managed",
"__env__": "base",
"__sls__": "blah",
"order": 10002
}
]
This tutorial discusses the Low Data evaluation and the state runtime.
Salt defines two ordering interfaces which are evaluated in the state runtime, and these orders are defined in a number of passes.
Note
The Definition Order system can be disabled by setting the option state_auto_order to False in the master configuration file.
The top level of ordering is the Definition Order. The Definition Order
is the order in which states are defined in salt formulas. This is very
straightforward on basic states which do not contain include
statements
or a top
file, as the states are just ordered from the top of the file,
but the include system starts to bring in some simple rules for how the
Definition Order is defined.
Looking back at the "Low Data" and "High Data" shown above, the order key has been transparently added to the data to enable the Definition Order.
Basically, if there is an include statement in a formula, then the formulas which are included will be run BEFORE the contents of the formula which is including them. Also, the include statement is a list, so they will be loaded in the order in which they are included.
In the following case:
foo.sls
include:
- bar
- baz
bar.sls
include:
- quo
baz.sls
include:
- qux
In the above case, if state.sls foo were called, then the formulas will be loaded in the following order:
1. quo
2. bar
3. qux
4. baz
5. foo
The Definition Order happens transparently in the background, but the ordering can be explicitly overridden using the order flag in states:
apache:
pkg.installed:
- name: httpd
- order: 1
This order flag will override the Definition Order, making it very simple to create states that are always executed first, last, or in specific stages. A great example is defining a number of package repositories that need to be set up before anything else, or final checks that need to be run at the end of a state run, by using order: last or order: -1.
When the order flag is explicitly set the Definition Order system will omit setting an order for that state and directly use the order flag defined.
Salt states were written to ALWAYS execute in the same order. Before the introduction of Definition Order in version 0.17.0, everything was ordered lexicographically according to the name of the state, then the function, then the id.
This is the way Salt has always ensured that states run in the same order regardless of where they are deployed; the addition of the Definition Order method merely makes this finite ordering easier to follow.
The lexicographical ordering is still applied, but it only has any effect when two order statements collide. This means that if multiple states are assigned the same order number, they will fall back to lexicographical ordering to ensure that every execution still happens in a finite order.
Note
If running with state_auto_order: False, the order key is not set automatically, since the lexicographical order can be derived from other keys.
Salt states are fully declarative, in that they are written to declare the state in which a system should be. This means that components can require that other components have been set up successfully. Unlike the other ordering systems, the Requisite system in Salt is evaluated at runtime.
The requisite system is also built to ensure that the ordering of execution never changes, but is always the same for a given set of states. This is accomplished by using a runtime that processes states in a completely predictable order instead of using an event loop based system like other declarative configuration management systems.
The requisite system is evaluated as the components are found, and the requisites are always evaluated in the same order. This explanation will be followed by an example, as the raw explanation may be a little dizzying at first as it creates a linear dependency evaluation sequence.
The "Low Data" is an ordered list or dictionaries, the state runtime evaluates each dictionary in the order in which they are arranged in the list. When evaluating a single dictionary it is checked for requisites, requisites are evaluated in order, require then watch then prereq.
Note
If using requisite_in statements, such as require_in and watch_in, these will be compiled down to require and watch statements before runtime evaluation.
Each requisite contains an ordered list of requisites; these requisites are looked up in the list of dictionaries and then executed. Once all requisites have been evaluated and executed, the requiring state can safely be run (or not run, if its requisites have not been met).
This means that requisites are always evaluated in the same order, again ensuring that one of the core design principles of Salt's state system, that execution is always finite, remains intact.
Given the above "Low Data", the states will be evaluated in the following order:
1. The apache pkg state: it is the first chunk in the low data and has no requisites, so it is executed first.
2. The apache_conf file state: when the apache service chunk is evaluated, its watch requisite is processed in order; the file: apache_conf reference is looked up and executed, while the pkg: apache reference has already run.
3. The apache service state: with its requisites met, it now executes.
When the runtime finally reaches the apache_conf chunk itself at the end of the low data, it is skipped, since it has already been executed.
The best practice in Salt is to choose a method and stick with it, official
states are written using requisites for all associations since requisites
create clean, traceable dependency trails and make for the most portable
formulas. To accomplish something similar to how classical imperative systems function, all requisites can be omitted and the failhard option set to True in the master configuration; this will stop all state runs at the first instance of a failure.
In the end, using requisites creates very tight and fine-grained states; not using requisites results in full sequence runs that, while slightly easier to write, give much less control over the executions.
Sometimes a state defined in one SLS file will need to be modified from a separate SLS file. A good example of this is when an argument needs to be overwritten or when a service needs to watch an additional state.
The standard way to extend is via the extend declaration. The extend
declaration is a top level declaration like include
and encapsulates ID
declaration data included from other SLS files. A standard extend looks like
this:
include:
- http
- ssh
extend:
apache:
file:
- name: /etc/httpd/conf/httpd.conf
- source: salt://http/httpd2.conf
ssh-server:
service:
- watch:
- file: /etc/ssh/banner
/etc/ssh/banner:
file.managed:
- source: salt://ssh/banner
A few critical things happen here: first, the SLS files that are going to be extended are included; then the extend declaration is defined. Under the extend declaration two IDs are extended: the apache ID's file state is overwritten with a new name and source, and the ssh-server service is extended to watch the banner file in addition to anything it is already watching.
This means that extend can only be called once in an sls; if it is used twice then only one of the extend blocks will be read. So this is WRONG:
include:
- http
- ssh
extend:
apache:
file:
- name: /etc/httpd/conf/httpd.conf
- source: salt://http/httpd2.conf
# Second extend will overwrite the first!! Only make one
extend:
ssh-server:
service:
- watch:
- file: /etc/ssh/banner
Since one of the most common things to do when extending another SLS is to add states for a service to watch, or anything for a watcher to watch, the requisite_in statement was added in 0.9.8 to make extending the watch and require lists easier. The ssh-server extend statement above could be more cleanly defined like so:
include:
- ssh
/etc/ssh/banner:
file.managed:
- source: salt://ssh/banner
- watch_in:
- service: ssh-server
There are a few rules to remember when extending states:
1. Always include the SLS being extended with an include declaration.
2. Requisite lists (such as watch and require) are appended to; everything else is overwritten.
3. extend is a top level declaration and, as shown above, can only be declared once in a single SLS.
4. Many IDs can be extended under a single extend declaration.
Normally, when a state fails, Salt continues to execute the remainder of the defined states and will only refuse to execute states that require the failed state.
But the situation may exist where you would want all state execution to stop if a single state execution fails. The capability to do this is called failing hard.
A single state can have a failhard set, this means that if this individual state fails that all state execution will immediately stop. This is a great thing to do if there is a state that sets up a critical config file and setting a require for each state that reads the config would be cumbersome. A good example of this would be setting up a package manager early on:
/etc/yum.repos.d/company.repo:
file.managed:
- source: salt://company/yumrepo.conf
- user: root
- group: root
- mode: 644
- order: 1
- failhard: True
In this situation, the yum repo is going to be configured before other states, and if it fails to lay down the config file, then no other states will be executed.
It may be desired to have failhard applied to every state that is executed. If this is the case, then failhard can be set in the master configuration file. Setting failhard in the master configuration file will result in failing hard when any minion gathering states from the master has a state fail.
This is NOT the default behavior; normally Salt will only fail states that require a failed state.
Using the global failhard is generally not recommended, since it can result in states not being executed or even checked. It can also be confusing to see states fail hard if an admin is not actively aware that failhard has been set.
To use the global failhard set failhard: True in the master configuration file.
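For example, in the master configuration file:
failhard: True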
State tree¶
A state tree is a collection of SLS files that live under the directory specified in file_roots. A state tree can be organized into SLS modules.
Top file¶
The main state file that instructs minions what environment and modules to use during state execution.
Configurable via state_top.
Include declaration¶
Defines a list of Module reference strings to include in this SLS.
Occurs only in the top level of the highstate structure.
Example:
include:
- edit.vim
- http.server
Module reference¶
The name of a SLS module defined by a separate SLS file and residing on the Salt Master. A module named edit.vim is a reference to the SLS file salt://edit/vim.sls.
ID declaration¶
Defines an individual highstate component. Always references a value of a dictionary containing keys referencing State declarations and Requisite declarations. Can be overridden by a Name declaration or a Names declaration.
Occurs on the top level or under the Extend declaration.
Must be unique across entire state tree. If the same ID declaration is used twice, only the first one matched will be used. All subsequent ID declarations with the same name will be ignored.
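As an illustrative sketch (the files, ID, and package names here are hypothetical), if two SLS files in the same highstate both use the ID common_editor, and one.sls is processed first, the declaration in two.sls is silently ignored:
one.sls
common_editor:
  pkg.installed:
    - name: vim
two.sls
common_editor:
  pkg.installed:
    - name: emacs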
Note
Naming gotchas
In Salt versions earlier than 0.9.7, ID declarations containing dots would result in unpredictable highstate output.
Extend declaration¶
Extends a Name declaration from an included SLS module. The keys of the extend declaration always refer to existing ID declarations which have been defined in included SLS modules.
Occurs only in the top level and defines a dictionary.
States cannot be extended more than once in a single state run.
Extend declarations are useful for adding-to or overriding parts of a State declaration that is defined in another SLS file. In the following contrived example, the shown mywebsite.sls file is include-ing and extend-ing the apache.sls module in order to add a watch declaration that will restart Apache whenever the Apache configuration file, mywebsite, changes.
include:
- apache
extend:
apache:
service:
- watch:
- file: mywebsite
mywebsite:
file.managed:
- name: /var/www/mysite
See also
watch_in and require_in
Sometimes it is more convenient to use the watch_in or require_in syntax instead of extending another SLS file.
State declaration¶
A list which contains one string defining the Function declaration and any number of Function arg declaration dictionaries.
Can, optionally, contain a number of additional components, like the name override components (name and names). Can also contain requisite declarations.
Occurs under an ID declaration.
Requisite declaration¶
A list containing requisite references.
Used to build the action dependency tree. While Salt states are made to execute in a deterministic order, this order is managed by requiring and watching other Salt states.
Occurs as a list component under a State declaration or as a key under an ID declaration.
Requisite reference¶
A single key dictionary. The key is the name of the referenced State declaration and the value is the ID of the referenced ID declaration.
Occurs as a single index in a Requisite declaration list.
Function declaration¶
The name of the function to call within the state. A state declaration can contain only a single function declaration.
For example, the following state declaration calls the installed
function in the pkg
state module:
httpd:
pkg.installed: []
The function can be declared inline with the state as a shortcut. The actual data structure is compiled to this form:
httpd:
pkg:
- installed
Where the function is a string in the body of the state declaration. Technically, when the function is declared in dot notation, the compiler converts it to a string in the state declaration list. Note that using the first example more than once in an ID declaration is invalid YAML.
INVALID:
httpd:
pkg.installed
service.running
When passing a function without arguments and another state declaration within a single ID declaration, the long or "standard" format needs to be used, since otherwise it does not represent a valid data structure.
VALID:
httpd:
pkg.installed: []
service.running: []
Occurs as the only index in the State declaration list.
Function arg declaration¶
A single key dictionary referencing a Python type which is to be passed to the named Function declaration as a parameter. The type must be the data type expected by the function.
Occurs under a Function declaration.
For example, in the following state declaration user, group, and mode are passed as arguments to the managed function in the file state module:
/etc/http/conf/http.conf:
file.managed:
- user: root
- group: root
- mode: 644
Name declaration¶
Overrides the name argument of a State declaration. If name is not specified, the ID declaration satisfies the name argument.
The name is always a single key dictionary referencing a string.
Overriding name is useful for a variety of scenarios.
For example, avoiding clashing ID declarations. The following two state declarations cannot both have /etc/motd as the ID declaration:
motd_perms:
file.managed:
- name: /etc/motd
- mode: 644
motd_quote:
file.append:
- name: /etc/motd
- text: "Of all smells, bread; of all tastes, salt."
Another common reason to override name is if the ID declaration is long and needs to be referenced in multiple places. In the example below it is much easier to specify mywebsite than to specify /etc/apache2/sites-available/mywebsite.com multiple times:
mywebsite:
file.managed:
- name: /etc/apache2/sites-available/mywebsite.com
- source: salt://mywebsite.com
a2ensite mywebsite.com:
cmd.wait:
- unless: test -L /etc/apache2/sites-enabled/mywebsite.com
- watch:
- file: mywebsite
apache2:
service.running:
- watch:
- file: mywebsite
Names declaration¶
Expands the contents of the containing State declaration into multiple state declarations, each with its own name.
For example, given the following state declaration:
python-pkgs:
pkg.installed:
- names:
- python-django
- python-crypto
- python-yaml
Once converted into the lowstate data structure the above state declaration will be expanded into the following three state declarations:
python-django:
pkg.installed
python-crypto:
pkg.installed
python-yaml:
pkg.installed
Other values can be overridden during the expansion by providing an additional dictionary level.
New in version 2014.7.0.
ius:
pkgrepo.managed:
- humanname: IUS Community Packages for Enterprise Linux 6 - $basearch
- gpgcheck: 1
- baseurl: http://mirror.rackspace.com/ius/stable/CentOS/6/$basearch
- gpgkey: http://dl.iuscommunity.org/pub/ius/IUS-COMMUNITY-GPG-KEY
- names:
- ius
- ius-devel:
- baseurl: http://mirror.rackspace.com/ius/development/CentOS/6/$basearch
Here is the layout in YAML using the names of the highdata structure components.
<Include Declaration>:
- <Module Reference>
- <Module Reference>
<Extend Declaration>:
<ID Declaration>:
[<overrides>]
# standard declaration
<ID Declaration>:
<State Module>:
- <Function>
- <Function Arg>
- <Function Arg>
- <Function Arg>
- <Name>: <name>
- <Requisite Declaration>:
- <Requisite Reference>
- <Requisite Reference>
# inline function and names
<ID Declaration>:
<State Module>.<Function>:
- <Function Arg>
- <Function Arg>
- <Function Arg>
- <Names>:
- <name>
- <name>
- <name>
- <Requisite Declaration>:
- <Requisite Reference>
- <Requisite Reference>
# multiple states for single id
<ID Declaration>:
<State Module>:
- <Function>
- <Function Arg>
- <Name>: <name>
- <Requisite Declaration>:
- <Requisite Reference>
<State Module>:
- <Function>
- <Function Arg>
- <Names>:
- <name>
- <name>
- <Requisite Declaration>:
- <Requisite Reference>
Salt sls files can include other sls files and exclude sls files that have been otherwise included. This allows for an sls file to easily extend or manipulate other sls files.
When other sls files are included, everything defined in the included sls file will be added to the state run. When including, define a list of sls formulas to include:
include:
- http
- libvirt
The include statement will include sls formulas from the same environment that the including sls formula is in. However, the environment can be explicitly defined in the configuration to override the running environment. Therefore, if an sls formula needs to be included from an external environment named "dev", the following syntax is used:
include:
- dev: http
NOTE: include does not simply inject the states where you place it in the sls file. If you need to guarantee order of execution, consider using requisites.
Do not use dots in SLS file names
The initial implementation of top.sls and the Include declaration followed the Python import model, where a slash is represented as a period. This means that an SLS file with a period in the name (besides the suffix period) cannot be referenced. For example, webserver_1.0.sls is not referenceable, because webserver_1.0 would refer to the directory/file webserver_1/0.sls.
In Salt 0.16.0 the capability to include sls formulas which are relative to the running sls formula was added. Simply precede the formula name with a .:
include:
- .virt
- .virt.hyper
The exclude statement, added in Salt 0.10.3, allows an sls to hard exclude another sls file or a specific id. The component is excluded after the high data has been compiled, so nothing should be able to override an exclude.
Since the exclude can remove an id or an sls, the type of component to exclude needs to be defined. An exclude statement that verifies that the running highstate does not contain the http sls and the /etc/vimrc id would look like this:
exclude:
- sls: http
- id: /etc/vimrc
The Salt state system is comprised of multiple layers. While using Salt does not require an understanding of the state layers, a deeper understanding of how Salt compiles and manages states can be very beneficial.
The lowest layer of functionality in the state system is the direct state
function call. State executions are executions of single state functions at
the core. These individual functions are defined in state modules and can
be called directly via the state.single
command.
salt '*' state.single pkg.installed name='vim'
The low chunk is the bottom of the Salt state compiler. This is a data representation of a single function call. The low chunk is sent to the state caller and used to execute a single state function.
A single low chunk can be executed manually via the state.low
command.
salt '*' state.low '{name: vim, state: pkg, fun: installed}'
The passed data reflects what the state execution system gets after compiling the data down from sls formulas.
The Low State layer is the list of low chunks "evaluated" in order. To see what the low state looks like for a highstate, run:
salt '*' state.show_lowstate
This will display the raw lowstate in the order which each low chunk will be evaluated. The order of evaluation is not necessarily the order of execution, since requisites are evaluated at runtime. Requisite execution and evaluation is finite; this means that the order of execution can be ascertained with 100% certainty based on the order of the low state.
High data is the data structure represented in YAML via SLS files. The High
data structure is created by merging the data components rendered inside sls
files (or other render systems). The High data can be easily viewed by
executing the state.show_highstate
or state.show_sls
functions. Since
this data is a somewhat complex data structure, it may be easier to read using
the json, yaml, or pprint outputters:
salt '*' state.show_highstate --out yaml
salt '*' state.show_sls edit.vim --out pprint
Above "High Data", the logical layers are no longer technically required to be executed, or to be executed in a hierarchy. This means that how the High data is generated is optional and very flexible. The SLS layer allows for many mechanisms to be used to render sls data from files or to use the fileserver backend to generate sls and file data from external systems.
The SLS layer can be called directly to execute individual sls formulas.
Note
SLS Formulas have historically been called "SLS files". This is because a single SLS was once contained in a single file. Now the term "SLS Formula" better expresses how a compartmentalized SLS can be expressed in a much more dynamic way by combining pillar and other sources, and how the SLS can be dynamically generated.
To call a single SLS formula named edit.vim, execute state.sls:
salt '*' state.sls edit.vim
Calling SLS directly logically assigns what states should be executed from the context of the calling minion. The Highstate layer is used to allow for full contextual assignment of what is executed where to be tied to groups of, or individual, minions entirely from the master. This means that the environment of a minion, and all associated execution data pertinent to said minion, can be assigned from the master without needing to execute or configure anything on the target minion. This also means that the minion can independently retrieve information about its complete configuration from the master.
To execute the High State, call state.highstate:
salt '*' state.highstate
The overstate layer expresses the highest functional layer of Salt's automated logic systems. The Overstate allows for stateful and functional orchestration of routines from the master. The overstate defines, in data, execution stages in which minions should execute states or functions, and in what order, using requisite logic.
The way in which configuration management systems are executed is a hotly debated topic in the configuration management world. Two major philosophies exist on the subject: either execute in an imperative fashion, where things are executed in the order in which they are defined, or in a declarative fashion, where dependencies need to be mapped between objects.
Imperative ordering is finite and generally considered easier to write, while declarative ordering is much more powerful and flexible, but generally considered more difficult to create.
Salt has been created to get the best of both worlds. States are evaluated in a finite order, which guarantees that states are always executed in the same order, and the states runtime is declarative, making Salt fully aware of dependencies via the requisite system.
Salt always executes states in a finite manner, meaning that they will always
execute in the same order regardless of the system that is executing them.
But in Salt 0.17.0, the state_auto_order
option was added. This option
makes states get evaluated in the order in which they are defined in sls
files.
The evaluation order makes it easy to know what order the states will be
executed in, but it is important to note that the requisite system will
override the ordering defined in the files, and the order
option described
below will also override the order in which states are defined in sls files.
If the classic ordering is preferred (lexicographic), then set
state_auto_order
to False
in the master configuration file.
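For example, in the master configuration file:
state_auto_order: False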
Note
This document represents behavior exhibited by Salt requisites as of version 0.9.7 of Salt.
Often when setting up states, any single action will require or depend on another action. Salt allows for the building of relationships between states with requisite statements. A requisite statement ensures that the named state is evaluated before the state requiring it. There are three types of requisite statements in Salt: require, watch, and prereq.
These requisite statements are applied to a specific state declaration:
httpd:
pkg.installed: []
file.managed:
- name: /etc/httpd/conf/httpd.conf
- source: salt://httpd/httpd.conf
- require:
- pkg: httpd
In this example, the require requisite is used to declare that the file /etc/httpd/conf/httpd.conf should only be set up if the pkg state executes successfully.
The requisite system works by finding the states that are required and executing them before the state that requires them. Then the required states can be evaluated to see if they have executed correctly.
Require statements can refer to any state defined in Salt. The basic examples are pkg, service, and file, but any used state can be referenced.
In addition to state declarations such as pkg, file, etc., sls type requisites are also recognized, and essentially allow 'chaining' of states. This provides a mechanism to ensure the proper sequence for complex state formulas, especially when the discrete states are split or grouped into separate sls files:
include:
- network
httpd:
pkg.installed: []
service.running:
- require:
- pkg: httpd
- sls: network
In this example, the httpd service running state will not be applied (i.e., the httpd service will not be started) unless both the httpd package is installed AND the network state is satisfied.
Note
Requisite matching
Requisites match on both the ID Declaration and the name
parameter.
Therefore, if using the pkgs
or sources
argument to install
a list of packages in a pkg state, it's important to note that it is
impossible to match an individual package in the list, since all packages
are installed as a single state.
The requisite statement is passed as a list, allowing for the easy addition of more requisites. Both requisite types can also be separately declared:
httpd:
pkg.installed: []
service.running:
- enable: True
- watch:
- file: /etc/httpd/conf/httpd.conf
- require:
- pkg: httpd
- user: httpd
- group: httpd
file.managed:
- name: /etc/httpd/conf/httpd.conf
- source: salt://httpd/httpd.conf
- require:
- pkg: httpd
user.present: []
group.present: []
In this example, the httpd service is only going to be started if the package, user, group, and file are executed successfully.
For detailed information on each of the individual requisites, please look here.
Before using the order option, remember that the majority of state ordering should be done with a Requisite declaration, and that a requisite declaration will override an order option; a state with an order option should therefore not require, or be required by, other states.
The order option is used by adding an order number to a state declaration with the option order:
vim:
pkg.installed:
- order: 1
Setting the order option to 1 ensures that the vim package will be installed in tandem with any other state declaration set to order 1.
Any state declared without an order option will be executed after all states with order options are executed.
But this construct can only handle ordering states from the beginning.
Certain circumstances will present a situation where it is desirable to send
a state to the end of the line. To do this, set the order to last
:
vim:
pkg.installed:
- order: last
New in version 0.9.8.
Salt predetermines what modules should be mapped to what uses based on the properties of a system. These determinations are generally made for modules that provide things like package and service management.
Sometimes in states, it may be necessary to use an alternative module to provide the needed functionality. For instance, an older Arch Linux system may not be running systemd, so instead of using the systemd service module, you can revert to the default service module:
httpd:
service.running:
- enable: True
- provider: service
In this instance, the basic service module (which manages sysvinit-based services) will replace the systemd module which is used by default on Arch Linux.
However, if it is necessary to make this override for most or every service, it is better to just override the provider in the minion config file, as described in the section below.
Sometimes, when running Salt on custom Linux spins, or on distributions that are derived from other distributions, Salt does not successfully detect providers. The providers most likely to be affected by this are:
- pkg
- service
- user
- group
When something like this happens, rather than specifying the provider manually in each state, it is easier to use the providers parameter in the minion config file to set the provider.
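For example, forcing the systemd execution module to back the service provider might look like this in the minion config (a sketch; choose the execution module that matches the platform, per the tables below):
providers:
  service: systemd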
If you end up needing to override a provider because it was not detected,
please let us know! File an issue on the issue tracker, and provide the
output from the grains.items
function,
taking care to sanitize any sensitive information.
Below are tables that should help with deciding which provider to use if one needs to be overridden.
pkg¶
Execution Module | Used for
---|---
apt | Debian/Ubuntu-based distros which use apt-get(8) for package management
brew | Mac OS software management using Homebrew
ebuild | Gentoo-based systems (utilizes the portage python module as well as emerge(1))
freebsdpkg | FreeBSD-based OSes using pkg_add(1)
openbsdpkg | OpenBSD-based OSes using pkg_add(1)
pacman | Arch Linux-based distros using pacman(8)
pkgin | NetBSD-based OSes using pkgin(1)
pkgng | FreeBSD-based OSes using pkg(8)
pkgutil | Solaris-based OSes using OpenCSW's pkgutil(1)
solarispkg | Solaris-based OSes using pkgadd(1M)
solarisips | Solaris-based OSes using IPS pkg(1)
win_pkg | Windows
yumpkg | RedHat-based distros and derivatives (wraps yum(8))
zypper | SUSE-based distros using zypper(8)
service¶
Execution Module | Used for
---|---
debian_service | Debian (non-systemd)
freebsdservice | FreeBSD-based OSes using service(8)
gentoo_service | Gentoo Linux using sysvinit and rc-update(8)
launchctl | Mac OS hosts using launchctl(1)
netbsdservice | NetBSD-based OSes
openbsdservice | OpenBSD-based OSes
rh_service | RedHat-based distros and derivatives using service(8) and chkconfig(8). Supports both pure sysvinit and mixed sysvinit/upstart systems.
service | Fallback which simply wraps sysvinit scripts
smf | Solaris-based OSes which use SMF
systemd | Linux distros which use systemd
upstart | Ubuntu-based distros using upstart
win_service | Windows
user¶
Execution Module | Used for
---|---
useradd | Linux, NetBSD, and OpenBSD systems using useradd(8), userdel(8), and usermod(8)
pw_user | FreeBSD-based OSes using pw(8)
solaris_user | Solaris-based OSes using useradd(1M), userdel(1M), and usermod(1M)
win_useradd | Windows
group¶
Execution Module | Used for
---|---
groupadd | Linux, NetBSD, and OpenBSD systems using groupadd(8), groupdel(8), and groupmod(8)
pw_group | FreeBSD-based OSes using pw(8)
solaris_group | Solaris-based OSes using groupadd(1M), groupdel(1M), and groupmod(1M)
win_groupadd | Windows
The provider statement can also be used for more powerful means: instead of overwriting or extending the module used for the named service, an arbitrary module can be used to provide certain functionality.
emacs:
pkg.installed:
- provider:
- cmd: customcmd
In this example, the state is being instructed to use a custom module to invoke commands.
Arbitrary module redirects can be used to dramatically change the behavior of a given state.
New in version Beryllium.
The fire_event option in a state will cause the minion to send an event to the Salt Master upon completion of that individual state.
The following example will cause the minion to send an event to the Salt Master with a tag of salt/state_result/20150505121517276431/dasalt/nano and the result of the state will be the data field of the event. Notice that the name of the state gets added to the tag.
nano_stuff:
pkg.installed:
- name: nano
- fire_event: True
In the following example instead of setting fire_event to True, fire_event is set to an arbitrary string, which will cause the event to be sent with this tag: salt/state_result/20150505121725642845/dasalt/custom/tag/nano/finished
nano_stuff:
pkg.installed:
- name: nano
- fire_event: custom/tag/nano/finished
The Salt requisite system is used to create relationships between states. The core idea being that, when one state is dependent somehow on another, that inter-dependency can be easily defined.
Requisites come in two types: Direct requisites (such as require
),
and requisite_ins (such as require_in
). The relationships are
directional: a direct requisite requires something from another state.
However, a requisite_in inserts a requisite into the targeted state pointing to
the targeting state. The following example demonstrates a direct requisite:
vim:
pkg.installed: []
/etc/vimrc:
file.managed:
- source: salt://edit/vimrc
- require:
- pkg: vim
In the example above, the file /etc/vimrc
depends on the vim package.
Requisite_in statements are the opposite. Instead of saying "I depend on something", requisite_ins say "Someone depends on me":
vim:
pkg.installed:
- require_in:
- file: /etc/vimrc
/etc/vimrc:
file.managed:
- source: salt://edit/vimrc
So here, with a requisite_in, the same thing is accomplished as in the first
example, but the other way around. The vim package is saying "/etc/vimrc depends
on me". This will result in a require
being inserted into the
/etc/vimrc
state which targets the vim
state.
In the end, a single dependency map is created and everything is executed in a finite and predictable order.
Note
Requisite matching
Requisites match on both the ID Declaration and the name
parameter.
This means that, in the example above, the require_in
requisite would
also have been matched if the /etc/vimrc
state was written as follows:
vimrc:
file.managed:
- name: /etc/vimrc
- source: salt://edit/vimrc
There are several direct requisite statements that can be used in Salt:
require
watch
prereq
use
onchanges
onfail
Each direct requisite also has a corresponding requisite_in:
require_in
watch_in
prereq_in
use_in
onchanges_in
onfail_in
All of the requisites define specific relationships and always work with the dependency logic defined above.
The use of require
demands that the dependent state executes before the
depending state. The state containing the require
requisite is defined as the
depending state. The state specified in the require
statement is defined as the
dependent state. If the dependent state's execution succeeds, the depending state
will then execute. If the dependent state's execution fails, the depending state
will not execute. In the first example above, the file /etc/vimrc
will only
execute after the vim package is installed successfully.
As of Salt 0.16.0, it is possible to require an entire sls file. Do this first by
including the sls file and then setting a state to require
the included sls file:
include:
- foo
bar:
pkg.installed:
- require:
- sls: foo
watch
statements are used to add additional behavior when there are changes
in other states.
Note
If a state should only execute when another state has changes, and
otherwise do nothing, the new onchanges
requisite should be used
instead of watch
. watch
is designed to add additional behavior
when there are changes, but otherwise execute normally.
The state containing the watch
requisite is defined as the watching
state. The state specified in the watch
statement is defined as the watched
state. When the watched state executes, it will return a dictionary containing
a key named "changes". Here are two examples of state return dictionaries,
shown in json for clarity:
"local": {
"file_|-/tmp/foo_|-/tmp/foo_|-directory": {
"comment": "Directory /tmp/foo updated",
"__run_num__": 0,
"changes": {
"user": "bar"
},
"name": "/tmp/foo",
"result": true
}
}
"local": {
"pkgrepo_|-salt-minion_|-salt-minion_|-managed": {
"comment": "Package repo 'salt-minion' already configured",
"__run_num__": 0,
"changes": {},
"name": "salt-minion",
"result": true
}
}
If the "result" of the watched state is True
, the watching state will
execute normally. This part of watch
mirrors the functionality of the
require
requisite. If the "result" of the watched state is False
, the
watching state will never run, nor will the watching state's mod_watch
function execute.
However, if the "result" of the watched state is True
, and the "changes"
key contains a populated dictionary (changes occurred in the watched state),
then the watch
requisite can add additional behavior. This additional
behavior is defined by the mod_watch
function within the watching state
module. If the mod_watch
function exists in the watching state module, it
will be called in addition to the normal watching state. The return data
from the mod_watch
function is what will be returned to the master in this
case; the return data from the main watching function is discarded.
If the "changes" key contains an empty dictionary, the watch
requisite acts
exactly like the require
requisite (the watching state will execute if
"result" is True
, and fail if "result" is False
in the watched state).
Note
Not all state modules contain mod_watch
. If mod_watch
is absent
from the watching state module, the watch
requisite behaves exactly
like a require
requisite.
A good example of using watch
is with a service.running
state. When a service watches a state, then
the service is reloaded/restarted when the watched state changes, in addition
to Salt ensuring that the service is running.
ntpd:
service.running:
- watch:
- file: /etc/ntp.conf
file.managed:
- name: /etc/ntp.conf
- source: salt://ntp/files/ntp.conf
New in version 0.16.0.
prereq
allows for actions to be taken based on the expected results of
a state that has not yet been executed. The state containing the prereq
requisite is defined as the pre-requiring state. The state specified in the
prereq
statement is defined as the pre-required state.
When a prereq
requisite is evaluated, the pre-required state reports if it
expects to have any changes. It does this by running the pre-required single
state as a test-run by enabling test=True
. This test-run will return a
dictionary containing a key named "changes". (See the watch
section above
for examples of "changes" dictionaries.)
If the "changes" key contains a populated dictionary, it means that the pre-required state expects changes to occur when the state is actually executed, as opposed to the test-run. The pre-requiring state will now actually run. If the pre-requiring state executes successfully, the pre-required state will then execute. If the pre-requiring state fails, the pre-required state will not execute.
If the "changes" key contains an empty dictionary, this means that changes are not expected by the pre-required state. Neither the pre-required state nor the pre-requiring state will run.
The best way to define how prereq
operates is displayed in the following
practical example: When a service should be shut down because underlying code
is going to change, the service should be off-line while the update occurs. In
this example, graceful-down
is the pre-requiring state and site-code
is the pre-required state.
graceful-down:
cmd.run:
- name: service apache graceful
- prereq:
- file: site-code
site-code:
file.recurse:
- name: /opt/site_code
- source: salt://site/code
In this case the apache server will only be shutdown if the site-code state expects to deploy fresh code via the file.recurse call. The site-code deployment will only be executed if the graceful-down run completes successfully.
New in version 2014.7.0.
The onfail
requisite allows for reactions to happen strictly as a response
to the failure of another state. This can be used in a number of ways, such as
executing a second attempt to set up a service or begin to execute a separate
thread of states because of a failure.
The onfail requisite is applied in the same way as require and watch:
primary_mount:
mount.mounted:
- name: /mnt/share
- device: 10.0.0.45:/share
- fstype: nfs
backup_mount:
mount.mounted:
- name: /mnt/share
- device: 192.168.40.34:/share
- fstype: nfs
- onfail:
- mount: primary_mount
New in version 2014.7.0.
The onchanges
requisite makes a state only apply if the required states
generate changes, and if the watched state's "result" is True
. This can be
a useful way to execute a post hook after changing aspects of a system.
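As an illustrative sketch (the IDs, file path, and script here are hypothetical), a post hook that should run only when a managed file actually changes could be written like so:
myservice_conf:
  file.managed:
    - name: /etc/myservice.conf
    - source: salt://myservice/myservice.conf

myservice_post_hook:
  cmd.run:
    - name: /usr/local/bin/myservice-reload.sh
    - onchanges:
      - file: myservice_conf
If the file.managed state reports no changes, the cmd.run state is not executed.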
The use
requisite is used to inherit the arguments passed in another
id declaration. This is useful when many files need to have the same defaults.
/etc/foo.conf:
file.managed:
- source: salt://foo.conf
- template: jinja
- mkdirs: True
- user: apache
- group: apache
- mode: 755
/etc/bar.conf:
file.managed:
- source: salt://bar.conf
- use:
- file: /etc/foo.conf
The use
statement was developed primarily for the networking states but
can be used on any states in Salt. This makes sense for the networking state
because it can define a long list of options that need to be applied to
multiple network interfaces.
The use statement does not inherit the requisites of the targeted state. This also means that a chain of use requisites will not inherit inherited options.
All of the requisites also have corresponding requisite_in versions, which do
the reverse of their normal counterparts. The examples below all use
require_in
as the example, but note that all of the _in
requisites work
the same way: They result in a normal requisite in the targeted state, which
targets the state which defines the requisite_in. Thus, a require_in
causes the target state to require
the targeting state. Similarly, a
watch_in
causes the target state to watch
the targeting state. This
pattern continues for the rest of the requisites.
If a state declaration needs to be required by another state declaration then
require_in
can accommodate it. Therefore, these two sls files would be the
same in the end:
Using require
httpd:
pkg.installed: []
service.running:
- require:
- pkg: httpd
Using require_in
httpd:
pkg.installed:
- require_in:
- service: httpd
service.running: []
The require_in
statement is particularly useful when assigning a require
in a separate sls file. For instance it may be common for httpd to require
components used to set up PHP or mod_python, but the HTTP state does not need
to be aware of the additional components that require it when it is set up:
http.sls
httpd:
pkg.installed: []
service.running:
- require:
- pkg: httpd
php.sls
include:
- http
php:
pkg.installed:
- require_in:
- service: httpd
mod_python.sls
include:
- http
mod_python:
pkg.installed:
- require_in:
- service: httpd
Now the httpd server will only start if php or mod_python are first verified to be installed, thus allowing for a requisite to be defined "after the fact".
The state altering system is used to make sure that states are evaluated exactly as the user expects. It can be used to double check that a state performed exactly how it was expected to, or to make 100% sure that a state only runs under certain conditions. The use of the unless or onlyif options helps make states even more stateful. The check_cmd option helps ensure that the result of a state is evaluated correctly.
New in version 2014.7.0.
The unless requisite specifies that a state should only run when any of the specified commands return False. The unless requisite operates as NAND and is useful in giving more granular control over when a state should execute.
NOTE: Under the hood unless
calls cmd.retcode
with
python_shell=True
. This means the commands referenced by unless will be
parsed by a shell, so beware of side-effects as this shell will be run with the
same privileges as the salt-minion.
vim:
pkg.installed:
- unless:
- rpm -q vim-enhanced
- ls /usr/bin/vim
In the example above, the state will only run if either the vim-enhanced
package is not installed (returns False
) or if /usr/bin/vim does not
exist (returns False
). The state will run if both commands return
False
.
However, the state will not run if both commands return True
.
Unless checks are resolved for each name to which they are associated.
For example:
deploy_app:
cmd.run:
- names:
- first_deploy_cmd
- second_deploy_cmd
- unless: some_check
In the above case, some_check will be run prior to each name: once for first_deploy_cmd and a second time for second_deploy_cmd.
New in version 2014.7.0.
onlyif
is the opposite of unless
. If all of the commands in onlyif
return True
, then the state is run. If any of the specified commands
return False
, the state will not run.
NOTE: Under the hood onlyif calls cmd.retcode with python_shell=True. This means the commands referenced by onlyif will be parsed by a shell, so beware of side-effects as this shell will be run with the same privileges as the salt-minion.
stop-volume:
module.run:
- name: glusterfs.stop_volume
- m_name: work
- onlyif:
- gluster volume status work
- order: 1
remove-volume:
module.run:
- name: glusterfs.delete
- m_name: work
- onlyif:
- gluster volume info work
- watch:
- cmd: stop-volume
The above example ensures that the stop_volume and delete modules only run if the gluster commands return a 0 ret value.
New in version 2014.7.0.
listen and its counterpart listen_in trigger mod_wait functions for states when those states succeed and result in changes, similar to how watch and its counterpart watch_in work. Unlike watch and watch_in, listen and listen_in will not modify the order of states, and can be used to ensure your states are executed in the order they are defined. All listen/listen_in actions will occur at the end of a state run, after all states have completed.
restart-apache2:
service.running:
- name: apache2
- listen:
- file: /etc/apache2/apache2.conf
configure-apache2:
file.managed:
    - name: /etc/apache2/apache2.conf
- source: salt://apache2/apache2.conf
This example will cause apache2 to be restarted when the apache2.conf file is changed, but the apache2 restart will happen at the end of the state run.
restart-apache2:
service.running:
- name: apache2
configure-apache2:
file.managed:
    - name: /etc/apache2/apache2.conf
- source: salt://apache2/apache2.conf
- listen_in:
- service: apache2
This example does the same as the above example, but puts the state argument on the file resource, rather than the service resource.
New in version 2014.7.0.
Check Command is used for determining that a state did or did not run as expected.
NOTE: Under the hood check_cmd calls cmd.retcode with python_shell=True. This means the commands referenced by check_cmd will be parsed by a shell, so beware of side-effects as this shell will be run with the same privileges as the salt-minion.
comment-repo:
file.replace:
- path: /etc/yum.repos.d/fedora.repo
- pattern: ^enabled=0
- repl: enabled=1
- check_cmd:
- grep 'enabled=0' /etc/yum.repos.d/fedora.repo && return 1 || return 0
This will attempt to do a replace on all enabled=0 in the .repo file, and replace them with enabled=1. The check_cmd is just a bash command. It will do a grep for enabled=0 in the file, and if it finds any, it will return a 0, which will prompt the && portion of the command to return a 1, causing check_cmd to set the state as failed. If it returns a 1, meaning it didn't find any 'enabled=0' it will hit the || portion of the command, returning a 0, and declaring the function succeeded.
There are two functions used for the above checks.
mod_run_check is used to check for onlyif and unless. If the goal is to override the global check for these two variables, include a mod_run_check in the salt/states/ file.
mod_run_check_cmd is used to check for the check_cmd options. To override this one, include a mod_run_check_cmd in the states file for the state.
Sometimes it may be desired that the salt minion execute a state run when it is started. This alleviates the need for the master to initiate a state run on a new minion and can make provisioning much easier.
As of Salt 0.10.3 the minion config reads options that allow for states to be executed at startup. The options are startup_states, sls_list, and top_file.
The startup_states option can be passed one of a number of arguments to define how to execute states. The available options are:
- highstate: execute state.highstate
- sls: read in the sls_list option and execute the named sls files
- top: read in the top_file option and execute states based on that top file on the Salt Master
when starting the minion:
startup_states: highstate
Execute the sls files edit.vim and hyper:
startup_states: sls
sls_list:
- edit.vim
- hyper
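Read a top file and execute states based on it (a sketch; the top file name here is illustrative):
startup_states: top
top_file: top.sls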
Executing a Salt state run can potentially change many aspects of a system and it may be desirable to first see what a state run is going to change before applying the run.
Salt has a test interface to report on exactly what will be changed. This interface can be invoked on any of the major state run functions:
salt '*' state.highstate test=True
salt '*' state.sls test=True
salt '*' state.single test=True
The test run is mandated by adding the test=True
option to the states. The
return information will show states that will be applied in yellow and the
result is reported as None
.
If the value test
is set to True
in the minion configuration file then
states will default to being executed in test mode. If this value is set then
states can still be run by calling test=False:
salt '*' state.highstate test=False
salt '*' state.sls test=False
salt '*' state.single test=False
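To default to test mode, set the test option in the minion configuration file:
test: True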
The top file (top.sls) is used to map what SLS modules get loaded onto what minions via the state system. The top file creates a few general abstractions: first it maps what nodes should pull from which environments, and next it defines which matches systems should draw from.
Environments allow conceptually organizing state tree directories. Environments can be made to be self-contained or state trees can be made to bleed through environments.
Note
Environments in Salt are very flexible. This section defines how the top file can be used to define what states from what environments are to be used for specific minions.
If the intent is to bind minions to specific environments, then the environment option can be set in the minion configuration file.
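For example, to pin a minion to the dev environment, its minion configuration file could contain:
environment: dev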
The environments in the top file correspond with the environments defined in the file_roots variable. In a simple, single-environment setup you only have the base environment, and therefore only one state tree. Here is a simple example of file_roots in the master configuration:
file_roots:
base:
- /srv/salt
This means that the top file will only have one environment to pull from, here is a simple, single environment top file:
base:
'*':
- core
- edit
This also means that /srv/salt
has a state tree. But if you want to use
multiple environments, or partition the file server to serve more than
just the state tree, then the file_roots
option can be expanded:
file_roots:
base:
- /srv/salt/base
dev:
- /srv/salt/dev
qa:
- /srv/salt/qa
prod:
- /srv/salt/prod
Then our top file could reference the environments:
dev:
'webserver*dev*':
- webserver
'db*dev*':
- db
qa:
'webserver*qa*':
- webserver
'db*qa*':
- db
prod:
'webserver*prod*':
- webserver
'db*prod*':
- db
In this setup we have state trees in three of the four environments, and no
state tree in the base
environment. Notice that the targets for the minions
specify environment data. In Salt the master determines who is in what
environment, and many environments can be crossed together. For instance, a
separate global state tree could be added to the base
environment if it
suits your deployment:
base:
'*':
- global
dev:
'webserver*dev*':
- webserver
'db*dev*':
- db
qa:
'webserver*qa*':
- webserver
'db*qa*':
- db
prod:
'webserver*prod*':
- webserver
'db*prod*':
- db
In this setup all systems will pull the global SLS from the base environment, as well as pull from their respective environments. If you assign only one SLS to a system, as in this example, a shorthand is also available:
base:
'*': global
dev:
'webserver*dev*': webserver
'db*dev*': db
qa:
'webserver*qa*': webserver
'db*qa*': db
prod:
'webserver*prod*': webserver
'db*prod*': db
Note
The top files from all defined environments will be compiled into a single top file for all states. Top files are environment agnostic.
Remember that since everything is a file in Salt, the environments are primarily file server environments. This means that environments that have nothing to do with states can be defined and used to distribute other files.
A clean and recommended setup for multiple environments would look like this:
# Master file_roots configuration:
file_roots:
base:
- /srv/salt/base
dev:
- /srv/salt/dev
qa:
- /srv/salt/qa
prod:
- /srv/salt/prod
Then only place state trees in the dev, qa, and prod environments, leaving the base environment open for generic file transfers. Then the top.sls file would look something like this:
dev:
'webserver*dev*':
- webserver
'db*dev*':
- db
qa:
'webserver*qa*':
- webserver
'db*qa*':
- db
prod:
'webserver*prod*':
- webserver
'db*prod*':
- db
In addition to globs, minions can be specified in top files a few other ways. Some common ones are compound matches and node groups.
Here is a slightly more complex top file example, showing the different types of matches you can perform:
base:
'*':
- ldap-client
- networking
- salt.minion
'salt-master*':
- salt.master
'^(memcache|web).(qa|prod).loc$':
- match: pcre
- nagios.mon.web
- apache.server
'os:Ubuntu':
- match: grain
- repos.ubuntu
'os:(RedHat|CentOS)':
- match: grain_pcre
- repos.epel
'foo,bar,baz':
- match: list
- database
'somekey:abc':
- match: pillar
- xyz
'nag1* or G@role:monitoring':
- match: compound
- nagios.server
In this example top.sls
, all minions get the ldap-client, networking, and
salt.minion states. Any minion with an id matching the salt-master*
glob
will get the salt.master state. Any minion with an id matching the regular
expression ^(memcache|web).(qa|prod).loc$
will get the nagios.mon.web and
apache.server states. All Ubuntu minions will receive the repos.ubuntu state,
while all RHEL and CentOS minions will receive the repos.epel state. The
minions foo
, bar
, and baz
will receive the database state. Any
minion with a pillar named somekey
, having a value of abc
will receive
the xyz state. Finally, minions with ids matching the nag1* glob or with a
grain named role
equal to monitoring
will receive the nagios.server
state.
Warning
There is currently a known issue with the topfile compilation. The below may not be completely valid until https://github.com/saltstack/salt/issues/12483#issuecomment-64181598 is closed.
As mentioned earlier, the top files in the different environments are compiled
into a single set of data. The way in which this is done follows a few rules,
which are important to understand when arranging top files in different
environments. The examples below all assume that the file_roots
are set as in the above multi-environment example.
The base environment's top file is processed first. Any environment which
is defined in the base top.sls as well as another environment's top file
will use the instance of the environment configured in base and ignore
all other instances. In other words, the base top file is
authoritative when defining environments. Therefore, in the example below,
the dev section in /srv/salt/dev/top.sls would be completely ignored.
/srv/salt/base/top.sls:
base:
'*':
- common
dev:
'webserver*dev*':
- webserver
'db*dev*':
- db
/srv/salt/dev/top.sls:
dev:
'10.10.100.0/24':
- match: ipcidr
- deployments.dev.site1
'10.10.101.0/24':
- match: ipcidr
- deployments.dev.site2
Note
The rules below assume that the environments being discussed were not
defined in the base
top file.
If the base environment is not configured in the
base environment's top file, then the other environments will be checked
in alphabetical order. The first top file found to contain a section for the
base environment wins, and the other top files' base sections are
ignored. So, provided there is no base section in the base top file,
with the below two top files the dev environment would win out, and the
common.centos SLS would not be applied to CentOS hosts.
/srv/salt/dev/top.sls:
base:
'os:Ubuntu':
- common.ubuntu
dev:
'webserver*dev*':
- webserver
'db*dev*':
- db
/srv/salt/qa/top.sls:
base:
'os:Ubuntu':
- common.ubuntu
'os:CentOS':
- common.centos
qa:
'webserver*qa*':
- webserver
'db*qa*':
- db
For environments other than base, the top file in a given environment
will be checked for a section matching the environment's name. If one is
found, then it is used. Otherwise, the remaining (non-base) environments
will be checked in alphabetical order. In the below example, the qa
section in /srv/salt/dev/top.sls will be ignored, but if
/srv/salt/qa/top.sls were cleared or removed, then the states configured
for the qa environment in /srv/salt/dev/top.sls would be applied.
/srv/salt/dev/top.sls:
dev:
'webserver*dev*':
- webserver
'db*dev*':
- db
qa:
'10.10.200.0/24':
- match: ipcidr
- deployments.qa.site1
'10.10.201.0/24':
- match: ipcidr
- deployments.qa.site2
/srv/salt/qa/top.sls:
qa:
'webserver*qa*':
- webserver
'db*qa*':
- db
Note
When in doubt, the simplest way to configure your states is with a single
top.sls in the base
environment.
The template engines available to sls files and file templates come loaded with a number of context variables. These variables contain information and functions to assist in the generation of templates. See each variable below for its availability -- not all variables are available in all templating contexts.
The salt variable is available to abstract the salt library functions. This variable is a python dictionary containing all of the functions available to the running salt minion. It is available in all salt templates.
{% for file in salt['cmd.run']('ls -1 /opt/to_remove').splitlines() %}
/opt/to_remove/{{ file }}:
file.absent
{% endfor %}
The opts variable abstracts the contents of the minion's configuration file directly to the template. The opts variable is a dictionary. It is available in all templates.
{{ opts['cachedir'] }}
The config.get
function also searches for values in the opts dictionary.
The pillar dictionary can be referenced directly, and is available in all templates:
{{ pillar['key'] }}
Using the pillar.get
function via the salt variable is generally
recommended since a default can be safely set in the event that the value
is not available in pillar and dictionaries can be traversed directly:
{{ salt['pillar.get']('key', 'failover_value') }}
{{ salt['pillar.get']('stuff:more:deeper') }}
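The second call above traverses nested pillar data; it assumes pillar shaped something like the following (keys and value are hypothetical):
stuff:
  more:
    deeper: some_value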
The grains dictionary makes the minion's grains directly available, and is available in all templates:
{{ grains['os'] }}
The grains.get
function can be used to traverse deeper grains and set
defaults:
{{ salt['grains.get']('os') }}
The env variable is available only in sls files when gathering the sls from an environment.
{{ env }}
The sls variable contains the sls reference value, and is only available in the actual SLS file (not in any files referenced in that SLS). The sls reference value is the value used to include the sls in top files or via the include option.
{{ sls }}
State Modules are the components that map to actual enforcement and management of Salt states.
State Modules should be easy to write and straightforward. The information passed to the SLS data structures will map directly to the states modules.
Mapping the information from the SLS data is simple; this example should illustrate:
/etc/salt/master: # maps to "name"
file.managed: # maps to <filename>.<function> - e.g. "managed" in https://github.com/saltstack/salt/tree/develop/salt/states/file.py
- user: root # one of many options passed to the manage function
- group: root
- mode: 644
- source: salt://salt/master
Therefore this SLS data can be directly linked to a module, function, and arguments passed to that function.
This places a burden on module authors: function names, state names, and function arguments must be very human-readable inside state modules, since they directly define the user interface.
Keyword Arguments
Salt passes a number of keyword arguments to states when rendering them,
including the environment, a unique identifier for the state, and more.
Additionally, keep in mind that the requisites for a state are part of the
keyword arguments. Therefore, if you need to iterate through the keyword
arguments in a state, these must be considered and handled appropriately.
One such example is in the pkgrepo.managed
state, which needs to be able to handle
arbitrary keyword arguments and pass them to module execution functions.
An example of how these keyword arguments can be handled can be found
here.
Place your custom state modules inside a _states
directory within the
file_roots
specified by the master config file. These custom
state modules can then be distributed in a number of ways. Custom state modules
are distributed when state.highstate
is
run, or by executing the saltutil.sync_states
or saltutil.sync_all
functions.
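For example, either of the following commands pushes custom state modules out to all minions:
salt '*' saltutil.sync_states
salt '*' saltutil.sync_all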
Any custom states which have been synced to a minion, that are named the
same as one of Salt's default set of states, will take the place of the default
state with the same name. Note that a state's default name is its filename
(i.e. foo.py
becomes state foo
), but that its name can be overridden
by using a __virtual__ function.
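A minimal sketch of such an override, assuming a state module file named foo.py that should instead be loaded as the state bar:
def __virtual__():
    '''
    Load this state module under the name "bar" rather than "foo".
    '''
    return 'bar'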
As with Execution Modules, State Modules can also make use of the __salt__
and __grains__
data.
It is important to note that the real work of state management should not be done in the state module unless it is needed. A good example is the pkg state module. This module does not do any package management work itself; it just calls the pkg execution module. This makes the pkg state module completely generic, which is why there is only one pkg state module and many backend pkg execution modules.
On the other hand some modules will require that the logic be placed in the state module, a good example of this is the file module. But in the vast majority of cases this is not the best approach, and writing specific execution modules to do the backend work will be the optimal solution.
A State Module must return a dict containing the following keys/values: name (the name argument passed to the state), changes (a dict describing what was, or would be, changed), result (True, False, or None), and comment (a description of the outcome).
All states should check for and support test
being passed in the options.
This will return data about what changes would occur if the state were actually
run. An example of such a check could look like this:
# Return comment of changes if test.
if __opts__['test']:
ret['result'] = None
ret['comment'] = 'State Foo will execute with param {0}'.format(bar)
return ret
Make sure to test and return before performing any real actions on the minion.
If the state being written should support the watch requisite, then a watcher function needs to be declared. The watcher function is called whenever the watch requisite is invoked, and should be generic to the behavior of the state itself.
The watcher function should accept all of the options that the normal state functions accept (as they will be passed into the watcher function).
A watcher function is typically used to execute state-specific reactive behavior; for instance, the watcher for the service module restarts the named service, which makes it possible for the service to react to changes in its environment.
The watcher function also needs to return the same data that a normal state function returns.
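In Salt's own state modules this watcher function is named mod_watch. A minimal sketch, where the restart call stands in for whatever reactive behavior the state needs (my_custom_module.restart is a hypothetical execution function):
def mod_watch(name, **kwargs):
    '''
    Run reactive behavior when a watched state reports changes.
    '''
    ret = {'name': name, 'changes': {}, 'result': True, 'comment': ''}
    # Hypothetical execution module call performing the reaction.
    if __salt__['my_custom_module.restart'](name):
        ret['changes'] = {'restarted': name}
        ret['comment'] = 'Restarted "{0}" in response to watched changes'.format(name)
    else:
        ret['result'] = False
        ret['comment'] = 'Failed to restart "{0}"'.format(name)
    return ret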
Some states need to execute something only once to ensure that an environment has been set up, or certain conditions global to the state behavior can be predefined. This is the realm of the mod_init interface.
A state module can have a function called mod_init which executes when the
first state of this type is called. This interface was created primarily to
improve the pkg state. When packages are installed the package metadata needs
to be refreshed, but refreshing the package metadata every time a package is
installed is wasteful. The mod_init function for the pkg state sets a flag down
so that the first, and only the first, package installation attempt will refresh
the package database (the package database can of course be manually called to
refresh via the refresh
option in the pkg state).
The mod_init function must accept the Low State Data for the given executing state as an argument. The low state data is a dict and can be seen by executing the state.show_lowstate function. Then the mod_init function must return a bool. If the return value is True, then the mod_init function will not be executed again, meaning that the needed behavior has been set up. Otherwise, if the mod_init function returns False, then the function will be called the next time.
A good example of the mod_init function is found in the pkg state module:
def mod_init(low):
'''
Refresh the package database here so that it only needs to happen once
'''
if low['fun'] == 'installed' or low['fun'] == 'latest':
rtag = __gen_rtag()
if not os.path.exists(rtag):
open(rtag, 'w+').write('')
return True
else:
return False
The mod_init function in the pkg state accepts the low state data as low
and then checks to see if the function being called is going to install
packages; if the function is not going to install packages, then there is no
need to refresh the package database. Therefore, if the package database is
prepared to refresh, True is returned and mod_init will not be called
the next time a pkg state is evaluated; otherwise False is returned and
mod_init will be called the next time a pkg state is evaluated.
The following is a simplistic example of a full state module and function. Remember to call out to execution modules to perform all the real work. The state module should only perform "before" and "after" checks.
1. Create the custom state module by saving the code shown at the end of this example to the following path: /srv/salt/_states/my_custom_state.py.
2. Distribute the custom state module to the minions:
salt '*' saltutil.sync_states
3. Write a new state to use the custom state by making a new state file, for instance /srv/salt/my_custom_state.sls.
4. Add the following SLS configuration to the file created in Step 3:
human_friendly_state_id: # An arbitrary state ID declaration.
my_custom_state: # The custom state module name.
- enforce_custom_thing # The function in the custom state module.
- name: a_value # Maps to the ``name`` parameter in the custom function.
- foo: Foo # Specify the required ``foo`` parameter.
- bar: False # Override the default value for the ``bar`` parameter.
The code for the custom state module created in Step 1:
import salt.exceptions
def enforce_custom_thing(name, foo, bar=True):
'''
Enforce the state of a custom thing
This state module does a custom thing. It calls out to the execution module
``my_custom_module`` in order to check the current system and perform any
needed changes.
name
The thing to do something to
foo
A required argument
bar : True
An argument with a default value
'''
ret = {'name': name, 'changes': {}, 'result': False, 'comment': ''}
# Start with basic error-checking. Do all the passed parameters make sense
# and agree with each other?
if bar == True and foo.startswith('Foo'):
raise salt.exceptions.SaltInvocationError(
'Argument "foo" cannot start with "Foo" if argument "bar" is True.')
# Check the current state of the system. Does anything need to change?
current_state = __salt__['my_custom_module.current_state'](name)
if current_state == foo:
ret['result'] = True
ret['comment'] = 'System already in the correct state'
return ret
# The state of the system does need to be changed. Check if we're running
# in ``test=true`` mode.
if __opts__['test'] == True:
ret['comment'] = 'The state of "{0}" will be changed.'.format(name)
ret['changes'] = {
'old': current_state,
'new': 'Description, diff, whatever of the new state',
}
# Return ``None`` when running with ``test=true``.
ret['result'] = None
return ret
# Finally, make the actual change and return the result.
new_state = __salt__['my_custom_module.change_state'](name, foo)
ret['comment'] = 'The state of "{0}" was changed!'.format(name)
ret['changes'] = {
'old': current_state,
'new': new_state,
}
ret['result'] = True
return ret
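With the module synced and the SLS file in place, the state can be applied (using the file names from the steps above):
salt '*' state.sls my_custom_state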
State management, also frequently called Software Configuration Management (SCM), is the practice of putting and keeping a system in a predetermined state: software packages are installed, services are started or restarted, and configuration files are put in place and watched for changes.
Having a state management system in place allows one to easily and reliably configure and manage a few servers or a few thousand servers. It allows configurations to be kept under version control.
Salt States are an extension of the Salt Modules discussed in the previous remote execution tutorial. Instead of calling one-off executions, the state of a system can be easily defined and then enforced.
The Salt state system is comprised of a number of components. As a user, an understanding of the SLS and renderer systems is needed. As a developer, an understanding of Salt states and how to write them is needed as well.
Note
States are compiled and executed only on minions that have been targeted. To execute functions directly on masters, see runners.
The primary system used by the Salt state system is the SLS system. SLS stands for SaLt State.
The Salt States are files which contain the information about how to configure Salt minions. The states are laid out in a directory tree and can be written in many different formats.
The contents of the files, and the way they are laid out, is intended to be as simple as possible while allowing for maximum flexibility. The files are laid out in a state tree and contain information about how the minion needs to be configured.
SLS files are laid out in the Salt file server.
A simple layout can look like this:
top.sls
ssh.sls
sshd_config
users/init.sls
users/admin.sls
salt/master.sls
web/init.sls
The top.sls file is a key component: it is used to determine which SLS files
should be applied to which minions.
The rest of the files with the .sls extension in the above example are
state files.
Files without a .sls extension are seen by the Salt master as
files that can be downloaded to a Salt minion.
States are translated into dot notation. For example, the ssh.sls
file is
seen as the ssh state and the users/admin.sls
file is seen as the
users.admin state.
Files named init.sls
are translated to be the state name of the parent
directory, so the web/init.sls
file translates to the web
state.
In Salt, everything is a file; there is no "magic translation" of files and file types. This means that a state file can be distributed to minions just like a plain text or binary file.
The Salt state files are simple sets of data. Since SLS files are just data they can be represented in a number of different ways.
The default format is YAML generated from a Jinja template. This allows for the states files to have all the language constructs of Python and the simplicity of YAML.
State files can then be complicated Jinja templates that translate down to YAML, or just plain and simple YAML files.
The state files are simply common data structures, such as dictionaries and lists, represented in a format such as YAML.
Here is an example of a Salt State:
vim:
pkg.installed: []
salt:
pkg.latest:
- name: salt
service.running:
- names:
- salt-master
- salt-minion
- require:
- pkg: salt
- watch:
- file: /etc/salt/minion
/etc/salt/minion:
file.managed:
- source: salt://salt/minion
- user: root
- group: root
- mode: 644
- require:
- pkg: salt
This short stanza will ensure that vim is installed, Salt is installed and up to date, the salt-master and salt-minion daemons are running, and the Salt minion configuration file is in place. It will also ensure everything is deployed in the right order and that the Salt services are restarted when the watched file is updated.
The top file controls the mapping between minions and the states which should be applied to them.
The top file specifies which minions should have which SLS files applied and which environments they should draw those SLS files from.
The top file works by specifying environments on the top-level.
Each environment contains globs to match minions. Finally, each glob contains a list of Salt states to apply to matching minions:
base:
'*':
- salt
- users
- users.admin
'saltmaster.*':
- match: pcre
- salt.master
The above example uses the base environment, which is built into the default Salt setup.
The base environment has two globs. First, the '*' glob contains a list of SLS files to apply to all minions.
The second glob contains a regular expression that will match all minions with an ID matching saltmaster.* and specifies that for those minions, the salt.master state should be applied.
Some Salt states require that specific packages be installed in order for the
module to load. As an example the pip
state
module requires the pip package for proper name and version parsing.
In most of the common cases, Salt is clever enough to transparently reload the modules. For example, if you install a package, Salt reloads modules because some other module or state might require just that package which was installed.
In some edge cases, Salt might need to be told to reload the modules. Consider
the following state file, which we'll call pep8.sls:
python-pip:
cmd.run:
- name: |
easy_install --script-dir=/usr/bin -U pip
- cwd: /
pep8:
pip.installed:
- require:
- cmd: python-pip
The above example installs pip using easy_install from setuptools and
installs pep8 using pip, which, as mentioned
earlier, requires pip to be installed system-wide. Let's execute this state:
salt-call state.sls pep8
The execution output would be something like:
----------
State: - pip
Name: pep8
Function: installed
Result: False
Comment: State pip.installed found in sls pep8 is unavailable
Changes:
Summary
------------
Succeeded: 1
Failed: 1
------------
Total: 2
If we executed the state again the output would be:
----------
State: - pip
Name: pep8
Function: installed
Result: True
Comment: Package was successfully installed
Changes: pep8==1.4.6: Installed
Summary
------------
Succeeded: 2
Failed: 0
------------
Total: 2
Since we installed pip using cmd
, Salt has no way
to know that a system-wide package was installed.
On the second execution, since the required pip package was installed, the state executed correctly.
Note
Salt does not reload modules on every state run because doing so would greatly slow down state execution.
So how do we solve this edge case? reload_modules!
reload_modules is a boolean option recognized by Salt on all available
states, which forces Salt to reload its modules once a given state finishes.
The modified state file would now be:
python-pip:
cmd.run:
- name: |
easy_install --script-dir=/usr/bin -U pip
- cwd: /
- reload_modules: true
pep8:
pip.installed:
- require:
- cmd: python-pip
Let's run it, once:
salt-call state.sls pep8
The output is:
----------
State: - pip
Name: pep8
Function: installed
Result: True
Comment: Package was successfully installed
Changes: pep8==1.4.6: Installed
Summary
------------
Succeeded: 2
Failed: 0
------------
Total: 2
The built-in state modules that ship with Salt:
alias | Configuration of email aliases
alternatives | Configuration of the alternatives system
apache | Apache state
apache_module | Manage Apache Modules
apt | Package management operations specific to APT- and DEB-based systems
archive | Extract an archive
artifactory | This state downloads artifacts from artifactory.
at | Configuration disposable regularly scheduled tasks for at.
augeas | Configuration management using Augeas
aws_sqs | Manage SQS Queues
blockdev | Management of Block Devices
boto_asg | Manage Autoscale Groups
boto_cfn | Connection module for Amazon Cloud Formation
boto_cloudwatch_alarm | Manage Cloudwatch alarms
boto_dynamodb | Manage DynamoDB Tables
boto_ec2 | Manage EC2
boto_elasticache | Manage Elasticache
boto_elb | Manage ELBs
boto_iam | Manage IAM roles.
boto_iam_role | Manage IAM roles
boto_kms | Manage KMS keys, key policies and grants.
boto_lc | Manage Launch Configurations
boto_rds | Manage RDSs
boto_route53 | Manage Route53 records
boto_secgroup | Manage Security Groups
boto_sns | Manage SNS Topics
boto_sqs | Manage SQS Queues
boto_vpc | Manage VPCs
bower | Installation of Bower Packages
cabal | Installation of Cabal Packages
chef | Execute Chef client runs
cloud | Using states instead of maps to deploy clouds
cmd | Execution of arbitrary commands
composer | Installation of Composer Packages
cron | Management of cron, the Unix command scheduler
cyg | Installation of Cygwin packages.
ddns | Dynamic DNS updates
debconfmod | Management of debconf selections
disk | Disk monitoring state
dockerio | Manage Docker containers
dockerng | Management of Docker containers
drac | Management of Dell DRAC
environ | Support for getting and setting the environment variables of the current salt process.
eselect | Management of Gentoo configuration using eselect
event | Send events through Salt's event system during state runs
file | Operations on regular files, special files, directories, and symlinks
gem | Installation of Ruby modules packaged as gems
git | Interaction with Git repositories
glusterfs | Manage glusterfs pool.
gnomedesktop | Configuration of the GNOME desktop
grafana | Manage Grafana Dashboards
grains | Manage grains on the minion
group | Management of user groups
hg | Interaction with Mercurial repositories
hipchat | Send a message to Hipchat
host | Management of addresses and names in hosts file
htpasswd | Support for htpasswd module
http | HTTP monitoring states
incron | Management of incron, the inotify cron
influxdb_database | Management of InfluxDB databases
influxdb_user | Management of InfluxDB users
ini_manage | Manage ini files
ipmi | Manage IPMI devices over LAN
ipset | Management of ipsets
iptables | Management of iptables
jboss7 | Manage JBoss 7 Application Server via CLI interface
keyboard | Management of keyboard layouts
keystone | Management of Keystone users
kmod | Loading and unloading of kernel modules
layman | Management of Gentoo Overlays using layman
libvirt | Manage libvirt certificates
linux_acl | Linux File Access Control Lists
locale | Management of languages/locales
lvm | Management of Linux logical volumes
lvs_server | Management of LVS (Linux Virtual Server) Real Server
lvs_service | Management of LVS (Linux Virtual Server) Service
lxc | Manage Linux Containers
makeconf | Management of Gentoo make.conf
mdadm | Managing software RAID with mdadm
memcached | States for Management of Memcached Keys
modjk | State to control Apache modjk
modjk_worker | Manage modjk workers
module | Execution of Salt modules from within states
mongodb_database | Management of Mongodb databases
mongodb_user | Management of Mongodb users
monit | Monit state
mount | Mounting of filesystems
mysql_database | Management of MySQL databases (schemas)
mysql_grants | Management of MySQL grants (user permissions)
mysql_query | Execution of MySQL queries
mysql_user | Management of MySQL users
network | Configuration of network interfaces
nftables | Management of nftables
npm | Installation of NPM Packages
ntp | Management of NTP servers
openstack_config | Manage OpenStack configuration file settings.
pagerduty | Create an Event in PagerDuty
pagerduty_escalation_policy | Manage PagerDuty escalation policies.
pagerduty_schedule | Manage PagerDuty schedules.
pagerduty_service | Manage PagerDuty services
pagerduty_user | Manage PagerDuty users.
pecl | Installation of PHP Extensions Using pecl
pip_state | Installation of Python Packages Using pip
pkg | Installation of packages using OS package managers such as yum or apt-get
pkgng | Manage package remote repo using FreeBSD pkgng
pkgrepo | Management of APT/YUM package repos
portage_config | Management of Portage package configuration on Gentoo
ports | Manage software from FreeBSD ports
postgres_database | Management of PostgreSQL databases
postgres_extension | Management of PostgreSQL extensions (e.g.: postgis)
postgres_group | Management of PostgreSQL groups (roles)
postgres_schema | Management of PostgreSQL schemas
postgres_user | Management of PostgreSQL users (roles)
powerpath | Powerpath configuration support
process | Process Management
pushover | Send a message to PushOver
pyenv | Managing python installations with pyenv
pyrax_queues | Manage Rackspace Queues
quota | Management of POSIX Quotas
rabbitmq_cluster | Manage RabbitMQ Clusters
rabbitmq_plugin | Manage RabbitMQ Plugins
rabbitmq_policy | Manage RabbitMQ Policies
rabbitmq_user | Manage RabbitMQ Users
rabbitmq_vhost | Manage RabbitMQ Virtual Hosts
rbenv | Managing Ruby installations with rbenv
rdp | Manage RDP Service on Windows servers
redismod | Management of Redis server
reg | Manage the registry on Windows
rvm | Managing Ruby installations and gemsets with Ruby Version Manager (RVM)
saltmod | Control the Salt command interface
schedule | Management of the Salt scheduler
selinux | Management of SELinux rules
serverdensity_device | Monitor Server with Server Density
service | Starting or restarting of services and daemons
slack | Send a message to Slack
smtp | Sending Messages via SMTP
splunk_search | Splunk Search State Module
ssh_auth | Control of entries in SSH authorized_key files
ssh_known_hosts | Control of SSH known_hosts entries
stateconf | Stateconf System
status | Minion status monitoring
supervisord | Interaction with the Supervisor daemon
svn | Manage SVN repositories
sysctl | Configuration of the Linux kernel using sysctl
syslog_ng | State module for syslog_ng
sysrc |
test | Test States
timezone | Management of timezones
tomcat | This state uses the manager webapp to manage Apache tomcat webapps
tuned |
uptime | Monitor Web Server with Uptime
user | Management of user accounts
vbox_guest | VirtualBox Guest Additions installer state
virtualenv_mod | Setup of Python virtualenv sandboxes
win_dacl | Windows Object Access Control Lists
win_dns_client | Module for configuring DNS Client on Windows systems
win_firewall | State for configuring Windows Firewall
win_network | Configuration of network interfaces on Windows hosts
win_path | Manage the Windows System PATH
win_servermanager | Manage Windows features via the ServerManager powershell module
win_system | Management of Windows system information
win_update | Management of the windows update agent
winrepo | Manage Windows Package Repository
x509 | Manage X509 Certificates
xmpp | Sending Messages over XMPP
zcbuildout | Management of zc.buildout
zk_concurrency | Control concurrency of steps within state execution using zookeeper
Salt execution modules are the functions called by the salt command.
Note
Salt execution modules are different from state modules and cannot be
called directly within state files. You must use the module
state module to call execution modules within state runs.
See also
Salt ships with many modules that cover a wide variety of tasks.
Writing Salt execution modules is straightforward.
A Salt execution module is a Python or Cython module
placed in a directory called _modules/
within the file_roots specified by the master config file. By
default this is /srv/salt/_modules on Linux systems.
Modules placed in _modules/ will be synced to the minions when any of the following
Salt functions are called: state.highstate, saltutil.sync_modules, and saltutil.sync_all.
Note that a module's default name is its filename
(i.e. foo.py
becomes module foo
), but that its name can be overridden
by using a __virtual__ function.
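As a minimal, hypothetical sketch, a file /srv/salt/_modules/mymodule.py containing the following would be loaded as the module mymodule:
def hello(name):
    '''
    Return a simple greeting.

    CLI Example::

        salt '*' mymodule.hello world
    '''
    return 'Hello, {0}!'.format(name)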
If a Salt module has errors and cannot be imported, the Salt minion will continue to load without issue and the module with errors will simply be omitted.
If adding a Cython module the file must be named <modulename>.pyx
so that
the loader knows that the module needs to be imported as a Cython module. The
compilation of the Cython module is automatic and happens when the minion
starts, so only the *.pyx
file is required.
All of the Salt execution modules are available to each other and modules can call functions available in other execution modules.
The variable __salt__
is packed into the modules after they are loaded into
the Salt minion.
The __salt__
variable is a Python dictionary
containing all of the Salt functions. Dictionary keys are strings representing the
names of the modules and the values are the functions themselves.
Salt modules can be cross-called by accessing the value in the __salt__
dict:
def foo(bar):
return __salt__['cmd.run'](bar)
This code will call the run function in the cmd execution module and pass
the argument bar to it.
When interacting with execution modules often it is nice to be able to read information dynamically about the minion or to load in configuration parameters for a module.
Salt allows for different types of data to be loaded into the modules by the minion.
The values detected by the Salt Grains on the minion are available in a
dict named __grains__
and can be accessed
from within callable objects in the Python modules.
To see the contents of the grains dictionary for a given system in your deployment
run the grains.items()
function:
salt 'hostname' grains.items --output=pprint
Any value in a grains dictionary can be accessed as any other Python dictionary. For
example, the grain representing the minion ID is stored in the id
key and from
an execution module, the value would be stored in __grains__['id']
.
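As a short, hypothetical sketch, an execution module function could use grains like so:
def summary():
    '''
    Return a one-line description of this minion built from its grains.
    '''
    return '{0} is running {1}'.format(__grains__['id'], __grains__['os'])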
Since parameters for configuring a module may be desired, Salt allows for configuration information from the minion configuration file to be passed to execution modules.
Since the minion configuration file is a YAML document, arbitrary configuration
data can be passed in the minion config that is read by the modules. It is therefore
strongly recommended that the values passed in the configuration file match
the module name. A value intended for the test
execution module should be named
test.<value>
.
The test execution module contains usage of the module configuration, and the default
configuration file for the minion contains the information and format used to
pass data to the modules. See salt.modules.test and conf/minion.
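As a hypothetical sketch, a value such as mymodule.enabled: True placed in the minion configuration file could be read from the module through the __opts__ dictionary:
def enabled():
    '''
    Read a module-specific value from the minion configuration,
    falling back to a default when it is not set.
    '''
    return __opts__.get('mymodule.enabled', False)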
Since execution module functions can return different data, and the way the data is printed can greatly change the presentation, Salt has a printout configuration.
When writing a module the __outputter__
dictionary can be declared in the module.
The __outputter__
dictionary contains a mapping of function name to Salt
Outputter.
__outputter__ = {
'run': 'txt'
}
This will ensure that the text outputter is used.
Sometimes an execution module should be presented in a generic way. A good example of this can be found in the package manager modules. The package manager changes from one operating system to another, but the Salt execution module that interfaces with the package manager can be presented in a generic way.
The Salt modules for package managers all contain a __virtual__
function
which is called to define what systems the module should be loaded on.
The __virtual__
function is used to return either a
string or False
. If
False is returned then the module is not loaded, if a string is returned then
the module is loaded with the name of the string.
Note
Optionally, modules may additionally return a list of reasons that a module could not be loaded. For example, if a dependency for 'my_mod' was not met, a __virtual__ function could do as follows:
return False, ['My Module must be installed before this module can be used.']
This means that the package manager modules can be presented as the pkg
module
regardless of what the actual module is named.
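A minimal sketch of such a __virtual__ function, assuming a module that should only load on Debian-family systems and be presented as pkg:
def __virtual__():
    '''
    Load as "pkg" on Debian-family systems only.
    '''
    if __grains__.get('os_family') == 'Debian':
        return 'pkg'
    return False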
Since __virtual__
is called before the module is loaded, __salt__
will be
unavailable as it will not have been packed into the module at this point in time.
The package manager modules are among the best example of using the __virtual__
function. Some examples:
Note
Modules which return a string from __virtual__ that is already used by a module that
ships with Salt will override the stock module.
Salt execution modules are documented. The sys.doc()
function will return the
documentation for all available modules:
salt '*' sys.doc
The sys.doc
function simply prints out the docstrings found in the modules; when
writing Salt execution modules, please follow the formatting conventions for docstrings as
they appear in the other modules.
It is strongly suggested that all Salt modules have documentation added.
To add documentation add a Python docstring to the function.
def spam(eggs):
'''
A function to make some spam with eggs!
CLI Example::
salt '*' test.spam eggs
'''
return eggs
Now when the sys.doc call is executed the docstring will be cleanly returned to the calling terminal.
Documentation added to execution modules in docstrings will automatically be added to the online web-based documentation.
When writing a Python docstring for an execution module, add information about the module using the following field lists:
:maintainer: Thomas Hatch <thatch@saltstack.com>, Seth House <shouse@saltstack.com>
:maturity: new
:depends: python-mysqldb
:platform: all
The maintainer field is a comma-delimited list of developers who help maintain this module.
The maturity field indicates the level of quality and testing for this module. Standard labels will be determined.
The depends field is a comma-delimited list of modules that this module depends on.
The platform field is a comma-delimited list of platforms that this module is known to run on.
In Salt, Python callable objects contained within an execution module are made available
to the Salt minion for use. The only exception to this rule is a callable
object with a name starting with an underscore _
.
def foo(bar):
return bar
class baz:
def __init__(self, quo):
pass
def _foobar(baz): # Preceded with an _
return baz
cheese = {} # Not a callable Python object
Note
Some callable names also end with an underscore _ to avoid clashes
with Python keywords. When referencing such functions from execution modules
or state modules, the trailing underscore should be omitted.
When writing execution modules there are many times where some of the module will work on all hosts but some functions have an external dependency, such as a service that needs to be installed or a binary that needs to be present on the system.
Instead of trying to wrap much of the code in large try/except blocks, a decorator can be used.
If the dependencies passed to the decorator don't exist, then the salt minion will remove those functions from the module on that host.
If a "fallback_function" is defined, it will replace the function instead of removing it.
import logging
from salt.utils.decorators import depends
log = logging.getLogger(__name__)
try:
import dependency_that_sometimes_exists
except ImportError as e:
log.trace('Failed to import dependency_that_sometimes_exists: {0}'.format(e))
@depends('dependency_that_sometimes_exists')
def foo():
'''
Function with a dependency on the "dependency_that_sometimes_exists" module,
if the "dependency_that_sometimes_exists" is missing this function will not exist
'''
return True
def _fallback():
'''
Fallback function for the depends decorator to replace a function with
'''
return '"dependency_that_sometimes_exists" needs to be installed for this function to exist'
@depends('dependency_that_sometimes_exists', fallback_function=_fallback)
def foo():
'''
Function with a dependency on the "dependency_that_sometimes_exists" module.
If the "dependency_that_sometimes_exists" is missing this function will be
replaced with "_fallback"
'''
return True
In addition to global dependencies, the depends decorator also supports raw booleans.
from salt.utils.decorators import depends
HAS_DEP = False
try:
import dependency_that_sometimes_exists
HAS_DEP = True
except ImportError:
pass
@depends(HAS_DEP)
def foo():
return True
Salt includes a number of built-in subsystems to generate top file data; they are listed at Full list of builtin master tops modules.
The source for the built-in Salt master tops can be found here: https://github.com/saltstack/salt/blob/develop/salt/tops
cobbler | Cobbler Tops
ext_nodes | External Nodes Classifier
mongo | Read tops data from a mongodb collection
reclass_adapter | Read tops data from a reclass database

The built-in wheel modules:
config | Manage the master configuration file
error | Error generator to enable integration testing of salt wheel error handling
file_roots | Read in files from the file_root and save files to the file root
key | Wheel system wrapper for key system
minions | Wheel system wrapper for connected minions
pillar_roots | The pillar_roots wheel module is used to manage files under the pillar roots directories on the master server.

The built-in beacon modules:
btmp | Beacon to fire events at failed login of users
diskusage | Beacon to monitor disk usage.
inotify | Watch files and translate the changes into salt events
journald | A simple beacon to watch journald for specific entries
load | Beacon to emit system load averages
network_info | Beacon to monitor statistics from ethernet adapters
service | Send events covering service status
sh | Watch the shell commands being executed actively.
twilio_txt_msg | Beacon to emit Twilio text messages
wtmp | Beacon to fire events at login of users as registered in the wtmp file

The built-in engine modules:
logstash | An engine that reads messages from the salt event bus and pushes them onto a logstash endpoint.
sqs_events | An engine that continuously reads messages from SQS and fires them as events.
test | A simple test engine, not intended for real use but as an example
Salt's extreme flexibility leads to many questions concerning the structure of configuration files.
This document exists to clarify these points through examples and code.
When structuring Salt States and Formulas it is important to begin with the directory structure. A proper directory structure clearly defines the functionality of each state to the user via visual inspection of the state's name.
Reviewing the MySQL Salt Formula, the benefits to the end user are clear from a sample of the available states:
/srv/salt/mysql/files/
/srv/salt/mysql/client.sls
/srv/salt/mysql/map.jinja
/srv/salt/mysql/python.sls
/srv/salt/mysql/server.sls
This directory structure would lead to these states being referenced in a top file in the following way:
base:
'web*':
- mysql.client
- mysql.python
'db*':
- mysql.server
This clear definition ensures that the user is properly informed of what each state will do.
Another example comes from the vim-formula:
/srv/salt/vim/files/
/srv/salt/vim/absent.sls
/srv/salt/vim/init.sls
/srv/salt/vim/map.jinja
/srv/salt/vim/nerdtree.sls
/srv/salt/vim/pyflakes.sls
/srv/salt/vim/salt.sls
Once again viewing how this would look in a top file:
/srv/salt/top.sls:
base:
'web*':
- vim
- vim.nerdtree
- vim.pyflakes
- vim.salt
'db*':
- vim.absent
The usage of a clear top-level directory as well as properly named states reduces the overall complexity and leads a user to both understand what will be included at a glance and where it is located.
In addition, Formulas should be used as often as possible.
Note
Formulas repositories on the saltstack-formulas GitHub organization should not be pointed to directly from systems that automatically fetch new updates such as GitFS or similar tooling. Instead formulas repositories should be forked on GitHub or cloned locally, where unintended, automatic changes will not take place.
Pillars are used to store
secure and insecure data pertaining to minions. When designing the structure
of the /srv/pillar
directory, the pillars contained within
should once again be focused on clear and concise data which users can easily
review, modify, and understand.
The /srv/pillar/
directory is primarily controlled by top.sls
. It
should be noted that the pillar top.sls
is not used as a location to
declare variables and their values. The top.sls
is used as a way to
include other pillar files and organize the way they are matched based on
environments or grains.
An example top.sls
may be as simple as the following:
/srv/pillar/top.sls:
base:
'*':
- packages
Or much more complicated, using a variety of matchers:
/srv/pillar/top.sls:
base:
'*':
- apache
dev:
'os:Debian':
- match: grain
- vim
test:
'* and not G@os: Debian':
- match: compound
- emacs
These examples show how the top file provides users with power, but when used incorrectly it can lead to confusing configurations. This is why it is important to understand that the top file for pillar is not used for variable definitions.
Each SLS file within the /srv/pillar/
directory should correspond to the
states which it matches.
This would mean that the apache
pillar file should contain data relevant to
Apache. Structuring files in this way once again ensures modularity, and
creates a consistent understanding throughout our Salt environment. Users can
expect that pillar variables found in an Apache state will live inside of an
Apache pillar:
/srv/pillar/apache.sls:
apache:
lookup:
name: httpd
config:
tmpl: /etc/httpd/httpd.conf
While this pillar file is simple, it shows how a pillar file explicitly relates to the state it is associated with.
Salt allows users to define variables in SLS files. When creating a state variables should provide users with as much flexibility as possible. This means that variables should be clearly defined and easy to manipulate, and that sane defaults should exist in the event a variable is not properly defined. Looking at several examples shows how these different items can lead to extensive flexibility.
Although it is possible to set variables locally, this is generally not preferred:
/srv/salt/apache/conf.sls:
{% set name = 'httpd' %}
{% set tmpl = 'salt://apache/files/httpd.conf' %}
include:
- apache
apache_conf:
file.managed:
- name: {{ name }}
- source: {{ tmpl }}
- template: jinja
- user: root
- watch_in:
- service: apache
When generating this information it can be easily transitioned to the pillar where data can be overwritten, modified, and applied to multiple states, or locations within a single state:
/srv/pillar/apache.sls:
apache:
lookup:
name: httpd
config:
tmpl: salt://apache/files/httpd.conf
/srv/salt/apache/conf.sls:
{% from "apache/map.jinja" import apache with context %}
include:
- apache
apache_conf:
file.managed:
- name: {{ salt['pillar.get']('apache:lookup:name') }}
- source: {{ salt['pillar.get']('apache:lookup:config:tmpl') }}
- template: jinja
- user: root
- watch_in:
- service: apache
This flexibility provides users with a centralized location to modify variables, which is extremely important as an environment grows.
Ensuring that states are modular is one of the key concepts to understand within Salt. When creating a state a user must consider how many times the state could be re-used, and what it relies on to operate. Below are several examples which will iteratively explain how a user can go from a state which is not very modular to one that is:
/srv/salt/apache/init.sls:
httpd:
pkg.installed: []
service.running:
- enable: True
/etc/httpd/httpd.conf:
file.managed:
- source: salt://apache/files/httpd.conf
- template: jinja
- watch_in:
- service: httpd
The example above is probably the worst-case scenario when writing a state. There is a clear lack of focus: both the pkg/service and the managed file are named directly as the state ID. This would lead to changing multiple requires within this state, as well as in others that may depend upon the state.
Imagine if a require was used for the httpd
package in another state, and
then suddenly it's a custom package. Now changes need to be made in multiple
locations which increases the complexity and leads to a more error prone
configuration.
There is also the issue of having the configuration file located in the init, as a user would be unable to simply install the service and use the default conf file.
Our second revision begins to address the referencing by using - name
, as
opposed to direct ID references:
/srv/salt/apache/init.sls:
apache:
pkg.installed:
- name: httpd
service.running:
- name: httpd
- enable: True
apache_conf:
file.managed:
- name: /etc/httpd/httpd.conf
- source: salt://apache/files/httpd.conf
- template: jinja
- watch_in:
- service: apache
The above init file is better than our original, yet it has several issues which lead to a lack of modularity. The first of these problems is the usage of static values for items such as the name of the service, the name of the managed file, and the source of the managed file. When these items are hard coded they become difficult to modify and the opportunity to make mistakes arises. It also leads to multiple edits that need to occur when changing these items (imagine if there were dozens of these occurrences throughout the state!). There is also still the concern of the configuration file data living in the same state as the service and package.
In the next example steps will be taken to begin addressing these issues. Starting with the addition of a map.jinja file (as noted in the Formula documentation), and modification of static values:
/srv/salt/apache/map.jinja:
{% set apache = salt['grains.filter_by']({
'Debian': {
'server': 'apache2',
'service': 'apache2',
'conf': '/etc/apache2/apache.conf',
},
'RedHat': {
'server': 'httpd',
'service': 'httpd',
'conf': '/etc/httpd/httpd.conf',
},
}, merge=salt['pillar.get']('apache:lookup')) %}
/srv/pillar/apache.sls:
apache:
lookup:
config:
tmpl: salt://apache/files/httpd.conf
/srv/salt/apache/init.sls:
{% from "apache/map.jinja" import apache with context %}
apache:
pkg.installed:
- name: {{ apache.server }}
service.running:
- name: {{ apache.service }}
- enable: True
apache_conf:
file.managed:
- name: {{ apache.conf }}
- source: {{ salt['pillar.get']('apache:lookup:config:tmpl') }}
- template: jinja
- user: root
- watch_in:
- service: apache
The changes to this state now allow us to easily identify the location of the variables, as well as ensuring they are flexible and easy to modify. While this takes another step in the right direction, it is not yet complete. Suppose the user did not want to use the provided conf file, or even their own configuration file, but the default apache conf. With the current state setup this is not possible. To attain this level of modularity this state will need to be broken into two states.
/srv/salt/apache/map.jinja:
{% set apache = salt['grains.filter_by']({
'Debian': {
'server': 'apache2',
'service': 'apache2',
'conf': '/etc/apache2/apache.conf',
},
'RedHat': {
'server': 'httpd',
'service': 'httpd',
'conf': '/etc/httpd/httpd.conf',
},
}, merge=salt['pillar.get']('apache:lookup')) %}
/srv/pillar/apache.sls:
apache:
lookup:
config:
tmpl: salt://apache/files/httpd.conf
/srv/salt/apache/init.sls:
{% from "apache/map.jinja" import apache with context %}
apache:
pkg.installed:
- name: {{ apache.server }}
service.running:
- name: {{ apache.service }}
- enable: True
/srv/salt/apache/conf.sls:
{% from "apache/map.jinja" import apache with context %}
include:
- apache
apache_conf:
file.managed:
- name: {{ apache.conf }}
- source: {{ salt['pillar.get']('apache:lookup:config:tmpl') }}
- template: jinja
- user: root
- watch_in:
- service: apache
This new structure now allows users to choose whether they only wish to install the default Apache, or if they wish, overwrite the default package, service, configuration file location, or the configuration file itself. In addition to this the data has been broken between multiple files allowing for users to identify where they need to change the associated data.
Secure data refers to any information that you would not wish to share with anyone accessing a server. This could include data such as passwords, keys, or other information.
As all data within a state is accessible by EVERY server that is connected, it is important to store secure data within pillar. This will ensure that only those servers which require this secure data have access to it. In this example a user can go from an insecure configuration to one which is only accessible by the appropriate hosts:
/srv/salt/mysql/testerdb.sls:
testdb:
  mysql_database.present:
- name: testerdb
/srv/salt/mysql/user.sls:
include:
- mysql.testerdb
testdb_user:
mysql_user.present:
- name: frank
- password: "test3rdb"
- host: localhost
- require:
- sls: mysql.testerdb
Many users would review this state and see that the password is there in plain text, which is quite problematic. It results in several issues which may not be immediately visible.
The first of these issues is clear to most users -- the password being visible in this state. This means that any minion will have a copy of this, and therefore the password which is a major security concern as minions may not be locked down as tightly as the master server.
The other issue that can be encountered is access by users on the master. If everyone has access to the states (or their repository), then they are able to review this password. Keeping your password data accessible by only a few users is critical for both security and peace of mind.
There is also the issue of portability. When a state is configured this way it results in multiple changes needing to be made. This was discussed in the sections above but it is a critical idea to drive home. If states are not portable it may result in more work later!
Fixing this issue is relatively simple; the content just needs to be moved to the associated pillar:
/srv/pillar/mysql.sls:
mysql:
lookup:
name: testerdb
password: test3rdb
user: frank
host: localhost
/srv/salt/mysql/testerdb.sls:
testdb:
mysql_database.present:
- name: {{ salt['pillar.get']('mysql:lookup:name') }}
/srv/salt/mysql/user.sls:
include:
- mysql.testerdb
testdb_user:
mysql_user.present:
- name: {{ salt['pillar.get']('mysql:lookup:user') }}
- password: {{ salt['pillar.get']('mysql:lookup:password') }}
- host: {{ salt['pillar.get']('mysql:lookup:host') }}
- require:
- sls: mysql.testerdb
Now that the database details have been moved to the associated pillar file, only machines which are targeted via pillar will have access to these details. Access to users who should not be able to review these details can also be prevented while ensuring that they are still able to write states which take advantage of this information.
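For instance, the database pillar could be restricted to database minions through the pillar top file; a minimal sketch, reusing the 'db*' targeting convention from earlier examples:
/srv/pillar/top.sls:
base:
  'db*':
    - mysql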
The intent of the troubleshooting section is to introduce solutions to a number of common issues encountered by users and the tools that are available to aid in developing States and Salt code.
If your Salt master is having issues such as minions not returning data, slow execution times, or a variety of other issues, the following links contain details on troubleshooting the most common issues encountered:
A great deal of information is available via the debug logging system. If you are having issues with minions connecting or not starting, run the master in the foreground:
# salt-master -l debug
Anyone wanting to run Salt daemons via a process supervisor such as monit,
runit, or supervisord, should omit the -d
argument to the daemons and
run them in the foreground.
For the master, TCP ports 4505 and 4506 need to be open. If you've put both your Salt master and minion in debug mode and don't see an acknowledgment that your minion has connected, it could very well be a firewall interfering with the connection. See our firewall configuration page for help opening the firewall on various platforms.
If you've opened the correct TCP ports and still aren't seeing connections, check that no additional access control system such as SELinux or AppArmor is blocking Salt.
The salt-master needs at least 2 sockets per host that connects to it: one for the Publisher and one for the response port. Thus, large installations may, upon scaling up the number of minions accessing a given master, encounter:
12:45:29,289 [salt.master ][INFO ] Starting Salt worker process 38
Too many open files
sock != -1 (tcp_listener.cpp:335)
The solution to this would be to check the number of files allowed to be opened by the user running salt-master (root by default):
[root@salt-master ~]# ulimit -n
1024
If this value is not equal to at least twice the number of minions, then it
will need to be raised. For example, in an environment with 1800 minions, the
nofile
limit should be set to no less than 3600. This can be done by
creating the file /etc/security/limits.d/99-salt.conf
, with the following
contents:
root hard nofile 4096
root soft nofile 4096
Replace root
with the user under which the master runs, if different.
If your master does not have an /etc/security/limits.d
directory, the lines
can simply be appended to /etc/security/limits.conf
.
As with any change to resource limits, it is best to stay logged into your
current shell and open another shell to run ulimit -n
again and verify that
the changes were applied correctly. Additionally, if your master is running
upstart, it may be necessary to specify the nofile
limit in
/etc/default/salt-master
if upstart isn't respecting your resource limits:
limit nofile 4096 4096
Note
The above is simply an example of how to set these values, and you may wish to increase them even further if your Salt master is doing more than just running Salt.
There are known bugs with ZeroMQ versions less than 2.1.11 which can cause the
Salt master to not respond properly. If you're running a ZeroMQ version greater
than or equal to 2.1.9, you can work around the bug by setting the sysctls
net.core.rmem_max
and net.core.wmem_max
to 16777216. Next, set the third
field in net.ipv4.tcp_rmem
and net.ipv4.tcp_wmem
to at least 16777216.
You can do it manually with something like:
# echo 16777216 > /proc/sys/net/core/rmem_max
# echo 16777216 > /proc/sys/net/core/wmem_max
# echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_rmem
# echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_wmem
Or with the following Salt state:
net.core.rmem_max:
  sysctl:
    - present
    - value: 16777216

net.core.wmem_max:
  sysctl:
    - present
    - value: 16777216

net.ipv4.tcp_rmem:
  sysctl:
    - present
    - value: 4096 87380 16777216

net.ipv4.tcp_wmem:
  sysctl:
    - present
    - value: 4096 87380 16777216
If the master seems to be unresponsive, a SIGUSR1 can be passed to the salt-master threads to display what piece of code is executing. This debug information can be invaluable in tracking down bugs.
To pass a SIGUSR1 to the master, first make sure the master is running in the foreground. Stop the service if it is running as a daemon, and start it in the foreground like so:
# salt-master -l debug
Then pass the signal to the master when it seems to be unresponsive:
# killall -SIGUSR1 salt-master
When filing an issue or sending questions to the mailing list for a problem with an unresponsive daemon, be sure to include this information if possible.
When faced with performance problems one can turn on master process profiling by sending it SIGUSR2.
# killall -SIGUSR2 salt-master
This will activate the yappi profiler inside the salt-master code. After some time, send SIGUSR2 again to stop profiling and save the results to a file. If run in the foreground, salt-master will report the filename for the results, which are usually located under /tmp on Unix-based OSes and c:\temp on Windows.
Results can then be analyzed with kcachegrind or a similar tool.
Depending on your OS (this is most common on Ubuntu due to apt-get), you may sometimes encounter situations where your highstate or other long-running commands do not return output.
Note
A number of timing issues were resolved in the 2014.1 release of Salt. Upgrading to at least this version is strongly recommended if timeouts persist.
By default the timeout is set to 5 seconds. The timeout value can easily be
increased by modifying the timeout
line within your /etc/salt/master
configuration file.
Using the -c
option with the Salt command modifies the configuration
directory. When the configuration file is read it will still base data off of
the root_dir
setting. This can result in unintended behavior if you are
expecting files such as /etc/salt/pki
to be pulled from the location
specified with -c
. Modify the root_dir
setting to address this
behavior.
When a command being run via Salt takes a very long time to return
(package installations, certain scripts, etc.) the master may drop you back
to the shell. In most situations the job is still running but Salt has
exceeded the set timeout before returning. Querying the job queue will
provide the data of the job but is inconvenient. This can be resolved by
either manually using the -t
option to set a longer timeout when running
commands (by default it is 5 seconds) or by modifying the master
configuration file: /etc/salt/master
and setting the timeout
value to
change the default timeout for all commands, and then restarting the
salt-master service.
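As a minimal sketch, the change amounts to setting a single line in the master configuration and restarting the service; the value of 60 here is illustrative:
# /etc/salt/master
timeout: 60
Alternatively, a one-off command can be given a longer timeout with the -t option, e.g. salt -t 60 '*' state.highstate.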
In large installations, care must be taken not to overwhelm the master with authentication requests. Several options can be set on the master which mitigate the chances of an authentication flood causing an interruption in service.
Note
recon_default:
The average number of seconds to wait between reconnection attempts.
To debug the states, you can use salt-call to run the highstate locally:
salt-call -l trace --local state.highstate
The top.sls file is used to map what SLS modules get loaded onto what minions via the state system.
It is located in the directory defined by the file_roots variable in the Salt master configuration file, which is found in CONFIG_DIR/master, normally /etc/salt/master
The default configuration for the file_roots
is:
file_roots:
  base:
    - /srv/salt
So the top file defaults to the location /srv/salt/top.sls
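A minimal top file might look like the following sketch; the common and webserver SLS names and the web* glob are illustrative:
/srv/salt/top.sls:
base:
  '*':
    - common
  'web*':
    - webserver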
In the event that your Salt minion is having issues, a variety of solutions and suggestions are available. Please refer to the following links for more information:
A great deal of information is available via the debug logging system. If you are having issues with minions connecting or not starting, run the minion in the foreground:
# salt-minion -l debug
Anyone wanting to run Salt daemons via a process supervisor such as monit,
runit, or supervisord, should omit the -d
argument to the daemons and
run them in the foreground.
No ports need to be opened on the minion, as it makes outbound connections to the master. If you've put both your Salt master and minion in debug mode and don't see an acknowledgment that your minion has connected, it could very well be a firewall interfering with the connection. See our firewall configuration page for help opening the firewall on various platforms.
If you have netcat installed, you can check port connectivity from the minion
with the nc
command:
$ nc -v -z salt.master.ip.addr 4505
Connection to salt.master.ip.addr 4505 port [tcp/unknown] succeeded!
$ nc -v -z salt.master.ip.addr 4506
Connection to salt.master.ip.addr 4506 port [tcp/unknown] succeeded!
The Nmap utility can also be used to check if these ports are open:
# nmap -sS -q -p 4505-4506 salt.master.ip.addr
Starting Nmap 6.40 ( http://nmap.org ) at 2013-12-29 19:44 CST
Nmap scan report for salt.master.ip.addr (10.0.0.10)
Host is up (0.0026s latency).
PORT STATE SERVICE
4505/tcp open unknown
4506/tcp open unknown
MAC Address: 00:11:22:AA:BB:CC (Intel)
Nmap done: 1 IP address (1 host up) scanned in 1.64 seconds
If you've opened the correct TCP ports and still aren't seeing connections, check that no additional access control system such as SELinux or AppArmor is blocking Salt.
The salt-call
command was originally developed for aiding in the development
of new Salt modules. Since then, many applications have been developed for
running any Salt module locally on a minion. These range from the original
intent of salt-call, development assistance, to gathering more verbose output
from calls like state.highstate
.
When initially creating your state tree, it is generally recommended to invoke
state.highstate
from the minion with
salt-call
. This displays far more information about the highstate execution
than calling it remotely. For even more verbosity, increase the loglevel with
the same argument as salt-minion
:
# salt-call -l debug state.highstate
The main difference between using salt
and using salt-call
is that
salt-call
is run from the minion, and it only runs the selected function on
that minion. By contrast, salt
is run from the master, and requires you to
specify the minions on which to run the command using salt's targeting
system.
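The contrast is easiest to see side by side; in this sketch, web01 is an illustrative minion ID:
# On the master, targeting one or more minions:
salt 'web01' test.ping
# On the minion itself, no targeting needed:
salt-call test.ping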
If the minion seems to be unresponsive, a SIGUSR1 can be passed to the process to display what piece of code is executing. This debug information can be invaluable in tracking down bugs.
To pass a SIGUSR1 to the minion, first make sure the minion is running in the foreground. Stop the service if it is running as a daemon, and start it in the foreground like so:
# salt-minion -l debug
Then pass the signal to the minion when it seems to be unresponsive:
# killall -SIGUSR1 salt-minion
When filing an issue or sending questions to the mailing list for a problem with an unresponsive daemon, be sure to include this information if possible.
As is outlined in github issue #6300, Salt cannot use python's multiprocessing pipes and queues from execution modules. Multiprocessing from the execution modules is perfectly viable; it is just necessary to use Salt's event system to communicate back with the parent process.
The reason for this difficulty is that python attempts to pickle all objects in memory when communicating, and it cannot pickle function objects. Since the Salt loader system creates and manages function objects this causes the pickle operation to fail.
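As a rough, hedged sketch of the pattern (the module name, function names, and event tag below are illustrative, not part of Salt), a custom execution module can have its forked process report back through the minion event bus instead of a multiprocessing.Queue:
# /srv/salt/_modules/longtask.py -- illustrative custom execution module
import multiprocessing


def _work():
    # event.fire is an existing execution module function that fires an
    # event onto the local minion event bus; the tag is an arbitrary example.
    __salt__['event.fire']({'status': 'done'}, 'myco/longtask/done')


def start():
    # Fork the work off. Results are reported over the event bus because
    # pickling Salt's loader-created function objects for a Queue would fail.
    proc = multiprocessing.Process(target=_work)
    proc.start()
    return 'started'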
When a command being run via Salt takes a very long time to return
(package installations, certain scripts, etc.) the minion may drop you back
to the shell. In most situations the job is still running but Salt has
exceeded the set timeout before returning. Querying the job queue will
provide the data of the job but is inconvenient. This can be resolved by
either manually using the -t
option to set a longer timeout when running
commands (by default it is 5 seconds) or by modifying the minion
configuration file: /etc/salt/minion
and setting the timeout
value to
change the default timeout for all commands, and then restarting the
salt-minion service.
Note
Modifying the minion timeout value is not required when running commands from a Salt Master. It is only required when running commands locally on the minion.
A great deal of information is available via the debug logging system. If you are having issues with minions connecting or not starting, run the minion and/or master in the foreground:
salt-master -l debug
salt-minion -l debug
Anyone wanting to run Salt daemons via a process supervisor such as monit,
runit, or supervisord, should omit the -d
argument to the daemons and
run them in the foreground.
No ports need to be opened up on each minion. For the master, TCP ports 4505 and 4506 need to be open. If you've put both your Salt master and minion in debug mode and don't see an acknowledgment that your minion has connected, it could very well be a firewall.
You can check port connectivity from the minion with the nc command:
nc -v -z salt.master.ip 4505
nc -v -z salt.master.ip 4506
There is also a firewall configuration document that might help.
If you've enabled the right TCP ports on your operating system or Linux distribution's firewall and still aren't seeing connections, check that no additional access control system such as SELinux or AppArmor is blocking Salt.
The salt-call
command was originally developed for aiding in the development
of new Salt modules. Since then, many applications have been developed for
running any Salt module locally on a minion. These range from the original
intent of salt-call, development assistance, to gathering more verbose output
from calls like state.highstate
.
When creating your state tree, it is generally recommended to invoke
state.highstate
with salt-call
. This
displays far more information about the highstate execution than calling it
remotely. For even more verbosity, increase the loglevel with the same argument
as salt-minion
:
salt-call -l debug state.highstate
The main difference between using salt
and using salt-call
is that
salt-call
is run from the minion, and it only runs the selected function on
that minion. By contrast, salt
is run from the master, and requires you to
specify the minions on which to run the command using salt's targeting
system.
The salt-master needs at least 2 sockets per host that connects to it, one for the Publisher and one for the response port. Thus, large installations may, upon scaling up the number of minions accessing a given master, encounter:
12:45:29,289 [salt.master ][INFO ] Starting Salt worker process 38
Too many open files
sock != -1 (tcp_listener.cpp:335)
The solution to this would be to check the number of files allowed to be opened by the user running salt-master (root by default):
[root@salt-master ~]# ulimit -n
1024
And modify that value to be at least twice the number of minions. This setting can be changed in limits.conf as the nofile value(s), and takes effect upon a new login of the specified user.
So, an environment with 1800 minions would need 1800 x 2 = 3600 as a minimum.
There are known bugs with ZeroMQ versions less than 2.1.11 which can cause the
Salt master to not respond properly. If you're running a ZeroMQ version greater
than or equal to 2.1.9, you can work around the bug by setting the sysctls
net.core.rmem_max
and net.core.wmem_max
to 16777216. Next, set the third
field in net.ipv4.tcp_rmem
and net.ipv4.tcp_wmem
to at least 16777216.
You can do it manually with something like:
# echo 16777216 > /proc/sys/net/core/rmem_max
# echo 16777216 > /proc/sys/net/core/wmem_max
# echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_rmem
# echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_wmem
Or with the following Salt state:
net.core.rmem_max:
  sysctl:
    - present
    - value: 16777216

net.core.wmem_max:
  sysctl:
    - present
    - value: 16777216

net.ipv4.tcp_rmem:
  sysctl:
    - present
    - value: 4096 87380 16777216

net.ipv4.tcp_wmem:
  sysctl:
    - present
    - value: 4096 87380 16777216
Currently there are no SELinux policies for Salt. For the most part Salt runs
without issue when SELinux is running in Enforcing mode. This is because when
the minion executes as a daemon the type context is changed to initrc_t
.
The problem with SELinux arises when using salt-call or running the minion in
the foreground, since the type context stays unconfined_t
.
This problem is generally manifest in the rpm install scripts when using the
pkg module. Until a full SELinux Policy is available for Salt, the solution
to this issue is to set the execution context of salt-call
and
salt-minion
to rpm_exec_t:
# CentOS 5 and RHEL 5:
chcon -t system_u:system_r:rpm_exec_t:s0 /usr/bin/salt-minion
chcon -t system_u:system_r:rpm_exec_t:s0 /usr/bin/salt-call
# CentOS 6 and RHEL 6:
chcon system_u:object_r:rpm_exec_t:s0 /usr/bin/salt-minion
chcon system_u:object_r:rpm_exec_t:s0 /usr/bin/salt-call
This works well, because the rpm_exec_t
context has very broad control over
other types.
Salt requires Python 2.6 or 2.7. Red Hat Enterprise Linux 5 and its variants
come with Python 2.4 installed by default. When installing on RHEL 5 from the
EPEL repository this is handled for you. But, if you run Salt from git, be
advised that its dependencies need to be installed from EPEL and that Salt
needs to be run with the python26
executable.
An extensive list of YAML idiosyncrasies has been compiled:
One of Salt's strengths, the use of existing serialization systems for representing SLS data, can also backfire. YAML is a general purpose system and there are a number of things that would seem to make sense in an sls file that cause YAML issues. It is wise to be aware of these issues. While reports of running into them are generally rare, they can still crop up at unexpected times.
YAML uses spaces, period. Do not use tabs in your SLS files! If strange
errors are coming up in rendering SLS files, make sure to check that
no tabs have crept in! In Vim, after enabling search highlighting
with: :set hlsearch
, you can check with the following key sequence in
normal mode (you can hit ESC twice to be sure): /
, Ctrl-v, Tab, then
hit Enter. Also, you can convert tabs to 2 spaces by these commands in Vim:
:set tabstop=2 expandtab
and then :retab
.
The suggested syntax for YAML files is to use 2 spaces for indentation, but YAML will follow whatever indentation system that the individual file uses. Indentation of two spaces works very well for SLS files given the fact that the data is uniform and not deeply nested.
When dicts are nested within other data
structures (particularly lists), the indentation logic sometimes changes.
Examples of where this might happen include context
and default
options
from the file.managed state:
/etc/http/conf/http.conf:
  file:
    - managed
    - source: salt://apache/http.conf
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - context:
        custom_var: "override"
    - defaults:
        custom_var: "default value"
        other_var: 123
Notice that while the indentation is two spaces per level, for the values under
the context
and defaults
options there is a four-space indent. If only
two spaces are used to indent, then those keys will be considered part of the
same dictionary that contains the context
key, and so the data will not be
loaded correctly. If using a double indent is not desirable, then a
deeply-nested dict can be declared with curly braces:
/etc/http/conf/http.conf:
  file:
    - managed
    - source: salt://apache/http.conf
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - context: {
        custom_var: "override" }
    - defaults: {
        custom_var: "default value",
        other_var: 123 }
Here is a more concrete example of how YAML actually handles these indentations, using the Python interpreter on the command line:
>>> import yaml
>>> yaml.safe_load('''mystate:
...   file.managed:
...     - context:
...         some: var''')
{'mystate': {'file.managed': [{'context': {'some': 'var'}}]}}
>>> yaml.safe_load('''mystate:
...   file.managed:
...     - context:
...       some: var''')
{'mystate': {'file.managed': [{'some': 'var', 'context': None}]}}
Note that in the second example, some
is added as another key in the same
dictionary, whereas in the first example, it's the start of a new dictionary.
That's the distinction. context
is a common example because it is a keyword
arg for many functions, and should contain a dictionary.
PyYAML will load the values True and False as booleans. Un-capitalized versions will also be loaded as booleans (true, false, yes, no, on, and off). This can be especially problematic when constructing Pillar data. Make sure that your Pillars which need to use the string versions of these values are enclosed in quotes.
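A quick interpreter session shows the behavior; quoting the value preserves it as a string:
>>> import yaml
>>> yaml.safe_load('enabled: on')
{'enabled': True}
>>> yaml.safe_load("enabled: 'on'")
{'enabled': 'on'}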
NOTE: This has been fixed in salt 0.10.0; as of this release, an integer that is preceded by a 0 will be correctly parsed.
When passing integers into an SLS file, they are passed as integers. This means that if a state accepts a string value and an integer is passed, an integer will be sent. The solution here is to send the integer as a string.
This is best explained when setting the mode for a file:
/etc/vimrc:
  file:
    - managed
    - source: salt://edit/vimrc
    - user: root
    - group: root
    - mode: 644
Salt manages this well, since the mode is passed as 644, but if the mode is zero-padded as 0644, then it is read by YAML as an integer and evaluated as an octal value: 0644 becomes 420. Therefore, if the file mode is preceded by a 0 then it needs to be passed as a string:
/etc/vimrc:
  file:
    - managed
    - source: salt://edit/vimrc
    - user: root
    - group: root
    - mode: '0644'
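The difference is visible directly in the Python interpreter:
>>> import yaml
>>> yaml.safe_load('mode: 0644')
{'mode': 420}
>>> yaml.safe_load("mode: '0644'")
{'mode': '0644'}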
If I can find a way to make YAML accept "Double Short Decs" then I will, since I think that double short decs would be awesome. So what is a "Double Short Dec"? It is when you declare multiple short decs in one ID. Here is a standard short dec; it works great:
vim:
  pkg.installed
The short dec means that there are no arguments to pass, so it is not required to add any arguments, and it can save space.
YAML, though, gets upset when declaring multiple short decs. For the record...
THIS DOES NOT WORK:
vim:
  pkg.installed
  user.present
Similarly declaring a short dec in the same ID dec as a standard dec does not work either...
ALSO DOES NOT WORK:
fred:
  user.present
  ssh_auth.present:
    - name: AAAAB3NzaC...
    - user: fred
    - enc: ssh-dss
    - require:
      - user: fred
The correct way is to define them like this:
vim:
  pkg.installed: []
  user.present: []
fred:
  user.present: []
  ssh_auth.present:
    - name: AAAAB3NzaC...
    - user: fred
    - enc: ssh-dss
    - require:
      - user: fred
Alternatively, they can be defined the "old way", or with multiple "full decs":
vim:
  pkg:
    - installed
  user:
    - present

fred:
  user:
    - present
  ssh_auth:
    - present
    - name: AAAAB3NzaC...
    - user: fred
    - enc: ssh-dss
    - require:
      - user: fred
According to the YAML specification, only ASCII characters can be used.
Within double-quotes, special characters may be represented with C-style escape sequences starting with a backslash ( \ ).
Examples:
- micro: "\u00b5"
- copyright: "\u00A9"
- A: "\x41"
- alpha: "\u0251"
- Alef: "\u05d0"
A list of usable Unicode characters will help you to identify the correct numbers.
Python can also be used to discover the Unicode number for a character:
repr(u"Text with wrong characters i need to figure out")
This shell command can find wrong characters in your SLS files:
find . -name '*.sls' -exec grep --color='auto' -P -n '[^\x00-\x7F]' \{} \;
If a definition only includes numbers and underscores, it is parsed by YAML as an integer and all underscores are stripped. To ensure the object becomes a string, it should be surrounded by quotes. More information here.
Here's an example:
>>> import yaml
>>> yaml.safe_load('2013_05_10')
20130510
>>> yaml.safe_load('"2013_05_10"')
'2013_05_10'
datetime conversion¶
If there is a value in a YAML file formatted 2014-01-20 14:23:23 or similar, YAML will automatically convert this to a Python datetime object.
These objects are not msgpack serializable, and so may break core salt
functionality. If values such as these are needed in a salt YAML file
(specifically a configuration file), they should be formatted with surrounding
strings to force YAML to serialize them as strings:
>>> import yaml
>>> yaml.safe_load('2014-01-20 14:23:23')
datetime.datetime(2014, 1, 20, 14, 23, 23)
>>> yaml.safe_load('"2014-01-20 14:23:23"')
'2014-01-20 14:23:23'
Additionally, numbers formatted like XXXX-XX-XX
will also be converted (or
YAML will attempt to convert them, and error out if it doesn't think the date
is a real one). Thus, for example, if a minion were to have an ID of
4017-16-20
the minion would not start because YAML would complain that the
date was out of range. The workaround is the same, surround the offending
string with quotes:
>>> import yaml
>>> yaml.safe_load('4017-16-20')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/yaml/__init__.py", line 93, in safe_load
return load(stream, SafeLoader)
File "/usr/local/lib/python2.7/site-packages/yaml/__init__.py", line 71, in load
return loader.get_single_data()
File "/usr/local/lib/python2.7/site-packages/yaml/constructor.py", line 39, in get_single_data
return self.construct_document(node)
File "/usr/local/lib/python2.7/site-packages/yaml/constructor.py", line 43, in construct_document
data = self.construct_object(node)
File "/usr/local/lib/python2.7/site-packages/yaml/constructor.py", line 88, in construct_object
data = constructor(self, node)
File "/usr/local/lib/python2.7/site-packages/yaml/constructor.py", line 312, in construct_yaml_timestamp
return datetime.date(year, month, day)
ValueError: month must be in 1..12
>>> yaml.safe_load('"4017-16-20"')
'4017-16-20'
If the minion or master seems to be unresponsive, a SIGUSR1 can be passed to the processes to display where in the code they are running. If encountering a situation like this, this debug information can be invaluable. First, make sure the master or minion is running in the foreground:
salt-master -l debug
salt-minion -l debug
Then pass the signal to the master or minion when it seems to be unresponsive:
killall -SIGUSR1 salt-master
killall -SIGUSR1 salt-minion
Under BSD and Mac OS X, in addition to the SIGUSR1 signal, the debug subroutine is also set up for SIGINFO, which has the advantage that it can be sent with the Ctrl+T shortcut.
When filing an issue or sending questions to the mailing list for a problem with an unresponsive daemon this information can be invaluable.
As of release 0.17.1 you can no longer run different versions of Salt on your Master and Minion servers. This is due to a protocol change for security purposes. The Salt team will continue to attempt to ensure versions are as backwards compatible as possible.
In its most typical use, Salt is a software application in which clients, called "minions" can be commanded and controlled from a central command server called a "master".
Commands are normally issued to the minions (via the master) by calling a client script simply called, 'salt'.
Salt features a pluggable transport system to issue commands from a master to minions. The default transport is ZeroMQ.
The salt client is run on the same machine as the Salt Master and communicates with the salt-master to issue commands and to receive the results and display them to the user.
The primary abstraction for the salt client is called 'LocalClient'.
When LocalClient wants to publish a command to minions, it connects to the master by issuing a request to the master's ReqServer (TCP: 4506)
The LocalClient system listens to responses for its requests by listening to the master event bus publisher (master_event_pub.ipc).
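A minimal sketch of using LocalClient directly from Python; this must be run on the master as a user with permission to read the master configuration:
import salt.client

# LocalClient reads the master config to find the ReqServer and event bus.
local = salt.client.LocalClient()
# Publish test.ping to all connected minions and gather their returns.
ret = local.cmd('*', 'test.ping')
print(ret)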
The salt-master daemon runs on the designated Salt master and performs functions such as authenticating minions, sending and receiving requests from connected minions, and sending and receiving requests and replies from the 'salt' CLI.
When a Salt master starts up, a number of processes are started, all of which are called 'salt-master' in a process-list but have various role categories.
Among those categories are:
- Publisher
- EventPublisher
- MWorker
The Publisher process is responsible for sending commands over the designated transport to connected minions. The Publisher is bound to the following:
- TCP: port 4505
- IPC: publish_pull.ipc
Each salt minion establishes a connection to the master Publisher.
The EventPublisher publishes events onto the event bus. It is bound to the following:
- IPC: master_event_pull.ipc
- IPC: master_event_pub.ipc
Worker processes manage the back-end operations for the Salt Master.
The number of workers is equivalent to the number of 'worker_threads' specified in the master configuration and is always at least one.
Workers are bound to the following:
- IPC: workers.ipc
The Salt request server takes requests and distributes them to available MWorker processes for processing. It also receives replies back from minions.
Each salt minion establishes a connection to the master ReqServer.
The Salt master works by always publishing commands to all connected minions and the minions decide if the command is meant for them by checking themselves against the command target.
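For example, the same function can be published with different targets; the glob and grain value here are illustrative:
# Glob on minion ID:
salt 'web*' test.ping
# Match on a grain value:
salt -G 'os:Ubuntu' test.ping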
The typical lifecycle of a salt job from the perspective of the master might be as follows:
1) A command is issued on the CLI. For example, 'salt my_minion test.ping'.
2) The 'salt' command uses LocalClient to generate a request to the salt master by connecting to the ReqServer on TCP:4506 and issuing the job.
3) The salt-master ReqServer sees the request and passes it to an available MWorker over workers.ipc.
4) A worker picks up the request and handles it. First, it checks to ensure that the requested user has permissions to issue the command. Then, it sends the publish command to all connected minions. For the curious, this happens in ClearFuncs.publish().
5) The worker announces on the master event bus that it is about to publish a job to connected minions. This happens by placing the event on the master event bus (master_event_pull.ipc) where the EventPublisher picks it up and distributes it to all connected event listeners on master_event_pub.ipc.
6) The message to the minions is encrypted and sent to the Publisher via IPC on publish_pull.ipc.
7) Connected minions have a TCP session established with the Publisher on TCP port 4505 where they await commands. When the Publisher receives the job over publish_pull, it sends the jobs across the wire to the minions for processing.
8) After the minions receive the request, they decrypt it and perform any requested work, if they determine that they are targeted to do so.
9) When the minion is ready to respond, it publishes the result of its job back to the master by sending the encrypted result back to the master on TCP 4506 where it is again picked up by the ReqServer and forwarded to an available MWorker for processing. (Again, this happens by passing this message across workers.ipc to an available worker.)
10) When the MWorker receives the job it decrypts it and fires an event onto the master event bus (master_event_pull.ipc). (Again for the curious, this happens in AESFuncs._return().
11) The EventPublisher sees this event and re-publishes it on the bus to all connected listeners of the master event bus (on master_event_pub.ipc). This is where the LocalClient has been waiting, listening to the event bus for minion replies. It gathers the job and stores the result.
12) When all targeted minions have replied or the timeout has been exceeded, the salt client displays the results of the job to the user on the CLI.
The salt-minion is a single process that sits on machines to be managed by Salt. It can either operate as a stand-alone daemon which accepts commands locally via 'salt-call' or it can connect back to a master and receive commands remotely.
When starting up, salt minions connect _back_ to a master defined in the minion config file. They connect to two ports on the master:
- TCP: 4505
This is the connection to the master Publisher. It is on this port that the minion receives jobs from the master.
- TCP: 4506
This is the connection to the master ReqServer. It is on this port that the minion sends job results back to the master.
Similar to the master, a salt-minion has its own event system that operates over IPC by default. The minion event system operates on a push/pull system with IPC files at minion_event_<unique_id>_pub.ipc and minion_event_<unique_id>_pull.ipc.
The astute reader might ask why have an event bus at all with a single-process daemon. The answer is that the salt-minion may fork other processes as required to do the work without blocking the main salt-minion process and this necessitates a mechanism by which those processes can communicate with each other. Secondarily, this provides a bus by which any user with sufficient permissions can read or write to the bus as a common interface with the salt minion.
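For example, an event can be fired onto the local minion bus with the event execution module; the data and tag here are illustrative:
salt-call event.fire '{"hello": "world"}' 'myco/mytag'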
When a salt minion starts up, it attempts to connect to the Publisher and the ReqServer on the salt master. It then attempts to authenticate and once the minion has successfully authenticated, it simply listens for jobs.
Jobs normally come either from the 'salt-call' script run by a local user on the salt minion, or directly from a master.
1) A master publishes a job that is received by a minion as outlined by the master's job flow above.
2) The minion is polling its receive socket that's connected to the master Publisher (TCP 4505 on master). When it detects an incoming message, it picks it up from the socket and decrypts it.
3) A new minion process or thread is created and provided with the contents of the decrypted message. The _thread_return() method is provided with the contents of the received message.
4) The _thread_return() function starts up in the new thread and actually calls out to the requested function contained in the job.
6) The result of the function that's run is encrypted and returned to the master's ReqServer (TCP 4506 on master). [Still in thread.]
7) Thread exits. Because the main thread was only blocked for the time that it took to initialize the worker thread, many other requests could have been received and processed during this time.
A common source of confusion is determining when messages are passed in the clear and when they are passed using encryption. There are two rules governing this behaviour:
1) ClearFuncs is used for intra-master communication and during the initial authentication handshake between a minion and master during the key exchange.
2) AESFuncs is used everywhere else.
There is a great need for contributions to Salt and patches are welcome! The goal here is to make contributions clear, make sure there is a trail for where the code has come from, and most importantly, to give credit where credit is due!
There are a number of ways to contribute to Salt development.
For details on how to contribute documentation improvements please review Writing Salt Documentation.
Sending pull requests on GitHub is the preferred method for receiving contributions. The workflow advice below mirrors GitHub's own guide and is well worth reading.
Fork saltstack/salt on GitHub.
Make a local clone of your fork.
git clone git@github.com:my-account/salt.git
cd salt
Add saltstack/salt as a git remote.
git remote add upstream https://github.com/saltstack/salt.git
Create a new branch in your clone.
Note
A branch should have one purpose. For example, "Fix bug X," or "Add feature Y". Multiple unrelated fixes and/or features should be isolated into separate branches.
If you're working on a fix, create your branch from the oldest release branch having the bug. See Which Salt Branch?.
git fetch upstream
git checkout -b fix-broken-thing upstream/2015.5
If you're working on a feature, create your branch from the develop branch.
git fetch upstream
git checkout -b add-cool-feature upstream/develop
Edit and commit changes to your branch.
vim path/to/file1 path/to/file2
git diff
git add path/to/file1 path/to/file2
git commit
Write a short, descriptive commit title and a longer commit message if necessary.
Note
If your change fixes a bug or implements a feature already filed in the issue tracker, be sure to reference the issue number in the commit message body.
fix broken things in file1 and file2
Fixes #31337. The issue is now eradicated from file1 and file2.
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch fix-broken-thing
# Changes to be committed:
# modified: path/to/file1
# modified: path/to/file2
If you get stuck, there are many introductory Git resources on http://help.github.com.
Push your locally-committed changes to your GitHub fork:
Note
You may want to rebase before pushing to work out any potential conflicts.
git fetch upstream
git rebase upstream/2015.5 fix-broken-thing
git push --set-upstream origin fix-broken-thing
or,
git fetch upstream
git rebase upstream/develop add-cool-feature
git push --set-upstream origin add-cool-feature
Find the branch on your GitHub salt fork.
https://github.com/my-account/salt/branches/fix-broken-thing
Open a new pull request.
Click on Pull Request
on the right near the top of the page,
https://github.com/my-account/salt/pull/new/fix-broken-thing
If your branch is a fix for a release branch, choose that as the base
branch (e.g. 2015.5
),
https://github.com/my-account/salt/compare/saltstack:2015.5...fix-broken-thing
If your branch is a feature, choose develop
as the base branch,
https://github.com/my-account/salt/compare/saltstack:develop...add-cool-feature
Review that the proposed changes are what you expect.
Write a descriptive comment. Include links to related issues (e.g. 'Fixes #31337.') in the comment field.
Click Create pull request
.
Salt project members will review your pull request and automated tests will run on it.
If you recognize any test failures as being related to your proposed changes or if a reviewer asks for modifications:
Note
Jenkins
Pull request against saltstack/salt are automatically tested on a variety of operating systems and configurations. On average these tests take 30 minutes. Depending on your GitHub notification settings you may also receive an email message about the test results.
Test progress and results can be found at http://jenkins.saltstack.com/.
GitHub will open pull requests against Salt's main branch, develop
, by
default. Ideally features should go into develop
and bug fixes should go
into the oldest supported release branch affected by the bug. See
Sending a GitHub pull request.
If you have a bug fix and have already forked your working branch from
develop
and do not know how to rebase your commits against another branch,
then submit it to develop
anyway and we'll be sure to backport it to the
correct place.
The current release branch is the most recent stable release. Pull requests containing bug fixes should be made against the release branch.
The branch name will be a date-based name such as 2015.5
.
Bug fixes are made on this branch so that minor releases can be cut from this branch without introducing surprises and new features. This approach maximizes stability.
The Salt development team will "merge-forward" any fixes made on the release
branch to the develop
branch once the pull request has been accepted. This
keeps the fix in isolation on the release branch and also keeps the develop
branch up-to-date.
Note
Closing GitHub issues from commits
This "merge-forward" strategy requires that the magic keywords to close a GitHub issue appear in the commit message text directly. Only including the text in a pull request will not close the issue.
GitHub will close the referenced issue once the commit containing the
magic text is merged into the default branch (develop
). Any magic text
input only into the pull request description will not be seen at the
Git-level when those commits are merged-forward. In other words, only the
commits are merged-forward and not the pull request.
develop
branch¶The develop
branch is unstable and bleeding-edge. Pull requests containing
feature additions or non-bug-fix changes should be made against the develop
branch.
The Salt development team will back-port bug fixes made to develop
to the
current release branch if the contributor cannot create the pull request
against that branch.
Salt is advancing quickly. It is therefore critical to pull changes from upstream into your fork on a regular basis. Nothing is worse than putting hard work into a pull request only to see bunches of merge conflicts because it has diverged too far from upstream.
See also
The following assumes origin
is the name of your fork and upstream
is
the name of the main saltstack/salt repository.
View existing remotes.
git remote -v
Add the upstream
remote.
# For ssh github
git remote add upstream git@github.com:saltstack/salt.git
# For https github
git remote add upstream https://github.com/saltstack/salt.git
Pull upstream changes into your clone.
git fetch upstream
Update your copy of the develop
branch.
git checkout develop
git merge --ff-only upstream/develop
If Git complains that a fast-forward merge is not possible, you have local commits.
- Run git pull --rebase origin develop to rebase your changes on top of the upstream changes.
- Or, run git branch <branch-name> to create a new branch with your commits. You will then need to reset your develop branch before updating it with the changes from upstream.
If Git complains that local files will be overwritten, you have changes to files in your working directory. Run git status to see the files in question.
Update your fork.
git push origin develop
Repeat the previous two steps for any other branches you work with, such as the current release branch.
Patches will also be accepted by email. Format patches using git format-patch and send them to the salt-users mailing list. The contributor will then get credit for the patch, and the Salt community will have an archive of the patch and a place for discussion.
If a bug is fixed on develop
and the bug is also present on a
currently-supported release branch it will need to be back-ported to all
applicable branches.
Note
Most Salt contributors can skip these instructions
These instructions do not need to be read in order to contribute to the Salt project! The SaltStack team will back-port fixes on behalf of contributors in order to keep the contribution process easy.
These instructions are intended for frequent Salt contributors, advanced Git users, SaltStack employees, or independent souls who wish to back-port changes themselves.
It is often easiest to fix a bug on the oldest supported release branch and
then merge that branch forward into develop
(as described earlier in this
document). When that is not possible the fix must be back-ported, or copied,
into any other affected branches.
These steps assume a pull request #1234
has been merged into develop
.
And upstream
is the name of the remote pointing to the main Salt repo.
Identify the oldest supported release branch that is affected by the bug.
Create a new branch for the back-port by reusing the same branch from the original pull request.
Name the branch bp-<NNNN>
and use the number of the original pull
request.
git fetch upstream refs/pull/1234/head:bp-1234
git checkout bp-1234
Find the parent commit of the original pull request.
The parent commit of the original pull request must be known in order to rebase onto a release branch. The easiest way to find this is on GitHub.
Open the original pull request on GitHub and find the first commit in the
list of commits. Select and copy the SHA for that commit. The parent of
that commit can be specified by appending ~1
to the end.
Rebase the new branch on top of the release branch.
- <release-branch> is the branch identified in step #1.
- <orig-base> is the SHA identified in step #3 -- don't forget to add ~1 to the end!

git rebase --onto <release-branch> <orig-base> bp-1234
Note, release branches prior to 2015.5
will not be able to make use of
rebase and must use cherry-picking instead.
Push the back-port branch to GitHub and open a new pull request.
Opening a pull request for the back-port allows for the test suite and normal code-review process.
git push -u origin bp-1234
SaltStack uses several labeling schemes to help facilitate code contributions
and bug resolution. See the <labels-and-milestones>
documentation for
more information.
Salt should remain backwards compatible, though sometimes this backwards compatibility needs to be broken because a specific feature and/or solution is no longer necessary or required. At first one might think: let me change this code, it seems that it's not used anywhere else so it should be safe to remove. Then, once there's a new release, users complain about functionality which was removed even though they were using it. This should, at all costs, be avoided, and, in these cases, that specific code should be deprecated.
Depending on the complexity and usage of a specific piece of code, the deprecation time frame should be properly evaluated. As an example, a deprecation warning which is shown for 2 major releases, for example 0.17.0 and 2014.1.0, gives users enough time to stop using the deprecated code and adapt to the new one.
For example, if you're deprecating the usage of a keyword argument to a function, that specific keyword argument should remain in place for the full deprecation time frame and if that keyword argument is used, a deprecation warning should be shown to the user.
To help in this deprecation task, salt provides salt.utils.warn_until
. The idea behind this helper function is to show the
deprecation warning until salt reaches the provided version. Once that provided
version is reached, salt.utils.warn_until will
raise a RuntimeError, making salt stop its execution. This stoppage
making salt stop its execution. This stoppage
is unpleasant and will remind the developer that the deprecation limit has been
reached and that the code can then be safely removed.
Consider the following example:
def some_function(bar=False, foo=None):
    if foo is not None:
        salt.utils.warn_until(
            (0, 18),
            'The \'foo\' argument has been deprecated and its '
            'functionality removed, as such, its usage is no longer '
            'required.'
        )
Consider that the current salt release is 0.16.0
. Whenever foo
is
passed a value different from None
that warning will be shown to the user.
This will happen in versions 0.16.2
to 2014.1.0
, after which a
RuntimeError
will be raised making us aware that the deprecated code
should now be removed.
Salt provides several special "dunder" dictionaries as a convenience for Salt
development. These include __opts__
, __context__
, __salt__
, and
others. This document will describe each dictionary and detail where they exist
and what information and/or functionality they provide.
The __opts__
dictionary contains all of the options passed in the
configuration file for the master or minion.
Note
In many places in salt, instead of pulling raw data from the __opts__ dict, configuration data should be pulled from the salt get functions such as config.get, aka __salt__['config.get']('foo:bar'). The get functions also allow for dict traversal via the : delimiter. Consider using get functions whenever using __opts__ or __pillar__ and __grains__ (when using grains for configuration data).
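For instance, a minimal one-liner; the foo:bar key and the fallback value are illustrative:
# Traverses nested configuration using the : delimiter, with a default:
__salt__['config.get']('foo:bar', 'fallback-value')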
The configuration file data made available in the __opts__
dictionary is the
configuration data relative to the running daemon. If the modules are loaded and
executed by the master, then the master configuration data is available, if the
modules are executed by the minion, then the minion configuration is
available. Any additional information passed into the respective configuration
files is made available as well.
__salt__
contains the execution module functions. This allows for all
functions to be called as they have been set up by the salt loader.
__salt__['cmd.run']('fdisk -l')
__salt__['network.ip_addrs']()
The __grains__
dictionary contains the grains data generated by the minion
that is currently being worked with. In execution modules, state modules and
returners this is the grains of the minion running the calls, when generating
the external pillar the __grains__
is the grains data from the minion that
the pillar is being generated for.
The __pillar__
dictionary contains the pillar for the respective minion.
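For example, inside an execution module the pillar data shown earlier in this document could be read like so; this is a hedged sketch, and the keys assume the mysql pillar from the earlier example:
def db_name():
    '''
    Return the database name from pillar, or None if unset.
    '''
    return __pillar__.get('mysql', {}).get('lookup', {}).get('name')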
__context__
exists in state modules and execution modules.
During a state run the __context__
dictionary persists across all states
that are run and then is destroyed when the state ends.
When running an execution module __context__
persists across all module
executions until the modules are refreshed, such as when saltutil.sync_all
or state.highstate
are executed.
A great place to see how to use __context__
is in the cp.py module in
salt/modules/cp.py. The fileclient authenticates with the master when it is
instantiated and then is used to copy files to the minion. Rather than create a
new fileclient for each file that is to be copied down, one instance of the
fileclient is instantiated in the __context__
dictionary and is reused for
each file. Here is an example from salt/modules/cp.py:
if 'cp.fileclient' not in __context__:
    __context__['cp.fileclient'] = salt.fileclient.get_file_client(__opts__)
Note
Because __context__ may or may not have been destroyed, always be sure to check for the existence of the key in __context__ and generate the key before using it.
Salt provides a mechanism for generating pillar data by calling external pillar interfaces. This document will describe an outline of an ext_pillar module.
Salt expects to find your ext_pillar
module in the same location where it
looks for other python modules. If the extension_modules
option in your
Salt master configuration is set, Salt will look for a pillar
directory
under there and load all the modules it finds. Otherwise, it will look in
your Python site-packages salt/pillar
directory.
The external pillars that are called when a minion refreshes its pillars are
controlled by the ext_pillar option in the Salt master configuration. You
can pass a single argument, a list of arguments or a dictionary of arguments
to your pillar:
ext_pillar:
  - example_a: some argument
  - example_b:
    - argumentA
    - argumentB
  - example_c:
      keyA: valueA
      keyB: valueB
Import modules your external pillar module needs. You should first include generic modules that come with stock Python:
import logging
And then start logging. This is an idiomatic way of setting up logging in Salt:
log = logging.getLogger(__name__)
Finally, load modules that are specific to what you are doing. You should catch
import errors and set a flag that the __virtual__
function can use later.
try:
    import weird_thing
    EXAMPLE_A_LOADED = True
except ImportError:
    EXAMPLE_A_LOADED = False
If you define an __opts__
dictionary, it will be merged into the
__opts__
dictionary handed to the ext_pillar
function later. This is a
good place to put default configuration items. The convention is to name
things modulename.option
.
__opts__ = { 'example_a.someconfig': 137 }
If you define an __init__
function, it will be called with the following
signature:
def __init__( __opts__ ):
    # Do init work here
Note: The __init__ function is run every time a particular minion causes the external pillar to be called, so don't put heavy initialization code here.
The __init__
functionality is a side-effect of the Salt loader, so it may
not be as useful in pillars as it is in other Salt items.
If you define a __virtual__
function, you can control whether or not this
module is visible. If it returns False
then Salt ignores this module. If
it returns a string, then that string will be how Salt identifies this external
pillar in its ext_pillar
configuration. If you're not renaming the module, simply return True in the __virtual__ function; this is the same as if the function did not exist, and the name Salt's ext_pillar will use to identify this module is its conventional name in Python.
This is useful to write modules that can be installed on all Salt masters, but will only be visible if a particular piece of software your module requires is installed.
# This external pillar will be known as `example_a`
def __virtual__():
    if EXAMPLE_A_LOADED:
        return True
    return False
# This external pillar will be known as `something_else`
__virtualname__ = 'something_else'

def __virtual__():
    if EXAMPLE_A_LOADED:
        return __virtualname__
    return False
This is where the real work of an external pillar is done. If this module is
active and has a function called ext_pillar
, whenever a minion updates its
pillar this function is called.
How it is called depends on how it is configured in the Salt master
configuration. The first argument is always the current pillar dictionary, this
contains pillar items that have already been added, starting with the data from
pillar_roots
, and then from any already-ran external pillars.
Using our example above:
ext_pillar( id, pillar, 'some argument' ) # example_a
ext_pillar( id, pillar, 'argumentA', 'argumentB' ) # example_b
ext_pillar( id, pillar, keyA='valueA', keyB='valueB' ) # example_c
In the example_a
case, pillar
will contain the items from the
pillar_roots
, in example_b
pillar
will contain that plus the items
added by example_a
, and in example_c
pillar
will contain that plus
the items added by example_b
. In all three cases, id
will contain the
ID of the minion making the pillar request.
This function should return a dictionary, the contents of which are merged in with all of the other pillars and returned to the minion. Note: this function is called once for each minion that fetches its pillar data.
def ext_pillar( minion_id, pillar, *args, **kwargs ):
    my_pillar = {}
    # Do stuff
    return my_pillar
You shouldn't just add items to pillar
and return that, since that will
cause Salt to merge data that already exists. Rather, just return the items
you are adding or changing. You could, however, use pillar
in your module
to make some decision based on pillar data that already exists.
This function has access to some useful globals:
__opts__: A dictionary of mostly Salt configuration options. If you had an __opts__ dictionary defined in your module, those values will be included.
__salt__: A dictionary of Salt module functions, useful so you don't have to duplicate functions that already exist. E.g. __salt__['cmd.run']('ls -l'). Note: this runs on the master.
__grains__: A dictionary of the grains of the minion making this pillar call.
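Putting the pieces together, a complete (if trivial) external pillar might look like the following sketch; the module name example_a and the returned keys are illustrative:
# example_a.py, placed where Salt looks for pillar modules (see above)
import logging

log = logging.getLogger(__name__)

__virtualname__ = 'example_a'


def __virtual__():
    return __virtualname__


def ext_pillar(minion_id, pillar, *args, **kwargs):
    # Return only the new items; Salt merges them into the existing pillar.
    log.debug('Generating example_a pillar for %s', minion_id)
    return {'example_a': {'minion': minion_id}}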
As an example, if you wanted to add external pillar via the cmd_json
external pillar, add something like this to your master config:
ext_pillar:
- cmd_json: 'echo {\"arg\":\"value\"}'
Just as with traditional pillars, external pillars must be refreshed in order for minions to see any fresh data:
salt '*' saltutil.refresh_pillar
Clone the repository using:
git clone https://github.com/saltstack/salt
Note
tags
Just cloning the repository is enough to work with Salt and make contributions. However, fetching additional tags from git is required to have Salt report the correct version for itself. To do this, first add the git repository as an upstream source:
git remote add upstream https://github.com/saltstack/salt
Fetching tags is done with the git 'fetch' utility:
git fetch --tags upstream
Create a new virtualenv:
virtualenv /path/to/your/virtualenv
Avoid making your virtualenv path too long.
On Arch Linux, where Python 3 is the default installation of Python, use
the virtualenv2
command instead of virtualenv
.
Note
Using system Python modules in the virtualenv
To use already-installed python modules in virtualenv (instead of having pip
download and compile new ones), run virtualenv --system-site-packages.
Using this method eliminates the requirement to install the salt dependencies
again, although it does assume that the listed modules are all installed in the
system PYTHONPATH at the time of virtualenv creation.
Activate the virtualenv:
source /path/to/your/virtualenv/bin/activate
Install Salt (and dependencies) into the virtualenv:
pip install M2Crypto # Don't install on Debian/Ubuntu (see below)
pip install pyzmq PyYAML pycrypto msgpack-python jinja2 psutil
pip install -e ./salt # the path to the salt git clone from above
Note
Installing M2Crypto
swig
and libssl-dev
are required to build M2Crypto. To fix
the error command 'swig' failed with exit status 1
while installing M2Crypto,
try installing it with the following command:
env SWIG_FEATURES="-cpperraswarn -includeall -D__`uname -m`__ -I/usr/include/openssl" pip install M2Crypto
Debian and Ubuntu systems have modified openssl libraries and mandate that a patched version of M2Crypto be installed. This means that M2Crypto needs to be installed via apt:
apt-get install python-m2crypto
This also means that pulling in the M2Crypto installed using apt requires using
--system-site-packages
when creating the virtualenv.
If you're using a platform other than Debian or Ubuntu, and you are
installing M2Crypto via pip instead of a system package, then you will also
need the gcc
compiler.
Note
Installing psutil
Python header files are required to build this module, otherwise the pip
install will fail. If your distribution separates binaries and headers into
separate packages, make sure that you have the headers installed. In most
Linux distributions which split the headers into their own package, this
can be done by installing the python-dev
or python-devel
package.
For other platforms, the package will likely be similarly named.
Note
Installing dependencies on OS X.
You can install needed dependencies on OS X using homebrew or macports. See OS X Installation
Warning
Installing on RedHat-based Distros
If installing from pip (or from source using setup.py install
), be
advised that the yum-utils
package is needed for Salt to manage
packages on RedHat-based systems.
During development it is easiest to be able to run the Salt master and minion that are installed in the virtualenv you created above, and also to have all the configuration, log, and cache files contained in the virtualenv as well.
Copy the master and minion config files into your virtualenv:
mkdir -p /path/to/your/virtualenv/etc/salt
cp ./salt/conf/master ./salt/conf/minion /path/to/your/virtualenv/etc/salt/
Edit the master config file:
- Uncomment and change the user: root value to your own user.
- Uncomment and change the root_dir: / value to point to /path/to/your/virtualenv.
- Uncomment and change the pidfile: /var/run/salt-master.pid value to point to /path/to/your/virtualenv/salt-master.pid.
- If you are also running a non-development version of Salt you will have to change the publish_port and ret_port values as well.
Edit the minion config file:
- Repeat the edits you made in the master config for the user and root_dir values as well as any port changes.
- Uncomment and change the pidfile: /var/run/salt-minion.pid value to point to /path/to/your/virtualenv/salt-minion.pid.
- Uncomment and change the master: salt value to point at localhost.
- Uncomment and change the id: value to something descriptive like "saltdev". This isn't strictly necessary but it will serve as a reminder of which Salt installation you are working with.
- If you changed the ret_port value in the master config because you are also running a non-development version of Salt, then you will have to change the master_port value in the minion config to match.
Note
Using salt-call with a Standalone Minion
If you plan to run salt-call with this self-contained development
environment in a masterless setup, you should invoke salt-call with
-c /path/to/your/virtualenv/etc/salt
so that salt can find the minion
config file. Without the -c
option, Salt finds its config files in
/etc/salt.
Start the master and minion, accept the minion's key, and verify your local Salt installation is working:
cd /path/to/your/virtualenv
salt-master -c ./etc/salt -d
salt-minion -c ./etc/salt -d
salt-key -c ./etc/salt -L
salt-key -c ./etc/salt -A
salt -c ./etc/salt '*' test.ping
Running the master and minion in debug mode can be helpful when developing. To
do this, add -l debug
to the calls to salt-master
and salt-minion
.
If you would like to log to the console instead of to the log file, remove the
-d
.
Note
Too long socket path?
Once the minion starts, you may see an error like the following:
zmq.core.error.ZMQError: ipc path "/path/to/your/virtualenv/
var/run/salt/minion/minion_event_7824dcbcfd7a8f6755939af70b96249f_pub.ipc"
is longer than 107 characters (sizeof(sockaddr_un.sun_path)).
This means that the path to the socket the minion is using is too long. This is a system limitation, so the only workaround is to reduce the length of this path. This can be done in a couple of different ways:
- Create your virtualenv at a shorter path (see the earlier advice to avoid making your virtualenv path too long).
- Edit the sock_dir minion config variable and reduce its length. Remember that this path is relative to the value you set in root_dir.
Note
The socket path is limited to 107 characters on Solaris and Linux, and 103 characters on BSD-based systems.
Note
File descriptor limits
Ensure that the system open file limit is raised to at least 2047:
# check your current limit
ulimit -n
# raise the limit. persists only until reboot
# use 'limit descriptors 2047' for c-shell
ulimit -n 2047
To set file descriptors on OSX, refer to the OS X Installation instructions.
If you are installing using easy_install
, you will need to define a
USE_SETUPTOOLS environment variable, otherwise dependencies will not
be installed:
USE_SETUPTOOLS=1 easy_install salt
You need the sphinx-build
command to build the docs. In Debian/Ubuntu this is
provided in the python-sphinx
package. Sphinx can also be installed
to a virtualenv using pip:
pip install Sphinx==1.3b2
Change to the salt documentation directory, then:
cd doc; make html
- Run make without any arguments to see the available make targets, which include html, man, and text.
- If you make changes and want to see the results, run make again.
- The help information on each module or state is culled from the Python code that runs for that piece; find it in salt/modules/ or salt/states/.
- On Arch Linux, point make at the Python 2 Sphinx binary: make SPHINXBUILD=sphinx-build2 html
- On RHEL/CentOS 6, with the Sphinx 1.0 package from EPEL, use: make SPHINXBUILD=sphinx-1.0-build html
Once you've updated the documentation, you can run the following command to launch a simple Python HTTP server to see your changes:
cd _build/html; python -m SimpleHTTPServer
Run the test suite with following command:
./setup.py test
See here for more information regarding the test suite.
SaltStack uses several labeling schemes to help facilitate code contributions
and bug resolution. See the Labels and Milestones documentation below for more information.
SaltStack uses several labeling schemes, as well as applying milestones, to triage incoming issues and pull requests in the GitHub Issue Tracker. Most of the labels and milestones are used for internal tracking, but the following definitions might prove useful for the community to discover the best issues to help resolve.
Milestones are most often applied to issues, as a milestone is assigned to every issue that has been triaged. However, milestones can also be applied to pull requests. SaltStack uses milestones to track bugs or features that should be included in the next major feature release, or even the next bug-fix release, as well as what issues are ready to be worked on or what might be blocked. All incoming issues must have a milestone associated with them.
Issues targeted at the next feature release or the next bug-fix release will usually have the Blocker label, but not always.
Labels are used to facilitate the resolution of new pull requests and open issues. Most labels are confined to being applied to either issues or pull requests, though some labels may be applied to both.
All incoming issues should be triaged with at least one label and a milestone. When a new issue comes in, it should be determined if the issue is a bug or a feature request, and either of those labels should be applied accordingly. Bugs and Feature Requests have differing labeling schemes, detailed below, where other labels are applied to them to further help contributors find issues to fix or implement.
There are some labels, such as Question or some of the "Status" labels, that may be applied as "stand-alone" labels when more information is needed or a decision must be reached on how to proceed. (See the "Bug Status Labels" section below.)
The Feature
label should be applied when a user is requesting entirely new functionality. This can include new
functions, modules, states, modular systems, flags for existing functions, etc. Features do not receive severity
or priority labels, as those labels are only used for bugs. However, they may receive "Functional Area" labels or "ZD".
Feature request issues will be prioritized on an "as-needed" basis using milestones during SaltStack's feature release and sprint planning processes.
All bugs should have the Bug
label as well as a severity, priority, functional area, and a status, as applicable.
How severe is the bug? SaltStack uses four labels to determine the severity of a bug: Blocker
, Critical
,
High
, and Medium
. This scale is intended to make the bug-triage process as objective as possible.
In addition to using a bug severity to classify issues, a priority is also assigned to each bug to give further granularity in searching for bugs to fix. In this way, a bug's priority is defined as follows:
Note
A bug's priority is relative to its functional area. If a bug report, for example, about gitfs includes details indicating that everyone who uses gitfs will run into this bug, then a P1 label will be applied, even though Salt users who are not using gitfs will never see the bug.
All bugs should receive a "Functional Area" label to indicate what region of Salt the bug is mainly seen in. This will help internal developers as well as community members identify areas of expertise to find issues that can be fixed more easily. Functional Area labels can also be applied to Feature Requests.
Functional Area Labels, in alphabetical order, include:
Status labels are used to define and track the state a bug is in at any given time. Not all bugs will have a status label, but if a SaltStack employee is able to apply a status label, he or she will. Status labels are somewhat unique in that they might be the only label on an issue, such as Pending Discussion, Info Needed, or Expected Behavior, until further action can be taken.
If an issue is given the Upstream Bug label, then a bug report in the upstream project must be filed (or found, if a report already exists) and a link to the report must be provided to the issue in Salt for tracking purposes. (This can be a stand-alone label.)
There are a couple of other labels that are helpful in categorizing bugs that are not included in the categories above.
These labels can either stand on their own such as Question
or can be applied to bugs or feature requests as
applicable.
Issues that cannot move forward are also assigned to the Blocked milestone.
SaltStack also applies various labels to incoming pull requests. These are mainly used to help SaltStack engineers easily identify the nature of the changes presented in a pull request and whether or not that pull request is ready to be reviewed and merged into the Salt codebase.
A "* Change" label is applied to each incoming pull request. The type of change label that is applied to a pull request is based on a scale that encompasses the number of lines affected by the change in conjunction with the area of code the change touches (i.e. core code areas vs. execution or state modules).
The conditions given for these labels are recommendations, as the pull request reviewer will also consult their intuition and experience regarding the magnitude of the impact of the proposed changes in the pull request.
Core code areas include: state compiler, crypto engine, master and minion, transport, pillar rendering, loader, transport layer, event system, salt.utils, client, cli, logging, netapi, runner engine, templating engine, top file compilation, file client, file server, mine, salt-ssh, test runner, etc.
There are two labels that are used to keep track of what pull requests need to be back-ported to an older release branch and which pull requests have already been back-ported.
There are a couple of labels that the QA team uses to indicate the mergability of a pull request. If the pull request is legitimately passing or failing tests, then one or more of these labels may be applied.
Pull requests that change behavior but arrive without tests are given the Needs Testcase label. Once tests have been written for the change, the Needs Testcase label must be removed to indicate that tests no longer need to be written.
When first working with Salt, it is not always clear where all of the modular components are and what they do. Salt comes loaded with more modular systems than many users are aware of, making Salt very easy to extend in many places.
The most commonly used modular systems are execution modules and states. But the modular systems extend well beyond the more easily exposed components and are often added to Salt to make the complete system more flexible.
Execution modules make up the core of the functionality used by Salt to interact with client systems. The execution modules create the core system management library used by all Salt systems, including states, which interact with minion systems.
Execution modules are completely open ended in their execution. They can be used to do anything required on a minion, from installing packages to detecting information about the system. The only restraint in execution modules is that the defined functions always return a JSON serializable object.
For a list of all built-in execution modules, see the execution modules reference.
For information on writing execution modules, see the documentation on writing execution modules.
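As a minimal illustration (the module name, function, and /proc-based approach here are hypothetical examples, not part of Salt), an execution module is just a Python file whose public functions return JSON-serializable data:
# salt/modules/uptime_example.py -- a hypothetical execution module
def seconds():
    '''
    Return the system uptime in seconds (Linux only).
    CLI Example:
    .. code-block:: bash
        salt '*' uptime_example.seconds
    '''
    # /proc/uptime contains "<uptime> <idle>"; a float is returned,
    # which satisfies the rule that execution module functions must
    # return JSON-serializable objects.
    with open('/proc/uptime') as fp_:
        return float(fp_.read().split()[0])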
Sometimes debugging with print()
and extra logs sprinkled everywhere is not
the best strategy.
IPython is a helpful debug tool that has an interactive python environment which can be embedded in python programs.
First the system will require IPython to be installed.
# Debian
apt-get install ipython
# Arch Linux
pacman -Syu ipython2
# RHEL/CentOS (via EPEL)
yum install python-ipython
Now, in the troubling python module, add the following line at a location where the debugger should be started:
test = 'test123'
import IPython; IPython.embed_kernel()
After running a Salt command that hits that line, the following will show up in the log file:
[CRITICAL] To connect another client to this kernel, use:
[IPKernelApp] --existing kernel-31271.json
Now on the system that invoked embed_kernel
, run the following command from
a shell:
# NOTE: use ipython2 instead of ipython for Arch Linux
ipython console --existing
This provides a console that has access to all the vars and functions, and even supports tab-completion.
print(test)
test123
To exit IPython and continue running Salt, press Ctrl-d
to logout.
State modules are used to define the state interfaces used by Salt States. These modules are restrictive in that they must follow a number of rules to function properly.
Note
State modules define the available routines in sls files. If calling an execution module directly is desired, take a look at the module state.
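For illustration, here is a minimal sketch of a (hypothetical) state function, showing the dictionary with name, changes, result, and comment keys that state functions are expected to return:
# salt/states/path_example.py -- a hypothetical state module
import os

def exists(name):
    '''
    Verify that the path ``name`` exists. State functions return a
    dict with the keys: name, changes, result, and comment.
    '''
    ret = {'name': name,
           'changes': {},
           'result': os.path.exists(name),
           'comment': ''}
    ret['comment'] = '{0} {1}'.format(
        name, 'exists' if ret['result'] else 'does not exist')
    return ret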
The auth module system allows for external authentication routines to be easily
added into Salt. The auth function needs to be implemented to satisfy the
requirements of an auth module. Use the pam
module as an example.
The fileserver module system is used to create fileserver backends used by the
Salt Master. These modules need to implement the functions used in the
fileserver subsystem. Use the gitfs
module as an example.
Grain modules define extra routines to populate grains data. All defined public functions will be executed and MUST return a Python dict object. The dict keys will be added to the grains made available to the minion.
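A short hypothetical example (custom grains modules are commonly distributed via the _grains directory in the master's file roots):
# _grains/cpus_example.py -- a hypothetical custom grains module
import multiprocessing

def cpus():
    '''
    All public functions are executed and MUST return a dict; the keys
    are merged into the grains made available to the minion.
    '''
    return {'example_cpu_count': multiprocessing.cpu_count()}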
The output modules supply the outputter system with routines to display data
in the terminal. These modules are very simple and only require the output
function to execute. The default system outputter is the nested
module.
Used to define optional external pillar systems. The pillar generated via the filesystem pillar is passed into external pillars. This is commonly used as a bridge to database data for pillar, but is also the backend to the libvirt state used to generate and sign libvirt certificates on the fly.
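A minimal sketch follows; the ext_pillar entry point and its leading (minion_id, pillar, ...) arguments follow the external pillar interface, while the data returned here is purely illustrative:
# A hypothetical external pillar that tags every minion.
def ext_pillar(minion_id, pillar, *args, **kwargs):
    # The dict returned here is merged into the minion's pillar data.
    return {'example_ext_pillar': {'minion_id': minion_id}}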
Renderers are the system used to render sls files into salt highdata for the
state compiler. They can be as simple as the py
renderer and as complex as
stateconf
and pydsl
.
Returners are used to send data from minions to external sources, commonly
databases. A full returner will implement all routines to be supported as an
external job cache. Use the redis
returner as an example.
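A minimal sketch of the returner entry point; the job dict keys mentioned in the comment are the common ones, and the flat-file destination is just for illustration:
# salt/returners/flatfile_example.py -- a hypothetical returner
import json

def returner(ret):
    '''
    Receive the job return dict from the minion (with keys such as
    'id', 'jid', 'fun', and 'return') and ship it somewhere; here it
    is simply appended to a local file.
    '''
    with open('/tmp/salt_returns.jsonl', 'a') as fp_:
        fp_.write(json.dumps(ret) + '\n')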
Runners are purely master-side execution sequences. These range from simple reporting to orchestration engines like the overstate.
Tops modules are used to convert external data sources into top file data for the state system.
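For illustration, a tops module exposes a top function; the kwargs passed in include the querying minion's opts and grains, and the environment-to-states mapping returned below is hypothetical:
# A hypothetical master tops module.
def top(**kwargs):
    # Assign the 'debian' state to minions whose os grain is Debian.
    grains = kwargs.get('grains', {})
    if grains.get('os') == 'Debian':
        return {'base': ['debian']}
    return {'base': []}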
The wheel system is used to manage master side management routines. These routines are primarily intended for the API to enable master configuration.
This page contains guidelines for writing package providers.
One of the most important features of Salt is package management. There is no
shortage of package managers, so in the interest of providing a consistent
experience in pkg
states, there are certain functions
that should be present in a package provider. Note that these are subject to
change as new features are added or existing features are enhanced.
This function should declare an empty dict, and then add packages to it by
calling pkg_resource.add_pkg
, like
so:
__salt__['pkg_resource.add_pkg'](ret, name, version)
The last thing that should be done before returning is to execute
pkg_resource.sort_pkglist
. This
function does not presently do anything to the return dict, but will be used in
future versions of Salt.
__salt__['pkg_resource.sort_pkglist'](ret)
list_pkgs
returns a dictionary of installed packages, with the keys being
the package names and the values being the version installed. Example return
data:
{'foo': '1.2.3-4',
'bar': '5.6.7-8'}
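Putting the pieces above together, a list_pkgs skeleton for a hypothetical package manager (the hypopkg CLI and its one-"name version"-per-line output are assumptions for this sketch) might look like:
def list_pkgs(**kwargs):
    '''
    List the packages currently installed as a dict:
    {'<package_name>': '<version>'}
    '''
    ret = {}
    # Parse 'name version' lines from the (hypothetical) manager's CLI.
    out = __salt__['cmd.run']('hypopkg list')
    for line in out.splitlines():
        name, version = line.split()
        __salt__['pkg_resource.add_pkg'](ret, name, version)
    __salt__['pkg_resource.sort_pkglist'](ret)
    return ret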
Accepts an arbitrary number of arguments. Each argument is a package name. The return value for a package will be an empty string if the package is not found or if the package is up-to-date. The only case in which a non-empty string is returned is if the package is available for new installation (i.e. not already installed) or if there is an upgrade available.
If only one argument was passed, this function returns a string; otherwise, a dict of name/version pairs is returned.
This function must also accept **kwargs
, in order to receive the
fromrepo
and repo
keyword arguments from pkg states. Where supported,
these arguments should be used to find the install/upgrade candidate in the
specified repository. The fromrepo
kwarg takes precedence over repo
, so
if both of those kwargs are present, the repository specified in fromrepo
should be used. However, if repo
is used instead of fromrepo
, it should
still work, to preserve backwards compatibility with older versions of Salt.
Like latest_version
, accepts an arbitrary number of arguments and
returns a string if a single package name was passed, or a dict of name/value
pairs if more than one was passed. The only difference is that the return
values are the currently-installed versions of whatever packages are passed. If
the package is not installed, an empty string is returned for that package.
Deprecated and destined to be removed. For now, should just do the following:
return __salt__['pkg.latest_version'](name) != ''
The following arguments are required and should default to None:
The first thing that this function should do is call
pkg_resource.parse_targets
(see below). This function will convert the SLS input into a more easily parsed
data structure.
pkg_resource.parse_targets
may
need to be modified to support your new package provider, as it does things
like parsing package metadata which cannot be done for every package management
system.
pkg_params, pkg_type = __salt__['pkg_resource.parse_targets'](name,
pkgs,
sources)
Two values will be returned to the install function. The first of them will be a dictionary. The keys of this dictionary will be package names, though the values will differ depending on what kind of installation is being done:
- If the package was specified via the name parameter, the value will be None. Once the data has been returned, if the version keyword argument was provided, then it should replace the None value in the dictionary.
- If the packages were specified via the pkgs parameter, the value will be None if a version was not specified for the package, and the desired version if specified. See the Multiple Package Installation Options section of the pkg.installed state for more info.
The second return value will be a string with two possible values: repository or file. The install function can use this value (if necessary) to build the proper command to install the targeted package(s).
Both before and after installing the target(s), you should run list_pkgs to obtain a list of the installed packages. You should then return the output of salt.utils.compare_dicts():
return salt.utils.compare_dicts(old, new)
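Assembled into a skeleton for the same hypothetical hypopkg manager used in the list_pkgs sketch above (refresh handling and error checking are omitted; treat this as a sketch, not a complete provider):
import salt.utils

def install(name=None, refresh=False, pkgs=None, sources=None, **kwargs):
    pkg_params, pkg_type = __salt__['pkg_resource.parse_targets'](
        name, pkgs, sources)
    old = list_pkgs()
    for pkg_name, pkg_version in pkg_params.items():
        # Build 'name' or 'name=version' targets for the assumed CLI.
        if pkg_version is None:
            target = pkg_name
        else:
            target = '{0}={1}'.format(pkg_name, pkg_version)
        __salt__['cmd.run_all']('hypopkg install {0}'.format(target))
    new = list_pkgs()
    return salt.utils.compare_dicts(old, new)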
Removes the passed package and returns a list of the packages removed.
There are some functions provided by pkg
which are specific to package
repositories, and not to packages themselves. When writing modules for new
package managers, these functions should be made available as stated below, in
order to provide compatibility with the pkgrepo
state.
All repo functions should accept a basedir option, which defines which directory repository configuration should be found in. The default for this is dictated by the repo manager that is being used, and rarely needs to be changed.
basedir = '/etc/yum.repos.d'
__salt__['pkg.list_repos'](basedir)
Lists the repositories that are currently configured on this system.
__salt__['pkg.list_repos']()
Returns a dictionary, in the following format:
{'reponame': {'config_key_1': 'config value 1',
              'config_key_2': 'config value 2',
              'config_key_3': ['list item 1 (when appropriate)',
                               'list item 2 (when appropriate)']}}
Displays all local configuration for a specific repository.
__salt__['pkg.get_repo'](repo='myrepo')
The information is formatted in much the same way as list_repos, but is specific to only one repo.
{'config_key_1': 'config value 1',
 'config_key_2': 'config value 2',
 'config_key_3': ['list item 1 (when appropriate)',
                  'list item 2 (when appropriate)']}
Removes the local configuration for a specific repository. Requires a repo argument, which must match the locally configured name. This function returns a string, which informs the user as to whether or not the operation was a success.
__salt__['pkg.del_repo'](repo='myrepo')
Modify the local configuration for one or more options for a configured repo. This is also the way to create new repository configuration on the local system; if a repo is specified which does not yet exist, it will be created.
The options specified for this function are specific to the system; please refer to the documentation for your specific repo manager for specifics.
__salt__['pkg.mod_repo'](repo='myrepo', url='http://myurl.com/repo')
In general, the standard package functions as described above will meet your needs. These functions use the system's native repo manager (for instance, yum or the apt tools). In most cases, the repo manager is actually separate from the package manager. For instance, yum is usually a front-end for rpm, and apt is usually a front-end for dpkg. When possible, the package functions that use those package managers directly should do so through the low package functions.
It is normal and sane for pkg to make calls to lowpkg, but lowpkg must never make calls to pkg. This affects functions which are required by both pkg and lowpkg, but where the technique available to pkg is more performant than what is available to lowpkg. When this is the case, the lowpkg function must still use the lowpkg version.
Returns a dict of packages installed, including the package name and version. Can accept a list of packages; if none are specified, then all installed packages will be listed.
installed = __salt__['lowpkg.list_pkgs']('foo', 'bar')
Example output:
{'foo': '1.2.3-4',
'bar': '5.6.7-8'}
Many (but not all) package management systems provide a way to verify that the files installed by the package manager have or have not changed. This function accepts a list of packages; if none are specified, all packages will be included.
installed = __salt__['lowpkg.verify']('httpd')
Example output:
{'/etc/httpd/conf/httpd.conf': {'mismatch': ['size', 'md5sum', 'mtime'],
'type': 'config'}}
Lists all of the files installed by all packages specified. If no packages are specified, then all files for all known packages are returned.
installed = __salt__['lowpkg.file_list']('httpd', 'apache')
This function does not return which files belong to which packages; all files are returned as one giant list (hence the file_list function name). However, this information is still returned inside of a dict, so that it can provide any errors to the user in a sane manner.
{'errors': ['package apache is not installed'],
'files': ['/etc/httpd',
'/etc/httpd/conf',
'/etc/httpd/conf.d',
'...SNIP...']}
Lists all of the files installed by all packages specified. If no packages are specified, then all files for all known packages are returned.
installed = __salt__['lowpkg.file_dict']('httpd', 'apache', 'kernel')
Unlike file_list, this function will break down which files belong to which packages. It will also return errors in the same manner as file_list.
{'errors': ['package apache is not installed'],
'packages': {'httpd': ['/etc/httpd',
'/etc/httpd/conf',
'...SNIP...'],
'kernel': ['/boot/.vmlinuz-2.6.32-279.el6.x86_64.hmac',
'/boot/System.map-2.6.32-279.el6.x86_64',
'...SNIP...']}}
Salt uses GitHub to track open issues and feature requests.
To file a bug, please navigate to the new issue page for the Salt project.
In an issue report, please include the following information:
- The output of salt --versions-report from the relevant machines. This can also be gathered remotely by using salt <my_tgt> test.versions_report.
- A description of the problem including steps taken to cause the issue to occur and the expected behavior.
- Any steps taken to attempt to remediate the problem.
- Any configuration options set in a configuration file that may be relevant.
- A reproducible test case. This may be as simple as an SLS file that illustrates a problem or it may be a link to a repository that contains a number of SLS files that can be used together to reproduce a problem. If the problem is transitory, any information that can be used to try and reproduce the problem is helpful.
- [Optional] The output of each salt component (master/minion/CLI) running with the -l debug flag set.
Note
Please be certain to scrub any logs or SLS files for sensitive data!
Below is a list of repositories that show real world Salt applications that you can use to get started. Please note that these projects do not adhere to any standards and express a wide variety of ideas and opinions on how an action can be completed with Salt.
https://github.com/terminalmage/djangocon2013-sls
https://github.com/jesusaurus/hpcs-salt-state
Salt is based on a powerful, asynchronous, network topology using ZeroMQ. Many ZeroMQ systems are in place to enable communication. The central idea is to have the fastest communication possible.
The Salt Master runs 2 network services. First is the ZeroMQ PUB system. This
service by default runs on port 4505
and can be configured via the
publish_port
option in the master configuration.
Second is the ZeroMQ REP system. This is a separate interface used for all
bi-directional communication with minions. By default this system binds to
port 4506
and can be configured via the ret_port
option in the master.
The commands sent out via the salt client are broadcast out to the minions via ZeroMQ PUB/SUB. This is done by allowing the minions to maintain a connection back to the Salt Master and then all connections are informed to download the command data at once. The command data is kept extremely small (usually less than 1K) so it is not a burden on the network.
The PUB/SUB system is a one way communication, so once a publish is sent out
the PUB interface on the master has no further communication with the minion.
The minion, after running the command, then sends the command's return data
back to the master via the ret_port
.
If you wish to help translate the Salt documentation to your language, please head over to the Transifex website and signup for an account.
Once registered, head over to the Salt Translation Project, and either click on Request Language if you can't find yours, or, select the language for which you wish to contribute and click Join Team.
Transifex provides some useful reading resources on their support domain, namely, some useful articles directed to translators.
While you're working on your translation on Transifex, you might want to have a look at how it's rendering.
To interact with the Transifex web service you will need to install the transifex-client:
pip install transifex-client
Once installed, you will need to set it up on your computer. We created a script to help you with that:
.scripts/setup-transifex-config
There's a little script which simplifies the download process of the translations (which isn't that complicated in the first place).
So, let's assume you're translating pt_PT, Portuguese (Portugal). To
download the translations, execute from the doc/
directory of your Salt
checkout:
make download-translations SPHINXLANG=pt_PT
To download pt_PT, Portuguese (Portugal), and nl, Dutch, you can use the
helper script directly:
.scripts/download-translation-catalog pt_PT nl
After the download process finishes, which might take a while, the next step is
to build a localized version of the documentation.
Following the pt_PT
example above:
make html SPHINXLANG=pt_PT
Open your browser, point it to the local documentation path, and check the localized output you've just built.
There are requirements, in addition to Salt's requirements, which need to be installed in order to run the test suite. Install one of the lines below, depending on the relevant Python version:
pip install -r requirements/dev_python26.txt
pip install -r requirements/dev_python27.txt
Note
In Salt 0.17, testing libraries were migrated into their own repo. To install them:
pip install git+https://github.com/saltstack/salt-testing.git#egg=SaltTesting
Failure to install SaltTesting will result in import errors similar to the following:
ImportError: No module named salttesting
Once all the required dependencies are installed, use tests/runtests.py to
run all of the tests included in Salt's test suite. For more information,
see --help
.
An alternative way of invoking the test suite is available in setup.py
:
./setup.py test
Instead of running the entire test suite, there are several ways to run only specific groups of tests or individual tests:
- Run unit tests only: ./tests/runtests.py --unit-tests
- Run state tests: ./tests/runtests.py --state
- Run an entire integration test module: ./tests/runtests.py -n integration.modules.virt
- Run an entire unit test module: ./tests/runtests.py -n unit.modules.virt_test
- Run a single test (e.g. the test_default_kvm_profile test in integration.module.virt): ./tests/runtests.py -n integration.module.virt.VirtTest.test_default_kvm_profile
Since the unit tests do not require a master or minion to execute, it is often useful to be able to
run unit tests individually, or as a whole group, without having to start up the integration testing
daemons. Starting up the master, minion, and syndic daemons takes a lot of time before the tests can
even start running and is unnecessary to run unit tests. To run unit tests without invoking the
integration test daemons, simply remove the tests/ portion of the runtests.py command:
./runtests.py --unit
All of the other options to run individual tests, entire classes of tests, or entire test modules still apply.
Salt is used to change the settings and behavior of systems. In order to effectively test Salt's functionality, some integration tests are written to make actual changes to the underlying system. These tests are referred to as "destructive tests". Some examples of destructive tests are testing the addition of a user or installing packages. By default, destructive tests are disabled and will be skipped.
Generally, destructive tests should clean up after themselves by attempting to restore the system to its original state. For instance, if a new user is created during a test, the user should be deleted after the related test(s) have completed. However, no guarantees are made that test clean-up will complete successfully. Therefore, running destructive tests should be done with caution.
Note
Running destructive tests will change the underlying system. Use caution when running destructive tests.
To run tests marked as destructive, set the --run-destructive
flag:
./tests/runtests.py --run-destructive
Salt's testing suite also includes integration tests to assess the successful creation and deletion of cloud instances using Salt-Cloud for providers supported by Salt-Cloud.
The cloud provider tests are off by default and run on sample configuration files
provided in tests/integration/files/conf/cloud.providers.d/
. In order to run
the cloud provider tests, valid credentials, which differ per provider, must be
supplied. Each credential item that must be supplied is indicated by an empty
string value and should be edited by the user before running the tests. For
example, DigitalOcean requires a client key and an api key to operate. Therefore,
the default cloud provider configuration file for DigitalOcean looks like this:
digitalocean-config:
provider: digital_ocean
client_key: ''
api_key: ''
location: New York 1
As indicated by the empty string values, the client_key
and the api_key
must be provided:
digitalocean-config:
provider: digital_ocean
client_key: wFGEwgregeqw3435gDger
api_key: GDE43t43REGTrkilg43934t34qT43t4dgegerGEgg
location: New York 1
Note
When providing credential information in cloud provider configuration files, do not include the single quotes.
Once all of the valid credentials for the cloud provider have been supplied, the
cloud provider tests can be run by setting the --cloud-provider-tests
flag:
./tests/runtests.py --cloud-provider-tests
The test suite can be executed under a docker container using the
--docked
option flag. The docker container must be properly configured
on the system invoking the tests and the container must have access to the
internet.
Here's a simple usage example:
tests/runtests.py --docked=ubuntu-12.04 -v
The full docker container repository can also be provided:
tests/runtests.py --docked=salttest/ubuntu-12.04 -v
The SaltStack team is creating some containers which will have the necessary dependencies pre-installed. Running the test suite on a container allows destructive tests to run without making changes to the main system. It also enables the test suite to run under a different distribution than the one the main system is currently using.
The current list of test suite images is on Salt's docker repository.
Custom docker containers can be provided by submitting a pull request against Salt's docker Salt test containers repository.
SaltStack maintains a Jenkins server to allow for the execution of tests across supported platforms. The tests executed from Salt's Jenkins server create fresh virtual machines for each test run, then execute destructive tests on the new, clean virtual machine.
When a pull request is submitted to Salt's repository on GitHub, Jenkins runs Salt's test suite on a couple of virtual machines to gauge the pull request's viability to merge into Salt's develop branch. If these initial tests pass, the pull request can then be merged into Salt's develop branch by one of Salt's core developers, pending their discretion. If the initial tests fail, core developers may request changes to the pull request. If the failure is unrelated to the changes in question, core developers may merge the pull request despite the initial failure.
Once the pull request is merged into Salt's develop branch, a new set of Jenkins virtual machines will begin executing the test suite. The develop branch tests have many more virtual machines to provide more comprehensive results.
There are a few other groups of virtual machines that Jenkins tests against, including past and current release branches. For a full list of currently running test environments, go to http://jenkins.saltstack.com.
For testing Salt on Jenkins, SaltStack uses Salt-Cloud to spin up virtual machines. The script using Salt-Cloud to accomplish this is open source and can be found here: https://github.com/saltstack/salt/blob/develop/tests/jenkins.py
The salt testing infrastructure is divided into two classes of tests, integration tests and unit tests. These terms may be defined differently in other contexts, but for salt they are defined this way:
- Unit tests validate isolated blocks of code and do not require a running Salt environment.
- Integration tests validate against a running environment, driving salt-call or any of the salt daemons.
Salt testing uses unittest2 from the python standard library and MagicMock.
Any function in either integration test files or unit test files that is doing
the actual testing, such as functions containing assertions, must start with
test_
:
def test_user_present(self):
When functions in test files are not prepended with test_
, the function
acts as a normal, helper function and is not run as a test by the test suite.
The integration tests start up a number of salt daemons to test functionality in a live environment. These daemons include 2 salt masters, 1 syndic, and 2 minions. This allows the syndic interface to be tested and master/minion communication to be verified. All of the integration tests are executed as live salt commands sent through the started daemons.
Integration tests are particularly good at testing modules, states, and shell commands.
Unit tests are good for ensuring consistent results for functions that do not require more than a few mocks.
Mocking all external dependencies for unit tests is encouraged but not required as sometimes the isolation provided by completely mocking the external dependencies is not worth the effort of mocking those dependencies.
Overly detailed mocking can also result in decreased test readability and brittleness as the tests are more likely to fail when the code or its dependencies legitimately change. In these cases, it is better to add dependencies to the test runner dependency state, https://github.com/saltstack/salt-jenkins/blob/master/git/salt.sls.
The Salt integration tests come with a number of classes and methods which allow for components to be easily tested. These classes are generally inherited from and provide specific methods for hooking into the running integration test environment created by the integration tests.
It is noteworthy that, since integration tests validate against a running environment, they are generally the preferred means to write tests.
The integration system is all located under tests/integration
in the Salt
source tree. Each directory within tests/integration
corresponds to a
directory in Salt's tree structure. For example, the integration tests for the
test.py
Salt module that is located in salt/modules
should also be
named test.py
and reside in tests/integration/modules
.
If the corresponding Salt directory does not exist within
tests/integration
, the new directory must be created along with the
appropriate test file to maintain Salt's testing directory structure.
In order for Salt's test suite to recognize tests within the newly
created directory, options to run the new integration tests must be added to
tests/runtests.py
. Examples of the necessary options that must be added
can be found here: https://github.com/saltstack/salt/blob/develop/tests/runtests.py. The functions that need to be
edited are setup_additional_options
, validate_options
, and
run_integration_tests
.
The integration classes are located in tests/integration/__init__.py
and
can be extended therein. There are three classes available to extend:
- ModuleCase: used to define executions run via the master to minions and to call single modules and states.
- SyndicCase: used to execute remote commands via a syndic; only used to verify the capabilities of the Syndic.
- ShellCase: used to shell out to the scripts which ship with Salt.
Import the integration module; it is already added to the python path by the test execution. Inherit from the integration.ModuleCase class.
Now the workhorse method run_function
can be used to test a module:
import os
import integration
class TestModuleTest(integration.ModuleCase):
'''
Validate the test module
'''
def test_ping(self):
'''
test.ping
'''
self.assertTrue(self.run_function('test.ping'))
def test_echo(self):
'''
test.echo
'''
self.assertEqual(self.run_function('test.echo', ['text']), 'text')
Validating the shell commands can be done via shell tests:
import sys
import shutil
import tempfile
import integration
class KeyTest(integration.ShellCase):
'''
Test salt-key script
'''
_call_binary_ = 'salt-key'
def test_list(self):
'''
test salt-key -L
'''
data = self.run_key('-L')
expect = [
'Unaccepted Keys:',
'Accepted Keys:',
'minion',
'sub_minion',
'Rejected:', '']
self.assertEqual(data, expect)
This example verifies that the salt-key
command executes and returns as
expected by making use of the run_key
method.
Since using Salt largely involves configuring states, editing files, and changing
system data, the integration test suite contains a directory named files
to
aid in testing functions that require files. Various Salt integration tests use
these example files to test against instead of altering system files and data.
Each directory within tests/integration/files contains files that accomplish
different tasks, based on the needs of the integration tests using those files.
For example, tests/integration/files/ssh
is used to bootstrap the test runner
for salt-ssh testing, while tests/integration/files/pillar
contains files
storing data needed to test various pillar functions.
The tests/integration/files
directory also includes an integration state tree.
The integration state tree can be found at tests/integration/files/file/base
.
The following example demonstrates how integration files can be used with ModuleCase to test states:
import os
import shutil
import integration
HFILE = os.path.join(integration.TMP, 'hosts')
class HostTest(integration.ModuleCase):
'''
Validate the host state
'''
def setUp(self):
shutil.copyfile(os.path.join(integration.FILES, 'hosts'), HFILE)
super(HostTest, self).setUp()
def tearDown(self):
if os.path.exists(HFILE):
os.remove(HFILE)
super(HostTest, self).tearDown()
def test_present(self):
'''
host.present
'''
name = 'spam.bacon'
ip = '10.10.10.10'
ret = self.run_state('host.present', name=name, ip=ip)
result = self.state_result(ret)
self.assertTrue(result)
with open(HFILE) as fp_:
output = fp_.read()
self.assertIn('{0}\t\t{1}'.format(ip, name), output)
To access the integration files, a variable named integration.FILES
points to the tests/integration/files
directory. This is where the hosts file referenced in the example above resides.
In addition to the static files in the integration state tree, the location
integration.TMP
can also be used to store temporary files that the test system
will clean up when the execution finishes.
Since Salt is used to change the settings and behavior of systems, one testing approach is to run tests that make actual changes to the underlying system. This is where the concept of destructive integration tests comes into play. Tests can be written to alter the system they are running on. This capability is what fills in the gap needed to properly test aspects of system management like package installation.
Any test that changes the underlying system in any way, such as creating or
deleting users, installing packages, or changing permissions should include the
@destructive
decorator to signal system changes and should be written with
care. System changes executed within a destructive test should also be restored
once the related tests have completed. For example, if a new user is created to
test a module, the same user should be removed after the test is completed to
maintain system integrity.
To write a destructive test, import and use the destructiveTest decorator for the test method:
import os
import integration
from salttesting import skipIf
from salttesting.helpers import destructiveTest
class DestructiveExampleModuleTest(integration.ModuleCase):
'''
Demonstrate a destructive test
'''
@destructiveTest
@skipIf(os.geteuid() != 0, 'you must be root to run this test')
def test_user_not_present(self):
'''
This is a DESTRUCTIVE TEST it creates a new user on the minion.
And then destroys that user.
'''
ret = self.run_state('user.present', name='salt_test')
self.assertSaltTrueReturn(ret)
ret = self.run_state('user.absent', name='salt_test')
self.assertSaltTrueReturn(ret)
Cloud provider integration tests are used to assess Salt-Cloud's ability to create and destroy cloud instances for various supported cloud providers. Cloud provider tests inherit from the ShellCase Integration Class.
Any new cloud provider test files should be added to the tests/integration/cloud/providers/
directory. Each cloud provider test file also requires a sample cloud profile and cloud
provider configuration file in the integration test file directory located at
tests/integration/files/conf/cloud.*.d/
.
The following is an example of the default profile configuration file for Digital
Ocean, located at: tests/integration/files/conf/cloud.profiles.d/digital_ocean.conf
:
digitalocean-test:
provider: digitalocean-config
image: Ubuntu 14.04 x64
size: 512MB
Each cloud provider requires different configuration credentials. Therefore, sensitive information such as API keys or passwords should be omitted from the cloud provider configuration file and replaced with an empty string. The necessary credentials can be provided by the user by editing the provider configuration file before running the tests.
The following is an example of the default provider configuration file for Digital
Ocean, located at: tests/integration/files/conf/cloud.providers.d/digital_ocean.conf
:
digitalocean-config:
provider: digital_ocean
client_key: ''
api_key: ''
location: New York 1
In addition to providing the necessary cloud profile and provider files in the integration
test suite file structure, appropriate checks for whether the configuration files exist and
contain valid information are also required in the test class's setUp
function:
class LinodeTest(integration.ShellCase):
'''
Integration tests for the Linode cloud provider in Salt-Cloud
'''
def setUp(self):
'''
Sets up the test requirements
'''
super(LinodeTest, self).setUp()
# check if appropriate cloud provider and profile files are present
profile_str = 'linode-config:'
provider = 'linode'
providers = self.run_cloud('--list-providers')
if profile_str not in providers:
self.skipTest(
'Configuration file for {0} was not found. Check {0}.conf files '
'in tests/integration/files/conf/cloud.*.d/ to run these tests.'
.format(provider)
)
# check if apikey and password are present
path = os.path.join(integration.FILES,
'conf',
'cloud.providers.d',
provider + '.conf')
config = cloud_providers_config(path)
api = config['linode-config']['linode']['apikey']
password = config['linode-config']['linode']['password']
if api == '' or password == '':
self.skipTest(
'An api key and password must be provided to run these tests. Check '
'tests/integration/files/conf/cloud.providers.d/{0}.conf'.format(
provider
)
)
Repeatedly creating and destroying instances on cloud providers can be costly.
Therefore, cloud provider tests are off by default and do not run automatically. To
run the cloud provider tests, the --cloud-provider-tests
flag must be provided:
./tests/runtests.py --cloud-provider-tests
Since cloud provider tests do not run automatically, all provider tests must be
preceded with the @expensiveTest
decorator. The expensive test decorator is
necessary because it signals to the test suite that the
--cloud-provider-tests
flag is required to run the cloud provider tests.
To write a cloud provider test, import and use the expensiveTest decorator for the test function:
from salttesting.helpers import expensiveTest
@expensiveTest
def test_instance(self):
'''
Test creating an instance on Linode
'''
name = 'linode-testing'
# create the instance
instance = self.run_cloud('-p linode-test {0}'.format(name))
ret_str = ' {0}'.format(name)
# check if instance with salt installed returned as expected
try:
self.assertIn(ret_str, instance)
except AssertionError:
self.run_cloud('-d {0} --assume-yes'.format(name))
raise
# delete the instance
delete = self.run_cloud('-d {0} --assume-yes'.format(name))
ret_str = ' True'
try:
self.assertIn(ret_str, delete)
except AssertionError:
raise
Like many software projects, Salt has two broad-based testing approaches -- integration testing and unit testing. While integration testing focuses on the interaction between components in a sandboxed environment, unit testing focuses on the singular implementation of individual functions.
This guide assumes you've followed the directions for setting up salt testing.
Unit tests should be written to the following specifications:
- Each raise and return statement needs to be independently tested.
- Unit tests for salt/.../<module>.py are contained in a file called tests/unit/.../<module>_test.py, e.g. the tests for salt/modules/fib.py are in tests/unit/modules/fib_test.py.
- Test functions are named test_<fcn>_<test-name>, where <fcn> is the function being tested and <test-name> describes the raise or return being tested.
Most commonly, the following imports are necessary to create a unit test:
# Import Salt Testing libs
from salttesting import skipIf, TestCase
from salttesting.helpers import ensure_in_syspath
If you need mock support in your tests, please also import:
from salttesting.mock import NO_MOCK, NO_MOCK_REASON, MagicMock, patch, call
Let's assume that we're testing a very basic function in an imaginary Salt execution module: a module called fib.py with a function called calculate(num_of_results), which produces a list of sequential Fibonacci numbers of the given length.
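For reference, a plausible implementation of this imaginary module (written here only so the test below has something concrete to check against; it is not part of Salt) could be:
# salt/modules/fib.py -- the imaginary module under test
def calculate(num_of_results):
    '''
    Return a list of sequential Fibonacci numbers, num_of_results long.
    '''
    results = [0, 1]
    while len(results) < num_of_results:
        results.append(results[-1] + results[-2])
    return results[:num_of_results]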
A unit test to test this function might be commonly placed in a file called
tests/unit/modules/fib_test.py
. The convention is to place unit tests for
Salt execution modules in tests/unit/modules/
and to name the tests module
suffixed with _test.py
.
Tests are grouped around test cases, which are logically grouped sets of tests
against a piece of functionality in the tested software. Test cases are created
as Python classes in the unit test module. To return to our example, here's how
we might write the skeleton for testing fib.py
:
# Import Salt Testing libs
from salttesting import TestCase
# Import Salt execution module to test
from salt.modules import fib
# Create test case class and inherit from Salt's customized TestCase
class FibTestCase(TestCase):
'''
This class contains a set of functions that test salt.modules.fib.
'''
def test_fib(self):
'''
To create a unit test, we should prefix the name with `test_' so
that it's recognized by the test runner.
'''
fib_five = [0, 1, 1, 2, 3]
self.assertEqual(fib.calculate(5), fib_five)
At this point, the test can now be run, either individually or as a part of a full run of the test runner. To ease development, a single test can be executed:
tests/runtests.py -v -n unit.modules.fib_test
This will report the status of the test: success, failure, or error. The
-v
flag increases output verbosity.
To review the results of a particular run, take a note of the log location given in the output for each test:
Logging tests on /var/folders/nl/d809xbq577l3qrbj3ymtpbq80000gn/T/salt-runtests.log
A longer discussion on the types of assertions one can make can be found by reading Python's documentation on unit testing.
In many cases, the purpose of a Salt module is to interact with some external system, whether it be to control a database, manipulate files on a filesystem or something else. In these varied cases, it's necessary to design a unit test which can test the function whilst replacing functions which might actually call out to external systems. One might think of this as "blocking the exits" for code under tests and redirecting the calls to external systems with our own code which produces known results during the duration of the test.
To achieve this behavior, Salt makes heavy use of the MagicMock package.
To understand how one might integrate Mock into writing a unit test for Salt, let's imagine a scenario in which we're testing an execution module that's designed to operate on a database. Furthermore, let's imagine two separate methods, here presented in pseudo-code in an imaginary execution module called db.py.
def create_user(username):
qry = 'CREATE USER {0}'.format(username)
execute_query(qry)
def execute_query(qry):
    # Connect to a database and actually do the query...
    pass
Here, let's imagine that we want to create a unit test for the create_user function. In doing so, we want to avoid any calls out to an external system and so while we are running our unit tests, we want to replace the actual interaction with a database with a function that can capture the parameters sent to it and return pre-defined values. Therefore, our task is clear -- to write a unit test which tests the functionality of create_user while also replacing 'execute_query' with a mocked function.
To begin, we set up the skeleton of our class much like we did before, but with additional imports for MagicMock:
# Import Salt Testing libs
from salttesting import skipIf, TestCase
# Import Salt execution module to test
from salt.modules import db
# Import Mock libraries
from salttesting.mock import NO_MOCK, NO_MOCK_REASON, MagicMock, patch, call
# Create test case class and inherit from Salt's customized TestCase
# Skip this test case if we don't have access to mock!
@skipIf(NO_MOCK, NO_MOCK_REASON)
class DbTestCase(TestCase):
def test_create_user(self):
# First, we replace 'execute_query' with our own mock function
db.execute_query = MagicMock()
# Now that the exits are blocked, we can run the function under test.
db.create_user('testuser')
# We could now query our mock object to see which calls were made
# to it.
## print db.execute_query.mock_calls
# Construct a call object that simulates the way we expected
# execute_query to have been called.
expected_call = call('CREATE USER testuser')
# Compare the expected call with the list of actual calls. The
# test will succeed or fail depending on the output of this
# assertion.
db.execute_query.assert_has_calls(expected_call)
Modifying __salt__ In Place¶
At times, it becomes necessary to make modifications to a module's view of functions in its own __salt__ dictionary. Luckily, this process is quite easy.
Below is an example that uses MagicMock's patch
functionality to insert a
function into __salt__
that's actually a MagicMock instance.
def show_patch(self):
    with patch.dict(my_module.__salt__,
                    {'function.to_replace': MagicMock()}):
        # From this scope, carry on with testing, with a modified __salt__!
        pass
Consider the following function from salt/modules/linux_sysctl.py.
def get(name):
'''
Return a single sysctl parameter for this minion
CLI Example:
.. code-block:: bash
salt '*' sysctl.get net.ipv4.ip_forward
'''
cmd = 'sysctl -n {0}'.format(name)
out = __salt__['cmd.run'](cmd)
return out
This function is very simple, comprising only four source lines of code and
having only one return statement, so we know only one test is needed. There
are also two inputs to the function, the name
function argument and the call
to __salt__['cmd.run']()
, both of which need to be appropriately mocked.
Mocking a function parameter is straightforward, whereas mocking a function
call will require, in this case, the use of MagicMock. For added isolation, we
will also redefine the __salt__
dictionary such that it only contains
'cmd.run'
.
# Import Salt Libs
from salt.modules import linux_sysctl
# Import Salt Testing Libs
from salttesting import skipIf, TestCase
from salttesting.helpers import ensure_in_syspath
from salttesting.mock import (
MagicMock,
patch,
NO_MOCK,
NO_MOCK_REASON
)
ensure_in_syspath('../../')
# Globals
linux_sysctl.__salt__ = {}
@skipIf(NO_MOCK, NO_MOCK_REASON)
class LinuxSysctlTestCase(TestCase):
'''
TestCase for salt.modules.linux_sysctl module
'''
def test_get(self):
'''
Tests the return of get function
'''
mock_cmd = MagicMock(return_value=1)
with patch.dict(linux_sysctl.__salt__, {'cmd.run': mock_cmd}):
self.assertEqual(linux_sysctl.get('net.ipv4.ip_forward'), 1)
if __name__ == '__main__':
from integration import run_tests
run_tests(LinuxSysctlTestCase, needs_daemon=False)
Since get()
has only one raise or return statement and that statement is a
success condition, the test function is simply named test_get()
. As
described, the single function call parameter, name
is mocked with
net.ipv4.ip_forward
and __salt__['cmd.run']
is replaced by a MagicMock
function object. We are only interested in the return value of
__salt__['cmd.run']
, which MagicMock allows to be specified via
return_value=1
. Finally, the test itself tests for equality between the
return value of get()
and the expected return of 1
. This assertion is
expected to succeed because get()
will determine its return value from
__salt__['cmd.run']
, which we have mocked to return 1
.
Now consider the assign()
function from the same
salt/modules/linux_sysctl.py source file.
def assign(name, value):
'''
Assign a single sysctl parameter for this minion
CLI Example:
.. code-block:: bash
salt '*' sysctl.assign net.ipv4.ip_forward 1
'''
value = str(value)
sysctl_file = '/proc/sys/{0}'.format(name.replace('.', '/'))
if not os.path.exists(sysctl_file):
raise CommandExecutionError('sysctl {0} does not exist'.format(name))
ret = {}
cmd = 'sysctl -w {0}="{1}"'.format(name, value)
data = __salt__['cmd.run_all'](cmd)
out = data['stdout']
err = data['stderr']
# Example:
# # sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
# net.ipv4.tcp_rmem = 4096 87380 16777216
regex = re.compile(r'^{0}\s+=\s+{1}$'.format(re.escape(name),
re.escape(value)))
if not regex.match(out) or 'Invalid argument' in str(err):
if data['retcode'] != 0 and err:
error = err
else:
error = out
raise CommandExecutionError('sysctl -w failed: {0}'.format(error))
new_name, new_value = out.split(' = ', 1)
ret[new_name] = new_value
return ret
This function contains two raise statements and one return statement, so we
know that we will need (at least) three tests. It has two function arguments
and many references to non-builtin functions. In the tests below you will see
that MagicMock's patch()
method may be used as a context manager or as a
decorator.
There are three test functions, one for each raise and return statement in the source function. Each function is self-contained and contains all and only the mocks and data needed to test the raise or return statement it is concerned with.
# Import Salt Libs
from salt.modules import linux_sysctl
from salt.exceptions import CommandExecutionError

# Import Salt Testing Libs
from salttesting import skipIf, TestCase
from salttesting.helpers import ensure_in_syspath
from salttesting.mock import (
    MagicMock,
    patch,
    NO_MOCK,
    NO_MOCK_REASON
)

ensure_in_syspath('../../')

# Globals
linux_sysctl.__salt__ = {}


@skipIf(NO_MOCK, NO_MOCK_REASON)
class LinuxSysctlTestCase(TestCase):
    '''
    TestCase for salt.modules.linux_sysctl module
    '''

    @patch('os.path.exists', MagicMock(return_value=False))
    def test_assign_proc_sys_failed(self):
        '''
        Tests if /proc/sys/<kernel-subsystem> exists or not
        '''
        cmd = {'pid': 1337, 'retcode': 0, 'stderr': '',
               'stdout': 'net.ipv4.ip_forward = 1'}
        mock_cmd = MagicMock(return_value=cmd)
        with patch.dict(linux_sysctl.__salt__, {'cmd.run_all': mock_cmd}):
            self.assertRaises(CommandExecutionError,
                              linux_sysctl.assign,
                              'net.ipv4.ip_forward', 1)

    @patch('os.path.exists', MagicMock(return_value=True))
    def test_assign_cmd_failed(self):
        '''
        Tests if the assignment was successful or not
        '''
        cmd = {'pid': 1337, 'retcode': 0, 'stderr':
               'sysctl: setting key "net.ipv4.ip_forward": Invalid argument',
               'stdout': 'net.ipv4.ip_forward = backward'}
        mock_cmd = MagicMock(return_value=cmd)
        with patch.dict(linux_sysctl.__salt__, {'cmd.run_all': mock_cmd}):
            self.assertRaises(CommandExecutionError,
                              linux_sysctl.assign,
                              'net.ipv4.ip_forward', 'backward')

    @patch('os.path.exists', MagicMock(return_value=True))
    def test_assign_success(self):
        '''
        Tests the return of successful assign function
        '''
        cmd = {'pid': 1337, 'retcode': 0, 'stderr': '',
               'stdout': 'net.ipv4.ip_forward = 1'}
        ret = {'net.ipv4.ip_forward': '1'}
        mock_cmd = MagicMock(return_value=cmd)
        with patch.dict(linux_sysctl.__salt__, {'cmd.run_all': mock_cmd}):
            self.assertEqual(linux_sysctl.assign(
                'net.ipv4.ip_forward', 1), ret)


if __name__ == '__main__':
    from integration import run_tests
    run_tests(LinuxSysctlTestCase, needs_daemon=False)
RAET: Reliable Asynchronous Event Transport Protocol¶
See also
Layering:
OSI Layers
7: Application: Format: Data (Stack to Application interface, buffering, etc.)
6: Presentation: Format: Data (Encrypt-decrypt, convert to machine-independent format)
5: Session: Format: Data (Interhost communications, authentication, groups)
4: Transport: Format: Segments (Reliable delivery of messages, transactions, segmentation, error checking)
3: Network: Format: Packets/Datagrams (Addressing, routing)
2: Link: Format: Frames (Reliable per-frame communications connection, media access controller)
1: Physical: Bits (Transceiver communication connection, not reliable)
Link: hidden from RAET
Network: IP host address and UDP port
Transport: RAET transactions, service kind, tail error checking; could include header signing as part of transport reliable delivery, serialization of header
Session: session id, key exchange for signing; grouping is Road (like an 852 channel)
Presentation: encrypt/decrypt body, serialize/deserialize body
Application: body data dictionary
Header signing spans both the Transport and Session layers.
JSON Header (Tradeoff some processing speed for extensibility, ease of use, readability)
Body initially JSON but support for "packed" binary body
Header termination: the ASCII-safe JSON header is terminated by an empty line, i.e. a double carriage return linefeed pair \r\n\r\n (bytes 13 10 13 10, hex 0D 0A 0D 0A).
In JSON, carriage return and newline characters cannot appear in an encoded string unless they are escaped with a backslash, so this four-byte combination is illegal in valid JSON that does not contain multi-byte Unicode characters.
This means the header must be ASCII-safe: no multi-byte UTF-8 strings are allowed in the header.
Following the header terminator is a variable-length signature block. This block is binary and its length is provided in the header.
Following the signature block is the packet body or data. This may be either JSON or packed binary; the format is given in the JSON header.
Finally there is an optional tail block for error checking or encryption details.
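As a rough sketch of that packet layout (illustrative only, not RAET's actual wire format; it borrows the nl and bl length fields from the header field list below):

import json

def pack_packet(header, signature, body):
    # Record the binary block lengths in the header (nl and bl fields).
    header = dict(header, nl=len(signature), bl=len(body))
    head = json.dumps(header).encode('ascii')  # header must be ASCII-safe
    return head + b'\r\n\r\n' + signature + body

def unpack_packet(raw):
    # The terminator cannot occur inside valid ASCII-safe JSON.
    head, _, rest = raw.partition(b'\r\n\r\n')
    header = json.loads(head.decode('ascii'))
    signature = rest[:header['nl']]
    body = rest[header['nl']:header['nl'] + header['bl']]
    return header, signature, body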
In the UDP header:
sh = source host
sp = source port
dh = destination host
dp = destination port

In the RAET header:
hk = header kind
hl = header length
vn = version number
sd = Source Device ID
dd = Destination Device ID
cf = Corresponder Flag
mf = Multicast Flag
si = Session ID
ti = Transaction ID
sk = Service Kind
pk = Packet Kind
bf = Burst Flag (send all segments or ordered packets without interleaved acks)
oi = Order Index
dt = DateTime Stamp
sn = Segment Number
sc = Segment Count
pf = Pending Segment Flag
af = All Flag (resend all segments, not just one)
nk = Auth header kind
nl = Auth header length
bk = body kind
bl = body length
tk = tail kind
tl = tail length
The minion sends a packet with a SID of zero containing the public key of the minion's public/private key pair. The master acks the packet with a SID of zero to let the minion know it received the request.
Some time later the master sends a packet with a SID of zero that accepts the minion.
Session is important for security. We want one session opened and then multiple transactions within that session.
Session ID (SID, sid)
Use a GUID hash to guarantee uniqueness, since there is no guarantee of nonvolatile storage, and requiring file storage just to keep the last session ID used should be avoided.
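A minimal sketch, assuming Python's uuid module is an acceptable GUID source:

import uuid

def new_session_id():
    # A random UUID is unique with overwhelming probability, so there is
    # no need to persist the last session id used.
    return uuid.uuid4().hex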
Four Service Types:
A) Maybe: one or more (unacknowledged repeat); maybe means no guarantee.
B) At most: a fixed number of retries has a finite probability of failing. B1) finite retries; B2) infinite retries with exponential back-off up to a maximum delay. The receiver requests a retry of a missing packet with the same B1 or B2 retry type; this is two B sub-transactions.
Initially unicast messaging; eventually support for multicast.
The use case for C) is to fragment large packets: once a UDP packet exceeds the frame size its reliability goes way down, so it is more reliable to fragment large packets.
A better approach might be to have more modularity. Service Levels:
- Maybe one or more
- Fire and forget
no transaction either side
- Repeat, no ack, no dupdet
repeat counter send side, no transaction on receive side
- Repeat, no Ack, dupdet
repeat counter send side, dup detection transaction receive side
- More or Less Once
- retry finite, ack, no dupdet
retry timer send side, finite number of retries, ack receive side, no dupdet
- At most Once
- retry finite, ack, dupdet
retry timer send side, finite number of retries, ack receive side, dupdet
- Exactly once
- ack retry
retry timer send side, ack and duplicate detection receive side. Infinite retries with exponential back-off.
- Sequential sequence number
- reorder escrow
- Segmented packets
request response to application layer
Service Features
Always include a transaction id, since there are multiple transactions on the same port; duplicate detection then comes for free if the transaction is kept alive.
A) Maybe one or more
B1) At least one
B2) Exactly one
C) One of sequence
D) End to end
A) The sender creates a transaction id for the number of repeats, but the receiver does not keep the transaction alive.
B1) The sender creates a transaction id and keeps it for retries. The receiver keeps it to send the ack, then kills it, so a retry could be an undetected duplicate.
B2) The sender creates a transaction id and keeps it for retries. The receiver keeps the tid to ack any retries, so there are no duplicates.
C) The sender creates a TID and sequence number. The receiver checks for out-of-order sequences and can request a retry.
D) The application layer sends a response, so the question is whether to keep the transaction open or have the response be a new transaction. The latter would require a request-response ID, so we might as well use the same transaction id and just keep it alive until the response arrives.
There is little advantage to B1 over B2, which avoids duplicates.
So 4 service types
Also multicast or unicast
Modular Transaction Table
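A rough sketch of such a table (process() and ack() are hypothetical handlers), showing how keeping a transaction alive yields duplicate detection:

transactions = {}

def receive(remote_id, tid, packet):
    key = (remote_id, tid)
    if key in transactions:
        # A retry of a transaction already handled: re-ack the cached
        # result instead of processing the packet a second time.
        ack(remote_id, tid, transactions[key])
        return
    result = process(packet)
    transactions[key] = result
    ack(remote_id, tid, result)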
The SaltStack team follows a git policy to maintain stability and consistency with the repository.
The git policy has been developed to encourage contributions and make contributing to Salt as easy as possible. Code contributors to SaltStack projects DO NOT NEED TO READ THIS DOCUMENT, because all contributions come into SaltStack via a single gateway to make it as easy as possible for contributors to give us code.
The primary rule of git management in SaltStack is to make life easy on contributors and developers to send in code. Simplicity is always a goal!
All new SaltStack code is posted to the develop branch, which is the single point of entry. The only exception is when a bugfix to develop cannot be cleanly merged into a release branch and the bugfix needs to be rewritten for the release branch.
SaltStack maintains two types of releases, Feature Releases and Point Releases. A feature release is managed by incrementing the first or second release point number, so 0.10.5 -> 0.11.0 signifies a feature release and 0.11.0 -> 0.11.1 signifies a point release; a hypothetical 0.42.7 -> 1.0.0 would also signify a feature release.
Each feature release is maintained in a dedicated git branch derived from the last applicable release commit on develop. All file changes relevant to the feature release will be completed in the develop branch prior to the creation of the feature release branch. The feature release branch will be named after the relevant numbers to the feature release, which constitute the first two numbers. This means that the release branch for the 0.11.0 series is named 0.11.
A feature release branch is created with the following command:
# git checkout -b 0.11 # From the develop branch
# git push origin 0.11
Each point release is derived from its parent release branch. Constructing point releases is a critical aspect of Salt development and is managed by members of the core development team. Point releases comprise bug and security fixes which are cherry picked from develop onto the aforementioned release branch. At the time when a core developer accepts a pull request a determination needs to be made if the commits in the pull request need to be backported to the release branch. Some simple criteria are used to make this determination:
Determining when a point release is going to be made is up to the project leader (Thomas Hatch). Generally point releases are made every 1-2 weeks or if there is a security fix they can be made sooner.
The point release is only designated by tagging the commit on the release branch with release number using the existing convention (version 0.11.1 is tagged with v0.11.1). From the tag point a new source tarball is generated and published to PyPI, and a release announcement is made.
Salt's documentation is built using the Sphinx documentation system. It can be built in a variety of output formats including HTML, PDF, ePub, and manpage.
All the documentation is contained in the main Salt repository. Speaking broadly, most of the narrative documentation is contained within the https://github.com/saltstack/salt/blob/develop/doc subdirectory and most of the reference and API documentation is written inline with Salt's Python code and extracted using a Sphinx extension.
The Salt project recommends the IEEE style guide as a general reference for writing guidelines. Those guidelines are not strictly enforced but rather serve as an excellent resource for technical writing questions. The NCBI style guide is another very approachable resource.
Use third-person perspective and avoid "I", "we", "you" forms of address. Identify the addressee specifically e.g., "users should", "the compiler does", etc.
Use active voice and present-tense. Avoid filler words.
Document titles and section titles within a page should follow normal sentence capitalization rules. Words that are capitalized as part of a regular sentence should be capitalized in a title and otherwise left as lowercase. Punctuation can be omitted unless it aids the intent of the title (e.g., exclamation points or question marks).
For example:
This is a main heading
======================
Paragraph.
This is an exciting sub-heading!
--------------------------------
Paragraph.
According to Wikipedia: "In English punctuation, a serial comma or series comma (also called Oxford comma and Harvard comma) is a comma placed immediately before the coordinating conjunction (usually and, or, or nor) in a series of three or more terms. For example, a list of three countries might be punctuated either as "France, Italy, and Spain" (with the serial comma), or as "France, Italy and Spain" (without the serial comma)."
When writing a list that includes three or more items, the serial comma should always be used.
Documentation for Salt's various module types is inline in the code. During the documentation build process it is extracted and formatted into the final HTML, PDF, etc format.
Python has special multi-line strings called docstrings as the first element in a function or class. These strings allow documentation to live alongside the code and can contain special formatting. For example:
def myfunction(value):
    '''
    Upper-case the given value

    Usage:

    .. code-block:: python

        val = 'a string'
        new_val = myfunction(val)
        print(new_val) # 'A STRING'

    :param value: a string
    :return: a copy of ``value`` that has been upper-cased
    '''
    return value.upper()
New functions or changes to existing functions should include a marker that denotes what Salt release will be affected. For example:
def myfunction(value):
    '''
    Upper-case the given value

    .. versionadded:: 2014.7.0

    <...snip...>
    '''
    return value.upper()
For changes to a function:
def myfunction(value, strip=False):
    '''
    Upper-case the given value

    .. versionchanged:: Boron
        Added a flag to also strip whitespace from the string.

    <...snip...>
    '''
    if strip:
        return value.upper().strip()
    return value.upper()
Each module type has an index listing all modules of that type. For example: Full list of builtin execution modules, Full list of builtin state modules, Full list of builtin renderer modules. New modules must be added to the index manually.
Create a new .rst file for the new module in the same directory as the index.rst, add the new module to the index, then commit the index.rst and the new .rst file and send a pull request.
The Sphinx documentation system contains a wide variety of cross-referencing capabilities.
Link to glossary entries using the term role. A cross-reference should be added the first time a Salt-specific term is used in a document.
A common way to encapsulate master-side functionality is by writing a
custom :term:`Runner Function`. Custom Runner Functions are easy to write.
Sphinx automatically generates many kinds of index entries, but it is occasionally useful to manually add items to the index.
One method is to use the index directive above the document or section that should appear in the index.
.. index:: ! Event, event bus, event system
    see: Reactor; Event
Another method is to use the index role inline with the text that should appear in the index. The index entry is created and the target text is left otherwise intact.
Information about the :index:`Salt Reactor`
-------------------------------------------
Paragraph.
Each document should contain a unique top-level label of the form:
.. _my-page:
My page
=======
Paragraph.
Unique labels can be linked using the ref role. This allows cross-references to survive document renames or movement.
For more information see :ref:`my-page`.
Note: the :doc: role should not be used to link documents together.
Cross-references to Salt modules can be added using Sphinx's Python domain
roles. For example, to create a link to the test.ping
function:
A useful execution module to test active communication with a minion is the
:py:func:`test.ping <salt.modules.test.ping>` function.
Salt modules can be referenced as well:
The :py:mod:`test module <salt.modules.test>` contains many useful
functions for inspecting an active Salt connection.
The same syntax works for all modules types:
One of the workhorse state module functions in Salt is the
:py:func:`file.managed <salt.states.file.managed>` function.
Individual settings in the Salt Master or Salt Minion configuration files are cross-referenced using two custom roles, conf_master and conf_minion.
The :conf_minion:`minion ID <id>` setting is a unique identifier for a
single minion.
Install Sphinx using a system package manager or pip. The package name is
often of the form python-sphinx
. There are no other dependencies.
Build the documentation using the provided Makefile or .bat
file on
Windows.
cd /path/to/salt/doc
make html
The generated documentation will be written to the doc/_build/<format>
directory.
A useful method of viewing the HTML documentation locally is to start Python's built-in HTTP server:
cd /path/to/salt/doc/_build/html
python -m SimpleHTTPServer
Then pull up the documentation in a web browser at http://localhost:8000/.
Formulas are pre-written Salt States. They are as open-ended as Salt States themselves and can be used for tasks such as installing a package, configuring and starting a service, setting up users or permissions, and many other common tasks.
All official Salt Formulas are found as separate Git repositories in the "saltstack-formulas" organization on GitHub:
https://github.com/saltstack-formulas
As a simple example, to install the popular Apache web server (using the normal defaults for the underlying distro) simply include the apache-formula from a top file:
base:
  'web*':
    - apache
Each Salt Formula is an individual Git repository designed as a drop-in addition to an existing Salt State tree. Formulas can be installed in the following ways.
One design goal of Salt's GitFS fileserver backend was to facilitate reusable States. GitFS is a quick and natural way to use Formulas.
Add one or more Formula repository URLs as remotes in the
gitfs_remotes
list in the Salt Master configuration file:
gitfs_remotes:
  - https://github.com/saltstack-formulas/apache-formula
  - https://github.com/saltstack-formulas/memcached-formula
We strongly recommend forking a formula repository into your own GitHub account to avoid unexpected changes to your infrastructure.
Many Salt Formulas are highly active repositories so pull new changes with care. Plus any additions you make to your fork can be easily sent back upstream with a quick pull request!
Restart the Salt master.
Formulas are simply directories that can be copied onto the local file system
by using Git to clone the repository or by downloading and expanding a tarball
or zip file of the repository. The directory structure is designed to work with
file_roots
in the Salt master configuration.
Clone or download the repository into a directory:
mkdir -p /srv/formulas
cd /srv/formulas
git clone https://github.com/saltstack-formulas/apache-formula.git
# or
mkdir -p /srv/formulas
cd /srv/formulas
wget https://github.com/saltstack-formulas/apache-formula/archive/master.tar.gz
tar xf master.tar.gz
Add the new directory to file_roots
:
file_roots:
  base:
    - /srv/salt
    - /srv/formulas/apache-formula
Restart the Salt Master.
Each Formula is intended to be immediately usable with sane defaults without
any additional configuration. Many formulas are also configurable by including
data in Pillar; see the pillar.example
file in each Formula repository
for available options.
A Formula may be included in an existing sls file. This is often useful when a state you are writing needs to require or extend a state defined in the formula.
Here is an example of a state that uses the epel-formula in a
require
declaration which directs Salt to not install the python26
package until after the EPEL repository has also been installed:
include:
  - epel

python26:
  pkg.installed:
    - require:
      - pkg: epel
Some Formula perform completely standalone installations that are not referenced from other state files. It is usually cleanest to include these Formula directly from a Top File.
For example the easiest way to set up an OpenStack deployment on a single
machine is to include the openstack-standalone-formula directly from
a top.sls
file:
base:
  'myopenstackmaster':
    - openstack
Quickly deploying OpenStack across several dedicated machines could also be done directly from a Top File and may look something like this:
base:
  'controller':
    - openstack.horizon
    - openstack.keystone
  'hyper-*':
    - openstack.nova
    - openstack.glance
  'storage-*':
    - openstack.swift
Salt Formulas are designed to work out of the box with no additional
configuration. However, many Formula support additional configuration and
customization through Pillar. Examples of available options can
be found in a file named pillar.example
in the root directory of each
Formula repository.
Remember that Formula are regular Salt States and can be used with all Salt's normal state mechanisms. Formula can be required from other States with require declarations, they can be modified using extend, and they can be made to watch other states with the _in versions of requisites.
The following example uses the stock apache-formula alongside a custom state to create a vhost on a Debian/Ubuntu system and to reload the Apache service whenever the vhost is changed.
# Include the stock, upstream apache formula.
include:
  - apache

# Use the watch_in requisite to cause the apache service state to reload
# apache whenever the my-example-com-vhost state changes.
my-example-com-vhost:
  file:
    - managed
    - name: /etc/apache2/sites-available/my-example-com
    - watch_in:
      - service: apache
Don't be shy to read through the source for each Formula!
Each Formula is a separate repository on GitHub. If you encounter a bug with a Formula please file an issue in the respective repository! Send fixes and additions as a pull request. Add tips and tricks to the repository wiki.
Each Formula is a separate repository in the saltstack-formulas organization on GitHub.
Note
Get involved creating new Formulas
The best way to create new Formula repositories for now is to create a
repository in your own account on GitHub and notify a SaltStack employee
when it is ready. We will add you to the contributors team on the
saltstack-formulas organization and help you transfer the repository
over. Ping a SaltStack employee on IRC (#salt
on Freenode) or send an
email to the salt-users mailing list.
There are a lot of repositories in that organization! Team members can manage which repositories they are subscribed to on GitHub's watching page: https://github.com/watching.
Maintainability, readability, and reusability are all marks of a good Salt sls file. This section contains several suggestions and examples.
# Deploy the stable master branch unless version overridden by passing
# Pillar at the CLI or via the Reactor.

deploy_myapp:
  git.latest:
    - name: git@github.com/myco/myapp.git
    - version: {{ salt.pillar.get('myapp:version', 'master') }}
The ID of a state is used as a unique identifier that may be referenced via other states in requisites. It must be unique across the whole state tree (it is a key in a dictionary, after all).
In addition a state ID should be descriptive and serve as a high-level hint of
what it will do, or manage, or change. For example, deploy_webapp
, or
apache
, or reload_firewall
.
module.function notation¶
So-called "short-declaration" notation is preferred for referencing state
modules and state functions. It provides a consistent pattern of
module.function
shared between Salt States, the Reactor, Overstate, Salt
Mine, the Scheduler, as well as with the CLI.
# Do
apache:
  pkg.installed:
    - name: httpd

# Don't
apache:
  pkg:
    - installed
    - name: httpd
Salt's state compiler will transform "short-decs" into the longer format when compiling the human-friendly highstate structure into the machine-friendly lowstate structure.
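As a rough illustration of that transformation (simplified; Salt's real lowstate chunks carry additional keys such as ordering information):

# High data, as written with the short-dec (after Jinja/YAML rendering):
high = {'apache': {'pkg.installed': [{'name': 'httpd'}]}}

# The state compiler splits 'pkg.installed' into module and function,
# producing low chunks roughly of this shape:
low = [{'__id__': 'apache',
        'state': 'pkg',
        'fun': 'installed',
        'name': 'httpd'}]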
name parameter¶
Use a unique and permanent identifier for the state ID and reserve name
for
data with variability.
The name declaration is a required parameter for all
state functions. The state ID will implicitly be used as name
if it is not
explicitly set in the state.
In many state functions the name
parameter is used for data that varies
such as OS-specific package names, OS-specific file system paths, repository
addresses, etc. Any time the ID of a state changes all references to that ID
must also be changed. Use a permanent ID when writing a state the first time to
future-proof that state and allow for easier refactors down the road.
YAML allows comments at varying indentation levels. It is a good practice to comment state files. Use vertical whitespace to visually separate different concepts or actions.
# Start with a high-level description of the current sls file.
# Explain the scope of what it will do or manage.

# Comment individual states as necessary.
update_a_config_file:
  # Provide details on why an unusual choice was made. For example:
  #
  # This template is fetched from a third-party and does not fit our
  # company norm of using Jinja. This must be processed using Mako.
  file.managed:
    - name: /path/to/file.cfg
    - source: salt://path/to/file.cfg.template
    - template: mako

  # Provide a description or explanation that did not fit within the state
  # ID. For example:
  #
  # Update the application's last-deployed timestamp.
  # This is a workaround until Bob configures Jenkins to automate RPM
  # builds of the app.
  cmd.run:
    # FIXME: Joe needs this to run on Windows by next quarter. Switch these
    # from shell commands to Salt's file.managed and file.replace state
    # modules.
    - name: |
        touch /path/to/file_last_updated
        sed -e 's/foo/bar/g' /path/to/file_environment
    - onchanges:
      - file: a_config_file
Be careful to use Jinja comments for commenting Jinja code and YAML comments for commenting YAML code.
# BAD EXAMPLE
# The Jinja in this YAML comment is still executed!
# {% set apache_is_installed = 'apache' in salt.pkg.list_pkgs() %}
# GOOD EXAMPLE
# The Jinja in this Jinja comment will not be executed.
{# {% set apache_is_installed = 'apache' in salt.pkg.list_pkgs() %} #}
Jinja templating provides vast flexibility and power when building Salt sls files. It can also create an unmaintainable tangle of logic and data. Speaking broadly, Jinja is best used when kept apart from the states (as much as is possible).
Below are guidelines and examples of how Jinja can be used effectively.
High-level knowledge of how Salt states are compiled and run is useful when writing states.
The default renderer
setting in Salt is Jinja piped to YAML.
Each is a separate step. Each step is not aware of the previous or following
step. Jinja is not YAML aware, YAML is not Jinja aware; they cannot share
variables or interact.
The full evaluation and execution order:
Jinja -> YAML -> Highstate -> low state -> execution
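The two independent steps can be sketched in plain Python (assuming the jinja2 and PyYAML libraries; Salt's actual renderers do considerably more):

import jinja2
import yaml

sls_source = '''
{% set pkg_name = 'httpd' %}
apache:
  pkg.installed:
    - name: {{ pkg_name }}
'''

rendered = jinja2.Template(sls_source).render()  # Step 1: Jinja, unaware of YAML
high_data = yaml.safe_load(rendered)             # Step 2: YAML, unaware of Jinja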
Avoid calling commands from Jinja that change the underlying system. Commands
run via Jinja do not respect Salt's dry-run mode (test=True
)! This is
usually in conflict with the idempotent nature of Salt states unless the
command being run is also idempotent.
A common use for Jinja in Salt states is to gather information about the
underlying system. The grains
dictionary available in the Jinja context is
a great example of common data points that Salt itself has already gathered.
Less common values are often found by running commands. For example:
{% set is_selinux_enabled = salt.cmd.run('sestatus') == '1' %}
This is usually best done with a variable assignment in order to separate the data from the state that will make use of the data.
One of the most common uses for Jinja is to pull external data into the state file. External data can come from anywhere like API calls or database queries, but it most commonly comes from flat files on the file system or Pillar data from the Salt Master. For example:
{% set some_data = salt.pillar.get('some_data', {'sane default': True}) %}
{# or #}
{% load_json 'path/to/file.json' as some_data %}
{# or #}
{% load_text 'path/to/ssh_key.pub' as ssh_pub_key %}
{# or #}
{% from 'path/to/other_file.jinja' import some_data with context %}
This is usually best done with a variable assignment in order to separate the data from the state that will make use of the data.
Jinja is extremely powerful for programmatically generating Salt states. It is also easy to overuse. As a rule of thumb, if it is hard to read it will be hard to maintain!
Separate Jinja control-flow statements from the states as much as is possible to create readable states. Limit Jinja within states to simple variable lookups.
Below is a simple example of a readable loop:
{% for user in salt.pillar.get('list_of_users', []) %}
{# Ensure unique state IDs when looping. #}
{{ user.name }}-{{ loop.index }}:
  user.present:
    - name: {{ user.name }}
    - shell: {{ user.shell }}
{% endfor %}
Avoid putting Jinja conditionals within Salt states where possible. Readability suffers and the correct YAML indentation is difficult to see in the surrounding visual noise. Parameterization (discussed below) and variables are both useful techniques to avoid this. For example:
{# ---- Bad example ---- #}

apache:
  pkg.installed:
    {% if grains.os_family == 'RedHat' %}
    - name: httpd
    {% elif grains.os_family == 'Debian' %}
    - name: apache2
    {% endif %}

{# ---- Better example ---- #}

{% if grains.os_family == 'RedHat' %}
{% set name = 'httpd' %}
{% elif grains.os_family == 'Debian' %}
{% set name = 'apache2' %}
{% endif %}

apache:
  pkg.installed:
    - name: {{ name }}

{# ---- Good example ---- #}

{% set name = {
    'RedHat': 'httpd',
    'Debian': 'apache2',
}.get(grains.os_family) %}

apache:
  pkg.installed:
    - name: {{ name }}
Dictionaries are useful to effectively "namespace" a collection of variables. This is useful with parameterization (discussed below). Dictionaries are also easily combined and merged. And they can be directly serialized into YAML which is often easier than trying to create valid YAML through templating. For example:
{# ---- Bad example ---- #}

haproxy_conf:
  file.managed:
    - name: /etc/haproxy/haproxy.cfg
    - template: jinja
    {% if 'external_loadbalancer' in grains.roles %}
    - source: salt://haproxy/external_haproxy.cfg
    {% elif 'internal_loadbalancer' in grains.roles %}
    - source: salt://haproxy/internal_haproxy.cfg
    {% endif %}
    - context:
        {% if 'external_loadbalancer' in grains.roles %}
        ssl_termination: True
        {% elif 'internal_loadbalancer' in grains.roles %}
        ssl_termination: False
        {% endif %}

{# ---- Better example ---- #}

{% load_yaml as haproxy_defaults %}
common_settings:
  bind_port: 80

internal_loadbalancer:
  source: salt://haproxy/internal_haproxy.cfg
  settings:
    bind_port: 8080
    ssl_termination: False

external_loadbalancer:
  source: salt://haproxy/external_haproxy.cfg
  settings:
    ssl_termination: True
{% endload %}

{% if 'external_loadbalancer' in grains.roles %}
{% set haproxy = haproxy_defaults['external_loadbalancer'] %}
{% elif 'internal_loadbalancer' in grains.roles %}
{% set haproxy = haproxy_defaults['internal_loadbalancer'] %}
{% endif %}

{% do haproxy.settings.update(haproxy_defaults.common_settings) %}

haproxy_conf:
  file.managed:
    - name: /etc/haproxy/haproxy.cfg
    - template: jinja
    - source: {{ haproxy.source }}
    - context: {{ haproxy.settings | yaml() }}
There is still room for improvement in the above example. For example, extracting into an external file or replacing the if-elif conditional with a function call to filter the correct data more succinctly. However, the state itself is simple and legible, the data is separate and also simple and legible. And those suggested improvements can be made at some future date without altering the state at all!
Jinja is not Python. It was made by Python programmers and shares many semantics and some syntax, but it does not allow for arbitrary Python function calls or Python imports. Jinja is a fast and efficient templating language but the syntax can be verbose and visually noisy.
Once Jinja use within an sls file becomes slightly complicated -- long chains of if-elif-elif-else statements, nested conditionals, complicated dictionary merges, wanting to use sets -- instead consider using a different Salt renderer, such as the Python renderer. As a rule of thumb, if it is hard to read it will be hard to maintain -- switch to a format that is easier to read.
Using alternate renderers is very simple to do using Salt's "she-bang" syntax at the top of the file. The Python renderer must simply return the correct highstate data structure. The following example is a state tree of two sls files, one simple and one complicated.
/srv/salt/top.sls:

base:
  '*':
    - common_configuration
    - roles_configuration

/srv/salt/common_configuration.sls:

common_users:
  user.present:
    - names: [larry, curly, moe]

/srv/salt/roles_configuration:

#!py
def run():
    list_of_roles = set()
    # This example has the minion id in the form 'web-03-dev'.
    # Easily access the grains dictionary:
    try:
        app, instance_number, environment = __grains__['id'].split('-')
        instance_number = int(instance_number)
    except ValueError:
        app, instance_number, environment = ['Unknown', 0, 'dev']

    list_of_roles.add(app)

    if app == 'web' and environment == 'dev':
        list_of_roles.add('primary')
        list_of_roles.add('secondary')
    elif app == 'web' and environment == 'staging':
        if instance_number == 0:
            list_of_roles.add('primary')
        else:
            list_of_roles.add('secondary')

    # Easily cross-call Salt execution modules:
    if __salt__['myutils.query_valid_ec2_instance']():
        list_of_roles.add('is_ec2_instance')

    return {
        'set_roles_grains': {
            'grains.present': [
                {'name': 'roles'},
                {'value': list(list_of_roles)},
            ],
        },
    }
In Salt sls files Jinja macros are useful for one thing and one thing only: creating mini templates that can be reused and rendered on demand. Do not fall into the trap of thinking of macros as functions; Jinja is not Python (see above).
Macros are useful for creating reusable, parameterized states. For example:
{% macro user_state(state_id, user_name, shell='/bin/bash', groups=[]) %}
{{ state_id }}:
  user.present:
    - name: {{ user_name }}
    - shell: {{ shell }}
    - groups: {{ groups | json() }}
{% endmacro %}

{% for user_info in salt.pillar.get('my_users', []) %}
{{ user_state('user_number_' ~ loop.index, **user_info) }}
{% endfor %}
Macros are also useful for creating one-off "serializers" that can accept a data structure and write that out as a domain-specific configuration file. For example, the following macro could be used to write a php.ini config file:
/srv/salt/php.sls:

php_ini:
  file.managed:
    - name: /etc/php.ini
    - source: salt://php.ini.tmpl
    - template: jinja
    - context:
        php_ini_settings: {{ salt.pillar.get('php_ini', {}) | json() }}

/srv/pillar/php.sls:

php_ini:
  PHP:
    engine: 'On'
    short_open_tag: 'Off'
    error_reporting: 'E_ALL & ~E_DEPRECATED & ~E_STRICT'

/srv/salt/php.ini.tmpl:

{% macro php_ini_serializer(data) %}
{% for section_name, name_val_pairs in data.items() %}
[{{ section_name }}]
{% for name, val in name_val_pairs.items() -%}
{{ name }} = "{{ val }}"
{% endfor %}
{% endfor %}
{% endmacro %}

; File managed by Salt at <{{ source }}>.
; Your changes will be overwritten.

{{ php_ini_serializer(php_ini_settings) }}
Separate the data that a state uses from the state itself to increase the flexibility and reusability of a state.
An obvious and common example of this is platform-specific package names and file system paths. Another example is sane defaults for an application, or common settings within a company or organization. Organizing such data as a dictionary (aka hash map, lookup table, associative array) often provides a lightweight namespacing and allows for quick and easy lookups. In addition, using a dictionary allows for easily merging and overriding static values within a lookup table with dynamic values fetched from Pillar.
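The underlying merge-and-override behavior can be sketched in plain Python (the data values are illustrative):

defaults = {'server': 'mysql-server',
            'config': '/etc/mysql/my.cnf'}
pillar_overrides = {'config': '/usr/local/etc/mysql/my.cnf'}

lookup = dict(defaults)
lookup.update(pillar_overrides)  # dynamic values take precedence
# lookup['config'] is now '/usr/local/etc/mysql/my.cnf'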
A strong convention in Salt Formulas is to place platform-specific data, such
as package names and file system paths, into a file named map.jinja
that is placed alongside the state files.
The following is an example from the MySQL Formula.
The grains.filter_by
function
performs a lookup on that table using the os_family
grain (by default).
The result is that the mysql
variable is assigned to a subset of
the lookup table for the current platform. This allows states to reference, for
example, the name of a package without worrying about the underlying OS. The
syntax for referencing a value is a normal dictionary lookup in Jinja, such as
{{ mysql['service'] }}
or the shorthand {{ mysql.service }}
.
map.jinja:

{% set mysql = salt['grains.filter_by']({
    'Debian': {
        'server': 'mysql-server',
        'client': 'mysql-client',
        'service': 'mysql',
        'config': '/etc/mysql/my.cnf',
        'python': 'python-mysqldb',
    },
    'RedHat': {
        'server': 'mysql-server',
        'client': 'mysql',
        'service': 'mysqld',
        'config': '/etc/my.cnf',
        'python': 'MySQL-python',
    },
    'Gentoo': {
        'server': 'dev-db/mysql',
        'client': 'dev-db/mysql',
        'service': 'mysql',
        'config': '/etc/mysql/my.cnf',
        'python': 'dev-python/mysql-python',
    },
}, merge=salt['pillar.get']('mysql:lookup')) %}
Values defined in the map file can be fetched for the current platform in any state file using the following syntax:
{% from "mysql/map.jinja" import mysql with context %}
mysql-server:
pkg.installed:
- name: {{ mysql.server }}
service.running:
- name: {{ mysql.service }}
Common values can be collected into a base dictionary. This minimizes repetition of identical values in each of the lookup_dict sub-dictionaries. Now only the values that differ from the base must be specified in the alternates:
map.jinja:

{% set mysql = salt['grains.filter_by']({
    'default': {
        'server': 'mysql-server',
        'client': 'mysql-client',
        'service': 'mysql',
        'config': '/etc/mysql/my.cnf',
        'python': 'python-mysqldb',
    },
    'Debian': {
    },
    'RedHat': {
        'client': 'mysql',
        'service': 'mysqld',
        'config': '/etc/my.cnf',
        'python': 'MySQL-python',
    },
    'Gentoo': {
        'server': 'dev-db/mysql',
        'client': 'dev-db/mysql',
        'python': 'dev-python/mysql-python',
    },
},
merge=salt['pillar.get']('mysql:lookup'), default='default') %}
Allow static values within lookup tables to be overridden. This is a simple pattern which once again increases flexibility and reusability for state files.
The merge
argument in filter_by
specifies the location of a dictionary in Pillar that can be used to override
values returned from the lookup table. If the value exists in Pillar it will
take precedence.
This is useful when software or configuration files are installed to non-standard locations or on unsupported platforms. For example, the following Pillar would replace the config value from the call above.
mysql:
  lookup:
    config: /usr/local/etc/mysql/my.cnf
Note
Protecting Expansion of Content with Special Characters
When templating, keep in mind that YAML does have special characters for quoting, flows, and other special structure and content. When a Jinja substitution may contain special characters that will be incorrectly parsed by YAML, care must be taken. It is a good policy to use the yaml_encode or the yaml_dquote Jinja filters:
{%- set foo = 7.7 %}
{%- set bar = none %}
{%- set baz = true %}
{%- set zap = 'The word of the day is "salty".' %}
{%- set zip = '"The quick brown fox . . ."' %}
foo: {{ foo|yaml_encode }}
bar: {{ bar|yaml_encode }}
baz: {{ baz|yaml_encode }}
zap: {{ zap|yaml_encode }}
zip: {{ zip|yaml_dquote }}
The above will be rendered as below:
foo: 7.7
bar: null
baz: true
zap: "The word of the day is \"salty\"."
zip: "\"The quick brown fox . . .\""
The filter_by
function performs a
simple dictionary lookup but also allows for fetching data from Pillar and
overriding data stored in the lookup table. That same workflow can be easily
performed without using filter_by
; other dictionaries besides data from
Pillar can also be used.
{% set lookup_table = {...} %}
{% do lookup_table.update(salt.pillar.get('my:custom:data')) %}
The map.jinja
file is only a convention within Salt Formulas. This greater
pattern is useful for a wide variety of data in a wide variety of workflows.
This pattern is not limited to pulling data from a single file or data source.
This pattern is useful in States, Pillar, the Reactor, and Overstate as well.
Working with a data structure instead of, say, a config file allows the data to be cobbled together from multiple sources (local files, remote Pillar, database queries, etc), combined, overridden, and searched.
Below are a few examples of what lookup tables may be useful for and how they may be used and represented.
An obvious pattern and one used heavily in Salt Formulas is extracting
platform-specific information such as package names and file system paths in
a file named map.jinja
. The pattern is explained in detail above.
Application settings can be a good fit for this pattern. Store default settings along with the states themselves and keep overrides and sensitive settings in Pillar. Combine both into a single dictionary and then write the application config or settings file.
The example below stores most of the Apache Tomcat server.xml
file
alongside the Tomcat states and then allows values to be updated or augmented
via Pillar. (This example uses the BadgerFish format for transforming JSON to
XML.)
/srv/salt/tomcat/defaults.yaml:

Server:
  '@port': '8005'
  '@shutdown': SHUTDOWN
  GlobalNamingResources:
    Resource:
      '@auth': Container
      '@description': User database that can be updated and saved
      '@factory': org.apache.catalina.users.MemoryUserDatabaseFactory
      '@name': UserDatabase
      '@pathname': conf/tomcat-users.xml
      '@type': org.apache.catalina.UserDatabase
# <...snip...>
/srv/pillar/tomcat.sls:

appX:
  server_xml_overrides:
    Server:
      Service:
        '@name': Catalina
        Connector:
          '@port': '8009'
          '@protocol': AJP/1.3
          '@redirectPort': '8443'
          # <...snip...>
/srv/salt/tomcat/server_xml.sls:

{% import_yaml 'tomcat/defaults.yaml' as server_xml_defaults %}
{% set server_xml_final_values = salt.pillar.get(
    'appX:server_xml_overrides',
    default=server_xml_defaults,
    merge=True)
%}

appX_server_xml:
  file.serialize:
    - name: /etc/tomcat/server.xml
    - dataset: {{ server_xml_final_values | json() }}
    - formatter: xml_badgerfish
The file.serialize
state can provide a
shorthand for creating some files from data structures. There are also many
examples within Salt Formulas of creating one-off "serializers" (often as Jinja
macros) that reformat a data structure to a specific config file format. For
example, `Nginx vhosts`__ or the `php.ini`__ file.

.. __: https://github.com/saltstack-formulas/nginx-formula/blob/5cad4512/nginx/ng/vhosts_config.sls
.. __: https://github.com/saltstack-formulas/php-formula/blob/82e2cd3a/php/ng/files/php.ini
A single state can be reused when it is parameterized as described in the section below, by separating the data the state will use from the state that performs the work. This can be the difference between deploying Application X and Application Y, or the difference between production and development. For example:
/srv/salt/app/deploy.sls:

{# Load the map file. #}
{% import_yaml 'app/defaults.yaml' as app_defaults %}

{# Extract the relevant subset for the app configured on the current
   machine (configured via a grain in this example). #}
{% set app = app_defaults.get(salt.grains.get('role')) %}

{# Allow values from Pillar to (optionally) update values from the lookup
   table. #}
{% do app.update(salt.pillar.get('myapp', {})) %}

deploy_application:
  git.latest:
    - name: {{ app.repo_url }}
    - version: {{ app.version }}
    - target: {{ app.target }}

myco/myapp/deployed:
  event.send:
    - data:
        version: {{ app.version }}
    - onchanges:
      - git: deploy_application
/srv/salt/app/defaults.yaml:

appX:
  repo_url: git@github.com/myco/appX.git
  target: /var/www/appX
  version: master
appY:
  repo_url: git@github.com/myco/appY.git
  target: /var/www/appY
  version: v1.2.3.4
Each sls file in a Formula should strive to do a single thing. This increases the reusability of this file by keeping unrelated tasks from getting coupled together.
As an example, the base Apache formula should only install the Apache httpd server and start the httpd service. This is the basic, expected behavior when installing Apache. It should not perform additional changes such as set the Apache configuration file or create vhosts.
If a formula is single-purpose as in the example above, other formulas, and
also other states can include
and use that formula with Requisites and Other Global State Arguments
without also including undesirable or unintended side-effects.
The following is a best-practice example for a reusable Apache formula. (This skips platform-specific options for brevity. See the full apache-formula for more.)
# apache/init.sls
apache:
  pkg.installed:
    [...]
  service.running:
    [...]

# apache/mod_wsgi.sls
include:
  - apache

mod_wsgi:
  pkg.installed:
    [...]
    - require:
      - pkg: apache

# apache/conf.sls
include:
  - apache

apache_conf:
  file.managed:
    [...]
    - watch_in:
      - service: apache
To illustrate a bad example, say the above Apache formula installed Apache and also created a default vhost. The mod_wsgi state would not be able to include the Apache formula to create that dependency tree without also installing the unneeded default vhost.
Formulas should be reusable. Avoid coupling unrelated actions together.
Parameterization is a key feature of Salt Formulas and also for Salt States. Parameterization allows a single Formula to be reused across many operating systems; to be reused across production, development, or staging environments; and to be reused by many people all with varying goals.
Writing states, specifying ordering and dependencies is the part that takes the longest to write and to test. Filling those states out with data such as users or package names or file locations is the easy part. How many users, what those users are named, or where the files live are all implementation details that should be parameterized. This separation between a state and the data that populates a state creates a reusable formula.
In the example below the data that populates the state can come from anywhere -- it can be hard-coded at the top of the state, it can come from an external file, it can come from Pillar, it can come from an execution function call, or it can come from a database query. The state itself doesn't change regardless of where the data comes from. Production data will vary from development data, which will vary from one company to another; however, the state itself stays the same.
{% set user_list = [
    {'name': 'larry', 'shell': 'bash'},
    {'name': 'curly', 'shell': 'bash'},
    {'name': 'moe', 'shell': 'zsh'},
] %}

{# or #}
{% set user_list = salt['pillar.get']('user_list') %}

{# or #}
{% load_json "default_users.json" as user_list %}

{# or #}
{% set user_list = salt['acme_utils.get_user_list']() %}

{% for user in user_list %}
{{ user.name }}:
  user.present:
    - name: {{ user.name }}
    - shell: {{ user.shell }}
{% endfor %}
Formulas should strive to use the defaults of the underlying platform, followed by defaults from the upstream project, followed by sane defaults for the formula itself.
As an example, a formula to install Apache should not change the default Apache configuration file installed by the OS package. However, the Apache formula should include a state to change or override the default configuration file.
Pillar lookups must use the safe get() and must provide a default value. Create local variables using the Jinja set construct to increase readability and to avoid potentially hundreds or thousands of function calls across a large state tree.
{% from "apache/map.jinja" import apache with context %}
{% set settings = salt['pillar.get']('apache', {}) %}
mod_status:
file.managed:
- name: {{ apache.conf_dir }}
- source: {{ settings.get('mod_status_conf', 'salt://apache/mod_status.conf') }}
- template: {{ settings.get('template_engine', 'jinja') }}
Any default values used in the Formula must also be documented in the
pillar.example
file in the root of the repository. Comments should be
used liberally to explain the intent of each configuration value. In addition,
users should be able to copy and paste the contents of this file into their own
Pillar to make any desired changes.
Remember that both State files and Pillar files can easily call out to Salt execution modules and have access to all the system grains as well.
{% if '/storage' in salt['mount.active']() %}
/usr/local/etc/myfile.conf:
  file:
    - symlink
    - target: /storage/myfile.conf
{% endif %}
Jinja macros to encapsulate logic or conditionals are discouraged in favor of writing custom execution modules in Python.
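For example, the salt['acme_utils.get_user_list']() call shown earlier could be backed by a small custom execution module (a sketch; the module name, file location _modules/acme_utils.py, and Pillar key are illustrative):

def get_user_list():
    '''
    Return the list of users this minion should have.

    CLI Example:

    .. code-block:: bash

        salt '*' acme_utils.get_user_list
    '''
    # Logic like this is trivial in Python and painful as a Jinja macro.
    users = __salt__['pillar.get']('user_list', [])
    return [user for user in users if user.get('enabled', True)]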
A basic Formula repository should have the following layout:
foo-formula
|-- foo/
| |-- map.jinja
| |-- init.sls
| `-- bar.sls
|-- CHANGELOG.rst
|-- LICENSE
|-- pillar.example
|-- README.rst
`-- VERSION
See also
The template-formula repository has a pre-built layout that serves as the basic structure for a new formula repository. Just copy the files from there and edit them.
README.rst¶
The README should detail each available .sls
file by explaining what it
does, whether it has any dependencies on other formulas, whether it has a
target platform, and any other installation or usage instructions or tips.
A sample skeleton for the README.rst
file:
===
foo
===

Install and configure the FOO service.

.. note::

    See the full `Salt Formulas installation and usage instructions
    <http://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html>`_.

Available states
================

.. contents::
    :local:

``foo``
-------

Install the ``foo`` package and enable the service.

``foo.bar``
-----------

Install the ``bar`` package.
CHANGELOG.rst¶
The CHANGELOG.rst file should detail the individual versions, their
file should detail the individual versions, their
release date and a set of bullet points for each version highlighting the
overall changes in a given version of the formula.
A sample skeleton for the CHANGELOG.rst file:
CHANGELOG.rst:

foo formula
===========

0.0.2 (2013-01-01)

- Re-organized formula file layout
- Fixed filename used for upstart logger template
- Allow for pillar message to have default if none specified
Formula are versioned according to Semantic Versioning, http://semver.org/.
Note
Given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes,
- MINOR version when you add functionality in a backwards-compatible manner, and
- PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
Formula versions are tracked using Git tags as well as the VERSION
file
in the formula repository. The VERSION
file should contain the currently
released version of the particular formula.
A smoke-test for invalid Jinja, invalid YAML, or an invalid Salt state structure can be performed with the state.show_sls function:
salt '*' state.show_sls apache
Salt Formulas can then be tested by running each .sls
file via
state.sls
and checking the output for the
success or failure of each state in the Formula. This should be done for each
supported platform.
Since Salt provides a powerful toolkit for system management and automation, the package can be split into a number of sub-tools. While packaging Salt as a single package containing all components is perfectly acceptable, the split packages should follow this convention.
The occasion may arise where Salt source and default configurations need to be patched. It is preferable if Salt is only patched to include platform-specific additions or to fix release-time bugs. It is preferable that configuration settings and operations remain in the default state, as changes here degrade the experience for users moving across distributions.
In the event where a packager finds a need to change the default configuration it is advised to add the files to the master.d or minion.d directories.
Release packages should always be built from the source tarball distributed via pypi. Release packages should NEVER use a git checkout as the source for distribution.
Shipping Salt as a single package, where the minion, master, and all tools are together is perfectly acceptable and practiced by distributions such as FreeBSD.
Salt should always be split in a standard way, with standard dependencies; this lowers cross-distribution confusion about which components are going to be shipped with specific packages. These packages can be defined from the Salt source as of Salt 2014.1.0:
The salt-common or salt package should contain the files provided by the
salt python package, or all files distributed from the salt/
directory in
the source distribution packages. The documentation contained under the
doc/
directory can be a part of this package but splitting out a doc
package is preferred.
Since salt-call is the entry point to utilize the libs and is useful for all
salt packages it is included in the salt-common package.
The salt-master package contains the applicable scripts, related man pages and init information for the given platform.
The Salt Syndic package can be rolled completely into the Salt Master package. Platforms which start services as part of the package deployment need to maintain a separate salt-syndic package (primarily Debian based platforms).
The Syndic may optionally depend on nothing more than the Salt Master, since the master will bring in all needed dependencies, but fall back to the platform-specific packaging guidelines.
The Minion is a standalone package and should not be split beyond the salt-minion and salt-common packages.
Since Salt SSH does not require the same dependencies as the minion and master, it should be split out.
As of Salt 2014.1.0 Salt Cloud is included in the same repo as Salt. This can be split out into a separate package or it can be included in the salt-master package.
The documentation package is optional per distribution. A completely split packaging will split out the documentation, but some platform conventions do not prefer this. If the documentation is not split out, it should be included with the Salt Common package.
The goal for Salt projects is to cut a new feature release every four to six weeks. This document outlines the process for these releases, and the subsequent bug fix releases which follow.
When a new release is ready to be cut, the person responsible for cutting the release will follow the following steps (written using the 0.16 release as an example):
- Open issues and pull requests attached to the release milestone are moved to the next milestone (e.g. from the 0.16 milestone to the 0.17 milestone).
- An annotated tag is created for the release, preceded by the letter v (e.g. v0.16). This tag will reside on the develop branch.
- A release branch is created for the release (e.g. 0.16).
- A release candidate is tagged, preceded by the letter v (e.g. v0.16.0RC).
Once a release has been cut, regular cherry-picking sessions should begin to cherry-pick any bugfixes from the develop branch to the release branch (e.g. 0.16). Once major bugs have been fixed and cherry-picked, a bugfix release can be cut:
- On the release branch (e.g. 0.16), create an annotated tag for the revision release. It should be preceded by the letter v (e.g. v0.16.2). Release candidates are unnecessary for bugfix releases.
) Release candidates are unnecessary for bugfix releases.Bugfixes should be made on the develop
branch. If the bug also applies to
the current release branch, then on the pull request against develop
, the
user should mention @basepi
and ask for the pull request to be
cherry-picked. If it is verified that the fix is a bugfix, then the
Bugfix -- Cherry-Pick
label will be applied to the pull request. When
those commits are cherry-picked, the label will be switched to the
Bugfix -- [Done] Cherry-Pick
label. This allows easy recognition of which
pull requests have been cherry-picked, and which are still pending to be
cherry-picked. All cherry-picked commits will be present in the next release.
Features will not be cherry-picked, and will be present in the next feature release.
Salt is developed with a certain coding style; while the style is dominantly PEP 8 it is not completely PEP 8. It is also noteworthy that a few development techniques are employed which should be adhered to. In the end, the code is made to be "Salty".
Most importantly though, we will accept code that violates the coding style and KINDLY ask the contributor to fix it, or go ahead and fix the code on behalf of the contributor. Coding style is NEVER grounds to reject code contributions, and is never grounds to talk down to another member of the community (There are no grounds to treat others without respect, especially people working to improve Salt)!!
Most Salt style conventions are codified in Salt's .pylintrc
file. This file
is found in the root of the Salt project and can be passed as an argument to the
pylint program as follows:
pylint --rcfile=/path/to/salt/.pylintrc salt/dir/to/lint
Salt follows a few rules when formatting strings:
In Salt, all strings use single quotes unless there is a good reason not to. This means that docstrings use single quotes, standard strings use single quotes etc.:
def foo():
    '''
    A function that does things
    '''
    name = 'A name'
    return name
All strings which require formatting should use the .format string method:
data = 'some text'
more = '{0} and then some'.format(data)
Make sure to use indices or identifiers in the format brackets, since empty brackets are not supported by python 2.6.
Please do NOT use printf formatting.
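For example (the values are illustrative):

# Do: indices or identifiers inside the brackets (works on Python 2.6).
greeting = '{0}, {1}!'.format('Hello', 'world')
greeting = '{greet}, {name}!'.format(greet='Hello', name='world')

# Don't: empty brackets are not supported by Python 2.6 ...
greeting = '{}, {}!'.format('Hello', 'world')

# ... and don't use printf formatting.
greeting = '%s, %s!' % ('Hello', 'world')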
Docstrings should always begin on a new line after the opening quotes; docutils takes care of the newline and it makes the code cleaner and more vertical:
GOOD:

def bar():
    '''
    Here lies a docstring with a newline after the quotes and is the salty
    way to handle it! Vertical code is the way to go!
    '''
    return

BAD:

def baz():
    '''This is not ok!'''
    return
When adding a new function or state, where possible try to use a
versionadded
directive to denote when the function or state was added.
def new_func(msg=''):
    '''
    .. versionadded:: 0.16.0

    Prints what was passed to the function.

    msg : None
        The string to be printed.
    '''
    print msg
If you are uncertain what version should be used, either consult a core
developer in IRC or bring this up when opening your
pull request and a core developer will add the proper
version once your pull request has been merged. Bugfixes will be available in a
bugfix release (i.e. 0.17.1, the first bugfix release for 0.17.0), while new
features are held for feature releases, and this will affect what version
number should be used in the versionadded
directive.
Similar to the above, when an existing function or state is modified (for
example, when an argument is added), then under the explanation of that new
argument a versionadded
directive should be used to note the version in
which the new argument was added. If an argument's function changes
significantly, the versionchanged
directive can be used to clarify this:
def new_func(msg='', signature=''):
    '''
    .. versionadded:: 0.16.0

    Prints what was passed to the function.

    msg : None
        The string to be printed. Will be prepended with 'Greetings! '.

    .. versionchanged:: 0.17.1

    signature : None
        An optional signature.

        .. versionadded:: 0.17.0
    '''
    print 'Greetings! {0}\n\n{1}'.format(msg, signature)
Dictionaries should be initialized using {} instead of dict().
See here for an in-depth discussion of this topic.
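As a quick illustration of this preference (an illustrative snippet, not taken from the style guide itself):

# Preferred: the literal syntax is unambiguous and slightly faster,
# since it avoids a name lookup and function call
opts = {'timeout': 5}

# Discouraged: calling the constructor
opts = dict(timeout=5)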
Salt code prefers importing modules and not explicit functions. This is both a style and a functional preference. The functional preference originates from the fact that the module import system used by pluggable modules will include callable objects (functions) that exist in the direct module namespace. This is not only messy, but may unintentionally expose code from third-party Python libraries to the Salt interface and pose a security problem.
To say this more directly with an example, this is GOOD:
import os


def minion_path():
    path = os.path.join(self.opts['cachedir'], 'minions')
    return path
This on the other hand is DISCOURAGED:
from os.path import join


def minion_path():
    path = join(self.opts['cachedir'], 'minions')
    return path
The one time this changes is when importing exceptions; directly importing exceptions is generally preferred. This is a good way to import exceptions:
from salt.exceptions import CommandExecutionError
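A directly imported exception can then be raised by name. The check_retcode function below is purely illustrative, a minimal usage sketch rather than code from Salt itself:

from salt.exceptions import CommandExecutionError


def check_retcode(retcode):
    # Raise the Salt-native exception rather than a generic one
    if retcode != 0:
        raise CommandExecutionError('Command returned {0}'.format(retcode))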
Although absolute imports seem like an awesome idea, please do not use them. Extra care would be necessary all over Salt's code in order for absolute imports to work as expected. Believe it, it has been tried before: as a tried example, by renaming salt.modules.sysmod to salt.modules.sys, all other salt modules which needed to import sys would have to also import absolute_import, which should be avoided.
When writing Salt code, vertical code is generally preferred. This is not a hard rule but more of a guideline. As PEP 8 specifies, Salt code should not exceed 79 characters on a line, but it is preferred to separate code out into more newlines in some cases for better readability:
import os

os.chmod(
    os.path.join(self.opts['sock_dir'],
        'minion_event_pub.ipc'),
    448
)
This preference for more line breaks is also apparent when constructing a function with many arguments, something very common in state functions, for instance:
def managed(name,
        source=None,
        source_hash='',
        user=None,
        group=None,
        mode=None,
        template=None,
        makedirs=False,
        context=None,
        replace=True,
        defaults=None,
        env=None,
        backup='',
        **kwargs):
Note
Making function and class definitions vertical is only required if the arguments are longer than 80 characters. Otherwise, the formatting is optional and both are acceptable.
For function definitions and function calls, Salt adheres to the PEP 8 specification of at most 80 characters per line. For anything that is not a function definition or a function call, please adopt a soft limit of 120 characters per line. If breaking the line reduces the code readability, don't break it. Still, try to avoid passing that 120-character limit, and remember: vertical is better... unless it isn't.
Some confusion exists in the Python world about indenting things like function calls; the above examples use 8 spaces when indenting comma-delimited constructs.
The confusion arises because the pep8 program INCORRECTLY flags this as wrong. PEP 8, the document, only cites using 4 spaces here as wrong, since 4 spaces would not differentiate the continuation from a new indent level.
Right:
def managed(name,
        source=None,
        source_hash='',
        user=None)
WRONG:
def managed(name,
    source=None,
    source_hash='',
    user=None)
Lining up the indent is also correct:
def managed(name,
            source=None,
            source_hash='',
            user=None)
This also applies to function calls and other hanging indents.
pep8 and Flake8 (and, by extension, the vim plugin Syntastic) will complain about the double indent for hanging indents. This is a known conflict between pep8 (the script) and the actual PEP 8 standard. It is recommended that this particular warning be ignored with the following lines in ~/.config/flake8:
[flake8]
ignore = E226,E241,E242,E126
Make sure your Flake8/pep8 are up to date. The first three errors are ignored by default and are present here to keep the behavior the same. This will also work for pep8 without the Flake8 wrapper -- just replace all instances of 'flake8' with 'pep8', including the filename.
Many pull requests have been submitted that only churn code in the name of PEP 8. Code churn is a leading source of bugs and is strongly discouraged. While style fixes are encouraged, they should be isolated to a single file per commit, and the changes should be legitimate. If there are any questions about whether a style change is legitimate, please reference this document and the official PEP 8 document (http://legacy.python.org/dev/peps/pep-0008/) before changing code. Many claims that a change is PEP 8 have been invalid; please double-check before committing fixes.
See the version numbers page for more information about the version numbering scheme.
The 2015.5.0 feature release of Salt is focused on hardening Salt and improving existing systems. A few major additions are present, primarily the new Beacon system, but most enhancements have been focused on improving existing features and interfaces.
As usual the release notes are not exhaustive and primarily include the most notable additions and improvements. Hundreds of bugs have been fixed and many modules have been substantially updated and added.
Warning
In order to fix potential shell injection vulnerabilities in salt modules, a change has been made to the various cmd module functions. These functions now default to python_shell=False, which means that the commands will not be sent to an actual shell.
The largest side effect of this change is that "shellisms", such as pipes, will not work by default. The modules shipped with salt have been audited to fix any issues that might have arisen from this change. Additionally, the cmd state module is unaffected, and use of cmd.run in jinja is also unaffected. cmd.run calls on the CLI will also allow shellisms.
However, custom execution modules which use shellisms in cmd calls will break, unless you pass python_shell=True to these calls.
As a temporary workaround, you can set cmd_safe: False in your minion and master configs. This will revert the default, but is also less secure, as it will allow shell injection vulnerabilities to be written in custom code. We recommend you only keep this setting for as long as it takes to resolve these issues in your custom code, then remove the override.
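To make the failure mode concrete, here is a minimal sketch of what a custom execution module must now do when it relies on a shell pipeline. The function and command are illustrative; only the cmd.run call and the python_shell argument come from Salt itself:

def failed_logins():
    '''
    Illustrative custom module function that depends on a pipe.
    '''
    cmd = 'lastb | head -n 10'
    # The pipe is a "shellism": since 2015.5.0 it is passed to the command
    # verbatim unless shell interpretation is explicitly requested
    return __salt__['cmd.run'](cmd, python_shell=True)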
Note
Starting in this version of salt, pillar_opts defaults to False instead of True. This means that master opts will not be present in minion pillar, and as a result, config.get calls will not include master opts.
We recommend using pillar for configuration options which need to make it to the minion.
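For example, a value the minion looks up with config.get should now be set in pillar rather than relying on master opts leaking into pillar. A minimal sketch; the my_app:timeout key is purely illustrative:

# Pillar assigned to the minion might contain (illustrative key):
#     my_app:
#       timeout: 30
# Minion-side lookup; config.get searches minion opts, grains, and pillar,
# but as of this release no longer sees master opts by default:
timeout = __salt__['config.get']('my_app:timeout', 30)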
The beacon system allows the minion to hook into system processes and continually translate external events into the salt event bus. The primary example of this is the inotify beacon, which uses inotify to watch configured files or directories on the minion for changes, creation, deletion, etc. This allows the changes to be sent up to the master, where the reactor can respond to them.
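Custom beacons are plain Python modules on the minion. The sketch below shows the general shape of the 2015.5-era interface, where the minion periodically calls beacon() with that beacon's configuration and fires any returned dictionaries onto the event bus. This is a hypothetical example, not the shipped inotify beacon:

import os


def beacon(config):
    '''
    Emit an event for each configured path that has gone missing.
    '''
    events = []
    for path in config.get('paths', []):
        if not os.path.exists(path):
            # Each returned dict becomes event data on the salt event bus
            events.append({'path': path, 'change': 'missing'})
    return events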
It is now possible to run the minion as a non-root user and have the minion execute commands via sudo. Simply add sudo_user: root to the minion config, run the minion as a non-root user, and grant that user sudo rights to execute salt-call.
The Lazy Loader is a significant overhaul of Salt's module loader system. The Lazy Loader will lazily load modules on access instead of all on start. In addition to a major performance improvement, this "sandboxes" modules so a bad/broken import of a single module will only affect jobs that require accessing the broken module. (:issue: 20274)
The eauth system for LDAP has been extended to support Microsoft Active Directory out of the box. This includes Active Directory and LDAP group support for eauth.
The LXC systems have been overhauled to be more consistent and to fix many bugs.
This overhaul makes using LXC with Salt much easier and substantially improves the underlying capabilities of Salt's LXC integration.
The following functions are now available in salt-ssh:
- state.single
- publish.publish, publish.full_data, and publish.runner
- mine.get
The new Windows installer changes how Salt is installed on Windows. The old installer used bbfreeze to create an isolated python environment to execute in. This made adding modules and python libraries difficult. The new installer sets up a more flexible python environment making it easy to manage the python install and add python modules.
Instead of frozen packages, a full python implementation resides in the bin directory (C:\salt\bin). By executing pip or easy_install from within the Scripts directory (C:\salt\bin\Scripts), you can install any additional python modules you may need for your custom environment.
The .exe's that once resided at the root of the salt directory (C:\salt) have been replaced by .bat files, and should function the same way as the .exe's in previous versions.
The new Windows installer will not replace the minion config file and key if they already exist on the target system. Only the salt program files will be replaced. C:\salt\conf and C:\salt\var will remain unchanged.
The hard dependency on the requests library has been removed. Requests is still required by a number of cloud modules but is no longer required for normal Salt operations.
This removal fixes issues that were introduced with requests and salt-ssh, as well as issues users experienced from the many different packaging methods used by requests package maintainers.
While Salt does not YET run on Python 3, it has been updated to INSTALL on Python 3, taking us one step closer. What remains is getting the test suite to the point where it can run on Python 3 so that we can verify compatibility.
The RAET support continues to improve. RAET now supports multi-master and many bugs and performance issues have been fixed. RAET is much closer to being a first class citizen.
A number of functions have been added to the RPM-based package managers to detect and diff files that are modified from the original package installs. This can be found in the new pkg.modified functions.
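A rough usage sketch of the new function family from within other Salt code; the package name is illustrative:

# On an RPM-based minion, list files altered since the package was installed
changed = __salt__['pkg.modified']('httpd')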
Fix an infinite recursion problem for runner/wheel reactor jobs by passing a "user" (Reactor) to all jobs that the reactor starts. The reactor skips all events created by that username -- thereby only reacting to events not caused by itself. Because of this, runner and wheel executions from the runner will have user "Reactor" in the job cache.
- Added only_upgrade argument to apt-based pkg.install, to only install a package version if the package is already installed. (Great for security updates!)
- A keyname must now be specified in the provider configuration. This change was necessitated upstream by the 7.0+ API.
- Added args argument to cmd.script_retcode, to match cmd.script in the cmd module. (:issue: 21122)
- Removed parameter keyword argument from eselect.exec_action execution module.
- Removed runas parameter from the following pip execution module functions: install, uninstall, freeze, list_, list_upgrades, upgrade_available, upgrade. Please migrate to user.
- Removed runas parameter from the following pip state module functions: installed, removed, uptodate. Please migrate to user.
- Removed quiet option from all functions in the cmdmod execution module. Please use output_loglevel=quiet instead.
- Removed parameter argument from the eselect.set_ state. Please migrate to module_parameter or action_parameter.
The salt_events table schema has changed to include an additional field called master_id, to distinguish between events flowing into a database from multiple masters. If event_return is enabled in the master config, the database schema must first be updated to add the master_id field. This alteration can be accomplished as follows:
ALTER TABLE salt_events ADD master_id VARCHAR(255) NOT NULL;
release: 2015-05-20
Version 2015.5.1 is a bugfix release for 2015.5.0.
Changes:
Extended Changelog Courtesy of Todd Stansell (https://github.com/tjstansell/salt-changelogs):
PR #23956: (rallytime) Backport #23906 to 2015.5 @ 2015-05-20T03:04:14Z
PR #23955: (rallytime) Backport #19305 to 2015.5 @ 2015-05-20T03:03:55Z
PR #23940: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-19T22:37:58Z
PR #23932: (rallytime) Backport #23908 to 2015.5 @ 2015-05-19T21:41:28Z
PR #23931: (rallytime) Backport #23880 to 2015.5 @ 2015-05-19T21:41:18Z
PR #23898: (kiorky) Lxc profiles | refs: #23897 @ 2015-05-19T21:08:28Z
PR #23922: (garethgreenaway) Fixes to debian_ip.py @ 2015-05-19T18:50:53Z
PR #23925: (jpic) Fixed wrong path in LXC cloud documentation @ 2015-05-19T18:23:56Z
PR #23894: (whiteinge) Add __all__ attribute to Mock class for docs @ 2015-05-19T17:17:35Z
PR #23884: (jfindlay) Fix locale.set_locale on debian @ 2015-05-19T15:51:22Z
PR #23866: (jfindlay) backport #23834, change portage.dep.strip_empty to list comprehension @ 2015-05-19T15:50:43Z
PR #23917: (corywright) Split debian bonding options on dash instead of underscore @ 2015-05-19T15:44:35Z
PR #23909: (jayeshka) 'str' object has no attribute 'capitalized' @ 2015-05-19T15:41:53Z
PR #23903: (garethgreenaway) Adding docs for missing schedule state module parameters. @ 2015-05-19T06:29:34Z
PR #23806: (kiorky) Lxc seeding | refs: #23807 @ 2015-05-18T23:18:33Z
PR #23892: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-18T23:07:57Z
PR #23875: (rallytime) Backport #23838 to 2015.5 @ 2015-05-18T22:28:55Z
PR #23876: (rallytime) Switch digital ocean tests to v2 driver @ 2015-05-18T22:17:13Z
PR #23882: (garethgreenaway) Fixes to scheduler in 2015.5 @ 2015-05-18T22:09:24Z
PR #23868: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-18T18:35:54Z
PR #23863: (rahulhan) Adding states/timezone.py unit test @ 2015-05-18T17:02:19Z
PR #23862: (rahulhan) Adding states/tomcat.py unit tests @ 2015-05-18T17:02:10Z
PR #23860: (rahulhan) Adding states/test.py unit tests @ 2015-05-18T17:01:49Z
PR #23859: (rahulhan) Adding states/sysrc.py unit tests @ 2015-05-18T17:01:46Z
PR #23812: (rallytime) Backport #23790 to 2015.5 @ 2015-05-18T15:30:34Z
PR #23811: (rallytime) Backport #23786 to 2015.5 @ 2015-05-18T15:30:27Z
PR #23850: (jayeshka) adding sysbench unit test case @ 2015-05-18T15:28:04Z
PR #23843: (The-Loeki) Fix erroneous virtual:physical core grain detection @ 2015-05-18T15:24:22Z
PR #23816: (Snergster) Doc for #23685 Added prereq, caution, and additional mask information @ 2015-05-18T15:18:03Z
PR #23832: (ahus1) make saltify provider use standard boostrap procedure @ 2015-05-18T02:18:29Z
PR #23791: (optix2000) Psutil compat @ 2015-05-16T04:05:54Z
PR #23782: (terminalmage) Replace "command -v" with "which" and get rid of spurious log messages @ 2015-05-16T04:03:10Z
PR #23783: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-15T21:38:51Z
PR #23781: (jfindlay) fix unit test mock errors on arch @ 2015-05-15T19:40:07Z
PR #23740: (jfindlay) Binary write @ 2015-05-15T18:10:44Z
PR #23736: (jfindlay) always load pip execution module @ 2015-05-15T18:10:16Z
PR #23770: (cellscape) Fix cloud LXC container destruction @ 2015-05-15T17:38:59Z
PR #23759: (lisa2lisa) fixed the problem for not beable to revoke ., for more detail https… @ 2015-05-15T17:38:38Z
PR #23769: (cellscape) Fix file_roots CA lookup in salt.utils.http.get_ca_bundle @ 2015-05-15T16:21:49Z
PR #23765: (jayeshka) adding states/makeconf unit test case @ 2015-05-15T14:29:43Z
PR #23760: (ticosax) [doc] document refresh argument @ 2015-05-15T14:23:47Z
PR #23766: (jayeshka) adding svn unit test case @ 2015-05-15T14:23:18Z
PR #23751: (rallytime) Backport #23737 to 2015.5 @ 2015-05-15T03:58:37Z
PR #23710: (kiorky) Get more useful output from stateful commands @ 2015-05-14T21:58:10Z
PR #23724: (rallytime) Backport #23609 to 2015.5 @ 2015-05-14T19:34:22Z
PR #23723: (rallytime) Backport #23568 to 2015.5 @ 2015-05-14T19:34:11Z
PR #23725: (rallytime) Backport #23691 to 2015.5 @ 2015-05-14T19:32:30Z
PR #23722: (rallytime) Backport #23472 to 2015.5 @ 2015-05-14T19:31:52Z
PR #23727: (jfindlay) fix npm execution module stacktrace @ 2015-05-14T18:14:12Z
PR #23718: (rahulhan) Adding states/user.py unit tests @ 2015-05-14T17:15:38Z
PR #23720: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-14T17:13:02Z
PR #23704: (jayeshka) adding states/lvs_server unit test case @ 2015-05-14T14:22:10Z
PR #23703: (jayeshka) adding states/lvs_service unit test case @ 2015-05-14T14:21:23Z
PR #23702: (jayeshka) Remove superfluous return statement. @ 2015-05-14T14:20:42Z
PR #23686: (jfindlay) remove superflous return statement @ 2015-05-14T14:20:18Z
PR #23690: (rallytime) Backport #23424 to 2015.5 @ 2015-05-13T23:04:36Z
PR #23681: (cachedout) Start on 2015.5.1 release notes @ 2015-05-13T19:44:22Z
PR #23679: (jfindlay) Merge #23616 @ 2015-05-13T19:03:53Z
PR #23675: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-13T18:35:54Z
PR #23669: (rallytime) Backport #23586 to 2015.5 @ 2015-05-13T18:27:11Z
PR #23662: (rallytime) Merge #23642 with pylint fix @ 2015-05-13T15:46:51Z
PR #23622: (jfindlay) merge #23508 @ 2015-05-13T15:36:49Z
PR #23651: (jayeshka) adding solr unit test case @ 2015-05-13T15:26:15Z
PR #23649: (jayeshka) adding states/libvirt unit test case @ 2015-05-13T15:24:48Z
PR #23648: (jayeshka) adding states/linux_acl unit test case @ 2015-05-13T15:24:11Z
PR #23650: (jayeshka) adding states/kmod unit test case @ 2015-05-13T15:09:18Z
PR #23633: (jayeshka) made changes to test_interfaces function. @ 2015-05-13T06:51:07Z
PR #23619: (jfindlay) fix kmod.present processing of module loading @ 2015-05-13T01:16:56Z
PR #23598: (rahulhan) Adding states/win_dns_client.py unit tests @ 2015-05-12T21:47:36Z
PR #23597: (rahulhan) Adding states/vbox_guest.py unit tests @ 2015-05-12T21:46:30Z
PR #23615: (rallytime) Backport #23577 to 2015.5 @ 2015-05-12T21:19:11Z
PR #23603: (rahulhan) Adding states/winrepo.py unit tests @ 2015-05-12T18:40:12Z
PR #23602: (rahulhan) Adding states/win_path.py unit tests @ 2015-05-12T18:39:37Z
PR #23600: (rahulhan) Adding states/win_network.py unit tests @ 2015-05-12T18:39:01Z
PR #23599: (rahulhan) Adding win_firewall.py unit tests @ 2015-05-12T18:37:49Z
PR #23601: (basepi) Add versionadded for jboss module/state @ 2015-05-12T17:22:59Z
PR #23469: (s0undt3ch) Call the windows specific function not the general one @ 2015-05-12T16:47:22Z
PR #23583: (jayeshka) adding states/ipset unit test case @ 2015-05-12T16:31:55Z
PR #23582: (jayeshka) adding states/keyboard unit test case @ 2015-05-12T16:31:17Z
PR #23581: (jayeshka) adding states/layman unit test case @ 2015-05-12T16:30:36Z
PR #23580: (jayeshka) adding smf unit test case @ 2015-05-12T16:29:58Z
PR #23572: (The-Loeki) Fix regression of #21355 introduced by #21603 @ 2015-05-12T16:28:05Z
PR #23565: (garethgreenaway) fix to aptpkg module @ 2015-05-12T16:25:46Z
PR #23550: (jfindlay) additional mock for rh_ip_test test_build_bond @ 2015-05-12T15:17:16Z
PR #23552: (garethgreenaway) Fix for an issue caused by a previous pull request @ 2015-05-11T21:54:59Z
PR #23547: (slinu3d) Added AWS v4 signature support for 2015.5 @ 2015-05-11T21:52:24Z
PR #23544: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-11T18:02:06Z
PR #23470: (twangboy) Fixed service.restart for salt-minion @ 2015-05-11T17:54:47Z
PR #23539: (rahulhan) Adding states/virtualenv_mod.py unit tests @ 2015-05-11T17:02:31Z
6f0cf2e Merge remote-tracking branch 'upstream/2015.2' into 2015.5
PR #23513: (gladiatr72) short-circuit auto-failure of iptables.delete state @ 2015-05-11T15:18:33Z
PR #23534: (jayeshka) adding states/ini_manage unit test case @ 2015-05-11T14:32:06Z
PR #23533: (jayeshka) adding states/hipchat unit test case @ 2015-05-11T14:30:22Z
PR #23532: (jayeshka) adding states/ipmi unit test case @ 2015-05-11T14:28:15Z
PR #23531: (jayeshka) adding service unit test case @ 2015-05-11T14:27:12Z
PR #23517: (garethgreenaway) fix to returners @ 2015-05-11T14:20:51Z
PR #23502: (rahulhan) Adding states/win_servermanager.py unit tests @ 2015-05-08T19:47:18Z
PR #23495: (jayeshka) adding seed unit test case @ 2015-05-08T17:30:38Z
PR #23494: (jayeshka) adding sensors unit test case @ 2015-05-08T17:30:18Z
PR #23493: (jayeshka) adding states/incron unit test case @ 2015-05-08T17:29:59Z
PR #23492: (jayeshka) adding states/influxdb_database unit test case @ 2015-05-08T17:29:51Z
PR #23491: (jayeshka) adding states/influxdb_user unit test case @ 2015-05-08T16:24:07Z
PR #23477: (galet) LDAP auth: Escape filter value for group membership search @ 2015-05-07T22:04:48Z
PR #23476: (cachedout) Lint becaon @ 2015-05-07T19:55:36Z
PR #23431: (UtahDave) Beacon fixes | refs: #23476 @ 2015-05-07T19:53:47Z
PR #23468: (rahulhan) Adding states/win_system.py unit tests @ 2015-05-07T19:20:50Z
PR #23466: (UtahDave) minor spelling fix @ 2015-05-07T19:19:06Z
PR #23461: (s0undt3ch) [2015.5] Update to latest stable bootstrap script v2015.05.07 @ 2015-05-07T19:16:18Z
PR #23450: (jayeshka) adding scsi unit test case @ 2015-05-07T19:00:28Z
PR #23449: (jayeshka) adding s3 unit test case @ 2015-05-07T18:59:45Z
PR #23448: (jayeshka) adding states/keystone unit test case @ 2015-05-07T18:58:59Z
PR #23447: (jayeshka) adding states/grafana unit test case @ 2015-05-07T18:58:20Z
PR #23438: (techhat) Gate requests import @ 2015-05-07T07:22:58Z
PR #23429: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-07T05:35:13Z
PR #23396: (basepi) [2015.2] Merge forward from 2014.7 to 2015.2 @ 2015-05-06T21:42:35Z
PR #23412: (rahulhan) Adding states/win_update.py unit tests @ 2015-05-06T18:31:09Z
PR #23413: (terminalmage) Update manpages for 2015.2 -> 2015.5 @ 2015-05-06T17:12:57Z
PR #23410: (terminalmage) Update Lithium docstrings in 2015.2 branch @ 2015-05-06T15:53:52Z
PR #23407: (jayeshka) adding rsync unit test case @ 2015-05-06T15:52:23Z
PR #23406: (jayeshka) adding states/lxc unit test case @ 2015-05-06T15:51:50Z
PR #23395: (basepi) [2015.2] Add note to 2015.2.0 release notes about master opts in pillar @ 2015-05-05T22:15:20Z
PR #23393: (basepi) [2015.2] Add warning about python_shell changes to 2015.2.0 release notes @ 2015-05-05T22:12:46Z
PR #23380: (gladiatr72) Fix for double output with static salt cli/v2015.2 @ 2015-05-05T21:44:28Z
  - static bits from below the else: fold this time
PR #23379: (rahulhan) Adding states/rabbitmq_cluster.py @ 2015-05-05T21:44:06Z
PR #23377: (rahulhan) Adding states/xmpp.py unit tests @ 2015-05-05T21:43:35Z
PR #23335: (steverweber) 2015.2: include doc in master config for module_dirs @ 2015-05-05T21:28:58Z
PR #23362: (jayeshka) adding states/zk_concurrency unit test case @ 2015-05-05T15:50:06Z
PR #23363: (jayeshka) adding riak unit test case @ 2015-05-05T14:23:05Z
release: TBA
Version 2015.5.2 is a bugfix release for 2015.5.0.
Extended Changelog Courtesy of Todd Stansell (https://github.com/tjstansell/salt-changelogs):
ISSUE #22991: (nicholascapo) npm.installed ignores test=True * ae681a4 Merge pull request #24313 from nicholascapo/fix-22991-npm.installed-test-true * ac9644c Fix #22991 npm.installed correctly set result on test=True
ISSUE #18966: (bechtoldt) file.serialize ignores test=True * d57a9a2 Merge pull request #24312 from nicholascapo/fix-18966-file.serialize-test-true * e7328e7 Fix #18966 file.serialize correctly set result on test=True
ISSUE #24319: (dr4Ke) grains state shouldn't fail silently * 88a997e Merge pull request #24328 from dr4Ke/fix_state_grains_silently_fails_2015.5 * 8a63d1e fix state grains silently fails #24319
PR #21968: (ryanwohara) Verifying the key has a value before using it. * a43465d Merge pull request #24142 from basepi/dictupdate24097 * 5c6e210 Deepcopy on merge_recurse
ISSUE #23815: (Snergster) [beacons] inotify errors on subdir creation * 3dc4b85 Merge pull request #24190 from msteed/issue-23815 * 086a1a9 lint
PR #24178: (rallytime) Backport #24118 to 2014.7, too.
PR #24159: (rallytime) Fill out modules/keystone.py CLI Examples
PR #24158: (rallytime) Fix test_valid_docs test for tls module
PR #24118: (trevor-h) removed deprecated pymongo usage
PR #24125: (hvnsweeting) Fix rabbitmq test mode
PR #24093: (msteed) Make LocalClient.cmd_iter_no_block() not block
PR #24008: (davidjb) Correct reST formatting for states.cmd documentation
PR #23933: (jacobhammons) sphinx saltstack2 doc theme * b9507d1 Merge pull request #24156 from basepi/merge-forward-2015.5 * e52b5ab Remove stray >>>>>
ISSUE #23364: (pruiz) Unable to destroy host using proxmox cloud: There was an error destroying machines: 501 Server Error: Method 'DELETE /nodes/pmx1/openvz/openvz/100' not implemented * PR #24104: (pruiz) Only try to stop a VM if it's not already stopped. (fixes #23364) | refs: #24136
ISSUE #23883: (kaithar) max_event_size seems broken * bfd812c Merge pull request #24065 from makinacorpus/real23883 * 028282e continue to fix #23883
ISSUE #23883: (kaithar) max_event_size seems broken * ac32000 Merge pull request #24001 from msteed/issue-23883 * bea97a8 issue #23883
ISSUE #23776: (enblde) Presence change events constantly reporting all minions as new in 2015.5 * 701c51b Merge pull request #24005 from msteed/issue-23776 * 62e67d8 issue #23776
This release is the largest Salt release ever, with more features and commits than any previous release of Salt: everything from the new RAET transport to major updates in Salt Cloud and the merging of Salt API into the main project.
Important
The Fedora/RHEL/CentOS salt-master package has been modified for this release. The following components of Salt have been broken out and placed into their own packages:
When the salt-master package is upgraded, these components will be removed, and they will need to be manually installed.
Important
Compound/pillar matching has been temporarily disabled for the mine and publish modules for this release, due to the possibility of inferring pillar data using pillar glob matching. A proper fix is now in the 2014.7 branch and scheduled for the 2014.7.1 release, and compound matching and non-globbing pillar matching will be re-enabled at that point. Compound and pillar matching for normal salt commands are unaffected.
This has been a HUGE amount of work, but the beta release of Salt with RAET is ready to go. RAET is a reliable queuing transport system that has been developed in partnership with a number of large enterprises to give Salt an alternative to ZeroMQ and a way to get Salt to scale well beyond tens of thousands of servers. Unlike ZeroMQ, RAET is completely asynchronous in every aspect of its operation and has been developed using the flow programming paradigm. This allows for many new capabilities to be added to Salt in the upcoming releases.
Please keep in mind that this is a beta release of RAET; we hope for bugs to be worked out, performance to be better realized, and more in the 2015.5.0 release.
Simply stated, users running Salt with RAET should expect some hiccups as we hammer out the update. This is a BETA release of Salt RAET.
For information about how to use Salt with RAET please see the tutorial.
Salt SSH has just entered a new league, with substantial updates and improvements to make salt-ssh more reliable and easier than ever! From new features like the ansible roster and fileserver backends, to the new pypi salt-ssh installer, to lowered deps and a swath of bugfixes, salt-ssh is basically reborn!
Salt-ssh is now pip-installable!
https://pypi.python.org/pypi/salt-ssh/
Pip will bring in all of the required deps, and while some deps are compiled, they all include pure python implementations, meaning that any compile errors which may be seen can be safely ignored.
pip install salt-ssh
Salt-ssh can now use the salt fileserver backend system. This allows for the gitfs, hgfs, s3, and many more ways to centrally store states to be easily used with salt-ssh. This also allows for a distributed team to easily use a centralized source.
The new saltfile system makes it easy to have a user specific custom extended configuration.
Salt-ssh can now use the external pillar system, making it easier than ever to use salt-ssh with teams.
Thanks to the enhancements in the salt vt system, salt-ssh no longer requires sshpass to send passwords to ssh. This also makes the manipulation of ssh calls substantially more flexible, allowing for intercepting ssh calls in a much more fluid way.
The salt-ssh call originally used a shell script to discover what version of python to execute with and determine the state of the ssh code deployment. This shell script has been replaced with a pure python version making it easy to increase the capability of the code deployment without causing platform inconsistency issues with different shell interpreters.
Custom modules are now seamlessly delivered. This makes the deployment of custom grains, states, execution modules and returners a seamless process.
Salt-ssh now makes simple file transfers easier than ever! The cp module allows for files to be conveniently sent from the salt fileserver system down to target systems.
Salt ssh functions by copying a subset of the salt code, known as "salt thin", down to the target system. In the past this was always transferred to /tmp/.salt and cached there for subsequent commands.
Now, salt thin can be sent to a random directory and removed when the call is complete, via the -W option. The new -W option still uses a static location but will clean up that location when finished.
The default salt thin location is now user-defined, allowing multiple users to cleanly access the same systems.
The new listen and listen_in keywords allow for completely imperative states by calling the mod_watch() routine after all states have run, instead of re-ordering the states.
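For context, mod_watch() is the hook a state module exposes for watch and listen triggers. The sketch below shows the general shape of such a hook in a hypothetical state module; the restart behavior and return values are illustrative, not the shipped service state:

def mod_watch(name, **kwargs):
    '''
    Called by the state system when a state this one listens to reports
    changes; with listen/listen_in this happens after all states have run.
    '''
    restarted = __salt__['service.restart'](name)
    return {'name': name,
            'changes': {name: 'restarted'} if restarted else {},
            'result': restarted,
            'comment': 'Restarted in response to listened changes'}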
The new mod_aggregate system allows for the state system to rewrite the state data during execution. This allows for state definitions to be aggregated dynamically at runtime.
The best example is found in the pkg state. If mod_aggregate is turned on, then when the first pkg state is reached, the state system will scan all of the other running states for pkg states, take all other packages set for install, and install them all at once in the first pkg state.
These runtime modifications make it easy to run groups of states together. In future versions, we hope to fill out the mod_aggregate system to build in more and more optimizations.
For more documentation on mod_aggregate, see the documentation.
The new onchanges and onchanges_in requisites make a state apply only if there are changes in the required state. This is useful to execute post hooks after changes occur on a system.
The other new requisites, onfail and onfail_in, allow for a state to run in reaction to the failure of another state.
For more information about these new requisites, see the requisites documentation.
The onlyif and unless options can now be used for any state declaration.
Using names to expand and override values
The names declaration in Salt's state system can now override or add values to the expanded data structure. For example:
my_users:
  user.present:
    - names:
      - larry
      - curly
      - moe:
        - shell: /bin/zsh
        - groups:
          - wheel
    - shell: /bin/bash
The Salt scheduler system has received MAJOR enhancements, allowing for cron-like scheduling and much more granular timing routines. See here for more info.
All the needed additions have been made to run Salt on RHEL 7 and derived OSes like CentOS and Scientific.
Fileserver backends like gitfs can now be used without a salt master! Just add the fileserver backend configuration to the minion config and execute salt-call. This has been a much-requested feature and we are happy to finally bring it to our users.
An entire family of execution modules further enhancing Salt's Amazon Cloud support. They include the following:
- Autoscale Groups (includes state support) -- related: Launch Control states
- Cloud Watch (includes state support)
- Elastic Cache (includes state support)
- Elastic Load Balancer (includes state support)
- IAM Identity and Access Management (includes state support)
- Route53 DNS (includes state support)
- Security Groups (includes state support)
- Simple Queue Service (includes state support)
BETA: The Salt LXC management system has received a number of enhancements which make running an LXC cloud entirely from Salt an easy proposition.
The Docker support in Salt has been increased at least ten fold. The Docker API is now completely exposed and Salt ships with Docker data tracking systems which make automating Docker deployments very easy.
The peer system communication routines have been refined to make the peer system substantially faster.
- Encryption at rest for configs
- Encrypted pillar at rest
- Lots of new OpenStack stuff
- External queue systems can now be channeled into Salt events
- Connecting to multiple masters is more dynamic than ever
- Managing Chef with Salt just got even easier!
The salt-api project has been merged into Salt core and is now available as part of the regular salt-master package install. No API changes were made; the salt-api script and init scripts remain intact.
salt-api has always provided Yet Another Pluggable Interface to Salt (TM) in the form of "netapi" modules. These are modules that bind to a port and start a service. Like many of Salt's other module types, netapi modules often have library and configuration dependencies. See the documentation for each module for instructions.
See also
salt.runner.RunnerClient and salt.wheel.WheelClient have both gained complementary cmd_sync and cmd_async methods, allowing for synchronous and asynchronous execution of any Runner or Wheel module function, all protected using Salt's external authentication system. salt-api benefits from this addition as well.
rest_cherrypy Additions
The rest_cherrypy netapi module provides the main REST API for Salt.
This release of course includes the Web Hook additions from the most recent salt-api release, which allow external services to signal actions within a Salt infrastructure. External services such as Amazon SNS, Travis-CI, or GitHub, as well as internal services that cannot or should not run a Salt minion daemon, can be used as first-class components in Salt's rich orchestration capabilities.
The raw HTTP request body is now available in the event data. This is sometimes required information for checking an HMAC signature in order to verify a HTTP request. As an example, Amazon or GitHub requests are signed this way.
The /keys convenience URL generates a public and private key for a minion, automatically pre-accepts the public key on the Salt Master, and returns both keys as a tarball for download.
This allows for easily bootstrapping the key on a new minion with a single HTTP call, such as with a Kickstart script, all using regular shell tools.
curl -sS http://salt-api.example.com:8000/keys \
-d mid=jerry \
-d username=kickstart \
-d password=kickstart \
-d eauth=pam \
-o jerry-salt-keys.tar
All of the fileserver backends have been overhauled to be faster, lighter, and more reliable. The VCS backends (gitfs, hgfs, and svnfs) have also received a lot of new features.
Additionally, most config parameters for the VCS backends can now be configured on a per-remote basis, allowing for global config parameters to be overridden for a specific gitfs/hgfs/svnfs remote.
gitfs Features
In addition to supporting GitPython, support for pygit2 (0.20.3 and newer) and dulwich have been added. Provided a compatible version of pygit2 is installed, it will now be the default provider. The config parameter gitfs_provider has been added to allow one to choose a specific provider for gitfs.
Prior to this release, to serve a file from gitfs at a salt fileserver URL of salt://foo/bar/baz.txt, it was necessary to ensure that the parent directories existed in the repository. A new config parameter gitfs_mountpoint allows gitfs remotes to be exposed starting at a user-defined salt:// URL.
By default, gitfs will expose all branches and tags as Salt fileserver environments. Two new config parameters, gitfs_env_whitelist and gitfs_env_blacklist, allow more control over which branches and tags are exposed. More detailed information on how these two options work can be found in the Gitfs Walkthrough.
As of pygit2 0.20.3, both http(s) and SSH key authentication are supported, and Salt now also supports both authentication methods when using pygit2. Keep in mind that pygit2 0.20.3 is not yet available on many platforms, so those who had been using authenticated git repositories with a passphraseless key should stick to GitPython if a new enough pygit2 is not yet available for the platform on which the master is running.
A full explanation of how to use authentication can be found in the Gitfs Walkthrough.
hgfs Features
This feature works exactly like its gitfs counterpart. The new config parameter is called hgfs_mountpoint.
This feature works exactly like its gitfs counterpart. The new config parameters are called hgfs_env_whitelist and hgfs_env_blacklist.
svnfs Features
This feature works exactly like its gitfs counterpart. The new config parameter is called svnfs_mountpoint.
This feature works exactly like its gitfs counterpart. The new config parameters are called svnfs_env_whitelist and svnfs_env_blacklist.
Prior to this release, the paths where trunk, branches, and tags were located could only be in directories named "trunk", "branches", and "tags" directly under the root of the repository. Three new config parameters (svnfs_trunk, svnfs_branches, and svnfs_tags) allow SVN repositories which are laid out differently to be used with svnfs.
minionfs Features
This feature works exactly like its gitfs counterpart. The new config parameter is called minionfs_mountpoint. The one major difference is that, as minionfs doesn't use multiple remotes (it just serves up files pushed to the master using cp.push), there is no such thing as a per-remote configuration for minionfs_mountpoint.
A new config parameter (minionfs_env) allows minionfs files to be served from a Salt fileserver environment other than base.
By default, minionfs will expose the pushed files from all minions. Two new config parameters, minionfs_whitelist and minionfs_blacklist, allow minionfs to be restricted to serve files from only the desired minions.
Salt now ships with the Pyobjects Renderer, which allows for construction of States using pure Python with an idiomatic object interface.
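A minimal sketch of what a pyobjects SLS file can look like; the path and contents here are illustrative, and the File factory is part of the renderer's documented interface:

#!pyobjects

# Each factory call registers a state; this manages a file in pure Python
File.managed('/etc/motd', contents='Welcome to this Salted system!')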
In addition to the Amazon modules mentioned above, there are also several other new execution modules:
When used with a returner, salt-call now contacts a master if --local is not specified.
salt.modules.virtualenv_mod
- memoize function from salt/utils/__init__.py (deprecated)
- no_site_packages argument from the create function (deprecated)
- check_dns argument from the minion_config and apply_minion_config functions (deprecated)
- OutputOptionsWithTextMixIn class from salt/utils/parsers.py (deprecated)
- The following functions from salt/modules/ps.py:
  - physical_memory_usage (deprecated)
  - virtual_memory_usage (deprecated)
  - cached_physical_memory (deprecated)
  - physical_memory_buffers (deprecated)
- The following arguments from the cloud_config function in salt/config.py:
  - vm_config (deprecated)
  - vm_config_path (deprecated)
- libcloud_version function from salt/cloud/libcloudfuncs.py (deprecated)
- CloudConfigMixIn class from salt/utils/parsers.py (deprecated)
release: 2015-01-12
Version 2014.7.1 is a bugfix release for 2014.7.0. The changes include:
- file.recurse states (issue 17700)
- tty: True in salt-ssh (issue 16847)
- log_level='quiet' for cmd.run (issue 19479)
release: 2015-02-09
Version 2014.7.2 is a bugfix release for 2014.7.0. The changes include:
- kmod (issue 197151, issue 19682)
- npm state: package names are no longer mandatorily lowercased. This may break behavior for people expecting the state to lowercase their npm package names for them. The npm module was never affected by mandatory lowercasing. (issue 20329)
- The activate parameter for pip.install, for both the module and the state: if bin_env is given and points to a virtualenv, there is no need to activate that virtualenv in a shell for pip to install to the virtualenv.
- archive_user deprecated in favor of the standardized user parameter in the state, and added a group parameter.
release: TBA
Version 2014.7.3 is a bugfix release for 2014.7.0.
Changes:
Known issues:
release: 2015-03-30
Version 2014.7.4 is a bugfix release for 2014.7.0.
This is a security release. The security issues fixed have only been present since 2014.7.0, and only users of the two listed modules are vulnerable. The following CVEs have been resolved:
Changes:
Known issues:
release: 2015-04-16
Version 2014.7.5 is a bugfix release for 2014.7.0.
Changes:
Known issues:
release: 2015-05-18
Version 2014.7.6 is a bugfix release for 2014.7.0.
This release is a security release. A minor issue was found, as cited below:
Only users of the Aliyun or Proxmox cloud modules are at risk. The vulnerability does not exist in the latest 2015.5.0 release of Salt.
Changes:
Extended Changelog Courtesy of Todd Stansell (https://github.com/tjstansell/salt-changelogs):
PR #23810: (rallytime) Backport #23757 to 2014.7 @ 2015-05-18T15:30:21Z
PR #23809: (rallytime) Fix virtualport section of virt.get_nics loop @ 2015-05-18T15:30:09Z
PR #23823: (gtmanfred) add link local for ipv6 @ 2015-05-17T12:48:25Z
PR #23802: (gtmanfred) if it is ipv6 ip_to_int will fail @ 2015-05-16T04:06:59Z
PR #23488: (cellscape) LXC cloud fixes @ 2015-05-15T18:09:35Z
PR #23748: (basepi) [2014.7] Log salt-ssh roster render errors more assertively and verbosely @ 2015-05-14T22:38:10Z
PR #23731: (twangboy) Fixes #22959: Trying to add a directory to an unmapped drive in windows @ 2015-05-14T21:59:14Z
PR #23730: (rallytime) Backport #23729 to 2014.7 @ 2015-05-14T21:58:34Z
PR #23688: (twangboy) Added inet_pton to utils/validate/net.py for ip.set_static_ip in windows @ 2015-05-14T16:15:56Z
PR #23680: (cachedout) Rename kwarg in cloud runner @ 2015-05-13T19:44:02Z
PR #23674: (cachedout) Handle lists correctly in grains.list_prsesent @ 2015-05-13T18:34:58Z
PR #23672: (twangboy) Fix user present @ 2015-05-13T18:30:09Z
PR #23670: (rallytime) Backport #23607 to 2014.7 @ 2015-05-13T18:27:17Z
PR #23661: (rallytime) Merge #23640 with whitespace fix @ 2015-05-13T15:47:30Z
PR #23639: (cachedout) Handle exceptions raised by __virtual__ @ 2015-05-13T15:11:12Z
PR #23637: (cachedout) Convert str master to list @ 2015-05-13T15:08:19Z
PR #23595: (rallytime) Backport #23549 to 2014.7 @ 2015-05-12T21:19:40Z
PR #23594: (rallytime) Backport #23496 to 2014.7 @ 2015-05-12T21:19:34Z
PR #23593: (rallytime) Backport #23442 to 2014.7 @ 2015-05-12T21:19:26Z
PR #23606: (twangboy) Fixed checkbox for starting service and actually starting it @ 2015-05-12T21:18:50Z
PR #23592: (rallytime) Backport #23389 to 2014.7 @ 2015-05-12T16:44:42Z
PR #23573: (techhat) Scan all available networks for public and private IPs | refs: #23802 @ 2015-05-12T15:22:22Z
PR #23558: (jfindlay) reorder emerge command line @ 2015-05-12T15:17:46Z
PR #23530: (dr4Ke) salt-ssh state: fix including all salt:// references @ 2015-05-12T15:13:43Z
PR #23433: (twangboy) Obtain all software from the registry @ 2015-05-11T22:47:52Z
PR #23554: (jleroy) Debian: Hostname always updated @ 2015-05-11T21:57:00Z
PR #23551: (dr4Ke) grains.append unit tests, related to #23474 @ 2015-05-11T21:54:25Z
PR #23474: (dr4Ke) Fix grains.append in nested dictionnary grains #23411 @ 2015-05-11T18:00:21Z
PR #23537: (t0rrant) Update changelog @ 2015-05-11T17:02:16Z
PR #23538: (cro) Update date in LICENSE file @ 2015-05-11T15:19:25Z
PR #23505: (aneeshusa) Remove unused ssh config validator. Fixes #23159. @ 2015-05-09T13:24:15Z
PR #23467: (slinu3d) Added AWS v4 signature support @ 2015-05-08T14:36:19Z
PR #23444: (techhat) Add create_attach_volume to nova driver @ 2015-05-07T19:51:32Z
PR #23460: (s0undt3ch) [2014.7] Update to latest stable bootstrap script v2015.05.07 @ 2015-05-07T19:10:54Z
PR #23439: (techhat) Add wait_for_passwd_maxtries variable @ 2015-05-07T07:28:56Z
PR #23422: (cro) $HOME should not be used, some shells don't set it. @ 2015-05-06T21:02:36Z
PR #23425: (basepi) [2014.7] Fix typo in FunctionWrapper @ 2015-05-06T20:38:03Z
PR #23385: (rallytime) Backport #23346 to 2014.7 @ 2015-05-06T20:12:29Z
PR #23414: (jfindlay) 2015.2 -> 2015.5 @ 2015-05-06T20:04:02Z
PR #23404: (hvnsweeting) saltapi cherrypy: initialize var when POST body is empty @ 2015-05-06T17:35:56Z
PR #23409: (terminalmage) Update Lithium docstrings in 2014.7 branch @ 2015-05-06T16:20:46Z
PR #23397: (jfindlay) add more flexible whitespace to locale_gen search @ 2015-05-06T03:44:11Z
PR #23368: (kaithar) Backport #23367 to 2014.7 @ 2015-05-05T21:42:26Z
PR #23350: (lorengordon) Append/prepend: search for full line @ 2015-05-05T21:42:11Z
PR #23341: (cachedout) Fix syndic pid and logfile path @ 2015-05-05T21:29:10Z
PR #23272: (basepi) [2014.7] Allow salt-ssh minion config overrides via master config and roster | refs: #23347 @ **
PR #23347: (basepi) [2014.7] Salt-SSH Backport FunctionWrapper.__contains__ @ 2015-05-05T14:13:21Z
PR #23344: (cachedout) Explicitely set file_client on master @ 2015-05-04T23:21:48Z
PR #23318: (cellscape) Honor seed argument in LXC container initializaton @ 2015-05-04T20:58:12Z
PR #23307: (jfindlay) check for /etc/locale.gen @ 2015-05-04T20:56:32Z
PR #23324: (s0undt3ch) [2014.7] Update to the latest stable release of the bootstrap script v2015.05.04 @ 2015-05-04T16:28:30Z
PR #23329: (cro) Require requests to verify cert when talking to aliyun and proxmox cloud providers @ 2015-05-04T16:18:17Z
PR #23311: (cellscape) Fix new container initialization in LXC runner | refs: #23318 @ 2015-05-04T09:55:29Z
PR #23298: (chris-prince) Fixed issue #18880 in 2014.7 branch @ 2015-05-03T15:49:41Z
PR #23292: (rallytime) Merge #23151 with pylint fixes @ 2015-05-02T03:54:12Z
PR #23274: (basepi) [2014.7] Reduce salt-ssh debug log verbosity @ 2015-05-01T20:19:23Z
PR #23261: (rallytime) Fix tornado websocket event handler registration @ 2015-05-01T18:20:31Z
PR #23258: (teizz) TCP keepalives on the ret side, Revisited. @ 2015-05-01T16:13:49Z
PR #23241: (techhat) Move iptables log options after the jump @ 2015-05-01T01:31:59Z
PR #23228: (rallytime) Backport #23171 to 2014.7 @ 2015-04-30T21:09:45Z
PR #23227: (rallytime) Backport #22808 to 2014.7 @ 2015-04-30T21:09:14Z
PR #22823: (hvnsweeting) 22822 file directory clean @ 2015-04-30T15:25:51Z
PR #22977: (bersace) Fix fileserver backends __opts__ overwritten by _pillar @ 2015-04-30T15:24:56Z
PR #23180: (jfindlay) fix typos from 36841bdd in masterapi.py @ 2015-04-30T15:22:41Z
PR #23176: (jfindlay) copy standard cmd.run* kwargs into cmd.run_chroot @ 2015-04-30T15:22:12Z
PR #23193: (joejulian) supervisord.mod_watch should accept sfun @ 2015-04-30T04:34:21Z
PR #23188: (basepi) [2014.7] Work around bug in salt-ssh in config.get for gpg renderer | refs: #23272 @ 2015-04-30T04:34:10Z
PR #23154: (cachedout) Re-establish channel on interruption in fileclient @ 2015-04-29T16:18:59Z
PR #23146: (rallytime) Backport #20779 to 2014.7 @ 2015-04-28T20:45:06Z
PR #23145: (rallytime) Backport #23089 to 2014.7 @ 2015-04-28T20:44:56Z
PR #23144: (rallytime) Backport #23124 to 2014.7 @ 2015-04-28T20:44:46Z
PR #23120: (terminalmage) Don't run os.path.relpath() if repo doesn't have a "root" param set @ 2015-04-28T15:46:54Z
PR #23132: (clinta) Backport b27c176 @ 2015-04-28T15:00:30Z
PR #23114: (rallytime) Adjust ZeroMQ 4 docs to reflect changes to Ubuntu 12 packages @ 2015-04-28T03:59:24Z
PR #23108: (rallytime) Backport #23097 to 2014.7 @ 2015-04-28T03:58:05Z
PR #23112: (basepi) [2014.7] Backport #22199 to fix mysql returner save_load errors @ 2015-04-28T03:55:44Z
PR #23113: (rallytime) Revert "Backport #22895 to 2014.7" @ 2015-04-28T03:27:29Z
PR #23094: (terminalmage) pygit2: disable cleaning of stale refs for authenticated remotes @ 2015-04-27T20:51:28Z
PR #23048: (jfindlay) py-2.6 compat for utils/boto.py ElementTree exception @ 2015-04-25T16:56:45Z
PR #23025: (jfindlay) catch exceptions on bad system locales/encodings @ 2015-04-25T16:56:30Z
PR #22932: (hvnsweeting) bugfix: also manipulate dir_mode when source not defined @ 2015-04-25T16:54:58Z
PR #23055: (jfindlay) prevent ps module errors on accessing dead procs @ 2015-04-24T22:39:49Z
PR #23031: (jfindlay) convert exception e.message to just e @ 2015-04-24T18:38:13Z
PR #23015: (hvnsweeting) if status of service is stop, there is not an error with it @ 2015-04-24T14:35:10Z
PR #23000: (jfindlay) set systemd service killMode to process for minion @ 2015-04-24T03:42:39Z
PR #22999: (jtand) Added retry_dns to minion doc. @ 2015-04-24T03:30:24Z
PR #22990: (techhat) Use the proper cloud conf variable @ 2015-04-23T17:48:07Z
PR #22976: (multani) Improve state_output documentation @ 2015-04-23T12:24:22Z
PR #22955: (terminalmage) Fix regression introduced yesterday in dockerio module @ 2015-04-22T18:56:39Z
PR #22954: (rallytime) Backport #22909 to 2014.7 @ 2015-04-22T18:56:20Z
PR #22856: (jfindlay) increase timeout and decrease tries for route53 records @ 2015-04-22T16:47:01Z
PR #22946: (s0undt3ch) Test with a more recent pip version to avoid a traceback @ 2015-04-22T16:25:17Z
PR #22945: (garethgreenaway) Fixes to scheduler @ 2015-04-22T16:25:00Z
PR #22887: (hvnsweeting) fix #18843 @ 2015-04-22T15:47:05Z
PR #22930: (jfindlay) localemod.gen_locale now always returns a boolean @ 2015-04-22T15:37:39Z
PR #22933: (hvnsweeting) add test for #18843 @ 2015-04-22T15:27:18Z
PR #22925: (rallytime) Backport #22895 to 2014.7 | refs: #23113 @ 2015-04-22T02:30:26Z
PR #22914: (cachedout) Call proper returner function in jobs.list_jobs @ 2015-04-22T00:49:01Z
PR #22918: (JaseFace) Add a note to the git_pillar docs stating that GitPython is the only currently supported provider @ 2015-04-22T00:48:26Z
PR #22907: (techhat) Properly merge cloud configs to create profiles @ 2015-04-21T22:02:44Z
PR #22894: (0xf10e) Fix issue #22782 @ 2015-04-21T18:55:18Z
PR #22902: (rallytime) Change state example to use proper kwarg @ 2015-04-21T18:50:47Z
PR #22898: (terminalmage) dockerio: better error message for native exec driver @ 2015-04-21T18:02:58Z
PR #22897: (rallytime) Add param documentation for file.replace state @ 2015-04-21T17:31:04Z
PR #22850: (bersace) Fix pillar and salt fileserver mixed @ 2015-04-21T17:04:33Z
PR #22818: (twangboy) Added documentation regarding pip in windows @ 2015-04-21T03:58:59Z
PR #22872: (rallytime) Prevent stacktrace on os.path.exists in hosts module @ 2015-04-21T02:54:40Z
PR #22853: (s0undt3ch) Don't assume package installation order. @ 2015-04-21T02:42:41Z
PR #22877: (s0undt3ch) Don't fail on make clean just because the directory does not exist @ 2015-04-21T02:40:47Z
PR #22873: (thatch45) Type check the version since it will often be numeric @ 2015-04-21T02:38:11Z
PR #22870: (twangboy) Added ability to send a version with a space in it @ 2015-04-20T23:18:28Z
PR #22863: (rallytime) Backport #20974 to 2014.7 @ 2015-04-20T19:29:37Z
PR #22578: (hvnsweeting) gracefully handle when salt-minion cannot decrypt key @ 2015-04-20T15:24:45Z
PR #22800: (terminalmage) Improve error logging for pygit2 SSH-based remotes @ 2015-04-18T17:18:55Z
PR #22813: (twangboy) Updated instructions for building salt @ 2015-04-18T04:10:07Z
PR #22810: (basepi) [2014.7] More msgpack gating for salt-ssh @ 2015-04-17T22:28:24Z
PR #22803: (rallytime) Allow map file to work with softlayer @ 2015-04-17T20:34:42Z
PR #22807: (rallytime) Add 2014.7.5 links to windows installation docs @ 2015-04-17T20:32:13Z
PR #22795: (rallytime) Added release note for 2014.7.5 release @ 2015-04-17T18:05:36Z
PR #22759: (twangboy) Final edits to the batch files for running salt @ 2015-04-17T04:31:15Z
PR #22760: (thatch45) Fix issues with the syndic @ 2015-04-17T04:30:48Z
PR #22762: (twangboy) Fixed version not showing in Add/Remove Programs @ 2015-04-17T04:29:46Z
Note
Due to a change in master to minion communication, 2014.1.0 minions are not compatible with older-version masters. Please upgrade masters first. More info on backwards-compatibility policy here, under the "Upgrading Salt" subheading.
Note
A change in the grammar in the state compiler makes module.run in requisites illegal syntax. Its use is replaced simply with the word module. In other words, you will need to change requisites like this:
require:
  module.run: some_module_name
to:
require:
  module: some_module_name
This is a breaking change. We apologize for the inconvenience, we needed to do this to remove some ambiguity in parsing requisites.
release: 2014-02-24
The 2014.1.0 release of Salt is a major release which not only increases stability but also brings new capabilities in virtualization, cloud integration, and more. This release brings a great focus on the expansion of testing making roughly double the coverage in the Salt tests, and comes with many new features.
2014.1.0 is the first release to follow the new date-based release naming system. See the version numbers page for more details.
Salt Cloud is a tool for provisioning salted minions across various cloud providers. Prior to this release, Salt Cloud was a separate project but this marks its full integration with the Salt distribution. A Getting Started guide and additional documentation for Salt Cloud can be found here:
Alongside Salt Cloud comes new support for the Google Compute Engine. Salt Stack can now deploy and control GCE virtual machines and the application stacks that they run.
For more information on Salt Stack and GCE, please see this blog post.
Documentation for Salt and GCE can be found here.
Salt Virt is a cloud controller that supports virtual machine deployment, inspection, migration, and integration with many aspects of Salt.
Salt Virt has undergone a major overhaul with this release and now supports many more features and includes a number of critical improvements.
Salt now ships with states and an execution module to manage Docker containers.
Salt continues to increase its unit/regression test coverage. This release includes over 300 new tests.
BSD package management has been entirely rewritten. FreeBSD 9 and older now default to using pkg_add, while FreeBSD 10 and newer will use pkgng. FreeBSD 9 can be forced to use pkgng, however, by specifying the following option in the minion config file:
providers:
  pkg: pkgng
In addition, support for installing software from the ports tree has been added. See the documentation for the ports state and execution module for more information.
Initial support for management of network interfaces on Debian-based distros has been added. See the documentation for the network state and the debian_ip execution module for more information.
The iptables state and module now have IPv6 support. A new parameter, family, has been added to the states and execution functions to distinguish between IPv4 and IPv6. The default value for this parameter is ipv4; specifying ipv6 will use ip6tables to manage firewall rules.
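For example, an IPv6 rule can be managed from the CLI like so (a sketch; the rule itself is only an illustration):
salt '*' iptables.append filter INPUT rule='-m state --state RELATED,ESTABLISHED -j ACCEPT' family=ipv6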
Several performance improvements have been made to the Git fileserver backend. Additionally, file states can now use any SHA1 commit hash as a fileserver environment:
/etc/httpd/httpd.conf:
  file.managed:
    - source: salt://webserver/files/httpd.conf
    - saltenv: 45af879
This applies to the functions in the cp module as well:
salt '*' cp.get_file salt://readme.txt /tmp/readme.txt saltenv=45af879
This new fileserver backend allows files which have been pushed from the minion to the master (using cp.push) to be served up from the salt fileserver. The path for these files takes the following format:
salt://minion-id/path/to/file
minion-id is the id of the "source" minion, the one from which the files were pushed to the master. /path/to/file is the full path of the file.
The MinionFS Walkthrough contains a more thorough example of how to use this backend.
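As a quick sketch of the workflow (assuming file_recv is enabled in the master config so that cp.push is allowed, and using the hypothetical minion ids webserver1 and backup1):
salt 'webserver1' cp.push /etc/httpd/conf/httpd.conf
salt 'backup1' cp.get_file salt://webserver1/etc/httpd/conf/httpd.conf /tmp/httpd.conf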
To distinguish between fileserver environments and execution functions which deal with environment variables, fileserver environments are now specified using the saltenv parameter. env will continue to work, but is deprecated and will be removed in a future release.
A caching layer has been added to the Grains system, which can help speed up minion startup. Disabled by default, it can be enabled by setting the minion config option grains_cache:
grains_cache: True
# Seconds before grains cache is considered to be stale.
grains_cache_expiration: 300
If set to True, the grains loader will read from/write to a msgpack-serialized file containing the grains data.
Additional command-line parameters have been added to salt-call, mainly for testing purposes:
- --skip-grains will completely bypass the grains loader when salt-call is invoked.
- --refresh-grains-cache will force the grains loader to bypass the grains cache and refresh the grains, writing a new grains cache file.
When using the cmd module, either on the CLI or when developing Salt execution modules, a new keyword argument output_loglevel allows for greater control over how (or even if) the command and its output are logged. For example:
salt '*' cmd.run 'tail /var/log/messages' output_loglevel=debug
The package management modules (apt, yumpkg, etc.) have been updated to log the copious output generated from these commands at loglevel debug.
Note
To keep a command from being logged, output_loglevel=quiet can be used. Prior to this release, this could be done using quiet=True. This argument is still supported, but will be removed in a future Salt release.
Initial support for firing events via PagerDuty has been added. See the documentation for the pagerduty module.
Sometimes the subprocess module is not good enough, and, in fact, not even askpass is. This virtual terminal is still in its infancy and needs quite some love. It was originally created to replace askpass, but while developing it, it immediately proved that it could do so much more. It is currently used by salt-cloud when bootstrapping salt on clouds which require the use of a password.
Initial basic support for Proxy Minions is in this release. Documentation can be found here.
Proxy minions are a developing feature in Salt that enables control of devices that cannot run a minion. Examples include network gear like switches and routers that run a proprietary OS but offer an API, or "dumb" devices that just don't have the horsepower or ability to handle a Python VM.
Proxy minions can be difficult to write, so a simple REST-based example proxy is included. A Python bottle-based webserver can be found at https://github.com/cro/salt-proxy-rest as an endpoint for this proxy.
This is an ALPHA-quality feature. There are a number of issues with it currently, mostly centering around process control, logging, and an inability to work in a masterless configuration.
Below are many of the fixes that were implemented in salt during the release candidate phase.
- ZMQError: Operation cannot be accomplished in current state errors (issue 6306)
- archive state to work with bsdtar
- master_uri with master_ip (issue 9694)
- mod_repo (issue 9923)
- salt-run -d to limit results to specific runner or function (issue 9975)
release: 2014-03-18
Version 2014.1.1 is a bugfix release for 2014.1.0. The changes include:
- state.sls execution function
- dig module (issue 10367)
- ~/.salt_token (issue 10422)
- saltutil.find_job for Windows (issue 10581)
- file.recurse (issue 10809)
- purge in pkg.installed (issue 10719)
- zmqversion grain
- saltutil.find_job for 2014.1 masters talking to 0.17 minions (issue 11020)
- file.recurse states with trailing slashes in source (issue 11002)
- pkg states to allow pkgname.x86_64 (issue 7306)
- iptables states set a default table for flush (issue 11037)
- --reject-with after final iptables call in iptables states (issue 10757)
- iptables states (issue 10774)
- iptables.insert states (issue 10988)
- --return settings (issue 9146)
- pip --editable
- skip_suggestions parameter to pkg.installed states which allows pre-flight check to be skipped (issue 11106)
- chocolatey.bootstrap (issue 10541)
- jobs runner (issue 11151)
- test=True CLI override of config option (issue 10877)
release: 2014-08-01
Note
Version 2014.1.9 contained a regression which caused inaccurate Salt version detection, and thus was never packaged for general release. This version contains the version detection fix, but is otherwise identical to 2014.1.9.
Version 2014.1.10 is another bugfix release for 2014.1.0. Changes include:
- .salt directory in salt-ssh
Salt 2014.1.10 fixes security issues documented by CVE-2014-3563: "Insecure tmp-file creation in seed.py, salt-ssh, and salt-cloud." Upgrading is recommended.
release: 2014-08-29
Version 2014.1.11 is another bugfix release for 2014.1.0. Changes include:
- runas deprecation in the at module
- file.makedirs_ (issue 14019)
- null case
release: 2014-10-08
Version 2014.1.12 is another bugfix release for 2014.1.0. Changes include:
- scp_file always failing (which broke salt-cloud) (issue 16437)
release: 2014-10-14
Version 2014.1.13 is another bugfix release for 2014.1.0. Changes include:
- sftp_file by checking the exit status code of scp (which broke salt-cloud) (issue 16599)
release: 2014-04-15
Version 2014.1.2 is another bugfix release for 2014.1.0. The changes include:
- site module on module refresh for MacOS
- salt-key --list all (issue 10982)
- find_job and **kwargs (issue 10503)
- saltenv for aptpkg.mod_repo from pkgrepo state
- __parse_key in registry state (issue 11408)
- AssertionError raised by GitPython (issue 11473)
- debian_ip to allow disabling and enabling networking on Ubuntu (issue 11164)
- psutil on Windows
- file.replace and file.search to Windows (issue 11471)
- file module helpers to Windows (issue 11235)
- pid to netstat output on Windows (issue 10782)
- sys.doc with invalid eauth (issue 11293)
- git.latest with test=True (issue 11595)
- file.check_perms hardcoded follow_symlinks (issue 11387)
- pkg states for RHEL5/Cent5 machines (issue 11719)
release: 2014-04-15
Version 2014.1.3 is another bugfix release for 2014.1.0. It was created as a hotfix for a regression found in 2014.1.2, which was not distributed. The only change made was as follows:
- saltutil.find_job to fail, causing premature terminations of salt CLI commands.
Changes in the not-distributed 2014.1.2, also included in 2014.1.3:
- site module on module refresh for MacOS
- salt-key --list all (issue 10982)
- find_job and **kwargs (issue 10503)
- saltenv for aptpkg.mod_repo from pkgrepo state
- __parse_key in registry state (issue 11408)
- AssertionError raised by GitPython (issue 11473)
- debian_ip to allow disabling and enabling networking on Ubuntu (issue 11164)
- psutil on Windows
- file.replace and file.search to Windows (issue 11471)
- file module helpers to Windows (issue 11235)
- pid to netstat output on Windows (issue 10782)
- sys.doc with invalid eauth (issue 11293)
- git.latest with test=True (issue 11595)
- file.check_perms hardcoded follow_symlinks (issue 11387)
- pkg states for RHEL5/Cent5 machines (issue 11719)
release: 2014-05-05
Version 2014.1.4 is another bugfix release for 2014.1.0. Changes include:
- /proc/1/cgroup is not readable (issue 11619)
- lvs.zero module argument pass-through (issue 9001)
- debian_ip interaction with network.system state (issue 11164)
- file.directory state symlink handling (issue 12209)
- external_ip grain
- file.managed makedirs issues (issue 10446)
- file module (issue 9880)
- ps.boot_time (issue 12428)
release: 2014-06-11
Version 2014.1.5 is another bugfix release for 2014.1.0. Changes include:
- syndic_wait to 5 to fix syndic-related problems (issue 12262)
- network.netstat (issue 12121)
- makeconf state (issue 9762)
- fromrepo package installs when repo is disabled by default (issue 12466)
- file.blockreplace (issue 12422)
- get_dns_servers function for Windows win_dns_client
- debian_ip (issue 12614)
- cmd_iter/cmd_iter_no_block blocking issues (issue 12617)
- file.directory
- saltutil.sync_all and state.highstate
- saltutil.running
- passwd option that is loaded as a non-string object (issue 13249)
- pkg.list_pkgs output
- module/state (issue 12724)
- saltenv being written to YUM repo config files (issue 12887)
- gitfs_root causing files not to be available (issue 13185)
release: 2014-07-08
Version 2014.1.6 is another bugfix release for 2014.1.0. Changes include:
- iptables --help output (Sorry!) (issue 13648, issue 13507, issue 13527, issue 13607)
- mount.active for Solaris
- allow-hotplug statement in debian_ip network module
- jobs.active output (issue 9526)
- virtual grain for Xen (issue 13534)
- tomcat support (issue 12889)
- service virtual module on Fedora minions
- jobs.active (issue 11151)
- master_tops and _ext_nodes issue (issue 13535, issue 13673)
release: 2014-07-09
Version 2014.1.7 is another bugfix release for 2014.1.0. Changes include:
This release was a hotfix release for the regression listed above which was present in the 2014.1.6 release. The changes included in 2014.1.6 are listed below:
- iptables --help output (Sorry!) (issue 13648, issue 13507, issue 13527, issue 13607)
- mount.active for Solaris
- allow-hotplug statement in debian_ip network module
- jobs.active output (issue 9526)
- virtual grain for Xen (issue 13534)
- tomcat support (issue 12889)
- service virtual module on Fedora minions
- jobs.active (issue 11151)
- master_tops and _ext_nodes issue (issue 13535, issue 13673)
release: 2014-07-30
Note
This release contained a regression which caused inaccurate Salt version detection, and thus was never packaged for general release. Please use version 2014.1.10 instead.
Version 2014.1.8 is another bugfix release for 2014.1.0. Changes include:
- .salt directory in salt-ssh
release: 2014-07-31
Note
This release contained a regression which caused inaccurate Salt version detection, and thus was never packaged for general release. Please use version 2014.1.10 instead.
Note
Version 2014.1.8 contained a regression which caused inaccurate Salt version detection, and thus was never packaged for general release. This version contains the version detection fix, but is otherwise identical to 2014.1.8.
Version 2014.1.9 is another bugfix release for 2014.1.0. Changes include:
- .salt directory in salt-ssh
release: 2012-06-16
0.10.0 has arrived! This is primarily a bug fix release, with many new tests and many repaired bugs, greatly enhancing performance and reliability. It also introduces a few new key features, brought in primarily to repair bugs and address limitations found in some components of the original architecture.
The Salt Master now comes equipped with a new event system. This event system has replaced some of the back end of the Salt client and offers the beginning of a system which will make it easy to plug external applications into Salt. The event system relies on a local ZeroMQ publish socket; other processes can connect to this socket and listen for events. The new events can be easily managed via Salt's event library.
Some enhancements have been added to Salt for running as a user other than root. These new additions should make switching the user that the Salt Master runs as very painless: simply change the user option in the master configuration and restart the master, and Salt will take care of all of the particulars for you.
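For example, to run the master as a (hypothetical) system user named salt, the master config needs only:
user: salt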
Salt has long had the peer communication system, used to allow minions to send commands via the salt master. 0.10.0 adds a new capability here: the master can now be configured to allow minions to execute Salt runners via the peer_run option in the salt master configuration.
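A minimal sketch of the master config (the minion id foo.example.com is hypothetical; manage.up is just an illustrative runner):
peer_run:
  foo.example.com:
    - manage.up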
In the past, the YAML parser for sls files would return incorrect numbers when the file mode was set with a preceding 0. The YAML parser used in Salt has been modified to no longer convert these numbers into octal but to keep them as the correct value, so that sls files can be a little cleaner to write.
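For example, a mode written with a leading zero in an sls file is now kept verbatim (the path and source here are hypothetical):
/etc/ssh/sshd_config:
  file.managed:
    - source: salt://ssh/sshd_config
    - mode: 0600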
It was requested that the minion keep a local cache of the most recent executed state run. This has been added and now with state runs the data is stored in a msgpack file in the minion's cachedir.
A new option has been added to the master configuration file. In previous releases the Salt client would look over the Salt job cache to read in the minion return data. With the addition of the event system the Salt client can now watch for events directly from the master worker processes.
This means that the job cache is no longer a hard requirement. Keep in mind though, that turning off the job cache means that historic job execution data cannot be retrieved.
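In later Salt documentation this switch is the job_cache master option (the name is given here for illustration; verify it against your version's docs):
job_cache: False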
To continue our efforts with testing Salt's ability to scale, the minionswarm script has been updated. The minionswarm can now start up minions much faster than it could before and comes with a new feature allowing modules to be disabled, thus lowering the minion's footprint when making a swarm. These new updates have allowed us to test at greater scale:
# python minionswarm.py -m 20 --master salt-master
To get a good idea of the number of bugfixes this release offers, take a look at the closed tickets for 0.10.0; this is a very substantial update:
https://github.com/saltstack/salt/issues?milestone=12&state=closed
As Salt deployments grow, new ways to break Salt are discovered. 0.10.0 comes with a number of fixes for the minions and master, greatly improving Salt stability.
release: 2012-06-19
release: 2012-07-30
0.10.2 is out! This release comes with enhancements to the pillar interface, cleaner ways to access the salt-call capabilities in the API, minion data caching, and the extension of the event system to the salt minions.
There have also been updates to the ZeroMQ functions, many more tests (thanks to sponsors, the code sprint and many contributors) and a swath of bug fixes.
The ranks of available Salt module directories see a new member in 0.10.2. With the popularity of pillar, a higher demand has arisen for ext_pillar interfaces to be more like regular Salt module additions. Now ext_pillar interfaces can be added in the same way as other modules: just drop them into the pillar directory in the salt source.
In 0.10.0 an event system was added to the Salt master. 0.10.2 adds the event system to the minions as well. Now events can be published on a local minion as well.
The minions can also send events back up to the master. This means that Salt is able to communicate individual events from the minions back up to the master which are not associated with a command.
When pillar was introduced, the landscape of available data was greatly enhanced. The minions began sending grain data back to the master on a regular basis.
The new config option on the master called minion_data_cache instructs the Salt master to maintain a cache of the minion's grains and pillar data in the cachedir. This option is turned off by default to avoid hitting the disk more, but when enabled the cache is used to make grain matching from the salt command more powerful, since the minions that will match can be predetermined.
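Enabling it is a single line in the master config:
minion_data_cache: True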
By default, all files replaced by the file.managed and file.recurse states were simply deleted. 0.10.2 adds a new option: by setting the backup option to minion, the files are backed up before they are replaced.
The backed up files are located in the cachedir under the file_backup directory. On a default system this will be at:
/var/cache/salt/file_backup
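A sketch of a state using the new option (the path and source are hypothetical):
/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/httpd.conf
    - backup: minion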
salt-master and salt-minion automatically load additional configuration files from master.d/*.conf and minion.d/*.conf respectively, where master.d/minion.d is a directory in the same directory as the main configuration file.
A number of users complained that they had inadvertently deleted the wrong salt authentication keys. 0.10.2 now displays what keys are going to be deleted and verifies that they are the keys that are intended for deletion.
If autosign_file is specified in the configuration file, incoming keys will be compared to the list of key names in autosign_file. Regular expressions as well as globbing are supported.
The file must only be writable by the user; otherwise it will be ignored. To relax the permissions and allow group write access, set the permissive_pki_access option.
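A sketch of how this can be wired up (the path and patterns are hypothetical). In the master config:
autosign_file: /etc/salt/autosign.conf
where /etc/salt/autosign.conf contains one key name, glob, or regular expression per line:
web*
db-[0-9]*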
New modules for managing services and packages were provided by Joshua Elsasser to further improve the support for OpenBSD.
Existing modules like the disk module were also improved to support OpenBSD.
The MySQL and PostgreSQL modules have both received a number of additions thanks to the work of Avi Marcus and Roman Imankulov.
A new ZFS module has been added by Kurtis Velarde for FreeBSD supporting various ZFS operations like creating, extending or removing zpools.
A new Augeas module by Ulrich Dangel for editing and verifying config files.
Support for Debian was further improved with a new service module for Debian by Ahmad Khayyat, supporting disable and enable.
Cassandra support has been added by Adam Garside. Currently only status and diagnostic information are supported.
The networking support for RHEL has been improved, and now supports bonding as well as zeroconf configuration.
Basic monit support by Kurtis Velarde to control services via monit.
Basic support for controlling nzbget, by Joseph Hall.
Basic bluez support for managing and controlling Bluetooth devices. Supports scanning as well as pairing/unpairing, by Joseph Hall.
Another testing script has been added. A bug was found in pillar when many minions generated pillar data at the same time. The new consist.py script in the tests directory was created to reproduce bugs where data should always be consistent.
To get a good idea of the number of bugfixes this release offers, take a look at the closed tickets for 0.10.2; this is a very substantial update:
https://github.com/saltstack/salt/issues?milestone=24&page=1&state=closed
As Salt deployments grow, new ways to break Salt are discovered. 0.10.2 comes with a number of fixes for the minions and master, greatly improving Salt stability.
release: 2012-09-30
The latest taste of Salt has arrived! This release has many fixes and feature additions: modifications to make ZeroMQ connections more reliable, the beginning of the ACL system, a new command-line parsing system, more environment-aware dynamic module distribution, the new master_finger option, and many more.
The new ACL system has been introduced. The ACL system allows for system users other than root to execute salt commands. Users can be allowed to execute specific commands in the same way that minions are opened up to the peer system.
The configuration value to open up the ACL system is called client_acl and is configured like so:
client_acl:
  fred:
    - test..*
    - pkg.list_pkgs
Where fred is allowed access to functions in the test module and to the pkg.list_pkgs function.
The master_finger option has been added to improve the security of minion provisioning. The master_finger option allows the fingerprint of the master public key to be set in the configuration file, to double-verify that the master is valid. This option was added to help pre-authenticate the master when provisioning new minions, preventing man-in-the-middle attacks in some situations.
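A minion config sketch (the fingerprint is a placeholder; use the real value printed by salt-key -F master, described below):
master_finger: 'ba:0f:1d:7a:2e:55:ce:6a:90:41:33:8b:73:27:e4:5d'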
The ability to generate fingerprints of keys used by Salt has been added to salt-key. The new option finger accepts the name of the key for which to generate and display a fingerprint.
salt-key -F master
Will display the fingerprints for the master public and private keys.
Pedro Algarvio, aka s0undt3ch, has added a substantial update to the command line parsing system that makes the help message output much cleaner and easier to search through. Salt parsers now have --versions-report in addition to the usual --version info, which you can provide when reporting any issues found.
We have reduced the requirements needed for salt-key to generate minion keys. You're no longer required to have salt configured and its common directories created just to generate keys. This might prove useful if you're batch-creating keys to pre-load on minions.
A few configuration options have been added which allow for states to be run when the minion daemon starts. This can be a great advantage when deploying with Salt because the minion can apply states right when it first runs. To use startup states, set the startup_states configuration option on the minion to highstate.
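For example, in the minion config:
startup_states: highstate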
Some users have asked about adding the ability to ensure that other sls files or ids are excluded from a state run. The exclude statement will delete all of the data loaded from the specified sls file or will delete the specified id:
exclude:
  - sls: http
  - id: /etc/vimrc
While we're currently unable to properly handle ZeroMQ's abort signals when the max open files limit is reached, due to the way that's handled inside ZeroMQ, we have minimized the chances of this happening without at least warning the user.
Some major changes have been made to the state output system. In the past state
return data was printed in a very verbose fashion and only states that failed
or made changes were printed by default. Now two options can be passed to the
master and minion configuration files to change the behavior of the state
output. State output can be set to verbose (default) or non-verbose with the state_verbose option:
state_verbose: False
It is noteworthy that the state_verbose option used to be set to False by default but has been changed to True by default in 0.10.3 due to many requests for the change.
The next option to be aware of is new, and is called state_output. This option allows for the state output to be set to full (default) or terse.
The full output is the standard state output, but the new terse output will print only one line per state making the output much easier to follow when executing a large state system.
state_output: terse
The salt state file.append() tries not to append existing text. Previously, the matching check was made line by line. While this kind of check is enough for most cases, it would not work properly if the text being appended was multi-line. This is now properly handled: the match is done as a whole, ignoring any whitespace addition or removal except inside commas. For those thinking that, in order to properly match over multiple lines, salt will load the whole file into memory: that's not true. Salt has a buffered file reader which will keep in memory a maximum of 256KB and iterates over the file in chunks of 32KB to test for the match, so even an erroneous request to match against a 4GB file will not, when handled properly as salt does, make salt chew up that amount of memory. With this change, salt.modules.file.contains(), salt.modules.file.contains_regex(), salt.modules.file.contains_glob(), and salt.utils.find now do their searching and/or matching using the buffered-chunks approach explained above.
Two new keyword arguments were also added: makedirs and source. The first, makedirs, will create the necessary directories in order to append to the specified file. Of course, it only applies when trying to append to a non-existing file in a non-existing directory:
/tmp/salttest/file-append-makedirs:
  file.append:
    - text: foo
    - makedirs: True
The second, source, allows one to append the contents of a file instead of specifying the text:
/tmp/salttest/file-append-source:
  file.append:
    - source: salt://testfile
A timing vulnerability was uncovered in the code which decrypts the AES messages sent over the network. This has been fixed and upgrading is strongly recommended.
release: 2012-10-23
Salt 0.10.4 is a monumental release for the Salt team, with two new module systems, many additions to allow granular access to Salt, improved platform support and much more.
This release is also exciting because we have been able to shorten the release cycle back to under a month. We are working hard to keep up the aggressive pace and look forward to having releases happen more frequently!
This release also includes a serious security fix and all users are very strongly recommended to upgrade. As usual, upgrade the master first, and then the minion to ensure that the process is smooth.
The new external authentication system allows for Salt to pass through authentication to any authentication system to determine if a user has permission to execute a Salt command. The Unix PAM system is the first supported system with more to come!
The external authentication system allows for specific users to be granted access to execute specific functions on specific minions. Access is configured in the master configuration file, and uses the new access control system:
external_auth:
  pam:
    thatch:
      - 'web*':
        - test.*
        - network.*
The configuration above allows the user thatch to execute functions in the test and network modules on minions that match the web* target.
All Salt systems can now be configured to grant access to non-administrative users in a granular way. The old configuration continues to work. Specific functions can be opened up to specific minions from specific users in the case of external auth and client ACLs, and for specific minions in the case of the peer system.
Access controls are configured like this:
client_acl:
  fred:
    - web\*:
      - pkg.list_pkgs
    - test.*
    - apache.*
A new matcher has been added to the system which allows for minions to be targeted by network. This new matcher can be called with the -S flag on the command line and is available in all places that the matcher system is available. Using it is simple:
$ salt -S '192.168.1.0/24' test.ping
$ salt -S '192.168.1.100' test.ping
Previously a nodegroup was limited by not being able to include another nodegroup, this restraint has been lifted and now nodegroups will be expanded within other nodegroups with the N@ classifier.
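A sketch of what nested nodegroups look like in the master config (the names and matchers are hypothetical):
nodegroups:
  group1: 'L@foo.domain.com,bar.domain.com'
  group2: 'G@os:Debian and N@group1'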
The ability to delete minion keys by glob has been added to salt-key. To delete all minion keys whose minion name starts with 'web':
$ salt-key -d 'web*'
The external_nodes system has been upgraded to allow for modular subsystems to be used to generate the top file data for a highstate run.
The external_nodes option still works but will be deprecated in the future in favor of the new master_tops option.
Example of using master_tops:
master_tops:
  ext_nodes: cobbler-external-nodes
A lot of work has been put into improved Solaris support by Romeo Theriault. Packaging modules (pkgadd/pkgrm and pkgutil) and states, cron support and user and group management have all been added and improved upon. These additions along with SMF (Service Management Facility) service support and improved Solaris grain detection in 0.10.3 add up to Salt becoming a great tool to manage Solaris servers with.
A vulnerability in the security handshake was found and has been repaired, old minions should be able to connect to a new master, so as usual, the master should be updated first and then the minions.
The pillar communication has been updated to add some extra levels of verification so that the intended minion is the only one allowed to gather the data. Once all minions and the master are updated to salt 0.10.4 please activate pillar 2 by changing the pillar_version in the master config to 2. This will be set to 2 by default in a future release.
release: 2012-11-15
Salt 0.10.5 is ready, and comes with some great new features. A few more interfaces have been modularized, like the outputter system. The job cache system has been made more powerful and can now store and retrieve jobs archived in external databases. The returner system has been extended to allow minions to easily retrieve data from a returner interface.
As usual, this is an exciting release, with many noteworthy additions!
The external job cache is a system which allows for a returner interface to also act as a job cache. This system is intended to allow users to store job information in a central location for longer periods of time and to make the act of looking up information from jobs executed on other minions easier.
Currently the external job cache is supported via the mongo and redis returners:
ext_job_cache: redis
redis.host: salt
Once the external job cache is turned on the new ret module can be used on the minions to retrieve return information from the job cache. This can be a great way for minions to respond and react to other minions.
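Once enabled, pulling cached returns on a minion might look like this (a sketch; the jid is hypothetical, and the exact signature of the ret functions should be checked against your version's documentation):
salt '*' ret.get_jid redis 20121130010101123456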
OpenStack integration with Salt has been moving forward at a blistering pace. The new nova, glance, and keystone modules represent the beginning of ongoing OpenStack integration.
The Salt team has had many conversations with core OpenStack developers and is working on linking to OpenStack in powerful new ways.
A new API has been added to the Salt Master which allows the master to be managed externally. This new system allows Salt API to easily hook into the Salt Master and manage configs, modify the state tree, manage the pillar, and more. The main motivation for the wheel system is to enable features needed in the upcoming web UI so users can manage the master just as easily as they manage minions.
The wheel system has also been hooked into the external auth system. This allows specific users to have granular access to manage components of the Salt Master.
Jack Kuan has added a substantial new feature. The render pipes system allows Salt to treat the render system like unix pipes. This new system enables sls files to be passed through specific render engines. While the default renderer is still recommended, different engines can now be more easily merged. So to pipe the output of Mako used in YAML use this shebang line:
#!mako|yaml
The Salt Key system was originally developed as only a CLI interface, but as time went on it was pressed into becoming a clumsy API. This release marks a complete overhaul of Salt Key. Salt Key has been rewritten to function purely from an API and to use the outputter system. The benefit here is that the outputter system works much more cleanly with Salt Key now, and the internals of Salt Key can be used much more cleanly.
The outputter system is now loaded in a modular way. This means that output systems can be more easily added by dropping a python file down on the master that contains the function output.
Gzip compression has been added as an option to the cp.get_file and cp.get_dir commands. This will make file transfers more efficient and faster, especially over slower network links.
In past releases of Salt, the minions needed to be configured for certain modules to function. This was difficult because it required pre-configuring the minions. 0.10.5 changes this by making all module configs on minions search the master config file for values.
Now if a single database server is needed, then it can be defined in the master config and all minions will become aware of the configuration value.
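For example, connection settings for the mysql module could be defined once in the master config and picked up by all minions (the values are hypothetical):
mysql.host: dbmaster.example.com
mysql.user: salt
mysql.pass: somepassword
mysql.db: salt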
The salt-call command has been updated in a few ways. Now, salt-call can take the --return option to send the data to a returner. Also, salt-call now reports executions in the minion proc system, which allows the master to be aware of the operation salt-call is running.
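For example, assuming a redis returner is configured, a local execution can ship its result to the returner:
salt-call test.ping --return redis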
The old configuration values pub_refresh and sub_timeout have been removed. These options were in place to alleviate problems found in earlier versions of ZeroMQ which have since been fixed. The continued use of these options has proven to cause problems with message passing and have been completely removed.
When running Salt directly from git (for testing or development, of course) it has been difficult to know exactly what code is being executed. The new versioning system will detect the git revision when building and how many commits have been made since the last release. A release from git will look like this:
0.10.4-736-gec74d69
Anthony Cornehl (twinshadow) contributed a module that adds Subversion support to Salt. This great addition helps round out Salt's VCS support.
Arch Linux recently changed to use systemd by default and discontinued support for init scripts. Salt has followed suit and defaults to systemd now for managing services in Arch.
With the releases of Salt 0.10.5 and Salt Cloud 0.8.2, OpenStack becomes the first (non-OS) piece of software to include support both on the user level (with Salt Cloud) and the admin level (with Salt). We are excited to continue to extend support of other platforms at this level.
release: 2012-12-14
Salt 0.11.0 is here, with some highly sought after and exciting features. These features include the new overstate system, the reactor system, a new state run scope component called __context__, the beginning of the search system (still needs a great deal of work), multiple package states, the MySQL returner and a better system to arbitrarily reference outputters.
It is also noteworthy that we are changing how we mark release numbers. For the life of the project we have been pushing every release with features and fixes as point releases. We will now be releasing point releases for only bug fixes on a more regular basis and major feature releases on a slightly less regular basis. This means that the next release will be a bugfix only release with a version number of 0.11.1. The next feature release will be named 0.12.0 and will mark the end of life for the 0.11 series.
The overstate system is a simple way to manage rolling state executions across many minions. The overstate allows for a state to depend on the successful completion of another state.
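A sketch of an overstate sls (the stage names and targets are hypothetical):
mysql:
  match: 'db*'
webservers:
  match: 'web*'
  require:
    - mysql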
The new reactor system allows for a reactive logic engine to be created which can respond to events within a salted environment. The reactor system uses sls files to match events fired on the master with actions, enabling Salt to react to problems in an infrastructure.
Your load-balanced group of webservers is under extra load? Spin up a new VM and add it to the group. Your fileserver is filling up? Send a notification to your sysadmin on call. The possibilities are endless!
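A minimal sketch of the reactor configuration in the master config (both the event tag and the sls path are hypothetical):
reactor:
  - 'minion_start':
    - /srv/reactor/start.sls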
A new component has been added to the module loader system. The module context is a data structure that can hold objects for a given scope within the module.
This allows for components that are initialized to be stored in a persistent context which can greatly speed up ongoing connections. Right now the best example can be found in the cp execution module.
A long desired feature has been added to package management. By definition Salt States have always installed packages one at a time. On most platforms this is not the fastest way to install packages. Erik Johnson, aka terminalmage, has modified the package modules for many providers and added new capabilities to install groups of packages. These package groups can be defined as a list of packages available in repository servers:
python_pkgs:
  pkg.installed:
    - pkgs:
      - python-mako
      - whoosh
      - python-git
or specify based on the location of specific packages:
python_pkgs:
  pkg.installed:
    - sources:
      - python-mako: http://some-rpms.org/python-mako.rpm
      - whoosh: salt://whoosh/whoosh.rpm
      - python-git: ftp://companyserver.net/python-git.rpm
The bones to the search system have been added. This is a very basic interface that allows for search backends to be added as search modules. The first supported search module is the whoosh search backend. Right now only the basic paths for the search system are in place, making this very experimental. Further development will involve improving the search routines and index routines for whoosh and other search backends.
The search system has been made to allow for searching through all of the state and pillar files, configuration files and all return data from minion executions.
All previous versions of Salt have shared many directories between the master
and minion. The default locations for keys, cached data, and sockets have been
shared by master and minion. This has created serious problems with running a
master and a minion on the same systems. 0.11.0 changes the defaults to be
separate directories. Salt will also attempt to migrate all of the old key data
into the correct new directories, but if it is not successful it may need to be
done manually. If your keys exhibit issues after updating, make sure that they have been moved from /etc/salt/pki to /etc/salt/pki/{master,minion}.
The old setup will look like this:
/etc/salt/pki
|-- master.pem
|-- master.pub
|-- minion.pem
|-- minion.pub
|-- minion_master.pub
|-- minions
|   `-- ragnarok.saltstack.net
|-- minions_pre
`-- minions_rejected
With the accepted minion keys in /etc/salt/pki/minions, the new setup places the accepted minion keys in /etc/salt/pki/master/minions.
/etc/salt/pki
|-- master
|   |-- master.pem
|   |-- master.pub
|   |-- minions
|   |   `-- ragnarok.saltstack.net
|   |-- minions_pre
|   `-- minions_rejected
`-- minion
    |-- minion.pem
    |-- minion.pub
    `-- minion_master.pub
release: 2012-12-19
release: 2013-01-15
Another feature release of Salt is here! Some exciting additions are included with more ways to make salt modular and even easier management of the salt file server.
The new modular fileserver backend allows for any external system to be used as a salt file server. The main benefit here is that it is now possible to tell the master to directly use a git remote location, or many git remote locations, automatically mapping git branches and tags to salt environments.
A new Salt Windows installer is now available! Much work has been put in to improve Windows support. With this much easier method of getting Salt on your Windows machines, we hope even more development and progress will occur. Please file bug reports on the Salt GitHub repo issue tracker so we can continue improving.
One thing that is missing on Windows that Salt uses extensively is a software package manager and a software package repository. The Salt pkg state allows sys admins to install software across their infrastructure and across operating systems. Software on Windows can now be managed in the same way. The SaltStack team built a package manager that interfaces with the standard Salt pkg module to allow for installing and removing software on Windows. In addition, a software package repository has been built on top of the Salt fileserver. A small YAML file provides the information necessary for the package manager to install and remove software.
An interesting feature of the new Salt Windows software package repository is that one or more remote git repositories can supplement the master's local repository. The repository can point to software on the master's fileserver or on an HTTP, HTTPS, or ftp server.
Salt displays data to the terminal via the outputter system. For a long time the default outputter for Salt has been the python pretty print library. While this has been a generally reasonable outputter, it did have many failings. The new default outputter is called "nested", it recursively scans return data structures and prints them out cleanly.
If the result of the new nested outputter is not desired any other outputter can be used via the --out option, or the output option can be set in the master and minion configs to change the default outputter.
The internal Salt scheduler is a new capability which allows for functions to be executed at given intervals on the minion, and for runners to be executed at given intervals on the master. The scheduler allows for sequences such as executing state runs (locally on the minion or remotely via an overstate) or continually gathering system data to be run at given intervals.
The configuration is simple: add the schedule option to the master or minion config and specify jobs to run. This, in the master config, will execute the state.over runner every 60 minutes:
schedule:
  overstate:
    function: state.over
    minutes: 60
This example for the minion configuration will execute a highstate every 30 minutes:
schedule:
  highstate:
    function: state.highstate
    minutes: 30
Jack Kuan, our renderer expert, has created something astonishing: Salt now comes with an optional Python-based DSL. This is a very powerful interface that makes writing SLS files in pure Python easier than it was with the raw py renderer. As usual, this can be used with the renderer shebang line, so a single sls can be written with the DSL if pure Python power is needed, while keeping other sls files simple with YAML.
A new execution function and state module have been added that allow for grains to be set on the minion. Now grains can be set via remote execution or via states. Use the grains.present state or the grains.setval execution function.
Major additions to Gentoo-specific components have been made. These encompass execution modules and states ranging from support for the make.conf file to tools like layman.
release: 2013-01-21
release: 2013-02-12
The lucky number 13 has turned the corner! From CLI notifications when quitting a salt command, to substantial improvements on Windows, Salt 0.13.0 has arrived!
The file.recurse system has been deployed and used in a vast array of situations. Fixes to the file state and module have led towards opening up new ways of running file.recurse to make it faster. Now the file.recurse state will download fewer files and will run substantially faster.
Minion stability on Windows has improved. Many file operations, including file.recurse, have been fixed and improved. The network module works better, to include network.interfaces. Both 32bit and 64bit installers are now available.
In the past, nodegroups were not available for targeting via the peer system. This has been fixed, allowing the new nodegroup expr_form argument for the publish.publish function:
salt-call publish.publish group1 test.ping expr_form=nodegroup
Additions allowing more granular blacklisting are available in 0.13.0. The ability to blacklist users and functions in client_acl have been added, as well as the ability to exclude state formulas from the command line.
Pillar data can now be embedded on the command line when calling state.sls and state.highstate. This allows for on-the-fly changes or settings to pillar and makes parameterizing state formulas even easier. This is done via the keyword argument:
salt '*' state.highstate pillar='{"cheese": "spam"}'
The above example will extend the existing pillar to hold the cheese key with a value of spam. If the cheese key is already specified in the minion's pillar then it will be overwritten.
In the past, hitting ctrl-C and quitting from the salt command would just drop to a shell prompt; this caused confusion with users who expected the remote executions to also quit. Now a message is displayed showing what command can be used to track the execution and what the job id is for the execution.
Versions can now be specified within multiple-package pkg.installed states. An example can be found below:
mypkgs:
  pkg.installed:
    - pkgs:
      - foo
      - bar: 1.2.3-4
      - baz
The configuration subsystem in Salt has been overhauled to make the opts dict used by Salt applications more portable. Unfortunately, this is an incompatible change with salt-cloud, and salt-cloud will need to be updated to the latest git to work with Salt 0.13.0. Salt Cloud 0.8.5 will also require Salt 0.13.0 or later to function.
The SaltStack team is sorry for the inconvenience here, we work hard to make sure these sorts of things do not happen, but sometimes hard changes get in.
release: 2013-02-15
release: 2013-03-13
release: 2013-03-18
release: 2013-03-23
Salt 0.14.0 is here! This release was held up primarily by PyCon, Scale, and illness, but has arrived! 0.14.0 comes with many new features and is breaking ground for Salt in the area of cloud management with the introduction of Salt providing basic cloud controller functionality.
The first primitive inroad to using Salt as a cloud controller is available in 0.14.0. Be advised that this is alpha, tested only in a few very small environments.
The cloud controller is built using kvm and libvirt for the hypervisors. Hypervisors are autodetected as minions and only need to have libvirt running and kvm installed to function. The features of the Salt cloud controller are as follows:
- Basic vm discovery and reporting
- Creation of new virtual machines
- Seeding virtual machines with Salt via qemu-nbd or libguestfs
- Live migration (shared and non shared storage)
- Delete existing VMs
It is noteworthy that this feature is still alpha, meaning that all rights are reserved to change the interface if need be in future releases!
One of the problems with libvirt is management of the certificates needed for live migration and cross-communication between hypervisors. The new libvirt state makes the Salt Master hold a CA and manage the signing and distribution of keys onto hypervisors; just add a call to the libvirt state in the sls formulas used to set up a hypervisor:
libvirt_keys:
  libvirt.keys
An easier way to manage data has been introduced. The pillar, grains, and config execution modules have been extended with the new get function. This function works much in the same way as the get method in a python dict, but with an enhancement: nested dict components can be extracted using a : delimiter.
If a structure like this is in pillar:
foo:
  bar:
    baz: quo
Extracting it from the raw pillar in an sls formula or file template is done this way:
{{ pillar['foo']['bar']['baz'] }}
Now with the new get function the data can be safely gathered and a default can be set allowing the template to fall back if the value is not available:
{{ salt['pillar.get']('foo:bar:baz', 'qux') }}
This makes handling nested structures much easier, and defaults can be cleanly set. This new function is being used extensively in the new formulae repository of salt sls formulas.
release: 2013-04-13
release: 2013-05-03
The many new features of Salt 0.15.0 have arrived! Salt 0.15.0 comes with many smaller features and a few larger ones.
These features range from better debugging tools to the new Salt Mine system.
First there was the peer system, allowing for commands to be executed from a minion to other minions to gather data live. Then there was the external job cache for storing and accessing long term data. Now the middle ground is being filled in with the Salt Mine. The Salt Mine is a system used to execute functions on a regular basis on minions and then store only the most recent data from the functions on the master, then the data is looked up via targets.
The mine caches data that is public to all minions, so when a minion posts data to the mine all other minions can see it.
0.13.0 saw the addition of initial IPv6 support, but errors were encountered and it needed to be stripped out. This time the code covers more cases and must be explicitly enabled, but the support is much more extensive than before.
Minions have long been able to copy files down from the master file server, but until now files could not be easily copied from the minion up to the master.
A new function called cp.push can push files from the minions up to the master server. The uploaded files are then cached on the master in the master cachedir for each minion.
Template errors have long been a burden when writing states and pillar. 0.15.0 now sends the compiled template data to the debug log, which makes tracking down errors in the intermediate template stages much easier. Running state.sls or state.highstate with -l debug will now print out the rendered templates in the debug information.
The state system is now more closely tied to the master's event bus. Now when a state fails the failure will be fired on the master event bus so that the reactor can respond to it.
The Syndic system has been basically re-written. Now it runs in a completely asynchronous way and functions primarily as an event broker. This means that the events fired on the syndic are now pushed up to the higher level master instead of the old method used which waited for the client libraries to return.
This makes the syndic much more accurate and powerful; it also means that all events fired on the syndic master make it up the pipe as well, making a reactor on the higher-level master able to react to minions further downstream.
The Peer System has been updated to run using the client libraries instead of firing directly over the publish bus. This makes the peer system much more consistent and reliable.
In the past when a minion was decommissioned the key needed to be manually deleted on the master, but now a function on the minion can be used to revoke the calling minion's key:
$ salt-call saltutil.revoke_auth
Functions can now be assigned numeric return codes to determine if the function executed successfully. While not all functions have been given return codes, many have and it is an ongoing effort to fill out all functions that might return a non-zero return code.
The overstate system was originally created to just manage the execution of states, but with the addition of return codes to functions, requisite logic can now be used with respect to the overstate. This means that an overstate stage can now run single functions instead of just state executions.
Previously if errors surfaced in pillar, then the pillar would consist of only an empty dict. Now all data that was successfully rendered stays in pillar and the render error is also made available. If errors are found in the pillar, states will refuse to run.
Sometimes states are executed purely to maintain a specific state rather than to update states with new configs. This is the motivation for the new cached state system. By adding cache=True to a state call, the state will not be generated fresh from the master; instead, the last state data to be generated will be used. If no previous state data is available then fresh data will be generated.
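For example, a sketch of a highstate run using the cached state data:
salt '*' state.highstate cache=True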
The new monitoring states system has been started. This is very young, but allows for states to be used to configure monitoring routines. So far only one monitoring state is available, the disk.status state. As more capabilities are added to Salt UI, the monitoring capabilities of Salt will continue to be expanded.
release: 2013-05-08
The 0.15.1 release has been posted; it includes fixes for a number of bugs in 0.15.0 and three security patches.
A number of security issues have been resolved via the 0.15.1 release.
Salt masters did not properly validate the id of a connecting minion. This can lead to an attacker uploading files to the master in arbitrary locations. In particular this can be used to bypass the manual validation of new unknown minions. Exploiting this vulnerability does not require authentication.
This issue affects all known versions of Salt.
This issue was reported by Ronald Volgers.
The issue is fixed in Salt 0.15.1. Updated packages are available in the usual locations.
Specific commits:
https://github.com/saltstack/salt/commit/5427b9438e452a5a8910d9128c6aafb45d8fd5d3
https://github.com/saltstack/salt/commit/7560908ee62351769c3cd43b03d74c1ca772cc52
https://github.com/saltstack/salt/commit/e200b8a7ff53780124e08d2bdefde7587e52bfca
RSA key generation was done incorrectly, leading to very insecure keys. It is recommended to regenerate all RSA keys.
This issue can be used to impersonate Salt masters or minions, or decrypt any transferred data.
This issue can only be exploited by attackers who are able to observe or modify traffic between Salt minions and the legitimate Salt master.
A tool was included in 0.15.1 to assist in mass key regeneration, the manage.regen_keys runner.
This issue affects all known versions of Salt.
This issue was reported by Ronald Volgers.
The issue is fixed in Salt 0.15.1. Updated packages are available in the usual locations.
Specific commits:
https://github.com/saltstack/salt/commit/5dd304276ba5745ec21fc1e6686a0b28da29e6fc
Arbitrary shell commands could be executed on the master by an authenticated minion through options passed when requesting a pillar.
Ext pillar options have been restricted to only allow safe external pillars to be called when prompted by the minion.
This issue affects Salt versions from 0.14.0 to 0.15.0.
This issue was reported by Ronald Volgers.
The issue is fixed in Salt 0.15.1. Updated packages are available in the usual locations.
Specific commits:
https://github.com/saltstack/salt/commit/43d8c16bd26159d827d1a945c83ac28159ec5865
release: 2013-05-29
release: 2013-06-01
release: 2013-07-01
The 0.16.0 release is an exciting one, with new features in master redundancy, and a new, powerful requisite.
This new capability allows for a minion to be actively connected to multiple salt masters at the same time. This allows for multiple masters to send out commands to minions and for minions to automatically reconnect to masters that have gone down. A tutorial is available to help get started.
The new prereq requisite is very powerful! It allows for states to execute based on a state that is expected to make changes in the future. This allows for a change on the system to be preempted by another execution. A good example is needing to shut down a service before modifying files associated with it, allowing, for instance, a webserver to be shut down allowing a load balancer to stop sending requests while server side code is updated. In this case, the prereq will only run if changes are expected to happen in the prerequired state, and the prerequired state will always run after the prereq state and only if the prereq state succeeds.
The peer system has been revamped to make it more reliable, faster, and like the rest of Salt, async. The peer calls when an updated minion and master are used together will be much faster!
The ability to include an sls relative to the defined sls has been added; the new syntax is documented here.
The state_output option in the past only supported full and terse. 0.16.0 adds the mixed and changes modes, further refining how states are presented to users' eyes.
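For example, in the master or minion config:
state_output: mixed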
Support for Salt on Windows continues to improve. Software management on Windows has become more seamless with Linux/UNIX/BSD software management. Installed software is now recognized by the short names defined in the repository SLS. This makes it possible to run salt '*' pkg.version firefox and get back results from Windows and non-Windows minions alike.
When templating files on Windows, Salt will now correctly use Windows appropriate line endings. This makes it much easier to edit and consume files on Windows.
When using the cmd state, the shell option now allows for specifying Windows Powershell as an alternate shell to execute cmd.run and cmd.script. This opens up Salt to all the power of Windows Powershell and its advanced Windows management capabilities.
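A sketch of a state using this option (the state id and command are hypothetical):
list-services:
  cmd.run:
    - name: Get-Service
    - shell: powershell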
Several fixes and optimizations were added for the Windows networking modules, especially when working with IPv6.
A system module was added that makes it easy to restart and shutdown Windows minions.
The Salt Minion will now look for its config file in c:\salt\conf by default. This means that it's no longer necessary to specify the -c option to specify the location of the config file when starting the Salt Minion on Windows in a terminal.
Both pkg.removed and pkg.purged now support the pkgs argument, which allows multiple packages to be targeted in a single state. This, as in pkg.installed, helps speed up these states by reducing the number of times that the package management tools (apt, yum, etc.) need to be run. See the sketch below.
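A minimal sketch of targeting several packages in one state (the state ID and package names are hypothetical):

old_editors:
  pkg.removed:
    - pkgs:
      - vim
      - nano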
The temporal parameters in cron.present states (minute, hour, etc.) can now be randomized by using random instead of a specific value. For example, by using the random keyword in the minute parameter of a cron state, the same cron job can be pushed to hundreds or thousands of hosts, and they would each use a randomly-generated minute. This can be helpful when the cron job accesses a network resource, and it is not desirable for all hosts to run the job concurrently.
/path/to/cron/script:
cron.present:
- user: root
- minute: random
- hour: 2
Since Salt assumes a value of * for unspecified temporal parameters, adding a parameter to the state and setting it to random will change that value from * to a randomized numeric value. However, if that field in the cron entry on the minion already contains a numeric value, then using the random keyword will not modify it.
When accepting new keys with salt-key -a minion-id or salt-key -A, there is now a prompt that will show the affected keys and ask for confirmation before proceeding. This prompt can be bypassed using the -y or --yes command line argument, as with other salt-key commands.
FreeBSD, NetBSD, and OpenBSD all now support setting passwords in user.present states, as sketched below.
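A minimal sketch of setting a password in a user.present state (the user name and hash are hypothetical placeholders; the password argument expects a pre-hashed string):

fred:
  user.present:
    - password: '$6$examplesalt$examplehash'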
Release date: 2013-07-29

Release date: 2013-08-01
Version 0.16.2 is a bugfix release for 0.16.0, and contains a number of fixes.
- Fix for the virtual grain on OpenVZ hardware nodes
- LSB grains are now prefixed with lsb_distrib_ instead of simply lsb_. The old naming is not preserved, so SLS may be affected.
- pillar.item and pillar.items added for parity with grains.item/grains.items. The old function pillar.data is preserved for backwards compatibility.
- Fixes for publish.publish and publish.full_data (issue 5959)
- Fix for publish.publish (issue 5928)
- Fix for salt-call (issue 5956)
- Added random_reauth_delay to stagger re-auth attempts when the minion is waiting for the master to approve its public key. This helps prevent SYN flooding in larger environments.
- Added the unique option for user.present states in FreeBSD
- Fix for when a group.present state attempts to use a gid in use by another group
- Fix to allow the user.present state to set the password hash to the system default (i.e. an unset password)
- Fix for group.present states with the same group (issue 6439)
- Fix for when /tmp is in file_roots (issue 6118)
- Fixes for pkg.latest states
- Fixes for state.sls/state.highstate
- Fix for service.running states when the service fails to start (issue 5894)
- Renamed network.hw_addr to match network.ip_addrs and network.ip_addrs6. All three functions also now work without the underscore in the name, as well.
- Fix for bridge.show when the interface is not present (issue 6326)
- Fixes for ssh_known_hosts.present states
- Fix for when ssh_auth.present states are run with test=True and rsa/dss is used for the enc param instead of ssh-rsa/ssh-dss (issue 5374)
- Fix for -f lines in pip freeze output
- Fix for the editable argument in pip.installed states (issue 6025)
- Deprecated the runas parameter in execution function calls, in favor of user
- Fix to allow mysql_user.present states to set a passwordless login (issue 5550)
- Fix for when mysql.processlist is run (issue 6297)
- Fix for postgres.user_list (issue 6352)
- Fix for alternatives.install states for which the target is a symlink (issue 6162)
- Fixes for cmd.script states
- Fix for cp.get_dir returning more directories than expected (issue 6048)
- Fix for when supervisord.running states are run with test=True (issue 6053)
- Fix for img.mount_image
- Fix for tomcat.deploy_war in Windows
- Fixes for selinux.boolean states (issue 5912)
- Fixes for extfs.mkfs and extfs.tune (issue 6462)
- Fixed a bug in the module.run state where the m_name and m_fun arguments were being ignored (issue 6464)

Release date: 2013-08-09
Version 0.16.3 is another bugfix release for 0.16.0. The changes include:
- Fix for mount.mounted (issue 6522, issue 6545)
- Fix for mysql.query
- Fix for using cp.push without having set file_recv in the master config file

Release date: 2013-09-07
Version 0.16.4 is another bugfix release for 0.16.0, likely to be the last before 0.17.0 is released. The changes include:
- Added the osfinger and osarch grains
- Fixed a bug in the hg.latest state that would erroneously delete directories (issue 6661)
- Fix for ps.top (issue 6679)
- Fix for the MySQL returner (issue 6695)
- Fixed the IP address grains (ipv4 and ipv6) to include all addresses (issue 6656)
- Fix for file.contains on values YAML parses as non-string (issue 6817)
- Fixes for file.get_gid, file.get_uid, and file.chown for broken symlinks (issue 6826)

Release date: 2013-09-26
The 0.17.0 release is a very exciting release of Salt, bringing some very powerful new features and advances. The advances range from the state system to the test suite, covering new transport capabilities, making states easier and more powerful, extending Salt Virt, and much more!
The 0.17.0 release will also be the last release of Salt to follow the old 0.XX.X numbering system, the next release of Salt will change the numbering to be date based following this format:
<Year>.<Month>.<Minor>
So if the release happens in November of 2013 the number will be 13.11.0, the first bugfix release will be 13.11.1 and so forth.
The new Halite web GUI is now available on PyPI. A great deal of work has been put into Halite to make it fully event driven and amazingly fast. The Halite UI can be started from within the Salt Master (after being installed from PyPI), or standalone, and does not require an external database to run. It is very lightweight!
This initial release of Halite is primarily the framework for the UI and the communication systems, making it easy to extend and build the UI up. It presently supports watching the event bus and firing commands over Salt.
At this time, Halite is not yet available as a distribution package, but installation documentation is available at: http://docs.saltstack.com/topics/tutorials/halite.html
Halite is, like the rest of Salt, Open Source!
Much more will be coming in the future of Halite!
The new salt-ssh command has been added to Salt. This system allows for remote execution and states to be run over ssh. The benefit here is that salt can run relying only on the ssh agent, rather than requiring a minion to be deployed.
The salt-ssh system runs states in a way compatible with the rest of Salt; states created and run with salt-ssh can be moved over to a standard salt deployment without modification.
Since this is the initial release of salt-ssh, there is plenty of room for improvement, but it is fully operational, not just a bootstrap tool.
Salt is designed to have the minions be aware of the master and the master does not need to be aware of the location of the minions. The new salt roster system was created and designed to facilitate listing the targets for salt-ssh.
The roster system, like most of Salt, is a plugin system, allowing for the list of systems to target to be derived from any pluggable backend. The rosters shipping with 0.17.0 are flat and scan. Flat is a file which is read in via the salt render system and the scan roster does simple network scanning to discover ssh servers.
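A minimal sketch of a flat roster file (the minion IDs and addresses are hypothetical):

web1: 192.168.42.1
web2: deploy@192.168.42.2

With the roster in place, targets can be reached without any minion installed:

salt-ssh '*' test.ping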
This is a major change in how states are evaluated in Salt. State Auto Order is a new feature that makes states get evaluated and executed in the order in which they are defined in the sls file. This feature makes it very easy to see the exact order in which things will be executed, making Salt fully imperative AND fully declarative.
The requisite system still takes precedence over the order in which states are defined, so no existing states should break with this change. This new feature can be turned off by setting state_auto_order: False in the master config, thus reverting to the old lexicographical order.
The state.sls runner has been created to allow for a more powerful system for orchestrating state runs and function calls across the salt minions. This new system uses the state system for organizing executions. This allows for states to be defined that are executed on the master to call states on minions via salt-run state.sls, as in the sketch below.
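A minimal sketch of such an orchestration sls (hypothetically saved as orch/webserver.sls on the master), assuming the salt.state function for calling states on minions:

deploy_webservers:
  salt.state:
    - tgt: 'web*'
    - sls: apache

It could then be run from the master with:

salt-run state.sls orch.webserver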
Salt Thin is an exciting new component of Salt: the ability to execute Salt routines without any transport mechanisms installed. It is a pure Python subset of Salt.
Salt Thin does not have any networking capability, but can be dropped into any system with Python installed, after which salt-call can be called directly. The Salt Thin system is used by the salt-ssh command, but can also be used to just drop salt somewhere for easy use.
Events have been updated to be much more flexible. The tags in events have all been namespaced allowing easier tracking of event names.
The popular git fileserver backend has been joined by the mercurial fileserver backend, allowing the state tree to be managed entirely via mercurial.
The external logging handler system allows for Salt to directly hook into any external logging system. Currently supported are sentry and logstash.
The testing systems in Salt have been greatly enhanced; tests for salt are now executed, via jenkins.saltstack.com, across many supported platforms. Jenkins calls out to salt-cloud to create virtual machines on Rackspace. The minion on each virtual machine then checks into the master running on Jenkins, where a state run sets up the minion to run tests and executes the test suite.
This now automates the sequence of running platform tests and allows for continuous destructive tests to be run.
The testing libraries for salt have been moved out of the main salt code base and into a standalone codebase. This has been done to ease the use of the testing systems being used in salt based projects other than Salt itself.
The external auth system now supports the fantastic Stormpath cloud based authentication system.
Extensive additions have been made to Salt for LXC support. This includes the backend libs for managing LXC containers. Integration into the salt-virt system is still in the works.
Salt is now able to manage users and groups on Minions running Mac OS X. However, at this time user passwords cannot be managed.
Pillar data can now be derived from Django managed databases.
Other fixes in this release:

- Fix for file.append (issue 6905)
- Fixes for file.search and file.replace
- Fix for cp.push file corruption (issue 6495)

Release date: 2013-10-17
Note
THIS RELEASE IS NOT COMPATIBLE WITH PREVIOUS VERSIONS. If you update your master to 0.17.1, you must update your minions as well. Sorry for the inconvenience -- this is a result of one of the security fixes listed below.
The 0.17.1 release comes with a number of improvements to salt-ssh, many bugfixes, and a number of security updates.
Salt SSH has been improved to be faster, more featureful and more secure. Since the original release of Salt SSH was primarily a proof of concept, it has been very exciting to see its rapid adoption. We appreciate the willingness of security experts to review Salt SSH and help discover oversights and ensure that security issues only exist for such a tiny window of time.
Improvements to Salt SSH's communication have been added that improve routine execution regardless of the target system's login shell.
Deployment of routines is now faster and takes fewer commands to execute.
Be advised that these security issues all apply to a small subset of Salt users and mostly apply to Salt SSH.
This issue allowed for a user with limited privileges to embed executions inside of routines to execute routines that should be restricted. This applies to users using external auth or client ACL and opening up specific routines.
Be advised that these patches address the direct issue. Additional commits have been applied to help mitigate this issue from resurfacing.
CVE: CVE-2013-4435
Affected versions: 0.15.0 - 0.17.0
Patches:
https://github.com/saltstack/salt/commit/6d8ef68b605fd63c36bb8ed96122a75ad2e80269
https://github.com/saltstack/salt/commit/ebdef37b7e5d2b95a01d34b211c61c61da67e46a
https://github.com/saltstack/salt/commit/7f190ff890e47cdd591d9d7cefa5126574660824
https://github.com/saltstack/salt/commit/8e5afe59cef6743fe5dbd510dcf463dbdfca1ced
https://github.com/saltstack/salt/commit/aca78f314481082862e96d4f0c1b75fa382bb885
https://github.com/saltstack/salt/commit/6a9752cdb1e8df2c9505ea910434c79d132eb1e2
https://github.com/saltstack/salt/commit/b73677435ba54ecfc93c1c2d840a7f9ba6f53410
https://github.com/saltstack/salt/commit/07972eb0a6f985749a55d8d4a2e471596591c80d
https://github.com/saltstack/salt/commit/1e3f197726aa13ac5c3f2416000089f477f489b5
Reported by: Feth Arezki, of Majerti
SSH host keys were being accepted by default and not enforced on future SSH connections. These patches set SSH host key checking by default and can be overridden by passing the -i flag to salt-ssh.
CVE: CVE-2013-4436
Affected versions: 0.17.0
Reported by: Michael Scherer, Red Hat
The initial release of salt-ssh used the /tmp directory in an insecure way. These patches not only secure usage of files under /tmp in salt-ssh, but also add checksum validation for all packages sent into the now secure locations on target systems.
CVE: CVE-2013-4438
Affected versions: 0.17.0
Patches:
https://github.com/saltstack/salt/commit/aa4bb77ef230758cad84381dde0ec660d2dc340a
https://github.com/saltstack/salt/commit/8f92b6b2cb2e4ec3af8783eb6bf4ff06f5a352cf
https://github.com/saltstack/salt/commit/c58e56811d5a50c908df0597a0ba0b643b45ebfd
https://github.com/saltstack/salt/commit/0359db9b46e47614cff35a66ea6a6a76846885d2
https://github.com/saltstack/salt/commit/4348392860e0fd43701c331ac3e681cf1a8c17b0
https://github.com/saltstack/salt/commit/664d1a1cac05602fad2693f6f97092d98a72bf61
https://github.com/saltstack/salt/commit/bab92775a576e28ff9db262f32db9cf2375bba87
https://github.com/saltstack/salt/commit/c6d34f1acf64900a3c87a2d37618ff414e5a704e
Reported by: Michael Scherer, Red Hat
It has been argued that this is not a valid security issue, as the YAML loading that was happening was only being called after an initial gateway filter in Salt has already safely loaded the YAML and would fail if non-safe routines were embedded. Nonetheless, the CVE was filed and patches applied.
CVE: CVE-2013-4438
Patches:
https://github.com/saltstack/salt/commit/339b0a51befae6b6b218ebcb55daa9cd3329a1c5
Reported by: Michael Scherer, Red Hat
If a salt master was started as a non-root user by the root user, root's groups would still be applied to the running process. This fix changes the process to have only the groups of the running user.
CVE: Not considered necessary by submitter.
Affected versions: 0.11.0 - 0.17.0
Reported by: Michael Scherer, Red Hat
Version 0.17.1 is the first bugfix release for 0.17.0. The changes include:
- Added the --priv option for specifying a salt-ssh private key
- Fix to try socket.getfqdn() first (issue 7558)
- Added the --include-all flag to salt-key (issue 7399)
- Related fix for socket.getfqdn() (issue 7558)
- Fix for the file.directory state

Release date: 2013-11-14
Version 0.17.2 is another bugfix release for 0.17.0. The changes include:
- Fix for ps on Debian to prevent truncating (issue 5646)
- Fix for - names in states (issue 7649)
- Fixed --out=quiet to actually be quiet (issue 8000)
- Fix for the test kwarg in states (issue 7788)
- Fix for salt.client.Caller() (issue 8078)
- Fixed a pkg.latest regression (issue 8067)
- Fix for __opts__ dictionary persistence (issue 7714)
- Fix for the git.latest state when a commit SHA is used (issue 8163)
- Fix for the --output-file CLI arg (issue 8205)
- Fix for pkgrepo states when test=True (issue 8247)
- Fix for test=True handling (issue 8279)
- Added dir_mode to file.managed (issue 7860)

Release date: 2013-12-08
Note
0.17.3 had some regressions which were promptly fixed in the 0.17.4 release. Please use 0.17.4 instead.
Version 0.17.3 is another bugfix release for 0.17.0. The changes include:
- Fixed the file.replace state changing file ownership (issue 8399)
- Fixed use of name even with a requirements file (issue 8519)

Release date: 2013-12-10
Version 0.17.4 is another bugfix release for 0.17.0.

Release date: 2014-01-27
Version 0.17.5 is another bugfix release for 0.17.0. The changes include:
- Fix for user.present states with a non-string fullname (issue 9085)
- Fixed the virt.init return value on failure (issue 6870)
- Fix for the file.blockreplace state when test=True
- Fix for network.interfaces when used in cron (issue 7990)
- Fix for git.latest (issue 9107)
- Added the cmd.watch alias (points to cmd.wait) (issue 8612)
- Fixed _in requisites to match both on ID and name (issue 9061)
- Fixed ZMQError: Operation cannot be accomplished in current state errors (issue 6306)

The Salt remote execution manager has reached initial functionality! Salt is a management application which can be used to execute commands on remote sets of servers.
The whole idea behind Salt is to create a system where a group of servers can be remotely controlled from a single master; not only can commands be executed on remote systems, but salt can also be used to gather information about your server environment.
Unlike similar systems, such as Func and MCollective, Salt is extremely simple to set up and use: the entire application is contained in a single package, and the master and minion daemons require no running dependencies in the way that Func requires Certmaster and MCollective requires ActiveMQ.
Salt also manages authentication and encryption. Rather than using SSL for encryption, salt manages encryption on a payload level, so the data sent across the network is encrypted with fast AES encryption, and authentication uses RSA keys. This means that Salt is fast, secure, and very efficient.
Messaging in Salt is executed with ZeroMQ, so the message passing interface is built into salt and does not require an external ZeroMQ server. This also adds speed to Salt since there is no additional bloat on the networking layer, and ZeroMQ has already proven itself as a very fast networking system.
The remote execution in Salt is "Lazy Execution", in that once the command is sent the requesting network connection is closed. This makes it easier to detach the execution from the calling process on the master; it also means that replies are cached, so that information gathered from historic commands can be queried in the future.
Salt also allows users to make execution modules in Python. Writers of these modules should also be pleased to know that they have access to the impressive information gathered from PuppetLabs' Facter application, making Salt modules more flexible. In the future I hope to also allow Salt to group servers based on Facter information as well.
All in all Salt is fast, efficient, and clean, can be used from a simple command line client or through an API, uses message queue technology to make network execution extremely fast, and encryption is handled in a very fast and efficient manner. Salt is also VERY easy to use and VERY easy to extend.
You can find the source code for Salt on my GitHub page, and I have also set up a few wiki pages explaining how to use and set up Salt. If you are using Arch Linux there is a package available in the Arch Linux AUR.
Salt 0.6.0 Source: https://cloud.github.com/downloads/saltstack/salt/salt-0.6.0.tar.gz
GitHub page: https://github.com/saltstack/salt
Wiki: https://github.com/saltstack/salt/wiki
Arch Linux Package: https://aur.archlinux.org/packages/salt-git/
I am very open to contributions, for instance I need packages for more Linux distributions as well as BSD packages and testers.
Give Salt a try! This is the initial release and is not a 1.0-quality release, but it has been working well for me! I am eager to get your feedback!
I am pleased to announce the release of Salt 0.7.0!
This release marks the first stable release of salt; 0.7.0 should be suitable for general use.
0.7.0 brings a number of new features to Salt and fixes several major bugs.
Coming up next is a higher level management framework for salt called Butter. I want salt to stay as a simple and effective communication framework, and allow for more complicated executions to be managed via Butter.
Right now Butter is being developed to act as a cloud controller using salt as the communication layer, but features like system monitoring and advanced configuration control (a puppet manager) are also in the pipe.
Special thanks to Joseph Hall for the status and network modules, and thanks to Matthias Teege for tracking down some configuration bugs!
Salt can be downloaded from the following locations:
Source Tarball:
https://cloud.github.com/downloads/saltstack/salt/salt-0.7.0.tar.gz
Arch Linux Package:
https://aur.archlinux.org/packages/salt-git/
Please enjoy the latest Salt release!
Salt 0.8.0 is ready for general consumption! The source tarball is available on GitHub for download:
https://cloud.github.com/downloads/saltstack/salt/salt-0.8.0.tar.gz
A lot of work has gone into salt since the last release just 2 weeks ago, and salt has improved a great deal. A swath of new features are here along with performance and threading improvements!
The main new features of salt 0.8.0 are:
Salt-cp
Cython minion modules
Dynamic returners
Faster return handling
Lowered required Python version to 2.6
Advanced minion threading
Configurable minion modules
The salt-cp command introduces the ability to copy simple files via salt to targeted servers. Using salt-cp is very simple, just call salt-cp with a target specification, the source file(s) and where to copy the files on the minions. For instance:
# salt-cp '*' /etc/hosts /etc/hosts
Will copy the local /etc/hosts file to all of the minions.
Salt-cp is very young; in the future more advanced features will be added, and the functionality will much more closely resemble the cp command.
Cython is an amazing tool used to compile Python modules down to C. This is arguably the fastest way to run Python code, and since pyzmq requires Cython, adding support to salt for Cython adds no new dependencies.
Cython minion modules allow minion modules to be written in cython and therefore executed in compiled c. Simply write the salt module in cython and use the file extension “.pyx” and the minion module will be compiled when the minion is started. An example cython module is included in the main distribution called cytest.pyx:
https://github.com/saltstack/salt/blob/develop/salt/modules/cytest.pyx
By default salt returns command data back to the salt master, but now salt can return command data to any system. This is enabled via the new returner modules feature for salt. The returner modules take the return data and send it to a specific destination. Returner modules work like minion modules, so any returner can be added to the minions.
This means that a custom data returner can be added to send the return data to anything from MySQL, Redis, MongoDB, and more!
There are 2 simple stock returners in the returners directory:
https://github.com/saltstack/salt/blob/develop/salt/returners
The documentation on writing returners will be added to the wiki shortly, and returners can be written in pure Python, or in cython.
Minion modules may need to be configured; now the options set in the minion configuration file can be accessed inside of the minion modules via the __opts__ dict.
Information on how to use this simple addition has been added to the wiki: Writing modules
The test module has an example of using the __opts__ dict, and how to set default options:
https://github.com/saltstack/salt/blob/develop/salt/modules/test.py
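A minimal sketch of a minion module function reading __opts__ (the option name and default are hypothetical; __opts__ is injected by the module loader at runtime):

def config_demo():
    '''
    Return an option from the minion config, with a fallback default
    '''
    return __opts__.get('demo_option', 'default-value')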
In 0.7.0 the minion would block after receiving a command from the master; now the minion will spawn a thread or multiprocess. By default Python threads are used, because for general use they have proved to be faster, but the minion can now be configured to use the Python multiprocessing module instead. Using multiprocessing will cause executions that are CPU bound, or that would otherwise exploit the negative aspects of the Python GIL, to run faster and more reliably, but simple calls will still be faster with Python threading. The configuration option can be found in the minion configuration file, as sketched below.
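A sketch of the relevant minion configuration, assuming the boolean multiprocessing option (the exact option name should be confirmed against the shipped config template):

multiprocessing: True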
The requirement for Python 2.7 has been removed to support Python 2.6. I have received requests to take the minimum Python version back to 2.4, but unfortunately this will not be possible, since the ZeroMQ Python bindings do not support Python 2.4.
Salt 0.8.0 is a very major update. It also changes the network protocol slightly, which makes communication with older salt daemons impossible; your master and minions need to be upgraded together!
I could use some help bringing salt to the people! Right now I only have packages for Arch Linux, Fedora 14 and Gentoo. We need packages for Debian and people willing to help test on more platforms. We also need help writing more minion modules and returner modules. If you want to contribute to salt please hop on the mailing list and send in patches, make a fork on GitHub and send in pull requests! If you want to help but are not sure where you can, please email me directly or post to the mailing list!
I hope you enjoy salt, while it is not yet 1.0 salt is completely viable and usable!
-Thomas S. Hatch
It has been a month since salt 0.8.0, and it has been a long month! But Salt is still coming along strong. 0.8.7 has a lot of changes and a lot of updates. This update makes Salt’s ZeroMQ back end better, strips Facter from the dependencies, and introduces interfaces to handle more capabilities.
Many of the major updates are in the background, but the changes should shine through to the surface. A number of the new features are still a little thin, but the back end to support expansion is in place.
I also recently gave a presentation to the Utah Python users group in Salt Lake City, the slides from this presentation are available here: https://cloud.github.com/downloads/saltstack/salt/Salt.pdf
The video from this presentation will be available shortly.
The major new features and changes in Salt 0.8.7 are:
The new ZeroMQ topology allows for better scalability, which will be required by the need to execute massive file transfers to multiple machines in parallel and by state management. The new ZeroMQ topology is described in the aforementioned presentation.
0.8.7 introduces the capability to declare states, similar to the capabilities of Puppet. States in salt are declared via state data structures. This system is very young, but the core feature set is available. Salt states work by rendering files which represent Salt high data. More on the Salt state system will be documented in the near future.
The system for loading salt modules has been pulled out of the minion class to be a standalone module; this has enabled more dynamic loading of Salt modules and enables many of the updates in 0.8.7:
https://github.com/saltstack/salt/blob/develop/salt/loader.py
Salt job ids are now microsecond-precise; this was needed to repair a race condition unveiled by the speed improvements in the new ZeroMQ topology.
The new grains interface replaces the functionality of Facter. The idea behind grains differs from Facter in that grains are only used for static system data; dynamic data needs to be derived from a call to a salt module. This makes grains much faster to use, since the grains data is generated when the minion starts.
Virtual salt modules allow for a salt module to be presented as something other than its module name. The idea here is that, based on information from the minion, decisions can be made about which module should be presented. The best example is the pacman module. The pacman module will only load on Arch Linux minions, and will be called pkg. Similarly the yum module will be presented as pkg when the minion starts on a Fedora/RedHat system. A sketch of the mechanism follows.
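A minimal sketch of how such a module presents itself, using the __virtual__ convention (the grain value check is illustrative; __grains__ is injected by the module loader):

def __virtual__():
    '''
    Only load this module as pkg on Arch Linux minions
    '''
    return 'pkg' if __grains__['os'] == 'Arch' else False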
The new salt-call command allows for minion modules to be executed from the minion itself; this is a great tool for testing Salt modules. The salt-call command can also be used to view the grains data.
In previous releases, when a minion module threw an exception very little data was returned to the master. Now the stack trace from the failure is returned, making debugging of minion modules MUCH easier.
Salt is nearing the goal of 1.0, where the core feature set and capability is complete!
Salt 0.8.7 can be downloaded from GitHub here: https://cloud.github.com/downloads/saltstack/salt/salt-0.8.7.tar.gz
-Thomas S Hatch
Salt 0.8.8 is here! This release adds a great deal of code and some serious new features. The latest release can be downloaded here: https://cloud.github.com/downloads/saltstack/salt/salt-0.8.8.tar.gz
Improved Documentation has been set up for salt using sphinx thanks to the efforts of Seth House. This new documentation system will act as the back end to the salt website which is still under heavy development. The new sphinx documentation system has also been used to greatly clean up the salt manpages. The salt 7 manpage in particular now contains extensive information which was previously only in the wiki. The new documentation can be found at: http://docs.saltstack.com/ We still have a lot to add, and when the domain is set up I will post another announcement.
More additions have been made to the ZeroMQ setup, particularly in the realm of file transfers. Salt 0.8.8 introduces a built in, stateless, encrypted file server which allows salt minions to download files from the salt master using the same encryption system used for all other salt communications. The main motivation for the salt file server has been to facilitate the new salt state system.
Much of the salt code has been cleaned up and a new cleaner logging system has been introduced thanks to the efforts of Pedro Algarvio. These additions will allow for much more flexible logging to be executed by salt, and fixed a great deal of my poor spelling in the salt docstrings! Pedro Algarvio has also cleaned up the API, making it easier to embed salt into another application.
The biggest addition to salt found in 0.8.8 is the new state system. The salt module system has received a new front end which allows salt to be used as a configuration management system, with system configuration defined in data structures. The configuration management system, or as it is called in salt, the "salt state system", supports many of the features found in other configuration managers, but allows for system states to be written in a far simpler format, executes at blazing speeds, and operates via the salt minion matching system. The state system also operates within the normal scope of salt, and requires no additional configuration to use.
The salt state system can enforce the following states, with many more to come:

- Packages
- Files
- Services
- Executing commands
- Hosts
The system used to define the salt states is based on a data structure, and the data structure used to define the salt states has been made as easy to use as possible. The data structure is defined by default using a YAML file rendered via a Jinja template. This means that the state definition language supports all of the data structures that YAML supports, and all of the programming constructs and logic that Jinja supports. If the user does not like YAML or Jinja the states can be defined in yaml-mako, json-jinja, or json-mako. The system used to render the states is completely dynamic, and any rendering system can be added to the capabilities of Salt; this means that a rendering system that renders XML data in a cheetah template, or whatever you can imagine, can be easily added to the capabilities of salt.
The salt state system also supports isolated environments, as well as matching code from several environments to a single salt minion.
The feature base for Salt has grown quite a bit since my last serious documentation push. As we approach 0.9.0 the goals are becoming very clear, and the documentation needs a lot of work. The main goals for 0.9.0 are to further refine the state system, fix any bugs we find, get Salt running on as many platforms as we can, and get the documentation filled out. There is a lot more to come as Salt moves forward to encapsulate a much larger scope, while maintaining supreme usability and simplicity.
If you would like a more complete overview of Salt please watch the Salt presentation: Slides: https://cloud.github.com/downloads/saltstack/salt/Salt.pdf
-Thomas S Hatch
Salt 0.8.9 has finally arrived! Unfortunately this is much later than I had hoped to release 0.8.9; life has been very crazy over the last month. But despite challenges, Salt has moved forward!
This release, as expected, adds a few new features and many refinements. One of the most exciting aspects of this release is that the development community for salt has grown a great deal and much of the code is from contributors.
Also, I have filled out the documentation a great deal. So information on States is properly documented, and much of the documentation that was out of date has been filled in.
The Salt source can be downloaded from the salt GitHub site:
https://cloud.github.com/downloads/saltstack/salt/salt-0.8.9.tar.gz
Or from PyPI:
https://pypi.python.org/packages/source/s/salt/salt-0.8.9.tar.gz
Here is the md5sum:
7d5aca4633bc22f59045f59e82f43b56
For instructions on how to set up Salt please see the Installation instructions.
A big feature is the addition of Salt run: the salt-run command allows for master-side execution modules to be made that gather specific information or execute custom routines from the master.
Documentation for salt-run can be found here
One problem often complained about in salt was the fact that the output was so messy. Thanks to help from Jeff Schroeder, a cleaner interface for the command output for the Salt CLI has been made. This new interface makes adding new printout formats easy, and additions to the capabilities of minion modules make it possible to set the printout mode or outputter for functions in minion modules.
Salt modules can now call each other: the __salt__ dict has been added to the predefined references in minion modules. This new feature is documented in the modules documentation, and sketched below.
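A minimal sketch of cross-calling via __salt__ (the function name here is hypothetical; __salt__ is injected by the module loader at runtime):

def uptime():
    '''
    Return system uptime by cross-calling the cmd module
    '''
    return __salt__['cmd.run']('uptime')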
Now in Salt states you can set the watch option; this allows watch-enabled states to react to changes in the other defined states. This is similar to subscribe and notify statements in Puppet. A sketch follows.
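A minimal sketch of a watch option in the state syntax of this era (the service and file names are hypothetical):

apache:
  service:
    - running
    - watch:
      - file: /etc/httpd/conf/httpd.conf

/etc/httpd/conf/httpd.conf:
  file:
    - managed
    - source: salt://apache/httpd.conf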
Travis Cline has added the ability to define the option root_dir, which allows the salt minion to operate in a subdir. This is a strong move in supporting the minion running as an unprivileged user.
Thanks again to Travis Cline, the master and minion configuration file locations can be defined in environment variables now.
Quite a few new modules, states, returners, and runners have been made.
Support for apt-get has been added, this adds greatly improved Debian and Ubuntu support to Salt!
Support for manipulating users and groups on Unix-like systems.
Initial support for reporting on aspects of the distributed file system, MooseFS. For more information on MooseFS please see: http://www.moosefs.org
Thanks to Joseph Hall for his work on MooseFS support.
Manage mounts and the fstab.
Execute puppet on remote systems.
Manipulate and manage the user password file.
Interact with ssh keys.
Release date: 2011-08-27
Salt 0.9.0 is here. This is an exciting release, 0.9.0 includes the new network topology features allowing peer salt commands and masters of masters via the syndic interface.
0.9.0 also introduces many more modules, improvements to the API and improvements to the ZeroMQ systems.
The Salt source can be downloaded from the salt GitHub site:
https://cloud.github.com/downloads/saltstack/salt/salt-0.9.0.tar.gz
Or from PyPI:
https://pypi.python.org/packages/source/s/salt/salt-0.9.0.tar.gz
Here is the md5sum:
9a925da04981e65a0f237f2e77ddab37
For instructions on how to set up Salt please see the Installation instructions.
The new Syndic interface allows a master to be commanded via another, higher-level salt master. This is a powerful solution, allowing a master control structure to exist and allowing salt to scale to much larger levels than before.
0.9.0 introduces the capability for a minion to call a publication on the master and receive the return from another set of minions. This allows salt to act as a communication channel between minions and as a general infrastructure message bus.
Peer communication is turned off by default but can be enabled via the peer option in the master configuration file, as sketched below. Documentation on the new Peer interface is available.
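A minimal sketch of enabling peer communication in the master configuration file (the minion pattern and the allowed function are illustrative):

peer:
  .*:
    - test.ping

This would allow all minions (matched by the .* regular expression) to publish test.ping to each other.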
The minion and master classes have been redesigned to allow for specialized minion and master servers to be easily created. An example of how this is done for the master can be found in the master.py salt module:
https://github.com/saltstack/salt/blob/develop/salt/master.py
The Master class extends the SMaster class and sets up the main master server.
The minion functions can now also be easily added to another application via the SMinion class; this class can be found in the minion.py module:
https://github.com/saltstack/salt/blob/develop/salt/minion.py
This release changes some of the key naming to allow for multiple master keys to be held based on the type of minion gathering the master key.
The -d option has also been added to the salt-key command allowing for easy removal of accepted public keys.
The --gen-keys option is now available as well for salt-key, this allows for a salt specific RSA key pair to be easily generated from the command line.
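A minimal sketch of the two new salt-key invocations (the key names are hypothetical):

# salt-key -d web1
# salt-key --gen-keys=newpair

The first deletes the accepted public key for the minion web1; the second generates an RSA key pair named newpair.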
The 0MQ worker system has been further refined to be faster and more robust. This new system has been able to handle a much larger load than the previous setup. The new system uses the IPC protocol in 0MQ instead of TCP.
Quite a few new modules have been made.
Work directly with apache servers, great for managing balanced web servers
Read out the contents of a system's crontabs
Module to manage raid devices in Linux; it appears as the raid module
Gather simple data from MySQL databases
Extensive utilities for managing processes
Used by the peer interface to allow minions to make publications
Release date: 2011-08-29

Release date: 2011-09-17
Salt 0.9.2 has arrived! 0.9.2 is primarily a bugfix release; the exciting component in 0.9.2 is greatly improved support for salt states. All of the salt states interfaces have been more thoroughly tested, and the new salt-states git repo is growing with examples of how to use states.
This release introduces salt states for early developers and testers to start helping us clean up the states interface and make it ready for the world!
0.9.2 also fixes a number of bugs found on Python 2.6.
The Salt source can be downloaded from the salt GitHub site:
https://cloud.github.com/downloads/saltstack/salt/salt-0.9.2.tar.gz
Or from PyPI:
https://pypi.python.org/packages/source/s/salt/salt-0.9.2.tar.gz
For instructions on how to set up Salt please see the Installation instructions.
The salt-call command has received an overhaul. It now hooks into the outputter system so command output looks clean, and the logging system has been hooked into salt-call, so the -l option allows the logging output from salt minion functions to be displayed.
The end result is that the salt-call command can execute the state system and return clean output:
# salt-call state.highstate
The state system has been tested and better refined. As of this release the state system is ready for early testers to start playing with. If you are interested in working with the state system please check out the (still very small) salt-states GitHub repo:
https://github.com/saltstack/salt-states
This git repo is the active development branch for determining how a clean salt-state database should look and act. Since the salt state system is still very young a lot of help is still needed here. Please fork the salt-states repo and help us develop a truly large and scalable system for configuration management!
Python 2.6 does not support format strings without an index identifier; all of them have been repaired.
Cython loading requires a development tool chain to be installed on the minion, and requiring this by default can cause problems for most Salt deployments. If Cython auto loading is desired it will need to be turned on in the minion config.
release: | 2011-11-05 |
---|
Salt 0.9.3 has finally arrived. This is another big step forward for Salt; new features range from proper FreeBSD support to fixing issues seen when attaching a minion to a master over the Internet.
The biggest improvements in 0.9.3, though, can be found in the state system; it has progressed from something ready for early testers to a system ready to compete with platforms such as Puppet and Chef. The backbone of the state system has been greatly refined and many new features are available.
The Salt source can be downloaded from the salt GitHub site:
https://cloud.github.com/downloads/saltstack/salt/salt-0.9.3.tar.gz
Or from PyPI:
https://pypi.python.org/packages/source/s/salt/salt-0.9.3.tar.gz
For instructions on how to set up Salt please see the Installation instructions.
Recently more people have been testing Salt minions connecting to Salt masters over the Internet. It was found that minions would commonly lose their connection to the master when working over the internet. The minions can now detect if the connection has been lost and reconnect to the master, making WAN connections much more reliable.
Substantial testing has gone into the state system and it is ready for real world usage. A great deal has been added to the documentation for states and the modules and functions available to states have been cleanly documented.
A number of State System bugs have also been found and repaired, and the output from the state system has been refined to be extremely clear and concise. Error reporting has also been introduced; issues found in sls files will now be clearly reported when executing Salt States.
The Salt States have also gained the extend declaration. This declaration allows for states to be cleanly modified in a post environment. Simply said, if there is an apache.sls file that declares the apache service, then another sls can include apache and then extend it:
include:
- apache
extend:
apache:
service:
- require:
- pkg: mod_python
mod_python:
pkg:
- installed
The notable behavior with the extend functionality is that it literally extends or overwrites a declaration set up in another sls module. This means that Salt will behave as though the modifications were made directly to the apache sls. This ensures that the apache service in this example is directly tied to all requirements.
This release comes with a clear specification of the Highstate data structure that is used to declare Salt States. This specification explains everything that can be declared in the Salt SLS modules.
The specification is extremely simple, and illustrates how Salt has been able to fulfill the requirements of a central configuration manager within a simple and easy to understand format and specification.
It came to our attention that having many renderers means that there may be a situation where more than one State Renderer should be available within a single State Tree.
The method chosen to accomplish this was something already familiar to developers and systems administrators: a shebang. The Python State Renderer demonstrates this new capability.
Until now Salt States could only be declared in yaml or json using Jinja or Mako. A new, very powerful, renderer has been added, making it possible to write Salt States in pure Python:
#!py
def run():
    '''
    Install the python-mako package
    '''
    return {'include': ['python'],
            'python-mako': {'pkg': ['installed']}}
This renderer is used by making a run function that returns the Highstate data structure. Any capabilities of Python can be used in pure Python sls modules.
This example of a pure Python sls module is the same as this example in yaml:
include:
- python
python-mako:
pkg:
- installed
Additional support has been added for FreeBSD, this is Salt's first branch out of the Linux world and proves the viability of Salt on non-Linux platforms.
Salt remote execution already worked on FreeBSD, and should work without issue on any Unix-like platform. But this support comes in the form of package management and user support, so Salt States also work on FreeBSD now.
The new freebsdpkg module provides package management support for FreeBSD and the new pw_user and pw_group provide user and group management.
Support for managing the system crontab has been added, declaring a cron state can be done easily:
date > /tmp/datestamp:
cron:
- present
- user: fred
- minute: 5
- hour: 3
The file state has been given a number of new features, primarily the directory, recurse, symlink, and absent functions.
Make sure that a directory exists and has the right permissions.
/srv/foo:
file:
- directory
- user: root
- group: root
- mode: 1755
Make a symlink.
/var/lib/www:
file:
- symlink
- target: /srv/www
- force: True
The recurse state function will recursively download a directory on the master file server and place it on the minion. Any change in the files on the master will be pushed to the minion. The recurse function is very powerful and has been tested by pushing out the full Linux kernel source.
/opt/code:
file:
- recurse
- source: salt://linux
Make sure that the file is not on the system; absent recursively deletes directories, files, and symlinks.
/etc/httpd/conf.d/somebogusfile.conf:
file:
- absent
The new sysctl module and state allow for sysctl components in the kernel to be managed easily.
The sysctl state allows for sysctl parameters to be assigned:
vm.swappiness:
sysctl:
- present
- value: 20
A module for managing Linux kernel modules has been added.
The kmod state can enforce modules be either present or absent:
kvm_intel:
kmod:
- present
The ssh_auth state can distribute ssh authorized keys out to minions. SSH authorized keys can be present or absent.
AAAAB3NzaC1kc3MAAACBAL0sQ9fJ5bYTEyYvlRBsJdDOo49CNfhlWHWXQRqul6rwL4KIuPrhY7hBw0tV7UNC7J9IZRNO4iGod9C+OYutuWGJ2x5YNf7P4uGhH9AhBQGQ4LKOLxhDyT1OrDKXVFw3wgY3rHiJYAbd1PXNuclJHOKL27QZCRFjWSEaSrUOoczvAAAAFQD9d4jp2dCJSIseSkk4Lez3LqFcqQAAAIAmovHIVSrbLbXAXQE8eyPoL9x5C+x2GRpEcA7AeMH6bGx/xw6NtnQZVMcmZIre5Elrw3OKgxcDNomjYFNHuOYaQLBBMosyO++tJe1KTAr3A2zGj2xbWO9JhEzu8xvSdF8jRu0N5SRXPpzSyU4o1WGIPLVZSeSq1VFTHRT4lXB7PQAAAIBXUz6ZO0bregF5xtJRuxUN583HlfQkXvxLqHAGY8WSEVlTnuG/x75wolBDbVzeTlxWxgxhafj7P6Ncdv25Wz9wvc6ko/puww0b3rcLNqK+XCNJlsM/7lB8Q26iK5mRZzNsGeGwGTyzNIMBekGYQ5MRdIcPv5dBIP/1M6fQDEsAXQ==:
ssh_auth:
- present
- user: frank
- enc: dsa
- comment: "Frank's key"
release: | 2011-11-27 |
---|
Salt 0.9.4 has arrived. This is a critical update that repairs a number of key bugs found in 0.9.3. But this update is not without feature additions as well! 0.9.4 adds support for Gentoo portage to the pkg module and state system. Also there are 2 major new state additions: the failhard option and the ability to set up finite state ordering with the order option.
This release also sees our largest increase in community contributions. These contributors have been, and continue to be, the lifeblood of the Salt project, and the team continues to grow. I want to put out a big thanks to our new and existing contributors.
The Salt source can be downloaded from the salt GitHub site:
https://cloud.github.com/downloads/saltstack/salt/salt-0.9.4.tar.gz
Or from PyPI:
https://pypi.python.org/packages/source/s/salt/salt-0.9.4.tar.gz
For instructions on how to set up Salt please see the Installation instructions.
Normally, when a state fails Salt continues to execute the remainder of the defined states and will only refuse to execute states that require the failed state.
But the situation may exist where you would want all state execution to stop if a single state execution fails. The capability to do this is called failing hard.
A single state can have failhard set; this means that if this individual state fails, all state execution will immediately stop. This is a great thing to do if there is a state that sets up a critical config file, where setting a require for each state that reads the config would be cumbersome. A good example of this would be setting up a package manager early on:
/etc/yum.repos.d/company.repo:
file:
- managed
- source: salt://company/yumrepo.conf
- user: root
- group: root
- mode: 644
- order: 1
- failhard: True
In this situation, the yum repo is going to be configured before other states, and if it fails to lay down the config file, then no other states will be executed.
It may be desired to have failhard applied to every state that is executed. If this is the case, then failhard can be set in the master configuration file. Setting failhard in the master configuration file will result in failing hard when any minion gathering states from the master has a state fail.
This is NOT the default behavior, normally Salt will only fail states that require a failed state.
Using the global failhard is generally not recommended, since it can result in states not being executed or even checked. It can also be confusing to see states failhard if an admin is not actively aware that the failhard has been set.
To use the global failhard, set failhard: True in the master configuration file.
When creating salt sls files, it is often important to ensure that they run in a specific order. While states will always execute in the same order, that order is not necessarily defined the way you want it.
A few tools exist in Salt to set up the correct state ordering, these tools consist of requisite declarations and order options.
Before using the order option, remember that the majority of state ordering should be done with requisite statements, and that a requisite statement will override an order option.
The order option is used by adding an order number to a state declaration with the option order:
vim:
pkg:
- installed
- order: 1
By setting the order option to 1, this ensures that the vim package will be installed in tandem with any other state declaration set to order 1.
Any state declared without an order option will be executed after all states with order options are executed.
But this construct can only handle ordering states from the beginning. Sometimes you may want to send a state to the end of the line; to do this, set the order to last:
vim:
pkg:
- installed
- order: last
Additional experimental support has been added for Gentoo. This is found in the contribution from Doug Renn, aka nestegg.
Release date: 2012-01-15
Salt 0.9.5 is one of the largest steps forward in the development of Salt.
0.9.5 comes with many milestones; this release has seen the community of developers grow out to an international team of 46 code contributors, and has many feature additions, feature enhancements, bug fixes, and speed improvements.
Warning
Be sure to read the upgrade instructions about the switch to msgpack before upgrading!
Nothing has proven to have more value to the development of Salt than the outstanding community that has been growing at such a great pace around Salt. This has proven not only that Salt has great value, but also that the expandability of Salt is as exponential as I originally intended.
0.9.5 has received over 600 additional commits since 0.9.4, with a swath of new committers. This makes 21 new developers since 0.9.4 was released!
To keep up with the growing community follow Salt on Ohloh (http://www.ohloh.net/p/salt), to join the Salt development community, fork Salt on GitHub, and get coding (https://github.com/saltstack/salt)!
For a few months now we have been talking about moving away from Python pickles for network serialization, but a preferred serialization format had not yet been found. After an extensive performance testing period involving everything from JSON to protocol buffers, a clear winner emerged. Message Pack (http://msgpack.org/) proved to not only be the fastest and most compact, but also the most "salt like". Message Pack is simple, and the code involved is very small. The msgpack library for Python has been added directly to Salt.
This move introduces a few changes to Salt. First off, Salt is no longer a "noarch" package, since the msgpack lib is written in C. Salt 0.9.5 will also have compatibility issues with 0.9.4 with the default configuration.
We have gone through great lengths to avoid backwards compatibility issues with Salt, but changing the serialization medium was going to create issues regardless. Salt 0.9.5 is somewhat backwards compatible with earlier minions. A 0.9.5 master can command older minions, but only if the serial config value in the master is set to pickle. This will tell the master to publish messages in pickle format and will allow the master to receive messages in both msgpack and pickle formats.

Therefore the suggested methods for upgrading are either to just upgrade everything at once, or:

- Upgrade the master to 0.9.5 and set serial to pickle in the master config
- Upgrade the minions to 0.9.5
- Remove the serial option from the master config

Since pickles can be used as a security exploit, the ability for a master to accept pickles from minions at all will be removed in a future release.
All of the YAML rendering is now done with the YAML C bindings. This speeds up all of the sls files when running states.
David Boucha has worked tirelessly to bring initial support to Salt for Microsoft Windows operating systems. Right now the Salt Minion can run as a native Windows service and accept commands.
In the weeks and months to come Windows will receive the full treatment and will have support for Salt States and more robust support for managing Windows systems. This is a big step forward for Salt to move entirely outside of the Unix world, and proves Salt is a viable cross platform solution. Big Thanks to Dave for his contribution here!
Many Salt users have expressed the desire to have Salt distribute in-house modules, states, renderers, returners, and grains. This support has been added in a number of ways:
Now when salt modules are deployed to a minion via the state system as a file, the modules will be automatically loaded into the active running minion (no restart required) and into the active running state. So custom state modules can be deployed and used in the same state run.
Under the file_roots, each environment can now have directories that are used to deploy large groups of modules. These directories sync modules at the beginning of a state run on the minion, or can be manually synced via the Salt module salt.modules.saltutil.sync_all.
The directories are named:
_modules
_states
_grains
_renderers
_returners
The modules are pushed to their respective scopes on the minions.
Modules can now be reloaded without restarting the minion; this is done by calling the salt.modules.sys.reload_modules function. Both the sync and the reload can be driven from the master, as sketched below.
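A minimal sketch of invoking the sync and reload across all minions:

# salt '*' saltutil.sync_all
# salt '*' sys.reload_modules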
But wait, there's more! Now when a salt module of any type is added via states the modules will be automatically reloaded, allowing for modules to be laid down with states and then immediately used.
Finally, all modules are reloaded when modules are dynamically distributed from the salt master.
A great deal of demand has existed for adding the capability to set services to be started at boot in the service module. This feature also comes with an overhaul of the service modules and initial systemd support.
This means that the service state can now accept - enable: True to make sure a service is enabled at boot, and - enable: False to make sure it is disabled. A sketch follows.
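A minimal sketch of enabling a service at boot (the service name is hypothetical):

httpd:
  service:
    - running
    - enable: True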
A new target type has been added to the lineup, the compound target. In previous versions the desired minions could only be targeted via a single specific target type, but now many target specifications can be declared.
These targets can also be separated by and/or operators, so certain properties can be used to omit a node:
salt -C 'webserv* and G@os:Debian or E@db.*' test.ping
will match all minions with ids starting with webserv (via a glob) that also match the os:Debian grain, or minions that match the db.* regular expression.
Often the convenience of having a predefined group of minions to execute targets on is desired. This can be accomplished with the new nodegroups feature. Nodegroups allow for predefined compound targets to be declared in the master configuration file:
nodegroups:
group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
group2: 'G@os:Debian and foo.domain.com'
And then used via the -N option:
salt -N group1 test.ping
The data module introduces the initial approach into storing persistent data on the minions, specific to the minions. This allows for data to be stored on minions that can be accessed from the master or from the minion.
The Minion datastore is young, and will eventually provide an interface similar to a more mature key/value pair server.
The Salt grains have been overhauled to include a massive amount of extra data. This includes hardware data, OS data, and salt-specific data.
In the past the salt query system, which displays the data from recent executions, would display in pure Python and was unreadable.
0.9.5 has added the outputter system to the -Q
option, thus enabling the
salt query system to return readable output.
Huge strides have been made in packaging Salt for distributions. These additions are thanks to our wonderful community where the work to set up packages has proceeded tirelessly.
Salt on FreeBSD? There's a port for that:
http://svnweb.freebsd.org/ports/head/sysutils/py-salt/
This port was developed and added by Christer Edwards. This also marks the first time Salt has been included in an upstream packaging system!
Salt packages have been prepared for inclusion in the Fedora Project and in EPEL for Red Hat Enterprise 5 and 6. These packages are the result of the efforts made by Clint Savage (herlo).
A team of many contributors have assisted in developing packages for Debian and Ubuntu. Salt is still actively seeking inclusion in upstream Debian and Ubuntu and the package data that has been prepared is being pushed through the needed channels for inclusion.
These packages have been prepared with the help of many community contributors.
We are actively seeking inclusion in more distributions. Getting Salt into Gentoo, SUSE, and OpenBSD, and preparing Solaris support, are all becoming higher priorities.
Salt continues to be refined into a faster, more stable and more usable application. 0.9.5 comes with more debug logging, more bug fixes and more complete support.
0.9.5 comes with more bugfixes due to more testing than any previous release. The growing community and the introduction of a dedicated QA environment have unearthed many issues that were hiding under the covers. This has further refined and cleaned the state interface, taking care of things from minor visual issues to repairing misleading data.
A custom exception module has been added to throw salt specific exceptions. This allows Salt to give much more granular error information.
data¶
The new data module manages a persistent datastore on the minion. Big thanks to bastichelaar for his help refining this module.
freebsdkmod¶
FreeBSD kernel modules can now be managed in the same way Salt handles Linux kernel modules.
This module was contributed thanks to the efforts of Christer Edwards.
gentoo_service¶
Support has been added for managing services in Gentoo. Now Gentoo services can be started, stopped, restarted, enabled, disabled, and viewed.
pip¶
The pip module introduces management for pip installed applications. Thanks goes to whitinge for the addition of the pip module.
rh_service¶
The rh_service module enables Red Hat and Fedora specific service management. Red Hat-like systems now come with extensive management of the classic init system used by Red Hat.
saltutil¶
The saltutil module has been added as a place to hold functions used in the maintenance and management of salt itself. Saltutil is used to salt the salt minion. The saltutil module is presently used only to sync extension modules from the master server.
systemd¶
Systemd support has been added to Salt; systems running this next-generation init system are now supported.
virtualenv¶
The virtualenv module has been added to allow salt to create virtual Python environments. Thanks goes to whitinge for the addition of the virtualenv module.
win_disk¶
Support for gathering disk information on Microsoft Windows minions. The windows modules come courtesy of Utah_Dave.
win_service¶
The win_service module adds service support to Salt for Microsoft Windows services.
win_useradd¶
Salt can now manage local users on Microsoft Windows systems.
yumpkg5¶
The yumpkg module introduced in 0.9.4 uses the yum API to interact with the yum package manager. Unfortunately, on Red Hat 5 systems salt does not have access to the yum API because the yum API is running under Python 2.4 and Salt needs to run under Python 2.6.
The yumpkg5 module bypasses this issue by shelling out to yum on systems where the yum API is not available.
mysql_database¶
The new mysql_database state adds the ability for systems running a mysql server to manage the existence of mysql databases.
The mysql states are thanks to syphernl.
mysql_user¶
The mysql_user state enables mysql user management.
virtualenv¶
The virtualenv state can manage the state of Python virtual environments. Thanks to Whitinge for the virtualenv state.
cassandra_returner¶
A returner allowing Salt to send data to a cassandra server. Thanks to Byron Clark for contributing this returner.
release: 2012-01-21
Salt 0.9.6 is a release targeting a few bugs and changes. This is primarily targeting an issue found in the names declaration in the state system. But a few other bugs were also repaired, like missing support for grains in extmods.
Due to a conflict in distribution packaging msgpack will no longer be bundled with Salt, and is required as a dependency.
Now under the source option in the file.managed state an HTTP or FTP address can be used instead of a file located on the salt master.
Now the returner interface can define multiple returners, and will also return data back to the master, making the process less ambiguous.
A number of modules have been taken out of the minion if the underlying systems required by said modules are not present on the minion system. A number of other modules need to be stripped out in this same way which should continue to make the minion more efficient.
A new option, cache_jobs, has been added to the minion to allow for all of the historically run jobs to cache on the minion, allowing for looking up historic returns. By default cache_jobs is set to False.
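A sketch of enabling this option in the minion configuration file:
cache_jobs: True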
Templates in the file.managed state can now be defined in a Python script. This script needs to have a run function that returns the string that needs to be in the named file.
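As a minimal, hypothetical sketch of such a template script (the returned contents are placeholders):
def run():
    # Return the full contents that should end up in the managed file
    return 'Welcome to the server\n'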
release: 2012-02-15
Salt 0.9.7 is here! The latest iteration of Salt brings more features and many fixes. This release is a great refinement over 0.9.6, adding many conveniences under the hood, as well as some features that make working with Salt much better.
A few highlights include the new job system, refinements to the requisite system in states, the mod_init interface for states, external node classification, a search path for managed files in the file state, and refinements and additions to dynamic module loading.
0.9.7 also introduces the long developed (and oft changed) unit test framework and the initial unit tests.
The new jobs interface makes the management of running executions much cleaner and more transparent. Building on the existing execution framework, the jobs system allows clear introspection into the active state of the running Salt interface.
The jobs interface is centered in the new minion side proc system. The minions now store msgpack serialized files under /var/cache/salt/proc. These files keep track of the active state of processes on the minion.
A number of functions have been added to the saltutil module to manage and view the jobs:
running - Returns the data of all running jobs that are found in the proc directory.
find_job - Returns specific data about a certain job based on job id.
signal_job - Allows for a given jid to be sent a signal.
term_job - Sends a termination signal (SIGTERM, 15) to the process controlling the specified job.
kill_job - Sends a kill signal (SIGKILL, 9) to the process controlling the specified job.
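For example, querying jobs directly on the minions (the jid shown is hypothetical):
salt '*' saltutil.running
salt '*' saltutil.find_job 20120301150155123456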
A convenience runner front end and reporting system has been added as well. The jobs runner contains functions to make viewing data easier and cleaner.
The active function runs saltutil.running on all minions and formats the return data about all running jobs in a much more usable and compact format. The active function will also compare jobs that have returned and jobs that are still running, making it easier to see what systems have completed a job and what systems are still being waited on.
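For example:
salt-run jobs.active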
When jobs are executed the return data is sent back to the master and cached. By default it is cached for 24 hours, but this can be configured via the keep_jobs option in the master configuration.
Using the lookup_jid runner will display the same return data that the initial job invocation with the salt command would display.
Before finding a historic job, it may be required to find the job id. list_jobs will parse the cached execution data and display all of the job data for jobs that have already, or partially, returned.
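For example, locating a job id and then retrieving its return data (the jid shown is hypothetical):
salt-run jobs.list_jobs
salt-run jobs.lookup_jid 20120301150155123456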
Salt can now use external node classifiers like Cobbler's cobbler-ext-nodes.
Salt uses specific data from the external node classifier. In particular, the classes value denotes which sls modules to run, and the environment value sets the environment to use.
An external node classification can be set in the master configuration file via the external_nodes option:
http://salt.readthedocs.org/en/latest/ref/configuration/master.html#external-nodes
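As a sketch, the option names a command that prints classification data for the querying minion; shown here with Cobbler's classifier mentioned above:
external_nodes: cobbler-ext-nodes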
External nodes are loaded in addition to the top files. If it is intended to only use external nodes, do not deploy any top files.
An issue arose with the pkg state. Every time a package was run Salt would need to refresh the package database. This made systems with slower package metadata refresh speeds much slower to work with. To alleviate this issue the mod_init interface has been added to salt states.
The mod_init interface is a function that can be added to a state file. This function is called with the first state called. In the case of the pkg state, the mod_init function sets up a tag which makes the package database only refresh on the first attempt to install a package.
In a nutshell, the mod_init interface allows a state to run any command that only needs to be run once, or can be used to set up an environment for working with the state.
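A hedged sketch of what a mod_init function in a custom state module might look like; the package-database refresh shown is illustrative of the pkg state's behavior, not its actual implementation:
def mod_init(low):
    '''
    Called once, with the first state of this type in the run.
    '''
    # Perform one-time setup work, e.g. refresh the package database once
    __salt__['pkg.refresh_db']()
    return True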
The file state continues to be refined, adding speed and capabilities. This release adds the ability to pass a list to the source option. This list is then iterated over until the source file is found, and the first found file is used.
The new syntax looks like this:
/etc/httpd/conf/httpd.conf:
  file:
    - managed
    - source:
      - salt://httpd/httpd.conf
      - http://myserver/httpd.conf: md5=8c1fe119e6f1fd96bc06614473509bf1
The source option can take sources in the list from the salt file server as well as an arbitrary web source. If using an arbitrary web source the checksum needs to be passed as well for file verification.
A few discrepancies were still lingering in the requisite system; in particular, it was not possible to have a require and a watch requisite declared in the same state declaration.
This issue has been alleviated, as well as making the requisite system run more quickly.
Because of the module system, and the need to test real scenarios, the development of a viable unit testing system has been difficult, but unit testing has finally arrived. Only a small amount of unit testing coverage has been developed, much more coverage will be in place soon.
A huge thanks goes out to those who have helped with unit testing, and the contributions that have been made to get us where we are. Without these contributions unit tests would still be in the dark.
Originally only support for and and or were available in the compound target. 0.9.7 adds the capability to negate compound targets with not.
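For example, targeting webservers that are not running Debian (a hypothetical pattern, following the compound syntax shown earlier):
salt -C 'webserv* and not G@os:Debian' test.ping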
Previously the nodegroups defined in the master configuration file could not be used to match nodes for states. The nodegroups support has been expanded and the nodegroups defined in the master configuration can now be used to match minions in the top file.
release: 2012-03-21
Salt 0.9.8 is a big step forward, with many additions and enhancements, as well as a number of precursors to advanced future developments.
This version of Salt adds much more power to the command line, making the old hard timeout issues a thing of the past and adds keyword argument support. These additions are also available in the salt client API, making the available API tools much more powerful.
The new pillar system allows for data to be stored on the master and assigned to minions in a granular way similar to the state system. It also allows flexibility for users who want to keep data out of their state tree similar to 'external lookup' functionality in other tools.
A new way to extend requisites was added, the "requisite in" statement. This makes adding requires or watch statements to external state decs much easier.
Additions to requisites making them much more powerful have been added as well as improved error checking for sls files in the state system. A new provider system has been added to allow for redirecting what modules run in the background for individual states.
Support for OpenSUSE has been added and support for Solaris has begun serious development. Windows support has been significantly enhanced as well.
The matcher and target systems have received a great deal of attention. The default behavior of grain matching has changed slightly to reflect the rest of salt and the compound matcher system has been refined.
A number of impressive features with keyword arguments have been added to both the CLI and to the state system. This makes states much more powerful and flexible while maintaining the simple configuration everyone loves.
The new batch size capability allows for executions to be rolled through a group of targeted minions a percentage or specific number at a time. This was added to prevent the "thundering herd" problem when targeting large numbers of minions for things like service restarts or file downloads.
There was a previously missed oversight which could cause a newer minion to crash an older master. That oversight has been resolved so the version incompatibility issue will no longer occur. When upgrading to 0.9.8 make sure to upgrade the master first, followed by the minions.
The original Debian/Ubuntu packages were called salt and included all salt applications. New packages in the ppa are split by function. If an old salt package is installed then it should be manually removed and the new split packages need to be freshly installed.
On the master:
# apt-get purge salt
# apt-get install salt-{master,minion}
On the minions:
# apt-get purge salt
# apt-get install salt-minion
And on any Syndics:
# apt-get install salt-syndic
The official Salt PPA for Ubuntu is located at: https://launchpad.net/~saltstack/+archive/salt
Pillar offers an interface to declare variable data on the master that is then assigned to the minions. The pillar data is made available to all modules, states, sls files etc. It is compiled on the master and is declared using the existing renderer system. This means that learning pillar should be fairly trivial to those already familiar with salt states.
The salt command has received a serious overhaul and is more powerful than ever. Data is returned to the terminal as it is received, and the salt command will now wait for all running minions to return data before stopping. This makes adding very large --timeout arguments completely unnecessary and gets rid of long running operations returning an empty {} when the timeout is exceeded.
When calling salt via sudo, the user originally running salt is saved to the log for auditing purposes. This makes it easy to see who ran what by just looking through the minion logs.
The salt-key command gained the -D and --delete-all arguments for removing all keys. Be careful with this one!
Support for running states without a salt-master has been added in 0.9.8. This feature allows for the unmodified salt state tree to be read locally from a minion. The result is that the UNMODIFIED state tree has just become portable, allowing minions to have a local copy of states or to manage states without a master entirely.
This is accomplished via the new file client interface in Salt that allows for the salt:// URI to be redirected to custom interfaces. This means that there are now two interfaces for the salt file server: calling the master, or looking in a local, minion defined file_roots.
This new feature can be used by modifying the minion config to point to a local file_roots and setting the file_client option to local.
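A sketch of a minion configuration enabling masterless operation (the path shown is illustrative):
file_client: local
file_roots:
  base:
    - /srv/salt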
State modules now accept the **kwargs argument. This results in all data in an sls file assigned to a state being made available to the state function. This passes data in a transparent way back to the modules executing the logic. In particular, this allows adding arguments to the pkg.install module that enable more advanced and granular controls with respect to what the state is capable of.
An example of this along with the new debconf module for installing ldap client packages on Debian:
ldap-client-packages:
  pkg:
    - debconf: salt://debconf/ldap-client.ans
    - installed
    - names:
      - nslcd
      - libpam-ldapd
      - libnss-ldapd
In the past it was required that all arguments be passed in the proper order to the salt and salt-call commands. As of 0.9.8, keyword arguments can be passed in the form of kwarg=argument.
# salt -G 'type:dev' git.clone \
repository=https://github.com/saltstack/salt.git cwd=/tmp/salt user=jeff
A number of fixes and changes have been applied to the matcher system. The most noteworthy is the change in the grain matcher. The grain matcher used to use a regular expression to match the passed data to a grain, but now defaults to a shell glob like the majority of match interfaces in Salt. A new option, grain-pcre, is available that still uses the old style regex matching against grain data. To use regex matching in compound matches use the letter P.
For example, this would match any ArchLinux or Fedora minions:
# salt --grain-pcre 'os:(Arch|Fed).*' test.ping
And the associated compound matcher suitable for top.sls is:
P@os:(Arch|Fed).*
NOTE: Changing the grains matcher from pcre to glob is backwards incompatible.
Support has been added for matching minions with Yahoo's range library. This is handled by passing range syntax with -R or --range arguments to salt.
More information at: https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec
A new means to updating requisite statements has been added to make adding watchers and requires to external states easier. Before 0.9.8 the only way to extend the states that were watched by a state outside of the sls was to use an extend statement:
include:
  - http

extend:
  apache:
    service:
      - watch:
        - pkg: tomcat

tomcat:
  pkg:
    - installed
But the new Requisite in statement allows for easier extends for requisites:
include:
  - http

tomcat:
  pkg:
    - installed
    - watch_in:
      - service: apache
Requisite in is part of the extend system, so still remember to always include the sls that is being extended!
Salt predetermines what modules should be mapped to what uses based on the properties of a system. These determinations are generally made for modules that provide things like package and service management. The apt module maps to pkg on Debian and the yum module maps to pkg on Fedora for instance.
Sometimes in states, it may be necessary for a non-default module to be used for the desired functionality. For instance, an Arch Linux system may have been set up with systemd support. Instead of using the default service module detected for Arch Linux, the systemd module can be used:
http:
  service:
    - running
    - enable: True
    - provider: systemd
Default providers can also be defined in the minion config file:
providers:
  service: systemd
When default providers are set in the minion config, those providers will be applied to all functionality in Salt; the functions called by the minion will use these modules, as well as states.
Requisites can now be defined with glob expansion. This means that if there are many requisites, they can be defined on a single line.
To watch all files in a directory:
http:
  service:
    - running
    - enable: True
    - watch:
      - file: /etc/http/conf.d/*
This example will watch all defined files that match the glob /etc/http/conf.d/*.
The new batch size option allows commands to be executed while maintaining that only so many hosts are executing the command at one time. This option can take a percentage or a finite number:
salt '*' -b 10 test.ping
salt -G 'os:RedHat' --batch-size 25% apache.signal restart
This will only run test.ping on 10 of the targeted minions at a time, and then restart apache on 25% of the minions matching os:RedHat at a time, working through them all until the task is complete. This makes jobs like rolling web server restarts behind a load balancer, or doing maintenance on BSD firewalls using carp, much easier with salt.
This is a list of notable, but non-exhaustive updates with new and existing modules.
Windows support has seen a flurry of support this release cycle. We've gained all new file, network, and shadow modules. Please note that these are still a work in progress.
For our ruby users, new rvm and gem modules have been added along with the associated states.
The virt module gained basic Xen support.
The yum module gained Scientific Linux support.
The pkg module on Debian, Ubuntu, and derivatives forces apt to run in a non-interactive mode. This prevents issues when package installation waits for confirmation.
A pkg module for OpenSUSE's zypper was added.
The service module on Ubuntu natively supports upstart.
A new debconf module was contributed by our community for more advanced control over deb package deployments on Debian based distributions.
The mysql.user state and mysql module gained a password_hash argument.
The cmd module and state gained a shell keyword argument for specifying a shell other than /bin/sh on Linux / Unix systems.
New git and mercurial modules have been added for fans of distributed version control.
While we feel strongly that the advantages gained with minion side state compiling are very critical, it does prevent certain features that may be desired. 0.9.8 has support for initial master side state compiling, but many more components still need to be developed; it is hoped that these can be finished for 0.9.9.
The goal is that states can be compiled on both the master and the minion allowing for compilation to be split between master and minion. Why will this be great? It will allow storing sensitive data on the master and sending it to some minions without all minions having access to it. This will be good for handling ssl certificates on front-end web servers for instance.
Salt 0.9.8 sees the introduction of basic Solaris support. The daemon runs well, but grains and more of the modules need updating and testing.
Salt states on windows are now much more viable thanks to contributions from our community! States for file, service, local user, and local group management are more fully fleshed out along with network and disk modules. Windows users can also now manage registry entries using the new "reg" module.
release: 2012-04-27
0.9.9 is out and comes with some serious bug fixes and even more serious features. This release is the last major feature release before 1.0.0 and could be considered the 1.0.0 release candidate.
A few updates include more advanced kwargs support, the ability for salt states to more safely configure a running salt minion, better job directory management and the new state test interface.
Many new tests have been added as well, including the new minion swarm test that allows for easier testing of Salt working with large groups of minions. This means that if you have experienced stability issues with Salt before, particularly in larger deployments, these bugs have been tested for, found, and killed.
Until 0.9.9 the only option when running states to see what was going to be changed was to print out the highstate with state.show_highstate and manually look it over. But now states can be run to discover what is going to be changed.
Passing the option test=True to many of the state functions will now cause the salt state system to only check for what is going to be changed and report on those changes.
salt '*' state.highstate test=True
Now states that would have made changes report them back in yellow.
A shorthand syntax has been added to sls files, and it will be the default syntax in documentation going forward. The old syntax is still fully supported and will not be deprecated, but it is recommended to move to the new syntax in the future. This change moves the state function up into the state name using a dot notation. This is in line with how state functions are generally referred to as well:
The new way:
/etc/sudoers:
  file.present:
    - source: salt://sudo/sudoers
    - user: root
    - mode: 400
Two new requisite statements are available in 0.9.9. The use and use_in requisite and requisite-in allow for the transparent duplication of data between states. When a state "uses" another state it copies the other state's arguments as defaults. This was created in direct response to the new network state, and allows for many network interfaces to be configured in the same way easily. A simple example:
root_file:
  file.absent:
    - name: /tmp/nothing
    - user: root
    - mode: 644
    - group: root
    - use_in:
      - file: /etc/vimrc

fred_file:
  file.absent:
    - name: /tmp/nothing
    - user: fred
    - group: marketing
    - mode: 660

/files/marketing/district7.rst:
  file.present:
    - source: salt://marketing/district7.rst
    - template: jinja
    - use:
      - file: fred_file

/etc/vimrc:
  file.present:
    - source: salt://edit/vimrc
This makes the two lower state decs inherit the options from their respective "used" state decs.
The new network state allows for the configuration of network devices via salt states and the ip salt module. This addition has been given to the project by Jeff Hutchins and Bret Palsson from Jive Communications.
Currently the only network configuration backend available is for Red Hat based systems, like Red Hat Enterprise, CentOS, and Fedora.
Originally the jobs executed were stored on the master in the format:
<cachedir>/jobs/jid/{minion ids}
But this format restricted the number of jobs in the cache to the number of subdirectories allowed on the filesystem. Ext3, for instance, limits subdirectories to 32000. To combat this the new format for 0.9.9 is:
<cachedir>/jobs/jid_hash[:2]/jid_hash[2:]/{minion ids}
Now the maximum number of jobs that can be run before the cleanup cycle hits the job directory is substantially higher.
The original ssh_auth state was limited to accepting only arguments to apply to a public key, and the key itself. This was restrictive given the ways we learned many people were using the state, so the key section has been expanded to accept options and arguments to the key that override arguments passed in the state. This gives substantial power to using ssh_auth with names:
sshkeys:
  ssh_auth:
    - present
    - user: backup
    - enc: ssh-dss
    - options:
      - option1="value1"
      - option2="value2 flag2"
    - comment: backup
    - names:
      - AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0111==
      - AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0222== override
      - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0333== override
      - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0444==
      - option3="value3",option4="value4 flag4" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0555== override
      - option3="value3" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0666==
To follow up the kwargs additions in 0.9.8, 0.9.9 also adds the capability to send kwargs into commands via a dict. This addition to the LocalClient API can be used like so:
import salt.client
client = salt.client.LocalClient('/etc/salt/master')
ret = client.cmd('*', 'cmd.run', ['ls -l'], kwarg={'cwd': '/etc'})
This update has been added to all cmd methods in the LocalClient class.
One problem faced with running Salt states is that it has been difficult to manage the Salt minion via states, because if the minion is called to restart while a state run is happening then the state run would be killed. 0.9.9 slightly changes the process scope of the state runs, so now when salt is executing states it can safely restart the salt-minion daemon.
In addition to daemonizing the state run, the apt module also daemonizes. This update makes it possible to cleanly update the salt-minion package on Debian/Ubuntu systems without leaving apt in an inconsistent state or killing the active minion process mid-execution.
Now, when including sls modules in include statements or in the top file, shell globs can be used. This can greatly simplify listing matched sls modules in the top file and include statements:
base:
  '*':
    - files*
    - core*

include:
  - users.dev.*
  - apache.ser*
Since the pillar data is just data, it does not need to come expressly from the pillar interface. The external pillar system allows for hooks to be added, making it possible to extract pillar data from any arbitrary external interface. The external pillar interface is configured via the ext_pillar option. Currently interfaces exist to gather external pillar data via hiera or via a shell command that sends yaml data to the terminal:
ext_pillar:
  - cmd_yaml: cat /etc/salt/ext.yaml
  - hiera: /etc/hiera.yaml
The initial external pillar interfaces and extra interfaces can be added to the file salt/pillar.py; it is planned to add more external pillar interfaces. If the need arises, a new module loader interface will be created in the future to manage external pillar interfaces.
The new state.single function allows for single states to be cleanly executed. This is a great tool for setting up a small group of states on a system or for testing out the behavior of single states:
salt '*' state.single user.present name=wade uid=2000
The test interface functions here as well, so changes can also be checked first:
salt '*' state.single user.present name=wade uid=2000 test=True
A few exciting new test interfaces have been added, the minion swarm allows not only testing of larger loads, but also allows users to see how Salt behaves with large groups of minions without having to create a large deployment.
The minion swarm test system allows for large groups of minions to be tested against easily without requiring large numbers of servers or virtual machines. The minion swarm creates as many minions as a system can handle and roots them in the /tmp directory and connects them to a master.
The benefit here is that we were able to replicate issues that happen only when there are large numbers of minions. A number of elusive bugs which were causing stability issues in masters and minions have since been hunted down. Bugs that used to take careful watch by users over several days can now be reliably replicated in minutes, and fixed in minutes.
Using the swarm is easy: make sure a master is up for the swarm to connect to, and then use the minionswarm.py script in the tests directory to spin up as many minions as you want. Remember, this is a fork bomb; don't spin up more than your hardware can handle!
python minionswarm.py -m 20 --master salt-master
The new Shell testing system allows us to test the behavior of commands executed from a high level. This allows for the high level testing of salt runners and commands like salt-key.
Tests have been added to test the aspects of the client APIs and ensure that the client calls work, and that they manage passed data, in a desirable way.
A number of unofficial open source projects, based on Salt, or written to enhance Salt have been created.
Created by Aaron Bull Schaefer, aka "elasticdog".
https://github.com/elasticdog/salt-sandbox
Salt Sandbox is a multi-VM Vagrant-based Salt development environment used for creating and testing new Salt state modules outside of your production environment. It's also a great way to learn firsthand about Salt and its remote execution capabilities.
Salt Sandbox will set up three separate virtual machines.
These VMs can be used in conjunction to segregate and test your modules based on node groups, top file environments, grain values, etc. You can even test modules on different Linux distributions or release versions to better match your production infrastructure.
email: security@saltstack.com
gpg key ID: 4EA0793D
gpg key fingerprint: 8ABE 4EFC F0F4 B24B FF2A AF90 D570 F2D3 4EA0 793D
gpg public key:
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
mQINBFO15mMBEADa3CfQwk5ED9wAQ8fFDku277CegG3U1hVGdcxqKNvucblwoKCb
hRK6u9ihgaO9V9duV2glwgjytiBI/z6lyWqdaD37YXG/gTL+9Md+qdSDeaOa/9eg
7y+g4P+FvU9HWUlujRVlofUn5Dj/IZgUywbxwEybutuzvvFVTzsn+DFVwTH34Qoh
QIuNzQCSEz3Lhh8zq9LqkNy91ZZQO1ZIUrypafspH6GBHHcE8msBFgYiNBnVcUFH
u0r4j1Rav+621EtD5GZsOt05+NJI8pkaC/dDKjURcuiV6bhmeSpNzLaXUhwx6f29
Vhag5JhVGGNQxlRTxNEM86HEFp+4zJQ8m/wRDrGX5IAHsdESdhP+ljDVlAAX/ttP
/Ucl2fgpTnDKVHOA00E515Q87ZHv6awJ3GL1veqi8zfsLaag7rw1TuuHyGLOPkDt
t5PAjsS9R3KI7pGnhqI6bTOi591odUdgzUhZChWUUX1VStiIDi2jCvyoOOLMOGS5
AEYXuWYP7KgujZCDRaTNqRDdgPd93Mh9JI8UmkzXDUgijdzVpzPjYgFaWtyK8lsc
Fizqe3/Yzf9RCVX/lmRbiEH+ql/zSxcWlBQd17PKaL+TisQFXcmQzccYgAxFbj2r
QHp5ABEu9YjFme2Jzun7Mv9V4qo3JF5dmnUk31yupZeAOGZkirIsaWC3hwARAQAB
tDBTYWx0U3RhY2sgU2VjdXJpdHkgVGVhbSA8c2VjdXJpdHlAc2FsdHN0YWNrLmNv
bT6JAj4EEwECACgFAlO15mMCGwMFCQeGH4AGCwkIBwMCBhUIAgkKCwQWAgMBAh4B
AheAAAoJENVw8tNOoHk9z/MP/2vzY27fmVxU5X8joiiturjlgEqQw41IYEmWv1Bw
4WVXYCHP1yu/1MC1uuvOmOd5BlI8YO2C2oyW7d1B0NorguPtz55b7jabCElekVCh
h/H4ZVThiwqgPpthRv/2npXjIm7SLSs/kuaXo6Qy2JpszwDVFw+xCRVL0tH9KJxz
HuNBeVq7abWD5fzIWkmGM9hicG/R2D0RIlco1Q0VNKy8klG+pOFOW886KnwkSPc7
JUYp1oUlHsSlhTmkLEG54cyVzrTP/XuZuyMTdtyTc3mfgW0adneAL6MARtC5UB/h
q+v9dqMf4iD3wY6ctu8KWE8Vo5MUEsNNO9EA2dUR88LwFZ3ZnnXdQkizgR/Aa515
dm17vlNkSoomYCo84eN7GOTfxWcq+iXYSWcKWT4X+h/ra+LmNndQWQBRebVUtbKE
ZDwKmiQz/5LY5EhlWcuU4lVmMSFpWXt5FR/PtzgTdZAo9QKkBjcv97LYbXvsPI69
El1BLAg+m+1UpE1L7zJT1il6PqVyEFAWBxW46wXCCkGssFsvz2yRp0PDX8A6u4yq
rTkt09uYht1is61joLDJ/kq3+6k8gJWkDOW+2NMrmf+/qcdYCMYXmrtOpg/wF27W
GMNAkbdyzgeX/MbUBCGCMdzhevRuivOI5bu4vT5s3KdshG+yhzV45bapKRd5VN+1
mZRquQINBFO15mMBEAC5UuLii9ZLz6qHfIJp35IOW9U8SOf7QFhzXR7NZ3DmJsd3
f6Nb/habQFIHjm3K9wbpj+FvaW2oWRlFVvYdzjUq6c82GUUjW1dnqgUvFwdmM835
1n0YQ2TonmyaF882RvsRZrbJ65uvy7SQxlouXaAYOdqwLsPxBEOyOnMPSktW5V2U
IWyxsNP3sADchWIGq9p5D3Y/loyIMsS1dj+TjoQZOKSj7CuRT98+8yhGAY8YBEXu
9r3I9o6mDkuPpAljuMc8r09Im6az2egtK/szKt4Hy1bpSSBZU4W/XR7XwQNywmb3
wxjmYT6Od3Mwj0jtzc3gQiH8hcEy3+BO+NNmyzFVyIwOLziwjmEcw62S57wYKUVn
HD2nglMsQa8Ve0e6ABBMEY7zGEGStva59rfgeh0jUMJiccGiUDTMs0tdkC6knYKb
u/fdRqNYFoNuDcSeLEw4DdCuP01l2W4yY+fiK6hAcL25amjzc+yYo9eaaqTn6RAT
bzdhHQZdpAMxY+vNT0+NhP1Zo5gYBMR65Zp/VhFsf67ijb03FUtdw9N8dHwiR2m8
vVA8kO/gCD6wS2p9RdXqrJ9JhnHYWjiVuXR+f755ZAndyQfRtowMdQIoiXuJEXYw
6XN+/BX81gJaynJYc0uw0MnxWQX+A5m8HqEsbIFUXBYXPgbwXTm7c4IHGgXXdwAR
AQABiQIlBBgBAgAPBQJTteZjAhsMBQkHhh+AAAoJENVw8tNOoHk91rcQAIhxLv4g
duF/J1Cyf6Wixz4rqslBQ7DgNztdIUMjCThg3eB6pvIzY5d3DNROmwU5JvGP1rEw
hNiJhgBDFaB0J/y28uSci+orhKDTHb/cn30IxfuAuqrv9dujvmlgM7JUswOtLZhs
5FYGa6v1RORRWhUx2PQsF6ORg22QAaagc7OlaO3BXBoiE/FWsnEQCUsc7GnnPqi7
um45OJl/pJntsBUKvivEU20fj7j1UpjmeWz56NcjXoKtEvGh99gM5W2nSMLE3aPw
vcKhS4yRyLjOe19NfYbtID8m8oshUDji0XjQ1z5NdGcf2V1YNGHU5xyK6zwyGxgV
xZqaWnbhDTu1UnYBna8BiUobkuqclb4T9k2WjbrUSmTwKixokCOirFDZvqISkgmN
r6/g3w2TRi11/LtbUciF0FN2pd7rj5mWrOBPEFYJmrB6SQeswWNhr5RIsXrQd/Ho
zvNm0HnUNEe6w5YBfA6sXQy8B0Zs6pcgLogkFB15TuHIIIpxIsVRv5z8SlEnB7HQ
Io9hZT58yjhekJuzVQB9loU0C/W0lzci/pXTt6fd9puYQe1DG37pSifRG6kfHxrR
if6nRyrfdTlawqbqdkoqFDmEybAM9/hv3BqriGahGGH/hgplNQbYoXfNwYMYaHuB
aSkJvrOQW8bpuAzgVyd7TyNFv+t1kLlfaRYJ
=wBTJ
-----END PGP PUBLIC KEY BLOCK-----
The SaltStack Security Team is available at security@saltstack.com for security-related bug reports or questions.
We request the disclosure of any security-related bugs or issues be reported non-publicly until such time as the issue can be resolved and a security-fix release can be prepared. At that time we will release the fix and make a public announcement with upgrade instructions and download locations.
SaltStack takes security and the trust of our customers and users very seriously. Our disclosure policy is intended to resolve security issues as quickly and safely as is possible.
The fastest place to receive security announcements is via the salt-announce mailing list. This list is low-traffic.
FAQ
Is Salt open-core?¶
No. Salt is 100% committed to being open-source, including all of our APIs. It is developed under the Apache 2.0 license, allowing it to be used in both open and proprietary projects.
The salt-users mailing list as well as the salt IRC channel can both be helpful resources to confirm if others are seeing the issue and to assist with immediate debugging.
To report a bug to the Salt project, please follow the instructions in reporting a bug.
Minions need to be able to connect to the Master on TCP ports 4505 and 4506. Minions do not need any inbound ports open. More detailed information on firewall settings can be found here.
This is often caused by SELinux. Try disabling SELinux or putting it in permissive mode and see if the weird behavior goes away.
You are probably using cmd.run rather than cmd.wait. A cmd.wait state will only run when there has been a change in a state that it is watching.
A cmd.run state will run the corresponding command every time (unless it is prevented from running by the unless or onlyif arguments).
More details can be found in the documentation for the cmd states.
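As a hedged sketch of the difference, assuming a hypothetical config file and reload command, the cmd.wait state below only runs when the watched file state reports a change:
/etc/myapp/app.conf:
  file.managed:
    - source: salt://myapp/app.conf

reload-myapp:
  cmd.wait:
    - name: myapp --reload
    - watch:
      - file: /etc/myapp/app.conf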
When I run test.ping, why don't the Minions that aren't responding return anything? Returning False would be helpful.¶
When you run test.ping the Master tells Minions to run commands/functions, and listens for the return data, printing it to the screen when it is received. If it doesn't receive anything back, it doesn't have anything to display for that Minion.
There are a couple options for getting information on Minions that are not responding. One is to use the verbose (-v) option when you run salt commands, as it will display "Minion did not return" for any Minions which time out.
salt -v '*' pkg.install zsh
Another option is to use the manage.down runner:
salt-run manage.down
Also, if the Master is under heavy load, it is possible that the CLI will exit without displaying return data for all targeted Minions. However, this doesn't mean that the Minions did not return; this only means that the Salt CLI timed out waiting for a response. Minions will still send their return data back to the Master once the job completes. If any expected Minions are missing from the CLI output, the jobs.list_jobs runner can be used to show the job IDs of the jobs that have been run, and the jobs.lookup_jid runner can be used to get the return data for that job.
salt-run jobs.list_jobs
salt-run jobs.lookup_jid 20130916125524463507
If you find that you are often missing Minion return data on the CLI, only to find it with the jobs runners, then this may be a sign that the worker_threads value may need to be increased in the master config file. Additionally, running your Salt CLI commands with the -t option will make Salt wait longer for the return data before the CLI command exits. For instance, the below command will wait up to 60 seconds for the Minions to return:
salt -t 60 '*' test.ping
If the Minion id is not configured explicitly (using the id parameter), Salt will determine the id based on the hostname. Exactly how this is determined varies a little between operating systems and is described in detail here.
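A sketch of setting the id explicitly in the minion configuration file (the hostname shown is hypothetical):
id: web01.example.com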
Salt detects the Minion's operating system and assigns the correct package or service management module based on what is detected. However, for certain custom spins and OS derivatives this detection fails. In cases like this, an issue should be opened on our tracker, with the following information:
The output of the following command:
salt <minion_id> grains.items | grep os
The contents of /etc/lsb-release, if present on the Minion.
In versions of Salt 0.16.3 or older, there is a bug in gitfs which can affect the syncing of custom types. Upgrading to 0.16.4 or newer will fix this.
Custom modules are only synced to Minions when state.highstate, saltutil.sync_modules, or saltutil.sync_all is run. Similarly, custom states are only synced to Minions when state.highstate, saltutil.sync_states, or saltutil.sync_all is run.
Other custom types (renderers, outputters, etc.) have similar behavior; see the documentation for the saltutil module for more information.
Salt Module X isn't available, even though the shell command it uses is installed. Why?¶
This is most likely a PATH issue. Did you custom-compile the software which the module requires? RHEL/CentOS/etc. in particular override the root user's path in /etc/init.d/functions, setting it to /sbin:/usr/sbin:/bin:/usr/bin, making software installed into /usr/local/bin unavailable to Salt when the Minion is started using the initscript. In version 2014.1.0, Salt will have a better solution for these sorts of PATH-related issues, but recompiling the software to install it into a location within the PATH should resolve the issue in the meantime. Alternatively, you can create a symbolic link within the PATH using a file.symlink state.
/usr/bin/foo:
  file.symlink:
    - target: /usr/local/bin/foo
This depends on the versions. In general, it is recommended that Master and Minion versions match.
When upgrading Salt, the master(s) should always be upgraded first. Backwards compatibility for minions running newer versions of salt than their masters is not guaranteed.
Whenever possible, backwards compatibility between new masters and old minions will be preserved. Generally, the only exception to this policy is in case of a security vulnerability.
Recent examples of backwards compatibility breakage include the 0.17.1 release (where all backwards compatibility was broken due to a security fix), and the 2014.1.0 release (which retained compatibility between 2014.1.0 masters and 0.17 minions, but broke compatibility for 2014.1.0 minions and older masters).
Yes. Salt provides an easy to use addition to your file.managed states that allows you to back up files via backup_mode. backup_mode can be configured on a per state basis, or in the minion config (note that if set in the minion config this would simply be the default method to use; you still need to specify that the file should be backed up!).
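A minimal sketch of a per-state backup, assuming a hypothetical managed file; with the minion backup mode, copies of the replaced file are kept on the minion:
/etc/myapp/app.conf:
  file.managed:
    - source: salt://myapp/app.conf
    - backup: minion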
Updating the salt-minion package requires a restart of the salt-minion service. But restarting the service while in the middle of a state run interrupts the process of the minion running states and sending results back to the master. It's a tricky problem to solve, and we're working on it, but in the meantime one way of handling this (on Linux and UNIX-based operating systems) is to use at (a job scheduler which predates cron) to schedule a restart of the service. at is not installed by default on most distros, and requires a service to be running (usually called atd) in order to schedule jobs. Here's an example of how to upgrade the salt-minion package at the end of a Salt run, and schedule a service restart for one minute after the package update completes.
salt-minion:
  pkg.installed:
    - name: salt-minion
    - version: 2014.1.7-3.el6
    - order: last
  service.running:
    - name: salt-minion
    - require:
      - pkg: salt-minion
  cmd.wait:
    - name: echo service salt-minion restart | at now + 1 minute
    - watch:
      - pkg: salt-minion
To ensure that at is installed and atd is running, the following states can be used (be sure to double-check the package name and service name for the distro the minion is running, in case they differ from the example below).
at:
  pkg.installed:
    - name: at
  service.running:
    - name: atd
    - enable: True
An alternative to using the atd daemon is to fork and disown the process.
restart_minion:
  cmd.run:
    - name: |
        exec 0>&- # close stdin
        exec 1>&- # close stdout
        exec 2>&- # close stderr
        nohup /bin/sh -c 'sleep 10 && salt-call --local service.restart salt-minion' &
    - python_shell: True
    - order: last
For Windows machines, restarting the minion can be accomplished by adding the following state:
schedule-start:
  cmd.run:
    - name: 'start powershell "Restart-Service -Name salt-minion"'
    - order: last
or running immediately from the command line:
salt -G kernel:Windows cmd.run 'start powershell "Restart-Service -Name salt-minion"'
In order to configure a master server via states, the Salt master can also be "salted" in order to enforce state on the Salt master as well as the Salt minions. Salting the Salt master requires a Salt minion to be installed on the same machine as the Salt master. Once the Salt minion is installed, the minion configuration file must be pointed to the local Salt master:
master: 127.0.0.1
Once the Salt master has been "salted" with a Salt minion, it can be targeted just like any other minion. If the minion on the salted master is running, the minion can be targeted via any usual salt command. Additionally, the salt-call command can execute operations to enforce state on the salted master without requiring the minion to be running.
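For example, enforcing the salted master's own configured state directly on that machine:
salt-call state.highstate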
More information about salting the Salt master can be found in the salt-formula for salt itself.