Salt Table of Contents

Introduction to Salt

We’re not just talking about NaCl.

The 30 second summary

Salt is:

  • a configuration management system, capable of maintaining remote nodes in defined states (for example, ensuring that specific packages are installed and specific services are running)
  • a distributed remote execution system used to execute commands and query data on remote nodes, either individually or by arbitrary selection criteria

It was developed in order to bring the best solutions found in the world of remote execution together and make them better, faster, and more malleable. Salt accomplishes this through its ability to handle large loads of information, and not just dozens but hundreds and even thousands of individual servers quickly through a simple and manageable interface.

Simplicity

Providing versatility between massive scale deployments and smaller systems may seem daunting, but Salt is very simple to set up and maintain, regardless of the size of the project. The architecture of Salt is designed to work with any number of servers, from a handful of local network systems to international deployments across different data centers. The topology is a simple server/client model with the needed functionality built into a single set of daemons. While the default configuration will work with little to no modification, Salt can be fine tuned to meet specific needs.

Parallel execution

The core functions of Salt:

  • enable commands to remote systems to be called in parallel rather than serially
  • use a secure and encrypted protocol
  • use the smallest and fastest network payloads possible
  • provide a simple programming interface

Salt also introduces more granular controls to the realm of remote execution, allowing systems to be targeted not just by hostname, but also by system properties.
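
For example, the Grains system (covered in the remote execution tutorial later in this document) can target minions by operating system:

salt -G 'os:Ubuntu' test.ping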

Building on proven technology

Salt takes advantage of a number of technologies and techniques. The networking layer is built with the excellent ZeroMQ networking library, so the Salt daemon includes a viable and transparent AMQ broker. Salt uses public keys for authentication with the master daemon, then uses faster AES encryption for payload communication; authentication and encryption are integral to Salt. Salt takes advantage of communication via msgpack, enabling fast and light network traffic.

Python client interface

In order to allow for simple expansion, Salt execution routines can be written as plain Python modules. The data collected from Salt executions can be sent back to the master server, or to any arbitrary program. Salt can be called from a simple Python API, or from the command line, so that Salt can be used to execute one-off commands as well as operate as an integral part of a larger application.
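
A minimal sketch of the Python API, assuming it runs on the master with Salt installed and minions connected:

import salt.client

# LocalClient publishes commands to connected minions, just like the salt CLI
local = salt.client.LocalClient()
# Run test.ping on all minions and collect the returns as a dict
print(local.cmd('*', 'test.ping'))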

Fast, flexible, scalable

The result is a system that can execute commands at high speed on target server groups ranging from one to very many servers. Salt is very fast, easy to set up, amazingly malleable and provides a single remote execution architecture that can manage the diverse requirements of any number of servers. The Salt infrastructure brings together the best of the remote execution world, amplifies its capabilities and expands its range, resulting in a system that is as versatile as it is practical, suitable for any network.

Open

Salt is developed under the Apache 2.0 license, and can be used for open and proprietary projects. Please submit your expansions back to the Salt project so that we can all benefit together as Salt grows. Please feel free to sprinkle Salt around your systems and let the deliciousness come forth.

Salt Community

Join the Salt!

There are many ways to participate in and communicate with the Salt community.

Salt has an active IRC channel and a mailing list.

Mailing List

Join the salt-users mailing list. It is the best place to ask questions about Salt and see what's going on with Salt development! The Salt mailing list is hosted by Google Groups. It is open to new members.

https://groups.google.com/forum/#!forum/salt-users

IRC

The #salt IRC channel is hosted on the popular Freenode network. You can use the Freenode webchat client right from your browser.

Logs of the IRC channel activity are being collected courtesy of Moritz Lenz.

If you wish to discuss the development of Salt itself join us in #salt-devel.

Follow on GitHub

The Salt code is developed via GitHub. Follow Salt for constant updates on what is happening in Salt development:

https://github.com/saltstack/salt

Blogs

SaltStack Inc. keeps a blog with recent news and advancements:

http://www.saltstack.com/blog/

Thomas Hatch also shares news and thoughts on Salt and related projects in his personal blog The Red45:

http://red45.wordpress.com/

Hack the Source

If you want to get involved with the development of source code or the documentation efforts, please review the hacking section!

Installation

See also

Installing Salt for development and contributing to the project.

Quick Install

On most distributions, you can set up a Salt Minion with the Salt Bootstrap.
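
For example, using the same one-liner shown in the Salt Masterless Quickstart later in this document:

curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh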

Platform-specific Installation Instructions

These guides go into detail how to install Salt on a given platform.

Arch Linux

Installation

Salt (stable) is currently available via the Arch Linux Official repositories. A -git package is also available in the Arch User Repository (AUR).

Stable Release

Install Salt stable releases from the Arch Linux Official repositories as follows:

pacman -S salt-zmq

To install Salt stable releases using the RAET protocol, use the following:

pacman -S salt-raet

Note

transports

Unlike other Linux distributions, Arch Linux's package manager, pacman, defaults to RAET as the Salt transport. If you want to use ZeroMQ instead, make sure to enter the associated number for the salt-zmq package when prompted.

Tracking develop

To install the bleeding edge version of Salt (may include bugs!), use the -git package. Install the -git package as follows:

wget https://aur.archlinux.org/packages/sa/salt-git/salt-git.tar.gz
tar xf salt-git.tar.gz
cd salt-git/
makepkg -is

Note

yaourt

If a tool such as Yaourt is used, the dependencies will be gathered and built automatically.

The command to install salt using the yaourt tool is:

yaourt salt-git
Post-installation tasks

systemd

Activate the Salt Master and/or Minion via systemctl as follows:

systemctl enable salt-master.service
systemctl enable salt-minion.service

Start the Master

Once you've completed all of these steps you're ready to start your Salt Master. You should be able to start your Salt Master now using the command seen here:

systemctl start salt-master

Now go to the Configuring Salt page.

Debian Installation

Currently the latest packages for Debian Old Old Stable, Old Stable, Stable, and Unstable (Squeeze, Wheezy, Jessie, and Sid) are published in our (saltstack.com) Debian repository.

Configure Apt
Squeeze (Old Old Stable)

For squeeze, you will need to enable the Debian backports repository as well as the debian.saltstack.com repository. To do so, add the following to /etc/apt/sources.list or a file in /etc/apt/sources.list.d:

deb http://debian.saltstack.com/debian squeeze-saltstack main
deb http://backports.debian.org/debian-backports squeeze-backports main
Wheezy (Old Stable)

For wheezy, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:

deb http://debian.saltstack.com/debian wheezy-saltstack main
Jessie (Stable)

For jessie, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:

deb http://debian.saltstack.com/debian jessie-saltstack main
Sid (Unstable)

For sid, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:

deb http://debian.saltstack.com/debian unstable main
Import the repository key

You will need to import the key used for signing.

wget -q -O- "http://debian.saltstack.com/debian-salt-team-joehealy.gpg.key" | apt-key add -

Note

You can optionally verify the integrity of the key file with sha512sum using the SHA512 checksum shown here, e.g.:

echo "b702969447140d5553e31e9701be13ca11cc0a7ed5fe2b30acb8491567560ee62f834772b5095d735dfcecb2384a5c1a20045f52861c417f50b68dd5ff4660e6  debian-salt-team-joehealy.gpg.key" | sha512sum -c
Update the package database
apt-get update
Install packages

Install the Salt master, minion, or syndic from the repository with the apt-get command. These examples each install one daemon, but more than one package name may be given at a time:

apt-get install salt-master
apt-get install salt-minion
apt-get install salt-syndic
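
For example, the master and minion can be installed in a single command:

apt-get install salt-master salt-minion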
Post-installation tasks

Now, go to the Configuring Salt page.

Fedora

Beginning with version 0.9.4, Salt has been available in the primary Fedora repositories and EPEL. It is installable using yum. Fedora will have more up-to-date versions of Salt than other members of the Red Hat family, which makes it a great place to help improve Salt!

WARNING: Fedora 19 comes with systemd 204. Systemd has known bugs fixed in later revisions that prevent the salt-master from starting reliably or opening the network connections that it needs to. It's not likely that a salt-master will start or run reliably on any distribution that uses systemd version 204 or earlier. Running salt-minions should be OK.

Installation

Salt can be installed using yum and is available in the standard Fedora repositories.

Stable Release

Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions.

yum install salt-master
yum install salt-minion
Installing from updates-testing

When a new Salt release is packaged, it is first admitted into the updates-testing repository, before being moved to the stable repo.

To install from updates-testing, use the enablerepo argument for yum:

yum --enablerepo=updates-testing install salt-master
yum --enablerepo=updates-testing install salt-minion
Post-installation tasks

Master

To have the Master start automatically at boot time:

systemctl enable salt-master.service

To start the Master:

systemctl start salt-master.service

Minion

To have the Minion start automatically at boot time:

systemctl enable salt-minion.service

To start the Minion:

systemctl start salt-minion.service

Now go to the Configuring Salt page.

FreeBSD

Salt was added to the FreeBSD ports tree Dec 26th, 2011 by Christer Edwards <christer.edwards@gmail.com>. It has been tested on FreeBSD 7.4, 8.2, 9.0, and 9.1 releases.

Salt is dependent on the following additional ports. These will be installed as dependencies of the sysutils/py-salt port:

/devel/py-yaml
/devel/py-pyzmq
/devel/py-Jinja2
/devel/py-msgpack
/security/py-pycrypto
/security/py-m2crypto
Installation

On FreeBSD 10 and later, to install Salt from the FreeBSD pkgng repo, use the command:

pkg install py27-salt

On older versions of FreeBSD, to install Salt from the FreeBSD ports tree, use the command:

make -C /usr/ports/sysutils/py-salt install clean
Post-installation tasks

Master

Copy the sample configuration file:

cp /usr/local/etc/salt/master.sample /usr/local/etc/salt/master

rc.conf

Activate the Salt Master in /etc/rc.conf or /etc/rc.conf.local and add:

+ salt_master_enable="YES"

Start the Master

Start the Salt Master as follows:

service salt_master start

Minion

Copy the sample configuration file:

cp /usr/local/etc/salt/minion.sample /usr/local/etc/salt/minion

rc.conf

Activate the Salt Minion in /etc/rc.conf or /etc/rc.conf.local and add:

+ salt_minion_enable="YES"
+ salt_minion_paths="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin"

Start the Minion

Start the Salt Minion as follows:

service salt_minion start

Now go to the Configuring Salt page.

Gentoo

Salt can be easily installed on Gentoo via Portage:

emerge app-admin/salt
Post-installation tasks

Now go to the Configuring Salt page.

OpenBSD

Salt was added to the OpenBSD ports tree on Aug 10th 2013. It has been tested on OpenBSD 5.5 onwards.

Salt is dependent on the following additional ports. These will be installed as dependencies of the sysutils/salt port:

/net/py-msgpack
/net/py-zmq
/security/py-M2Crypto
/security/py-crypto
/textproc/py-MarkupSafe
/textproc/py-yaml
/www/py-jinja2
/www/py-requests
Installation

To install Salt from the OpenBSD pkg repo, use the command:

pkg_add salt
Post-installation tasks

Master

To have the Master start automatically at boot time:

rcctl enable salt_master

To start the Master:

rcctl start salt_master

Minion

To have the Minion start automatically at boot time:

rcctl enable salt_minion

To start the Minion:

rcctl start salt_minion

Now go to the Configuring Salt page.

OS X

Dependency Installation

It should be noted that Homebrew explicitly discourages the use of sudo:

Homebrew is designed to work without using sudo. You can decide to use it but we strongly recommend not to do so. If you have used sudo and run into a bug then it is likely to be the cause. Please don't file a bug report unless you can reproduce it after reinstalling Homebrew from scratch without using sudo.

So when using Homebrew, if you want support from the Homebrew community, install this way:

brew install saltstack

When using MacPorts, install this way:

sudo port install salt

When only using the OS X system's pip, install this way:

sudo pip install salt
Salt-Master Customizations

To run salt-master on OS X, the root user maxfiles limit must be increased:

Note

On OS X 10.10 (Yosemite) and higher, maxfiles should not be adjusted. The default limits are sufficient in all but the most extreme scenarios. Overriding these values with the setting below will cause system instability!

sudo launchctl limit maxfiles 4096 8192

Then, using sudo, add this configuration option to the /etc/salt/master file:

max_open_files: 8192

Now the salt-master should run without errors:

sudo salt-master --log-level=all
Post-installation tasks

Now go to the Configuring Salt page.

RHEL / CentOS / Scientific Linux / Amazon Linux / Oracle Linux

Installation Using pip

Since Salt is on PyPI, it can be installed using pip, though most users prefer to install using RPMs (which can be installed from EPEL). Installation from pip is easy:

pip install salt

Warning

If installing from pip (or from source using setup.py install), be advised that the yum-utils package is needed for Salt to manage packages. Also, if the Python dependencies are not already installed, then you will need additional libraries/tools installed to build some of them. More information on this can be found here.

Installation from Repository
RHEL/CentOS 5

Due to the removal of some of Salt's dependencies from EPEL5, we have created a repository on Fedora COPR. Moving forward, this will be the official means of installing Salt on RHEL5-based systems. Information on how to enable this repository can be found here.

RHEL/CentOS 6 and 7, Scientific Linux, etc.

Beginning with version 0.9.4, Salt has been available in EPEL. It is installable using yum. Salt should work properly with all mainstream derivatives of RHEL, including CentOS, Scientific Linux, Oracle Linux and Amazon Linux. Report any bugs or issues on the issue tracker.

On RHEL6, the proper Jinja package 'python-jinja2' was moved from EPEL to the "RHEL Server Optional Channel". Verify this repository is enabled before installing salt on RHEL6.

Enabling EPEL

If the EPEL repository is not installed on your system, you can download the RPM from here for RHEL/CentOS 6 (or here for RHEL/CentOS 7) and install it using the following command:

rpm -Uvh epel-release-X-Y.rpm

Replace epel-release-X-Y.rpm with the appropriate filename.

Installing Stable Release

Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions.

On the salt-master, run this:

yum install salt-master

On each salt-minion, run this:

yum install salt-minion
Installing from epel-testing

When a new Salt release is packaged, it is first admitted into the epel-testing repository, before being moved to the stable repo.

To install from epel-testing, use the enablerepo argument for yum:

yum --enablerepo=epel-testing install salt-minion
ZeroMQ 4

We recommend using ZeroMQ 4 where available. SaltStack provides ZeroMQ 4.0.4 and pyzmq 14.3.1 in a COPR repository. Instructions for adding this repository (as well as for upgrading ZeroMQ and pyzmq on existing minions) can be found here.

If this repo is added before Salt is installed, then installing either salt-master or salt-minion will automatically pull in ZeroMQ 4.0.4, and additional states to upgrade ZeroMQ and pyzmq are unnecessary.

Warning

RHEL/CentOS 5 users: Using COPR repos on RHEL/CentOS 5 requires that the python-hashlib package be installed. Not having it present will result in checksum errors because YUM will not be able to process the SHA256 checksums used by COPR.

Note

For RHEL/CentOS 5 installations, if using the new repository to install Salt (as detailed above), then it is not necessary to enable the zeromq4 COPR, as the new EL5 repository includes ZeroMQ 4.

Package Management

Salt's interface to yum makes heavy use of the repoquery utility, from the yum-utils package. This package will be installed as a dependency if salt is installed via EPEL. However, if salt has been installed using pip, or a host is being managed using salt-ssh, then as of version 2014.7.0 yum-utils will be installed automatically to satisfy this dependency.

Post-installation tasks

Master

To have the Master start automatically at boot time:

chkconfig salt-master on

To start the Master:

service salt-master start

Minion

To have the Minion start automatically at boot time:

chkconfig salt-minion on

To start the Minion:

service salt-minion start

Now go to the Configuring Salt page.

Solaris

Salt was added to the OpenCSW package repository in September of 2012 by Romeo Theriault <romeot@hawaii.edu> at version 0.10.2 of Salt. It has mainly been tested on Solaris 10 (sparc), though it is built for and has been tested minimally on Solaris 10 (x86), Solaris 9 (sparc/x86) and 11 (sparc/x86). (Please let me know if you're using it on these platforms!) Most of the testing has also just focused on the minion, though it has been verified that the master starts up successfully on Solaris 10.

Comments and patches for better support on these platforms are very welcome.

As of version 0.10.4, Solaris is well supported under salt, with all of the following working well:

  1. remote execution
  2. grain detection
  3. service control with SMF
  4. 'pkg' states with 'pkgadd' and 'pkgutil' modules
  5. cron modules/states
  6. user and group modules/states
  7. shadow password management modules/states

Salt is dependent on the following additional packages. These will automatically be installed as dependencies of the py_salt package:

  • py_yaml
  • py_pyzmq
  • py_jinja2
  • py_msgpack_python
  • py_m2crypto
  • py_crypto
  • python
Installation

To install Salt from the OpenCSW package repository you first need to install pkgutil assuming you don't already have it installed:

On Solaris 10:

pkgadd -d http://get.opencsw.org/now

On Solaris 9:

wget http://mirror.opencsw.org/opencsw/pkgutil.pkg
pkgadd -d pkgutil.pkg all

Once pkgutil is installed you'll need to edit its config file /etc/opt/csw/pkgutil.conf to point it at the unstable catalog:

- #mirror=http://mirror.opencsw.org/opencsw/testing
+ mirror=http://mirror.opencsw.org/opencsw/unstable

OK, time to install salt.

# Update the catalog
root> /opt/csw/bin/pkgutil -U
# Install salt
root> /opt/csw/bin/pkgutil -i -y py_salt
Minion Configuration

Now that salt is installed you can find its configuration files in /etc/opt/csw/salt/.

You'll want to edit the minion config file to set the name of your salt master server:

- #master: salt
+ master: your-salt-server

If you would like to use pkgutil as the default package provider for your Solaris minions, you can do so using the providers option in the minion config file.
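
A minimal sketch of that minion config change, assuming the provider name pkgutil matches the module mentioned above:

providers:
  pkg: pkgutil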

You can now start the salt minion like so:

On Solaris 10:

svcadm enable salt-minion

On Solaris 9:

/etc/init.d/salt-minion start

You should now be able to log onto the salt master and check to see if the salt-minion key is awaiting acceptance:

salt-key -l un

Accept the key:

salt-key -a <your-salt-minion>

Run a simple test against the minion:

salt '<your-salt-minion>' test.ping
Troubleshooting

Logs are in /var/log/salt

Ubuntu Installation

Add repository

The latest packages for Ubuntu are published in the saltstack PPA. If you have the add-apt-repository utility, you can add the repository and import the key in one step:

sudo add-apt-repository ppa:saltstack/salt

In addition to the main repository, there are secondary repositories for each individual major release. These repositories receive security and point releases but will not upgrade to any subsequent major release. There are currently four available repos: salt16, salt17, salt2014-1, salt2014-7. For example, to follow 2014.7.x releases:

sudo add-apt-repository ppa:saltstack/salt2014-7

add-apt-repository: command not found?

The add-apt-repository command is not always present on Ubuntu systems. This can be fixed by installing python-software-properties:

sudo apt-get install python-software-properties

The following may be required as well:

sudo apt-get install software-properties-common

Note that since Ubuntu 12.10 (Quantal Quetzal), add-apt-repository is found in the software-properties-common package, and is part of the base install. Thus, add-apt-repository should be able to be used out-of-the-box to add the PPA.

Alternately, manually add the repository and import the PPA key with these commands:

echo deb http://ppa.launchpad.net/saltstack/salt/ubuntu `lsb_release -sc` main | sudo tee /etc/apt/sources.list.d/saltstack.list
wget -q -O- "http://keyserver.ubuntu.com:11371/pks/lookup?op=get&search=0x4759FA960E27C0A6" | sudo apt-key add -

After adding the repository, update the package management database:

sudo apt-get update
Install packages

Install the Salt master, minion, or syndic from the repository with the apt-get command. These examples each install one daemon, but more than one package name may be given at a time:

sudo apt-get install salt-master
sudo apt-get install salt-minion
sudo apt-get install salt-syndic

Some core components are packaged separately in the Ubuntu repositories. These should be installed as well: salt-cloud, salt-ssh, salt-api

sudo apt-get install salt-cloud
sudo apt-get install salt-ssh
sudo apt-get install salt-api
ZeroMQ 4

ZeroMQ 4 is available by default for Ubuntu 14.04 and newer. However, for Ubuntu 12.04 LTS, starting with Salt version 2014.7.5, ZeroMQ 4 is included with the Salt installation package and nothing additional needs to be done.

Post-installation tasks

Now go to the Configuring Salt page.

Windows

Salt has full support for running the Salt Minion on Windows.

There are no plans for the foreseeable future to develop a Salt Master on Windows. For now you must run your Salt Master on a supported operating system to control your Salt Minions on Windows.

Many of the standard Salt modules have been ported to work on Windows and many of the Salt States currently work on Windows, as well.

Windows Installer

Salt Minion Windows installers can be found here. The output of md5sum <salt minion exe> should match the contents of the corresponding md5 file.

Note

The 2014.7.0 installers have been removed because of a regression. Please use the 2014.7.1 release instead.

Note

The executables above will install dependencies that the Salt minion requires.

The 64bit installer has been tested on Windows 7 64bit and Windows Server 2008R2 64bit. The 32bit installer has been tested on Windows 2003 Server 32bit. Please file a bug report on our GitHub repo if issues for other platforms are found.

The installer asks for two pieces of information: the master hostname and the minion name. The installer will update the minion config with these options and then start the minion.
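
After installation, the minion config will contain entries like the following (hypothetical values, matching the silent installer example below):

master: yoursaltmaster
id: yourminionname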

The salt-minion service will appear in the Windows Service Manager and can be started and stopped there or with the command line program sc like any other Windows service.
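
For example, from an elevated command prompt:

sc start salt-minion
sc stop salt-minion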

If the minion won't start, try installing the Microsoft Visual C++ 2008 x64 SP1 redistributable and allowing all Windows updates to install; both help salt-minion run smoothly.

Silent Installer option

The installer can be run silently by providing the /S option at the command line. The options /master and /minion-name allow for configuring the master hostname and minion name, respectively. Here's an example of using the silent installer:

Salt-Minion-0.17.0-Setup-amd64.exe /S /master=yoursaltmaster /minion-name=yourminionname
Setting up a Windows build environment

This document will explain how to set up a development environment for salt on Windows. The development environment allows you to work with the source code to customize or fix bugs. It will also allow you to build your own installation.

The Easy Way
Prerequisite Software

To do this the easy way you only need to install Git for Windows.

Create the Build Environment
  1. Clone the Salt-Windows-Dev repo from github.

    Open a command line and type:

    git clone https://github.com/saltstack/salt-windows-dev
    
  2. Build the Python Environment

    Go into the salt-windows-dev directory. Right-click the file named dev_env.ps1 and select Run with PowerShell

    If you get an error, you may need to change the execution policy.

    Open a powershell window and type the following:

    Set-ExecutionPolicy RemoteSigned
    

    This will download and install Python with all the dependencies needed to develop and build salt.

  3. Build the Salt Environment

    Right-click on the file named dev_env_salt.ps1 and select Run with Powershell

    This will clone salt into C:\Salt-Dev\salt and set it to the 2015.5 branch. You could optionally run the command from a powershell window with a -Version switch to pull a different version. For example:

    dev_env_salt.ps1 -Version '2014.7'
    

    To view a list of available branches and tags, open a command prompt in your C:\Salt-Dev\salt directory and type:

    git branch -a
    git tag -n
    
The Hard Way
Prerequisite Software

Install the following software:

  1. Git for Windows
  2. Nullsoft Installer

Download the Prerequisite zip file for your CPU architecture from the SaltStack download site.

These files contain all software required to build and develop salt. Unzip the contents of the file to C:\Salt-Dev\temp.

Create the Build Environment
  1. Build the Python Environment

    • Install Python:

      Browse to the C:\Salt-Dev\temp directory and find the Python installation file for your CPU Architecture under the corresponding subfolder. Double-click the file to install python.

      Make sure the following are in your PATH environment variable:

      C:\Python27
      C:\Python27\Scripts
      
    • Install Pip

      Open a command prompt and navigate to C:\Salt-Dev\temp. Run the following command:

      python get-pip.py
      
    • Easy Install compiled binaries.

      M2Crypto, PyCrypto, and PyWin32 need to be installed using Easy Install. Open a command prompt and navigate to C:\Salt-Dev\temp\<cpuarch>. Run the following commands:

      easy_install -Z <M2Crypto file name>
      easy_install -Z <PyCrypto file name>
      easy_install -Z <PyWin32 file name>
      

      Note

      You can type the first part of the file name and then press the tab key to auto-complete the name of the file.

    • Pip Install Additional Prerequisites

      All remaining prerequisites need to be pip installed. These prerequisites are as follows:

      • MarkupSafe
      • Jinja
      • MsgPack
      • PSUtil
      • PyYAML
      • PyZMQ
      • WMI
      • Requests
      • Certifi

      Open a command prompt and navigate to C:\Salt-Dev\temp. Run the following commands:

      pip install <cpuarch>\<MarkupSafe file name>
      pip install <Jinja file name>
      pip install <cpuarch>\<MsgPack file name>
      pip install <cpuarch>\<psutil file name>
      pip install <cpuarch>\<PyYAML file name>
      pip install <cpuarch>\<pyzmq file name>
      pip install <WMI file name>
      pip install <requests file name>
      pip install <certifi file name>
      
  2. Build the Salt Environment

    • Clone Salt

      Open a command prompt and navigate to C:\Salt-Dev. Run the following command to clone salt:

      git clone https://github.com/saltstack/salt
      
    • Checkout Branch

      Checkout the branch or tag of salt you want to work on or build. Open a command prompt and navigate to C:\Salt-Dev\salt. Get a list of available tags and branches by running the following commands:

      git fetch --all
      
      To view a list of available branches:
      git branch -a
      
      To view a list of available tags:
      git tag -n
      

      Checkout the branch or tag by typing the following command:

      git checkout <branch/tag name>
      
    • Clean the Environment

      When switching between branches, residual files can be left behind that will interfere with the functionality of salt. Therefore, after you check out the branch you want to work on, type the following commands to clean the salt environment:
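
      (A typical cleanup using git; this assumes there are no uncommitted local changes you want to keep:)

      git clean -fxd
      git reset --hard HEAD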

Developing with Salt

There are two ways to develop with salt. You can run salt's setup.py each time you make a change to source code or you can use the setup tools develop mode.

Configure the Minion

Both methods require that the minion configuration be in the C:\salt directory. Copy the conf and var directories from C:\Salt-Dev\salt\pkg\windows\buildenv to C:\salt. Now go into the C:\salt\conf directory and edit the file named minion (no extension). You need to configure the master and id parameters in this file. Edit the following lines:

master: <ip or name of your master>
id: <name of your minion>
Setup.py Method

Go into the C:\Salt-Dev\salt directory from a cmd prompt and type:

python setup.py install --force

This will install salt into your python installation at C:\Python27. Every time you make an edit to your source code, you'll have to stop the minion, run the setup, and start the minion.

To start the salt-minion go into C:\Python27\Scripts from a cmd prompt and type:

salt-minion

For debug mode type:

salt-minion -l debug

To stop the minion press Ctrl+C.

Setup Tools Develop Mode (Preferred Method)

To use the Setup Tools Develop Mode go into C:\Salt-Dev\salt from a cmd prompt and type:

pip install -e .

This will install pointers to your source code that resides at C:\Salt-Dev\salt. When you edit your source code you only have to restart the minion.

Build the windows installer

This is the method of building the installer as of version 2014.7.4.

Clean the Environment

Make sure you don't have any leftover salt files from previous versions of salt in your Python directory.

  1. Remove all files that start with salt in the C:\Python27\Scripts directory
  2. Remove all files and directories that start with salt in the C:\Python27\Lib\site-packages directory
Install Salt

Install salt using salt's setup.py. From the C:\Salt-Dev\salt directory type the following command:

python setup.py install --force
Build the Installer

From cmd prompt go into the C:\Salt-Dev\salt\pkg\windows directory. Type the following command for the branch or tag of salt you're building:

BuildSalt.bat <branch or tag>

This will copy python with salt installed to the buildenv\bin directory, make it portable, and then create the windows installer. The .exe for the windows installer will be placed in the installer directory.

Testing the Salt minion
  1. Create the directory C:\salt (if it doesn't exist already)

  2. Copy the example conf and var directories from pkg/windows/buildenv/ into C:\salt

  3. Edit C:\salt\conf\minion

    master: ipaddress or hostname of your salt-master
    
  4. Start the salt-minion

    cd C:\Python27\Scripts
    python salt-minion
    
  5. On the salt-master accept the new minion's key

    sudo salt-key -A
    

    This accepts all unaccepted keys. If you're concerned about security just accept the key for this specific minion.
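
    To accept only the key for this specific minion, use salt-key with the -a option as in the Solaris section above:

    sudo salt-key -a <your-salt-minion>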

  6. Test that your minion is responding

    On the salt-master run:

    sudo salt '*' test.ping
    

You should get the following response: {'your minion hostname': True}

Single command bootstrap script

On a 64 bit Windows host the following script makes an unattended install of salt, including all dependencies:

Not up to date.

This script is not up to date. Please use the installer found above

# (All in one line.)

"PowerShell (New-Object System.Net.WebClient).DownloadFile('http://csa-net.dk/salt/bootstrap64.bat','C:\bootstrap.bat');(New-Object -com Shell.Application).ShellExecute('C:\bootstrap.bat');"

You can execute the above command remotely from a Linux host using winexe:

winexe -U "administrator" //fqdn "PowerShell (New-Object ......);"

For more info check http://csa-net.dk/salt

Package management under Windows 2003

On Windows Server 2003, you need to install the optional component "WMI Windows Installer Provider" to get a full list of installed packages. Without it, salt-minion can't report some installed software.

SUSE Installation

Since openSUSE 13.1, Salt 0.16.4 has been available in the primary repositories. The devel:languages:python repo will have more up-to-date versions of Salt; all package development will be done there.

Installation

Salt can be installed using zypper and is available in the standard openSUSE 13.1 repositories.

Stable Release

Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions.

zypper install salt-master
zypper install salt-minion
Post-installation tasks openSUSE

Master

To have the Master start automatically at boot time:

systemctl enable salt-master.service

To start the Master:

systemctl start salt-master.service

Minion

To have the Minion start automatically at boot time:

systemctl enable salt-minion.service

To start the Minion:

systemctl start salt-minion.service
Post-installation tasks SLES

Master

To have the Master start automatically at boot time:

chkconfig salt-master on

To start the Master:

rcsalt-master start

Minion

To have the Minion start automatically at boot time:

chkconfig salt-minion on

To start the Minion:

rcsalt-minion start
Unstable Release
openSUSE

For openSUSE Factory run the following as root:

zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_Factory/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master

For openSUSE 13.1 run the following as root:

zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_13.1/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master

For openSUSE 12.3 run the following as root:

zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_12.3/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master

For openSUSE 12.2 run the following as root:

zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_12.2/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master

For openSUSE 12.1 run the following as root:

zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_12.1/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master

For bleeding edge python Factory run the following as root:

zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/bleeding_edge_python_Factory/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
SUSE Linux Enterprise

For SLE 12 run the following as root:

zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_12/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master

For SLE 11 SP3 run the following as root:

zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_11_SP3/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master

For SLE 11 SP2 run the following as root:

zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_11_SP2/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master

Now go to the Configuring Salt page.

Dependencies

Salt should run on any Unix-like platform so long as the dependencies are met.

  • Python >= 2.6, < 3.0
  • msgpack-python - High-performance message interchange format
  • YAML - Python YAML bindings
  • Jinja2 - parsing Salt States (configurable in the master settings)
  • MarkupSafe - Implements a XML/HTML/XHTML Markup safe string for Python
  • apache-libcloud - Python lib for interacting with many of the popular cloud service providers using a unified API
  • Requests - HTTP library

Depending on the chosen Salt transport, ZeroMQ or RAET, dependencies vary:

  • ZeroMQ:
    • ZeroMQ >= 3.2.0
    • pyzmq >= 2.2.0 - ZeroMQ Python bindings
    • PyCrypto - The Python cryptography toolkit
    • M2Crypto - "Me Too Crypto" - Python OpenSSL wrapper
  • RAET:
    • libnacl - Python bindings to libsodium
    • ioflo - The flo programming interface raet and salt-raet is built on
    • RAET - The world's most awesome UDP protocol

Salt defaults to the ZeroMQ transport, and the choice can be made at install time, for example:

python setup.py --salt-transport=raet install

This way, only the required dependencies are pulled by the setup script if need be.

If installing using pip, the --salt-transport install option can be provided like:

pip install --install-option="--salt-transport=raet" salt

Optional Dependencies

  • mako - an optional parser for Salt States (configurable in the master settings)
  • gcc - dynamic Cython module compiling

Upgrading Salt

When upgrading Salt, the master(s) should always be upgraded first. Backward compatibility for minions running newer versions of salt than their masters is not guaranteed.

Whenever possible, backward compatibility between new masters and old minions will be preserved. Generally, the only exception to this policy is in case of a security vulnerability.

Tutorials

Introduction

Salt Masterless Quickstart

Running a masterless salt-minion lets you use Salt's configuration management for a single machine without calling out to a Salt master on another machine.

Since the Salt minion contains such extensive functionality it can be useful to run it standalone. A standalone minion can be used to do a number of things:

  • Stand up a master server via States (Salting a Salt Master)
  • Use salt-call commands on a system without connectivity to a master
  • Masterless States, run states entirely from files local to the minion

It is also useful for testing out state trees before deploying to a production setup.

Bootstrap Salt Minion

The salt-bootstrap script makes bootstrapping a server with Salt simple for any OS with a Bourne shell:

curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh

See the salt-bootstrap documentation for other one-liners. When using Vagrant to test out salt, the Vagrant salt provisioner will provision the VM for you.

Telling Salt to Run Masterless

To instruct the minion to not look for a master, the file_client configuration option needs to be set in the minion configuration file. By default the file_client is set to remote so that the minion gathers file server and pillar data from the salt master. When setting the file_client option to local the minion is configured to not gather this data from the master.

file_client: local

Now the salt minion will not look for a master and will assume that the local system has all of the file and pillar resources.

Note

When running Salt in masterless mode, do not run the salt-minion daemon. Otherwise, it will attempt to connect to a master and fail. The salt-call command stands on its own and does not need the salt-minion daemon.

Create State Tree

Following the successful installation of a salt-minion, the next step is to create a state tree, which is where the SLS files that comprise the possible states of the minion are stored.

The following example walks through the steps necessary to create a state tree that ensures that the server has the Apache webserver installed.

Note

For a complete explanation on Salt States, see the tutorial.

  1. Create the top.sls file:

/srv/salt/top.sls:

base:
  '*':
    - webserver
  2. Create the webserver state tree:

/srv/salt/webserver.sls:

apache:               # ID declaration
  pkg:                # state declaration
    - installed       # function declaration

Note

The apache package has different names on different platforms, for instance on Debian/Ubuntu it is apache2, on Fedora/RHEL it is httpd and on Arch it is apache

The only thing left is to provision our minion using salt-call and the highstate command.

Salt-call

The salt-call command is used to run module functions locally on a minion instead of executing them from the master. Normally the salt-call command checks into the master to retrieve file server and pillar data, but when running standalone salt-call needs to be instructed to not check the master for this data:

salt-call --local state.highstate

The --local flag tells the salt-minion to look for the state tree in the local file system and not to contact a Salt Master for instructions.

To provide verbose output, use -l debug:

salt-call --local state.highstate -l debug

The minion first examines the top.sls file and determines that it is a part of the group matched by the * glob and that the webserver SLS should be applied.

It then examines the webserver.sls file and finds the apache state, which installs the Apache package.

The minion should now have Apache installed, and the next step is to begin learning how to write more complex states.

Basics

Standalone Minion

Since the Salt minion contains such extensive functionality it can be useful to run it standalone. A standalone minion can be used to do a number of things:

  • Use salt-call commands on a system without connectivity to a master
  • Masterless States, run states entirely from files local to the minion

Note

When running Salt in masterless mode, do not run the salt-minion daemon. Otherwise, it will attempt to connect to a master and fail. The salt-call command stands on its own and does not need the salt-minion daemon.

Telling Salt Call to Run Masterless

The salt-call command is used to run module functions locally on a minion instead of executing them from the master. Normally the salt-call command checks into the master to retrieve file server and pillar data, but when running standalone salt-call needs to be instructed to not check the master for this data. To instruct the minion to not look for a master when running salt-call the file_client configuration option needs to be set. By default the file_client is set to remote so that the minion knows that file server and pillar data are to be gathered from the master. When setting the file_client option to local the minion is configured to not gather this data from the master.

file_client: local

Now the salt-call command will not look for a master and will assume that the local system has all of the file and pillar resources.

Running States Masterless

The state system can be easily run without a Salt master, with all needed files local to the minion. To do this the minion configuration file needs to be set up to know how to return file_roots information like the master. The file_roots setting defaults to /srv/salt for the base environment just like on the master:

file_roots:
  base:
    - /srv/salt

Now set up the Salt State Tree, top file, and SLS modules in the same way that they would be set up on a master. With the file_client option set to local and a state tree in place, calls to functions in the state module will use the information in the file_roots on the minion instead of checking in with the master.

Remember that when creating a state tree on a minion there are no syntax or path changes needed, SLS modules written to be used from a master do not need to be modified in any way to work with a minion.

This makes it easy to "script" deployments with Salt states without having to set up a master, and allows for these SLS modules to be easily moved into a Salt master as the deployment grows.

The declared state can now be executed with:

salt-call state.highstate

Or the salt-call command can be executed with the --local flag, this makes it unnecessary to change the configuration file:

salt-call state.highstate --local

Opening the Firewall up for Salt

The Salt master communicates with the minions using an AES-encrypted ZeroMQ connection. These communications are done over TCP ports 4505 and 4506, which need to be accessible on the master only. This document outlines suggested firewall rules for allowing these incoming connections to the master.

Note

No firewall configuration needs to be done on Salt minions. These changes refer to the master only.

Fedora 18 and beyond / RHEL 7 / CentOS 7

Starting with Fedora 18, FirewallD is the tool that is used to dynamically manage the firewall rules on a host. It has support for IPv4/IPv6 settings and the separation of runtime and permanent configurations. To interact with FirewallD, use the command line client firewall-cmd.

firewall-cmd example:

firewall-cmd --permanent --zone=<zone> --add-port=4505-4506/tcp

Please choose the desired zone according to your setup. Don't forget to reload the firewall after you make your changes.

firewall-cmd --reload
RHEL 6 / CentOS 6

The lokkit command packaged with some Linux distributions makes opening iptables firewall ports very simple via the command line. Just be careful to not lock out access to the server by neglecting to open the ssh port.

lokkit example:

lokkit -p 22:tcp -p 4505:tcp -p 4506:tcp

The system-config-firewall-tui command provides a text-based interface to modifying the firewall.

system-config-firewall-tui:

system-config-firewall-tui
openSUSE

Salt installs firewall rules in /etc/sysconfig/SuSEfirewall2.d/services/salt. Enable with:

SuSEfirewall2 open
SuSEfirewall2 start

If you have an older package of Salt where the above configuration file is not included, the SuSEfirewall2 command makes opening iptables firewall ports very simple via the command line.

SuSEfirewall example:

SuSEfirewall2 open EXT TCP 4505
SuSEfirewall2 open EXT TCP 4506

The firewall module in YaST2 provides a text-based interface to modifying the firewall.

YaST2:

yast2 firewall
iptables

Different Linux distributions store their iptables (also known as netfilter) rules in different places, which makes it difficult to standardize firewall documentation. Included are some of the more common locations, but your mileage may vary.

Fedora / RHEL / CentOS:

/etc/sysconfig/iptables

Arch Linux:

/etc/iptables/iptables.rules

Debian

Follow these instructions: https://wiki.debian.org/iptables

Once you've found your firewall rules, you'll need to add the two lines below to allow traffic on tcp/4505 and tcp/4506:

-A INPUT -m state --state new -m tcp -p tcp --dport 4505 -j ACCEPT
-A INPUT -m state --state new -m tcp -p tcp --dport 4506 -j ACCEPT

Ubuntu

Salt installs firewall rules in /etc/ufw/applications.d/salt.ufw. Enable with:

ufw allow salt
pf.conf

The BSD family of operating systems uses packet filter (pf). The following example describes the additions to pf.conf needed to access the Salt master.

pass in on $int_if proto tcp from any to $int_if port 4505
pass in on $int_if proto tcp from any to $int_if port 4506

Once these additions have been made to the pf.conf the rules will need to be reloaded. This can be done using the pfctl command.

pfctl -vf /etc/pf.conf

Whitelist communication to Master

There are situations where you want to selectively allow Minion traffic from specific hosts or networks into your Salt Master. The first scenario which comes to mind is to prevent unwanted traffic to your Master out of security concerns, but another scenario is to handle Minion upgrades when there are backwards incompatible changes between the installed Salt versions in your environment.

Here is an example Linux iptables ruleset to be set on the Master:

# Allow Minions from these networks
-I INPUT -s 10.1.2.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT
-I INPUT -s 10.1.3.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT
# Allow Salt to communicate with Master on the loopback interface
-A INPUT -i lo -p tcp -m multiport --dports 4505,4506 -j ACCEPT
# Reject everything else
-A INPUT -p tcp -m multiport --dports 4505,4506 -j REJECT

Note

The important thing to note here is that the salt command needs to communicate with the listening network socket of salt-master on the loopback interface. Without this you will see no outgoing Salt traffic from the master, even for a simple salt '*' test.ping, because the salt client never reached the salt-master to tell it to carry out the execution.

Using cron with Salt

The Salt Minion can initiate its own highstate using the salt-call command.

$ salt-call state.highstate

This will cause the minion to check in with the master and ensure it is in the correct 'state'.

Use cron to initiate a highstate

If you would like the Salt Minion to regularly check in with the master you can use the venerable cron to run the salt-call command.

# PATH=/bin:/sbin:/usr/bin:/usr/sbin

00 00 * * * salt-call state.highstate

The above cron entry will run a highstate every day at midnight.

Note

Be aware that you may need to ensure the PATH for cron includes any scripts or commands that need to be executed.

Remote execution tutorial

Before continuing make sure you have a working Salt installation by following the installation and the configuration instructions.

Stuck?

There are many ways to get help from the Salt community including our mailing list and our IRC channel #salt.

Order your minions around

Now that you have a master and at least one minion communicating with each other, you can perform commands on the minion via the salt command. Salt calls are composed of three main components:

salt '<target>' <function> [arguments]

See also

salt manpage

target

The target component allows you to filter which minions should run the following function. The default filter is a glob on the minion id. For example:

salt '*' test.ping
salt '*.example.org' test.ping

Targets can be based on minion system information using the Grains system:

salt -G 'os:Ubuntu' test.ping

See also

Grains system

Targets can be filtered by regular expression:

salt -E 'virtmach[0-9]' test.ping

Targets can be explicitly specified in a list:

salt -L 'foo,bar,baz,quo' test.ping

Or multiple target types can be combined in one command:

salt -C 'G@os:Ubuntu and webser* or E@database.*' test.ping
function

A function is some functionality provided by a module. Salt ships with a large collection of available functions. List all available functions on your minions:

salt '*' sys.doc

Here are some examples:

Show all currently available minions:

salt '*' test.ping

Run an arbitrary shell command:

salt '*' cmd.run 'uname -a'
arguments

Space-delimited arguments to the function:

salt '*' cmd.exec_code python 'import sys; print sys.version'

Optional keyword arguments are also supported:

salt '*' pip.install salt timeout=5 upgrade=True

They are always in the form of kwarg=argument.

Pillar Walkthrough

Note

This walkthrough assumes that the reader has already completed the initial Salt walkthrough.

Pillars are tree-like structures of data defined on the Salt Master and passed through to minions. They allow confidential, targeted data to be securely sent only to the relevant minion.

Note

Grains and Pillar are sometimes confused, just remember that Grains are data about a minion which is stored or generated from the minion. This is why information like the OS and CPU type are found in Grains. Pillar is information about a minion or many minions stored or generated on the Salt Master.

Pillar data is useful for:

Highly Sensitive Data:
Information transferred via pillar is guaranteed to only be presented to the minions that are targeted, making Pillar suitable for managing security information, such as cryptographic keys and passwords.
Minion Configuration:
Minion modules such as the execution modules, states, and returners can often be configured via data stored in pillar.
Variables:
Variables which need to be assigned to specific minions or groups of minions can be defined in pillar and then accessed inside sls formulas and template files.
Arbitrary Data:
Pillar can contain any basic data structure, so a list of values or a key/value store can be defined, making it easy to iterate over a group of values in sls formulas.

Pillar is therefore one of the most important systems when using Salt. This walkthrough is designed to get a simple Pillar up and running in a few minutes and then to dive into the capabilities of Pillar and where the data is available.

Setting Up Pillar

The pillar is already running in Salt by default. To see the minion's pillar data:

salt '*' pillar.items

Note

Prior to version 0.16.2, this function is named pillar.data. This function name is still supported for backwards compatibility.

By default the contents of the master configuration file are loaded into pillar for all minions. This enables the master configuration file to be used for global configuration of minions.
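
For example, the loaded master options can then be inspected from pillar (assuming the default configuration, where they appear under the master key):

salt '*' pillar.get master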

Similar to the state tree, the pillar is comprised of sls files and has a top file. The default location for the pillar is in /srv/pillar.

Note

The pillar location can be configured via the pillar_roots option inside the master configuration file. It must not be in a subdirectory of the state tree.

To start setting up the pillar, the /srv/pillar directory needs to be present:

mkdir /srv/pillar

Now create a simple top file, following the same format as the top file used for states:

/srv/pillar/top.sls:

base:
  '*':
    - data

This top file associates the data.sls file to all minions. Now the /srv/pillar/data.sls file needs to be populated:

/srv/pillar/data.sls:

info: some data

To ensure that the minions have the new pillar data, issue a command to them asking that they fetch their pillars from the master:

salt '*' saltutil.refresh_pillar

Now that the minions have the new pillar, it can be retrieved:

salt '*' pillar.items

The key info should now appear in the returned pillar data.

More Complex Data

Unlike states, pillar files do not need to define formulas. This example sets up user data with a UID:

/srv/pillar/users/init.sls:

users:
  thatch: 1000
  shouse: 1001
  utahdave: 1002
  redbeard: 1003

Note

The same directory lookups that exist in states exist in pillar, so the file users/init.sls can be referenced with users in the top file.

The top file will need to be updated to include this sls file:

/srv/pillar/top.sls:

base:
  '*':
    - data
    - users

Now the data will be available to the minions. To use the pillar data in a state, you can use Jinja:

/srv/salt/users/init.sls:

{% for user, uid in pillar.get('users', {}).items() %}
{{user}}:
  user.present:
    - uid: {{uid}}
{% endfor %}

This approach allows for users to be safely defined in a pillar and then the user data is applied in an sls file.

Parameterizing States With Pillar

Pillar data can be accessed in state files to customise behavior for each minion. All pillar (and grain) data applicable to each minion is substituted into the state files through templating before being run. Typical uses include setting directories appropriate for the minion and skipping states that don't apply.

A simple example is to set up a mapping of package names in pillar for separate Linux distributions:

/srv/pillar/pkg/init.sls:

pkgs:
  {% if grains['os_family'] == 'RedHat' %}
  apache: httpd
  vim: vim-enhanced
  {% elif grains['os_family'] == 'Debian' %}
  apache: apache2
  vim: vim
  {% elif grains['os'] == 'Arch' %}
  apache: apache
  vim: vim
  {% endif %}

The new pkg sls needs to be added to the top file:

/srv/pillar/top.sls:

base:
  '*':
    - data
    - users
    - pkg

Now each minion's pillar will automatically map the package names for its operating system, so sls files can be safely parameterized:

/srv/salt/apache/init.sls:

apache:
  pkg.installed:
    - name: {{ pillar['pkgs']['apache'] }}

Or, if the pillar value is not available, a default can be set as well:

Note

The function pillar.get used in this example was added to Salt in version 0.14.0.

/srv/salt/apache/init.sls:

apache:
  pkg.installed:
    - name: {{ salt['pillar.get']('pkgs:apache', 'httpd') }}

In the above example, if the pillar value pillar['pkgs']['apache'] is not set in the minion's pillar, then the default of httpd will be used.

Note

Under the hood, pillar is just a Python dict, so Python dict methods such as get and items can be used.

Pillar Makes Simple States Grow Easily

One of the design goals of pillar is to make simple sls formulas easily grow into more flexible formulas without refactoring or complicating the states.

A simple formula:

/srv/salt/edit/vim.sls:

vim:
  pkg.installed: []

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
    - mode: 644
    - user: root
    - group: root
    - require:
      - pkg: vim

Can be easily transformed into a powerful, parameterized formula:

/srv/salt/edit/vim.sls:

vim:
  pkg.installed:
    - name: {{ pillar['pkgs']['vim'] }}

/etc/vimrc:
  file.managed:
    - source: {{ pillar['vimrc'] }}
    - mode: 644
    - user: root
    - group: root
    - require:
      - pkg: vim

Where the vimrc source location can now be changed via pillar:

/srv/pillar/edit/vim.sls:

{% if grains['id'].startswith('dev') %}
vimrc: salt://edit/dev_vimrc
{% elif grains['id'].startswith('qa') %}
vimrc: salt://edit/qa_vimrc
{% else %}
vimrc: salt://edit/vimrc
{% endif %}

This ensures that the right vimrc is sent out to the correct minions.

Setting Pillar Data on the Command Line

Pillar data can be set on the command line like so:

salt '*' state.highstate pillar='{"foo": "bar"}'

The state.sls command can also be used to set pillar values via the command line:

salt '*' state.sls my_sls_file pillar='{"hello": "world"}'

Lists can be passed in pillar as well:

salt '*' state.highstate pillar='["foo", "bar", "baz"]'

Note

If a key is passed on the command line that already exists on the minion, the key that is passed in will overwrite the entire value of that key, rather than merging only the specified value set via the command line.
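
For example, given the users dictionary defined earlier, passing a users key on the command line replaces the whole dictionary for that run instead of merging into it (hypothetical value):

salt '*' state.highstate pillar='{"users": {"deploy": 2000}}'

During this run pillar['users'] contains only the deploy entry; thatch, shouse, utahdave, and redbeard are not merged in.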

More On Pillar

Pillar data is generated on the Salt master and securely distributed to minions. Salt is not restricted to the pillar sls files when defining the pillar but can retrieve data from external sources. This can be useful when information about an infrastructure is stored in a separate location.

Reference information on pillar and the external pillar interface can be found in the Salt documentation:

Pillar

Minion Config in Pillar

Minion configuration options can be set in pillar. Any option that you want to modify should be at the first level of the pillar, in the same way the option is set in the minion config file. For example, to configure the MySQL root password used by the MySQL Salt execution module:

mysql.pass: hardtoguesspassword

This is very convenient when you need a dynamic configuration change that you want applied on the fly. For example, there is a chicken-and-egg problem if you do this:

mysql-admin-passwd:
  mysql_user.present:
    - name: root
    - password: somepasswd

mydb:
  mysql_db.present

The second state will fail, because the root password was changed and the minion's MySQL module did not notice it. Setting mysql.pass in the pillar will help sort out the issue, but the root admin password should still be changed in the first place.
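
A minimal sketch of the workaround, assuming the pillar file is assigned to the minion in the pillar top file (the file name is hypothetical):

/srv/pillar/mysql.sls:

mysql.pass: somepasswd

With this in place, the MySQL execution module connects using the same password that the first state sets.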

This is very helpful for any module that needs credentials to apply state changes: mysql, keystone, etc.

States

How Do I Use Salt States?

Simplicity, Simplicity, Simplicity

Many of the most powerful and useful engineering solutions are founded on simple principles. Salt States strive to do just that: K.I.S.S. (Keep It Stupidly Simple)

The core of the Salt State system is the SLS, or SaLt State file. The SLS is a representation of the state a system should be in, and is set up to contain this data in a simple format. This is often called configuration management.

Note

This is just the beginning of using states; make sure to read up on Pillar next.

It is All Just Data

Before delving into the particulars, it will help to understand that the SLS file is just a data structure under the hood. While understanding that the SLS is just a data structure isn't critical for understanding and making use of Salt States, it should help bolster knowledge of where the real power is.

SLS files are therefore, in reality, just dictionaries, lists, strings, and numbers. By using this approach Salt can be much more flexible. As one writes more state files, it becomes clearer exactly what is being written. The result is a system that is easy to understand, yet grows with the needs of the admin or developer.

The Top File

The example SLS files in the below sections can be assigned to hosts using a file called top.sls. This file is described in-depth here.

Default Data - YAML

By default Salt represents the SLS data in what is one of the simplest serialization formats available - YAML.

A typical SLS file will often look like this in YAML:

Note

These demos use some generic service and package names; different distributions often use different names for packages and services. For instance, apache should be replaced with httpd on a Red Hat system. Salt uses the name of the init script, systemd unit, upstart job, etc., based on the underlying service management system for the platform. To get a list of the available service names on a platform, execute the service.get_all Salt function.

Information on how to make states work with multiple distributions is later in the tutorial.

apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache

This SLS data will ensure that the package named apache is installed, and that the apache service is running. The components can be explained in a simple way.

The first line is the ID for a set of data, and it is called the ID Declaration. This ID sets the name of the thing that needs to be manipulated.

The second and third lines contain the state module function to be run, in the format <state_module>.<function>. The pkg.installed state module function ensures that a software package is installed via the system's native package manager. The service.running state module function ensures that a given system daemon is running.

Finally, on line four, is the word require. This is called a Requisite Statement, and it makes sure that the Apache service is only started after a successful installation of the apache package.

Adding Configs and Users

When setting up a service like an Apache web server, many more components may need to be added. The Apache configuration file will most likely be managed, and a user and group may need to be set up.

apache:
  pkg.installed: []
  service.running:
    - watch:
      - pkg: apache
      - file: /etc/httpd/conf/httpd.conf
      - user: apache
  user.present:
    - uid: 87
    - gid: 87
    - home: /var/www/html
    - shell: /bin/nologin
    - require:
      - group: apache
  group.present:
    - gid: 87
    - require:
      - pkg: apache

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/httpd.conf
    - user: root
    - group: root
    - mode: 644

This SLS data greatly extends the first example, and includes a config file, a user, a group, and a new requisite statement: watch.

Adding more states is easy. Since the new user and group states are under the Apache ID, the user and group will be the Apache user and group. The require statements make sure that the user is only made after the group, and that the group is made only after the Apache package is installed.

Next, the require statement under service was changed to watch, and is now watching three states instead of just one. The watch statement does the same thing as require, making sure that the other states run before running the state with a watch, but it adds an extra component: the watch statement will run the state's watcher function for any changes to the watched states. So if the package was updated, the config file changed, or the user's uid was modified, then the service state's watcher will be run. The service state's watcher just restarts the service, so in this case a change in the config file will also trigger a restart of the respective service.

Moving Beyond a Single SLS

When setting up Salt States in a scalable manner, more than one SLS will need to be used. The above examples were in a single SLS file, but two or more SLS files can be combined to build out a State Tree. The above example also references a file with a strange source - salt://apache/httpd.conf. That file will need to be available as well.

The SLS files are laid out in a directory structure on the Salt master; an SLS is just a file and files to download are just files.

The Apache example would be laid out in the root of the Salt file server like this:

apache/init.sls
apache/httpd.conf

So the httpd.conf is just a file in the apache directory, and is referenced directly.

Do not use dots in SLS file names

The initial implementation of top.sls and the Include declaration followed the Python import model, where a slash is represented as a period. This means that an SLS file with a period in the name (besides the suffix period) cannot be referenced. For example, webserver_1.0.sls is not referenceable, because webserver_1.0 would refer to the directory/file webserver_1/0.sls.
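
For example (hypothetical file names):

webserver_1.0.sls    # cannot be referenced; webserver_1.0 resolves to webserver_1/0.sls
webserver_1-0.sls    # fine; referenced as webserver_1-0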

But when using more than one single SLS file, more components can be added to the toolkit. Consider this SSH example:

ssh/init.sls:

openssh-client:
  pkg.installed

/etc/ssh/ssh_config:
  file.managed:
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/ssh_config
    - require:
      - pkg: openssh-client

ssh/server.sls:

include:
  - ssh

openssh-server:
  pkg.installed

sshd:
  service.running:
    - require:
      - pkg: openssh-client
      - pkg: openssh-server
      - file: /etc/ssh/banner
      - file: /etc/ssh/sshd_config

/etc/ssh/sshd_config:
  file.managed:
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/sshd_config
    - require:
      - pkg: openssh-server

/etc/ssh/banner:
  file:
    - managed
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/banner
    - require:
      - pkg: openssh-server

Note

Notice that we use two similar ways of denoting that a file is managed by Salt. In the /etc/ssh/sshd_config state section above, we use the file.managed state declaration whereas with the /etc/ssh/banner state section, we use the file state declaration and add a managed attribute to that state declaration. Both ways produce an identical result; the first way -- using file.managed -- is merely a shortcut.

Now our State Tree looks like this:

apache/init.sls
apache/httpd.conf
ssh/init.sls
ssh/server.sls
ssh/banner
ssh/ssh_config
ssh/sshd_config

This example now introduces the include statement. The include statement includes another SLS file so that components found in it can be required, watched, or, as will soon be demonstrated, extended.

The include statement allows for states to be cross linked. When an SLS has an include statement it is literally extended to include the contents of the included SLS files.

Note that some of the SLS files are called init.sls, while others are not. More info on what this means can be found in the States Tutorial.

Extending Included SLS Data

Sometimes SLS data needs to be extended. Perhaps the apache service needs to watch additional resources, or under certain circumstances a different file needs to be placed.

In these examples, the first will add a custom banner to ssh and the second will add more watchers to apache to include mod_python.

ssh/custom-server.sls:

include:
  - ssh.server

extend:
  /etc/ssh/banner:
    file:
      - source: salt://ssh/custom-banner

python/mod_python.sls:

include:
  - apache

extend:
  apache:
    service:
      - watch:
        - pkg: mod_python

mod_python:
  pkg.installed

The custom-server.sls file uses the extend statement to overwrite where the banner is downloaded from, thereby changing which file is used to configure the banner.

In the new mod_python SLS the mod_python package is added, but more importantly the apache service was extended to also watch the mod_python package.

Using extend with require or watch

The extend statement works differently for require or watch. It appends to, rather than replacing the requisite component.

Understanding the Render System

Since SLS data is simply that (data), it does not need to be represented with YAML. Salt defaults to YAML because it is very straightforward and easy to learn and use. But the SLS files can be rendered from almost any imaginable medium, so long as a renderer module is provided.

The default rendering system is the yaml_jinja renderer. The yaml_jinja renderer will first pass the template through the Jinja2 templating system, and then through the YAML parser. The benefit here is that full programming constructs are available when creating SLS files.

Other renderers available are yaml_mako and yaml_wempy, which use the Mako and Wempy templating systems respectively rather than Jinja, and, more notably, the pure Python py, pydsl, and pyobjects renderers. The py renderer allows for SLS files to be written in pure Python, allowing for the utmost level of flexibility and power when preparing SLS data; the pydsl renderer provides a flexible, domain-specific language for authoring SLS data in Python; and the pyobjects renderer gives you a "Pythonic" interface to building state data.

Note

The templating engines described above aren't just available in SLS files. They can also be used in file.managed states, making file management much more dynamic and flexible. Some examples for using templates in managed files can be found in the documentation for the file states, as well as the MooseFS example below.
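
For instance, a managed file can itself be a Jinja template rendered with the minion's grains (a minimal sketch; the motd file names are hypothetical):

/srv/salt/motd/init.sls:

/etc/motd:
  file.managed:
    - source: salt://motd/motd.tmpl
    - template: jinja

/srv/salt/motd/motd.tmpl:

Welcome to {{ grains['id'] }}, running {{ grains['os'] }}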

Getting to Know the Default - yaml_jinja

The default renderer, yaml_jinja, allows for use of the Jinja templating system. A guide to the Jinja templating system can be found here: http://jinja.pocoo.org/docs

When working with renderers a few very useful bits of data are passed in. In the case of templating engine based renderers, three critical components are available, salt, grains, and pillar. The salt object allows for any Salt function to be called from within the template, and grains allows for the Grains to be accessed from within the template. A few examples:

apache/init.sls:

apache:
  pkg.installed:
    {% if grains['os'] == 'RedHat' %}
    - name: httpd
    {% endif %}
  service.running:
    {% if grains['os'] == 'RedHat' %}
    - name: httpd
    {% endif %}
    - watch:
      - pkg: apache
      - file: /etc/httpd/conf/httpd.conf
      - user: apache
  user.present:
    - uid: 87
    - gid: 87
    - home: /var/www/html
    - shell: /bin/nologin
    - require:
      - group: apache
  group.present:
    - gid: 87
    - require:
      - pkg: apache

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/httpd.conf
    - user: root
    - group: root
    - mode: 644

This example is simple. If the os grain states that the operating system is Red Hat, then the name of the Apache package and service needs to be httpd.

A more aggressive way to use Jinja can be found here, in a module to set up a MooseFS distributed filesystem chunkserver:

moosefs/chunk.sls:

include:
  - moosefs

{% for mnt in salt['cmd.run']('ls /dev/data/moose*').split() %}
/mnt/moose{{ mnt[-1] }}:
  mount.mounted:
    - device: {{ mnt }}
    - fstype: xfs
    - mkmnt: True
  file.directory:
    - user: mfs
    - group: mfs
    - require:
      - user: mfs
      - group: mfs
{% endfor %}

/etc/mfshdd.cfg:
  file.managed:
    - source: salt://moosefs/mfshdd.cfg
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - require:
      - pkg: mfs-chunkserver

/etc/mfschunkserver.cfg:
  file.managed:
    - source: salt://moosefs/mfschunkserver.cfg
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - require:
      - pkg: mfs-chunkserver

mfs-chunkserver:
  pkg.installed: []
mfschunkserver:
  service.running:
    - require:
{% for mnt in salt['cmd.run']('ls /dev/data/moose*').split() %}
      - mount: /mnt/moose{{ mnt[-1] }}
      - file: /mnt/moose{{ mnt[-1] }}
{% endfor %}
      - file: /etc/mfschunkserver.cfg
      - file: /etc/mfshdd.cfg
      - file: /var/lib/mfs

This example shows much more of the available power of Jinja. Multiple for loops are used to dynamically detect available hard drives and set them up to be mounted, and the salt object is used multiple times to call shell commands to gather data.

Introducing the Python, PyDSL, and the Pyobjects Renderers

Sometimes the chosen default renderer might not have enough logical power to accomplish the needed task. When this happens, the Python renderer can be used. Normally a YAML renderer should be used for the majority of SLS files, but an SLS file set to use another renderer can be easily added to the tree.

This example shows a very basic Python SLS file:

python/django.sls:

#!py

def run():
    '''
    Install the django package
    '''
    return {'include': ['python'],
            'django': {'pkg': ['installed']}}

This is a very simple example; the first line has an SLS shebang that tells Salt to use the py renderer instead of the default. Then the run function is defined; the return value from the run function must be a Salt-friendly data structure, better known as a Salt HighState data structure.

Alternatively, using the pydsl renderer, the above example can be written more succinctly as:

#!pydsl

include('python', delayed=True)
state('django').pkg.installed()

The pyobjects renderer provides a "Pythonic", object-based approach to building the state data. The above example could be written as:

#!pyobjects

include('python')
Pkg.installed("django")

These Python examples would look like this if they were written in YAML:

include:
  - python

django:
  pkg.installed

This example clearly illustrates two things: one, using the YAML renderer by default is a wise decision; and two, unbridled power can be obtained where needed by using a pure Python SLS.
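
Because a py SLS is plain Python, ordinary logic can be used to build the returned structure. A minimal sketch, assuming the same grains used earlier in this document:

#!py

def run():
    '''
    Install the correct Apache package for the platform
    '''
    # __grains__ is injected into py renderer modules by Salt
    name = 'httpd' if __grains__['os_family'] == 'RedHat' else 'apache2'
    return {'apache': {'pkg.installed': [{'name': name}]}}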

Running and Debugging Salt States

Once the rules in an SLS are ready, they should be tested to ensure they work properly. To invoke these rules, simply execute salt '*' state.highstate on the command line. If you get back only hostnames followed by a colon, but no return data, chances are there is a problem with one or more of the sls files. On the minion, use the salt-call command to examine the output for errors: salt-call state.highstate -l debug. This should help troubleshoot the issue. The minion can also be started in the foreground in debug mode: salt-minion -l debug.
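
A state run can also be previewed without applying any changes by passing the test flag, which reports what would change:

salt '*' state.highstate test=True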

Next Reading

With an understanding of states, the next recommendation is to become familiar with Salt's pillar interface:

States tutorial, part 1 - Basic Usage

The purpose of this tutorial is to demonstrate how quickly you can configure a system to be managed by Salt States. For detailed information about the state system please refer to the full states reference.

This tutorial will walk you through using Salt to configure a minion to run the Apache HTTP server and to ensure the server is running.

Before continuing make sure you have a working Salt installation by following the installation and the configuration instructions.

Stuck?

There are many ways to get help from the Salt community including our mailing list and our IRC channel #salt.

Setting up the Salt State Tree

States are stored in text files on the master and transferred to the minions on demand via the master's File Server. The collection of state files make up the State Tree.

To start using a central state system in Salt, the Salt File Server must first be set up. Edit the master config file (file_roots) and uncomment the following lines:

file_roots:
  base:
    - /srv/salt

Note

If you are deploying on FreeBSD via ports, the file_roots path defaults to /usr/local/etc/salt/states.

Restart the Salt master in order to pick up this change:

pkill salt-master
salt-master -d

Preparing the Top File

On the master, in the directory uncommented in the previous step, (/srv/salt by default), create a new file called top.sls and add the following:

base:
  '*':
    - webserver

The top file is separated into environments (discussed later). The default environment is base. Under the base environment a collection of minion matches is defined; for now simply specify all hosts (*).

Targeting minions

The expressions can use any of the targeting mechanisms used by Salt — minions can be matched by glob, PCRE regular expression, or by grains. For example:

base:
  'os:Fedora':
    - match: grain
    - webserver

Create an sls file

In the same directory as the top file, create a file named webserver.sls, containing the following:

apache:                 # ID declaration
  pkg:                  # state declaration
    - installed         # function declaration

The first line, called the ID declaration, is an arbitrary identifier. In this case it defines the name of the package to be installed.

Note

The package name for the Apache httpd web server may differ depending on OS or distro — for example, on Fedora it is httpd but on Debian/Ubuntu it is apache2.

The second line, called the State declaration, defines which of the Salt States we are using. In this example, we are using the pkg state to ensure that a given package is installed.

The third line, called the Function declaration, defines which function in the pkg state module to call.

Renderers

State sls files can be written in many formats. Salt requires only a simple data structure and is not concerned with how that data structure is built. Templating languages and DSLs are a dime-a-dozen and everyone has a favorite.

Building the expected data structure is the job of Salt renderers and they are dead-simple to write.

In this tutorial we will be using YAML in Jinja2 templates, which is the default format. The default can be changed by editing renderer in the master configuration file.

Install the package

Next, let's run the state we created. Open a terminal on the master and run:

% salt '*' state.highstate

Our master is instructing all targeted minions to run state.highstate. When a minion executes a highstate call, it will download the top file and attempt to match the expressions within it. When it does match an expression, the modules listed for it will be downloaded, compiled, and executed.

Once completed, the minion will report back with a summary of all actions taken and all changes made.

Warning

If you have created custom grain modules, they will not be available in the top file until after the first highstate. To make custom grains available on a minion's first highstate, it is recommended to use this example to ensure that the custom grains are synced when the minion starts.

SLS File Namespace

Note that in the example above, the SLS file webserver.sls was referred to simply as webserver. The namespace for SLS files when referenced in top.sls or an Include declaration follows a few simple rules:

  1. The .sls is discarded (i.e. webserver.sls becomes webserver).

  2. Subdirectories can be used for better organization.
    1. Each subdirectory can be represented with a dot (following the python import model) or a slash. webserver/dev.sls can also be referred to as webserver.dev
    2. Because slashes can be represented as dots, SLS files cannot contain dots in the name besides the dot for the SLS suffix. The SLS file webserver_1.0.sls cannot be matched, and webserver_1.0 would match the directory/file webserver_1/0.sls
  3. A file called init.sls in a subdirectory is referred to by the path of the directory. So, webserver/init.sls is referred to as webserver.

  4. If both webserver.sls and webserver/init.sls happen to exist, webserver/init.sls will be ignored and webserver.sls will be the file referred to as webserver.
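
Putting these rules together (hypothetical file names):

webserver.sls          ->  webserver
webserver/init.sls     ->  webserver
webserver/dev.sls      ->  webserver.dev (or webserver/dev)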

Troubleshooting Salt

If the expected output isn't seen, the following tips can help to narrow down the problem.

Turn up logging

Salt can be quite chatty when you change the logging setting to debug:

salt-minion -l debug

Run the minion in the foreground

By not starting the minion in daemon mode (-d) one can view any output from the minion as it works:

salt-minion &

Increase the default timeout value when running salt. For example, to change the default timeout to 60 seconds:

salt -t 60

For best results, combine all three:

salt-minion -l debug &          # On the minion
salt '*' state.highstate -t 60  # On the master

Next steps

This tutorial focused on getting a simple Salt States configuration working. Part 2 will build on this example to cover more advanced sls syntax and will explore more of the states that ship with Salt.

States tutorial, part 2 - More Complex States, Requisites

Note

This tutorial builds on topics covered in part 1. It is recommended that you begin there.

In the last part of the Salt States tutorial we covered the basics of installing a package. We will now modify our webserver.sls file to have requirements, and use even more Salt States.

Call multiple States

You can specify multiple State declaration under an ID declaration. For example, a quick modification to our webserver.sls to also start Apache if it is not running:

apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache

Try stopping Apache before running state.highstate once again and observe the output.

Require other states

We now have a working installation of Apache so let's add an HTML file to customize our website. It isn't exactly useful to have a website without a webserver so we don't want Salt to install our HTML file until Apache is installed and running. Include the following at the bottom of your webserver/init.sls file:

apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache

/var/www/index.html:                        # ID declaration
  file:                                     # state declaration
    - managed                               # function
    - source: salt://webserver/index.html   # function arg
    - require:                              # requisite declaration
      - pkg: apache                         # requisite reference

Line 7 is the ID declaration. In this example it is the location where we want to install our custom HTML file. (Note: the default location that Apache serves may differ from the above on your OS or distro. /srv/www could also be a likely place to look.)

Line 8 is the State declaration. This example uses the Salt file state.

Line 9 is the Function declaration. The managed function will download a file from the master and install it in the location specified.

Line 10 is a Function arg declaration which, in this example, passes the source argument to the managed function.

Line 11 is a Requisite declaration.

Line 12 is a Requisite reference which refers to a state and an ID. In this example, it is referring to the ID declaration from our example in part 1. This declaration tells Salt not to install the HTML file until Apache is installed.

Next, create the index.html file and save it in the webserver directory:

<!DOCTYPE html>
<html>
    <head><title>Salt rocks</title></head>
    <body>
        <h1>This file brought to you by Salt</h1>
    </body>
</html>

Last, call state.highstate again and the minion will fetch and execute the highstate as well as our HTML file from the master using Salt's File Server:

salt '*' state.highstate

Verify that Apache is now serving your custom HTML.

require vs. watch

There are two types of Requisite declarations: require and watch. Not every state supports watch. The service state does support watch, and will restart a service based on the watch condition.

For example, if you use Salt to install an Apache virtual host configuration file and want to restart Apache whenever that file is changed you could modify our Apache example from earlier as follows:

/etc/httpd/extra/httpd-vhosts.conf:
  file.managed:
    - source: salt://webserver/httpd-vhosts.conf

apache:
  pkg.installed: []
  service.running:
    - watch:
      - file: /etc/httpd/extra/httpd-vhosts.conf
    - require:
      - pkg: apache

If the pkg and service names differ on your OS or distro of choice, you can specify each one separately using a Name declaration, which is explained in Part 3.

Next steps

In part 3 we will discuss how to use includes, extends, and templating to make a more complete State Tree configuration.

States tutorial, part 3 - Templating, Includes, Extends

Note

This tutorial builds on topics covered in part 1 and part 2. It is recommended that you begin there.

This part of the tutorial will cover more advanced templating and configuration techniques for sls files.

Templating SLS modules

SLS modules may require programming logic or inline execution. This is accomplished with module templating. The default module templating system used is Jinja2 and may be configured by changing the renderer value in the master config.

All states are passed through a templating system when they are initially read. To make use of the templating system, simply add some templating markup. An example of an sls module with templating markup may look like this:

{% for usr in ['moe','larry','curly'] %}
{{ usr }}:
  user.present
{% endfor %}

This templated sls file once generated will look like this:

moe:
  user.present
larry:
  user.present
curly:
  user.present

Here's a more complex example:

{% for usr in 'moe','larry','curly' %}
{{ usr }}:
  group:
    - present
  user:
    - present
    - gid_from_name: True
    - require:
      - group: {{ usr }}
{% endfor %}

Using Grains in SLS modules

A state will often need to behave differently on different systems. Salt grains objects are made available in the template context. The grains can be used from within sls modules:

apache:
  pkg.installed:
    {% if grains['os'] == 'RedHat' %}
    - name: httpd
    {% elif grains['os'] == 'Ubuntu' %}
    - name: apache2
    {% endif %}

Calling Salt modules from templates

All of the Salt modules loaded by the minion are available within the templating system. This allows data to be gathered in real time on the target system. It also allows for shell commands to be run easily from within the sls modules.

The Salt module functions are also made available in the template context as salt:

moe:
  user.present:
    - gid: {{ salt['file.group_to_gid']('some_group_that_exists') }}

Note that for the above example to work, some_group_that_exists must exist before the state file is processed by the templating engine.

Below is an example that uses the network.hw_addr function to retrieve the MAC address for eth0:

salt['network.hw_addr']('eth0')

Advanced SLS module syntax

Lastly, we will cover some incredibly useful techniques for more complex State trees.

Include declaration

A previous example showed how to spread a Salt tree across several files. Similarly, requisites can span multiple files by using an Include declaration. For example:

python/python-libs.sls:

python-dateutil:
  pkg.installed

python/django.sls:

include:
  - python.python-libs

django:
  pkg.installed:
    - require:
      - pkg: python-dateutil
Extend declaration

You can modify previous declarations by using an Extend declaration. For example the following modifies the Apache tree to also restart Apache when the vhosts file is changed:

apache/apache.sls:

apache:
  pkg.installed

apache/mywebsite.sls:

include:
  - apache.apache

extend:
  apache:
    service:
      - running
      - watch:
        - file: /etc/httpd/extra/httpd-vhosts.conf

/etc/httpd/extra/httpd-vhosts.conf:
  file.managed:
    - source: salt://apache/httpd-vhosts.conf

Using extend with require or watch

The extend statement works differently for require or watch. It appends to, rather than replacing the requisite component.

Name declaration

You can override the ID declaration by using a Name declaration. For example, the previous example is a bit more maintainable if rewritten as follows:

apache/mywebsite.sls:

include:
  - apache.apache

extend:
  apache:
    service:
      - running
      - watch:
        - file: mywebsite

mywebsite:
  file.managed:
    - name: /etc/httpd/extra/httpd-vhosts.conf
    - source: salt://apache/httpd-vhosts.conf

Names declaration

Even more powerful is using a Names declaration to override the ID declaration for multiple states at once. This often can remove the need for looping in a template. For example, the first example in this tutorial can be rewritten without the loop:

stooges:
  user.present:
    - names:
      - moe
      - larry
      - curly

Next steps

In part 4 we will discuss how to use salt's file_roots to set up a workflow in which states can be "promoted" from dev, to QA, to production.

States tutorial, part 4

Note

This tutorial builds on topics covered in part 1, part 2 and part 3. It is recommended that you begin there.

This part of the tutorial will show how to use salt's file_roots to set up a workflow in which states can be "promoted" from dev, to QA, to production.

Salt fileserver path inheritance

Salt's fileserver allows for more than one root directory per environment, like in the below example, which uses both a local directory and a secondary location shared to the salt master via NFS:

# In the master config file (/etc/salt/master)
file_roots:
  base:
    - /srv/salt
    - /mnt/salt-nfs/base

Salt's fileserver collapses the list of root directories into a single virtual environment containing all files from each root. If the same file exists at the same relative path in more than one root, then the top-most match "wins". For example, if /srv/salt/foo.txt and /mnt/salt-nfs/base/foo.txt both exist, then salt://foo.txt will point to /srv/salt/foo.txt.

Note

When using multiple fileserver backends, the order in which they are listed in the fileserver_backend parameter also matters. If both roots and git backends contain a file with the same relative path, and roots appears before git in the fileserver_backend list, then the file in roots will "win", and the file in gitfs will be ignored.

A more thorough explanation of how Salt's modular fileserver works can be found here. We recommend reading this.

Environment configuration

Configure a multiple-environment setup like so:

file_roots:
  base:
    - /srv/salt/prod
  qa:
    - /srv/salt/qa
    - /srv/salt/prod
  dev:
    - /srv/salt/dev
    - /srv/salt/qa
    - /srv/salt/prod

Given the path inheritance described above, files within /srv/salt/prod would be available in all environments. Files within /srv/salt/qa would be available in both qa, and dev. Finally, the files within /srv/salt/dev would only be available within the dev environment.

Based on the order in which the roots are defined, new files/states can be placed within /srv/salt/dev, and pushed out to the dev hosts for testing.

Those files/states can then be moved to the same relative path within /srv/salt/qa, and they are now available only in the dev and qa environments, allowing them to be pushed to QA hosts and tested.

Finally, if moved to the same relative path within /srv/salt/prod, the files are now available in all three environments.

Practical Example

As an example, consider a simple website, installed to /var/www/foobarcom. Below is a top.sls that can be used to deploy the website:

/srv/salt/prod/top.sls:

base:
  'web*prod*':
    - webserver.foobarcom
qa:
  'web*qa*':
    - webserver.foobarcom
dev:
  'web*dev*':
    - webserver.foobarcom

Using pillar, roles can be assigned to the hosts:

/srv/pillar/top.sls:

base:
  'web*prod*':
    - webserver.prod
  'web*qa*':
    - webserver.qa
  'web*dev*':
    - webserver.dev

/srv/pillar/webserver/prod.sls:

webserver_role: prod

/srv/pillar/webserver/qa.sls:

webserver_role: qa

/srv/pillar/webserver/dev.sls:

webserver_role: dev

And finally, the SLS to deploy the website:

/srv/salt/prod/webserver/foobarcom.sls:

{% if pillar.get('webserver_role', '') %}
/var/www/foobarcom:
  file.recurse:
    - source: salt://webserver/src/foobarcom
    - env: {{ pillar['webserver_role'] }}
    - user: www
    - group: www
    - dir_mode: 755
    - file_mode: 644
{% endif %}

Given the above SLS, the source for the website should initially be placed in /srv/salt/dev/webserver/src/foobarcom.

First, let's deploy to dev. Given the configuration in the top file, this can be done using state.highstate:

salt --pillar 'webserver_role:dev' state.highstate

However, in the event that it is not desirable to apply all states configured in the top file (which could be likely in more complex setups), it is possible to apply just the states for the foobarcom website, using state.sls:

salt --pillar 'webserver_role:dev' state.sls webserver.foobarcom

Once the site has been tested in dev, then the files can be moved from /srv/salt/dev/webserver/src/foobarcom to /srv/salt/qa/webserver/src/foobarcom, and deployed using the following:

salt --pillar 'webserver_role:qa' state.sls webserver.foobarcom

Finally, once the site has been tested in qa, then the files can be moved from /srv/salt/qa/webserver/src/foobarcom to /srv/salt/prod/webserver/src/foobarcom, and deployed using the following:

salt --pillar 'webserver_role:prod' state.sls webserver.foobarcom

Thanks to Salt's fileserver inheritance, even though the files have been moved to within /srv/salt/prod, they are still available from the same salt:// URI in both the qa and dev environments.

Continue Learning

The best way to continue learning about Salt States is to read through the reference documentation and to look through examples of existing state trees. Many pre-configured state trees can be found on GitHub in the saltstack-formulas collection of repositories.

If you have any questions, suggestions, or just want to chat with other people who are using Salt, we have a very active community and we'd love to hear from you.

In addition, by continuing to part 5, you can learn about the powerful orchestration of which Salt is capable.

States Tutorial, Part 5 - Orchestration with Salt

Note

This tutorial builds on some of the topics covered in the earlier States Walkthrough pages. It is recommended to start with Part 1 if you are not familiar with how to use states.

Orchestration is accomplished in salt primarily through the Orchestrate Runner. Added in version 0.17.0, this Salt Runner can use the full suite of requisites available in states, and can also execute states/functions using salt-ssh. This runner replaces the OverState.

The Orchestrate Runner

New in version 0.17.0.

As noted above in the introduction, the Orchestrate Runner (originally called the state.sls runner) offers all the functionality of the OverState, but with a couple advantages:

  • All requisites available in states can be used.
  • The states/functions can be executed using salt-ssh.

The Orchestrate Runner was added with the intent to eventually deprecate the OverState system; however, the OverState will still be maintained for the foreseeable future.

Configuration Syntax

The configuration differs slightly from that of the OverState, and more closely resembles the configuration schema used for states.

To execute a state, use salt.state:

install_nginx:
  salt.state:
    - tgt: 'web*'
    - sls:
      - nginx

To execute a function, use salt.function:

cmd.run:
  salt.function:
    - tgt: '*'
    - arg:
      - rm -rf /tmp/foo

Triggering a Highstate

Whereas with the OverState, a Highstate is run by simply omitting an sls or function argument, with the Orchestrate Runner the Highstate must explicitly be requested by using highstate: True:

webserver_setup:
  salt.state:
    - tgt: 'web*'
    - highstate: True

Executing the Orchestrate Runner

The Orchestrate Runner can be executed using the state.orchestrate runner function. state.orch also works, for those that would like to type less.

Assuming that your base environment is located at /srv/salt, and you have placed a configuration file in /srv/salt/orchestration/webserver.sls, then the following could both be used:

salt-run state.orchestrate orchestration.webserver
salt-run state.orch orchestration.webserver

Changed in version 2014.1.1: The runner function was renamed to state.orchestrate. In versions 0.17.0 through 2014.1.0, state.sls must be used. This was renamed to avoid confusion with the state.sls execution function.

salt-run state.sls orchestration.webserver

More Complex Orchestration

Many states/functions can be configured in a single file, which when combined with the full suite of requisites, can be used to easily configure complex orchestration tasks. Additionally, the states/functions will be executed in the order in which they are defined, unless prevented from doing so by any requisites, as is the default in SLS files since 0.17.0.

cmd.run:
  salt.function:
    - tgt: 10.0.0.0/24
    - tgt_type: ipcidr
    - arg:
      - bootstrap

storage_setup:
  salt.state:
    - tgt: 'role:storage'
    - tgt_type: grain
    - sls: ceph
    - require:
      - salt: webserver_setup

webserver_setup:
  salt.state:
    - tgt: 'web*'
    - highstate: True

Given the above setup, the orchestration will be carried out as follows:

  1. The shell command bootstrap will be executed on all minions in the 10.0.0.0/24 subnet.
  2. A Highstate will be run on all minions whose ID starts with "web", since the storage_setup state requires it.
  3. Finally, the ceph SLS target will be executed on all minions which have a grain called role with a value of storage.

The OverState System

Warning

The OverState runner is deprecated, and will be removed in the feature release of Salt codenamed Boron. (Three feature releases after 2014.7.0, which is codenamed Helium)

Often, servers need to be set up and configured in a specific order, and systems should only be set up if systems earlier in the sequence have been set up without any issues.

The OverState system can be used to orchestrate deployment in a smooth and reliable way across multiple systems in small to large environments.

The OverState SLS

The OverState system is managed by an SLS file named overstate.sls, located in the root of a Salt fileserver environment.

The overstate.sls configures an unordered list of stages. Each stage defines the minions on which to execute, and can define which sls files to run, execute a state.highstate, or execute a function. Here's a sample overstate.sls:

mysql:
  match: 'db*'
  sls:
    - mysql.server
    - drbd
webservers:
  match: 'web*'
  require:
    - mysql
all:
  match: '*'
  require:
    - mysql
    - webservers

Note

The match argument uses compound matching.

Given the above setup, the OverState will be carried out as follows:

  1. The mysql stage will be executed first because it is required by the webservers and all stages. It will execute state.sls once for each of the two listed SLS targets (mysql.server and drbd). These states will be executed on all minions whose minion ID starts with "db".
  2. The webservers stage will then be executed, but only if the mysql stage executes without any failures. The webservers stage will execute a state.highstate on all minions whose minion IDs start with "web".
  3. Finally, the all stage will execute, running state.highstate on all systems, if, and only if the mysql and webservers stages completed without any failures.

Any failure in the above steps would cause the requires to fail, preventing the dependent stages from executing.

Using Functions with OverState

In the above example, you'll notice that the stages lacking an sls entry run a state.highstate. As mentioned earlier, it is also possible to execute other functions in a stage. This functionality was added in version 0.15.0.

Running a function is easy:

http:
  function:
    pkg.install:
      - httpd

The list of function arguments is defined after the declared function. So the above stage would run pkg.install httpd. Requisites only function properly if the given function supports returning a custom return code.

Executing an OverState

Since the OverState is a Runner, it is executed using the salt-run command. The runner function for the OverState is state.over.

salt-run state.over

The function will by default look in the root of the base environment (as defined in file_roots) for a file called overstate.sls, and then execute the stages defined within that file.

Different environments and paths can be used as well, by adding them as positional arguments:

salt-run state.over dev /root/other-overstate.sls

The above would run an OverState using the dev fileserver environment, with the stages defined in /root/other-overstate.sls.

Warning

Since these are positional arguments, when defining the path to the overstate file the environment must also be specified, even if it is the base environment.

Note

Remember, salt-run is always executed on the master.

Syslog-ng usage

Overview

The syslog_ng state module is used to generate syslog-ng configurations. You can do the following things:

  • generate syslog-ng configuration from YAML,
  • use non-YAML configuration,
  • start, stop or reload syslog-ng.

There is also an execution module, which can check the syntax of the configuration, get the version and other information about syslog-ng.

Configuration

Users can create syslog-ng configuration statements with the syslog_ng.config function. It requires a name and a config parameter. The name parameter determines the name of the generated statement and the config parameter holds a parsed YAML structure.

A statement can be declared in the following forms (both are equivalent):

source.s_localhost:
  syslog_ng.config:
    - config:
        - tcp:
          - ip: "127.0.0.1"
          - port: 1233
s_localhost:
  syslog_ng.config:
    - config:
        source:
          - tcp:
            - ip: "127.0.0.1"
            - port: 1233

The first one is called the short form, because it needs less typing. Users can use lists and dictionaries to specify their configuration. The format is quite self-describing, and there are more examples at the end of this document.

Quotation

Quoting can be tricky sometimes, but here are some rules to follow:

  • when a string is meant to be "string" in the generated configuration, it should be written as '"string"' in the YAML document
  • similarly, users should write "'string'" to get 'string' in the generated configuration
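
To illustrate with a hypothetical option name:

an_option: '"string"'     # in the YAML document
an_option("string");      # in the generated configuration
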
Full example

The following configuration is an example of what a complete syslog-ng configuration looks like:

# Set the location of the configuration file
set_location:
  module.run:
    - name: syslog_ng.set_config_file
    - m_name: "/home/tibi/install/syslog-ng/etc/syslog-ng.conf"

# The syslog-ng and syslog-ng-ctl binaries are here. You needn't use
# this method if these binaries can be found in a directory in your PATH.
set_bin_path:
  module.run:
    - name: syslog_ng.set_binary_path
    - m_name: "/home/tibi/install/syslog-ng/sbin"

# Writes the first lines into the config file, also erases its previous
# content
write_version:
  module.run:
    - name: syslog_ng.write_version
    - m_name: "3.6"

# There is a shorter form to set the above variables
set_variables:
  module.run:
    - name: syslog_ng.set_parameters
    - version: "3.6"
    - binary_path: "/home/tibi/install/syslog-ng/sbin"
    - config_file: "/home/tibi/install/syslog-ng/etc/syslog-ng.conf"


# Some global options
options.global_options:
  syslog_ng.config:
    - config:
        - time_reap: 30
        - mark_freq: 10
        - keep_hostname: "yes"

source.s_localhost:
  syslog_ng.config:
    - config:
        - tcp:
          - ip: "127.0.0.1"
          - port: 1233

destination.d_log_server:
  syslog_ng.config:
    - config:
        - tcp:
          - "127.0.0.1"
          - port: 1234

log.l_log_to_central_server:
  syslog_ng.config:
    - config:
        - source: s_localhost
        - destination: d_log_server

some_comment:
  module.run:
    - name: syslog_ng.write_config
    - config: |
        # Multi line
        # comment

# Another way to include comments or existing configuration snippets
config.other_comment_form:
  syslog_ng.config:
    - config: |
        # Multi line
        # comment

The syslog_ng.config function can generate syslog-ng configuration from YAML. If the statement (source, destination, parser, etc.) has a name, this function uses the id as the name; otherwise (as with a log statement) its purpose is like a mandatory comment.

After executing this example, the syslog_ng state will generate this file:

#Generated by Salt on 2014-08-18 00:11:11
@version: 3.6

options {
    time_reap(
        30
    );
    mark_freq(
        10
    );
    keep_hostname(
        yes
    );
};


source s_localhost {
    tcp(
        ip(
            127.0.0.1
        ),
        port(
            1233
        )
    );
};


destination d_log_server {
    tcp(
        127.0.0.1,
        port(
            1234
        )
    );
};


log {
    source(
        s_localhost
    );
    destination(
        d_log_server
    );
};


# Multi line
# comment


# Multi line
# comment

Users can include arbitrary text in the generated configuration by using the config statement (see the example above).

Syslog_ng module functions

You can use syslog_ng.set_binary_path to set the directory which contains the syslog-ng and syslog-ng-ctl binaries. If this directory is in your PATH, you don't need to use this function. There is also a syslog_ng.set_config_file function to set the location of the configuration file.

Examples

Simple source

source s_tail {
 file(
   "/var/log/apache/access.log",
   follow_freq(1),
   flags(no-parse, validate-utf8)
 );
};

s_tail:
  # Salt will call the source function of syslog_ng module
  syslog_ng.config:
    - config:
        source:
          - file:
            - file: '"/var/log/apache/access.log"'
            - follow_freq : 1
            - flags:
              - no-parse
              - validate-utf8

OR

s_tail:
  syslog_ng.config:
    - config:
        source:
            - file:
              - '"/var/log/apache/access.log"'
              - follow_freq : 1
              - flags:
                - no-parse
                - validate-utf8

OR

source.s_tail:
  syslog_ng.config:
    - config:
        - file:
          - '"/var/log/apache/access.log"'
          - follow_freq : 1
          - flags:
            - no-parse
            - validate-utf8

Complex source

source s_gsoc2014 {
 tcp(
   ip("0.0.0.0"),
   port(1234),
   flags(no-parse)
 );
};

s_gsoc2014:
  syslog_ng.config:
    - config:
        source:
          - tcp:
            - ip: 0.0.0.0
            - port: 1234
            - flags: no-parse

Filter

filter f_json {
 match(
   "@json:"
 );
};

f_json:
  syslog_ng.config:
    - config:
        filter:
          - match:
            - '"@json:"'

Template

template t_demo_filetemplate {
 template(
   "$ISODATE $HOST $MSG "
 );
 template_escape(
   no
 );
};

t_demo_filetemplate:
  syslog_ng.config:
    - config:
        template:
          - template:
            - '"$ISODATE $HOST $MSG\n"'
          - template_escape:
            - "no"

Rewrite

rewrite r_set_message_to_MESSAGE {
 set(
   "${.json.message}",
   value("$MESSAGE")
 );
};

r_set_message_to_MESSAGE:
  syslog_ng.config:
    - config:
        rewrite:
          - set:
            - '"${.json.message}"'
            - value : '"$MESSAGE"'

Global options

options {
   time_reap(30);
   mark_freq(10);
   keep_hostname(yes);
};

global_options:
  syslog_ng.config:
    - config:
        options:
          - time_reap: 30
          - mark_freq: 10
          - keep_hostname: "yes"

Log

log {
 source(s_gsoc2014);
 junction {
  channel {
   filter(f_json);
   parser(p_json);
   rewrite(r_set_json_tag);
   rewrite(r_set_message_to_MESSAGE);
   destination {
    file(
      "/tmp/json-input.log",
      template(t_gsoc2014)
    );
   };
   flags(final);
  };
  channel {
   filter(f_not_json);
   parser {
    syslog-parser(

    );
   };
   rewrite(r_set_syslog_tag);
   flags(final);
  };
 };
 destination {
  file(
    "/tmp/all.log",
    template(t_gsoc2014)
  );
 };
};

l_gsoc2014:
  syslog_ng.config:
    - config:
        log:
          - source: s_gsoc2014
          - junction:
            - channel:
              - filter: f_json
              - parser: p_json
              - rewrite: r_set_json_tag
              - rewrite: r_set_message_to_MESSAGE
              - destination:
                - file:
                  - '"/tmp/json-input.log"'
                  - template: t_gsoc2014
              - flags: final
            - channel:
              - filter: f_not_json
              - parser:
                - syslog-parser: []
              - rewrite: r_set_syslog_tag
              - flags: final
          - destination:
            - file:
              - "/tmp/all.log"
              - template: t_gsoc2014

Advanced Topics

SaltStack Walk-through

Note

Welcome to SaltStack! I am excited that you are interested in Salt and starting down the path to better infrastructure management. I developed (and am continuing to develop) Salt with the goal of making the best software available to manage computers of almost any kind. I hope you enjoy working with Salt and that the software can solve your real world needs!

  • Thomas S Hatch
  • Salt creator and Chief Developer
  • CTO of SaltStack, Inc.

Getting Started

What is Salt?

Salt is a different approach to infrastructure management, founded on the idea that high-speed communication with large numbers of systems can open up new capabilities. This approach makes Salt a powerful multitasking system that can solve many specific problems in an infrastructure.

The backbone of Salt is the remote execution engine, which creates a high-speed, secure and bi-directional communication net for groups of systems. On top of this communication system, Salt provides an extremely fast, flexible, and easy-to-use configuration management system called Salt States.

Installing Salt

SaltStack has been made to be very easy to install and get started with. Setting up Salt should be as easy as installing Salt via distribution packages on Linux or via the Windows installer. The installation documents cover platform-specific installation in depth.

Starting Salt

Salt functions on a master/minion topology. A master server acts as a central control bus for the clients, which are called minions. The minions connect back to the master.

Setting Up the Salt Master

Turning on the Salt Master is easy -- just turn it on! The default configuration is suitable for the vast majority of installations. The Salt Master can be controlled by the local Linux/Unix service manager:

On systemd-based platforms (openSUSE, Fedora):

systemctl start salt-master

On Upstart-based systems (Ubuntu, older Fedora/RHEL):

service salt-master start

On SysV Init systems (Debian, Gentoo etc.):

/etc/init.d/salt-master start

Alternatively, the Master can be started directly on the command-line:

salt-master -d

The Salt Master can also be started in the foreground in debug mode, thus greatly increasing the command output:

salt-master -l debug

The Salt Master needs to bind to two TCP network ports on the system. These ports are 4505 and 4506. For more in-depth information on firewalling these ports, the firewall tutorial is available here.

Setting up a Salt Minion

Note

The Salt Minion can operate with or without a Salt Master. This walk-through assumes that the minion will be connected to the master. For information on how to run a master-less minion, please see the master-less quick-start guide:

Masterless Minion Quickstart

The Salt Minion only needs to be aware of one piece of information to run: the network location of the master.

By default the minion will look for the DNS name salt for the master, so the easiest approach is to set internal DNS to resolve the name salt to the Salt Master's IP.
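
For example, an entry like the following in each minion's /etc/hosts (or in internal DNS) accomplishes this, assuming 10.0.0.1 is the master's address (a hypothetical IP):

10.0.0.1    salt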

Otherwise, the minion configuration file will need to be edited so that the configuration option master points to the DNS name or the IP of the Salt Master:

Note

The default location of the configuration files is /etc/salt. Most platforms adhere to this convention, but platforms such as FreeBSD and Microsoft Windows place this file in different locations.

/etc/salt/minion:

master: saltmaster.example.com

Now that the master can be found, start the minion in the same way as the master: with the platform init system or via the command line directly.

As a daemon:

salt-minion -d

In the foreground in debug mode:

salt-minion -l debug

When the minion is started, it will generate an id value, unless one has been generated on a previous run and cached in the configuration directory (/etc/salt by default). This is the name by which the minion will attempt to authenticate to the master. The following steps are attempted, in order, to find a value that is not localhost:

  1. The Python function socket.getfqdn() is run
  2. /etc/hostname is checked (non-Windows only)
  3. /etc/hosts (%WINDIR%\system32\drivers\etc\hosts on Windows hosts) is checked for hostnames that map to anything within 127.0.0.0/8.

If none of the above are able to produce an id which is not localhost, then a sorted list of IP addresses on the minion (excluding any within 127.0.0.0/8) is inspected. The first publicly-routable IP address is used, if there is one. Otherwise, the first privately-routable IP address is used.

If all else fails, then localhost is used as a fallback.

Note

Overriding the id

The minion id can be manually specified using the id parameter in the minion config file. If this configuration value is specified, it will override all other sources for the id.
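For example, a minimal sketch of overriding the id in the minion config (the value shown is hypothetical):

/etc/salt/minion:

id: web01.example.com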

Now that the minion is started, it will generate cryptographic keys and attempt to connect to the master. The next step is to venture back to the master server and accept the new minion's public key.

Using salt-key

Salt authenticates minions using public-key cryptography. For a minion to start accepting commands from the master, the minion's keys need to be accepted by the master.

The salt-key command is used to manage all of the keys on the master. To list the keys that are on the master:

salt-key -L

The keys that have been rejected, accepted, and pending acceptance are listed. The easiest way to accept the minion key is to accept all pending keys:

salt-key -A

Note

Keys should be verified! The secure thing to do before accepting a key is to run salt-key -f minion-id to print the fingerprint of the minion's public key. This fingerprint can then be compared against the fingerprint generated on the minion.

On the master:

# salt-key -f foo.domain.com
Unaccepted Keys:
foo.domain.com:  39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9

On the minion:

# salt-call key.finger --local
local:
    39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9

If they match, approve the key with salt-key -a foo.domain.com.

Sending the First Commands

Now that the minion is connected to the master and authenticated, the master can start to command the minion.

Salt commands allow for a vast set of functions to be executed and for specific minions and groups of minions to be targeted for execution.

The salt command is composed of command options, target specification, the function to execute, and arguments to the function.

A simple command to start with looks like this:

salt '*' test.ping

The * is the target, which specifies all minions.

test.ping tells the minion to run the test.ping function.

In the case of test.ping, test refers to an execution module. ping refers to the ping function contained in the aforementioned test module.

Note

Execution modules are the workhorses of Salt. They do the work on the system to perform various tasks, such as manipulating files and restarting services.

The result of running this command will be the master instructing all of the minions to execute test.ping in parallel and return the result.

This is not an actual ICMP ping, but rather a simple function which returns True. Using test.ping is a good way of confirming that a minion is connected.

Note

Each minion registers itself with a unique minion ID. This ID defaults to the minion's hostname, but can be explicitly defined in the minion config as well by using the id parameter.

Of course, there are hundreds of other modules that can be called just as test.ping can. For example, the following would return disk usage on all targeted minions:

salt '*' disk.usage
Getting to Know the Functions

Salt comes with a vast library of functions available for execution, and Salt functions are self-documenting. To see what functions are available on the minions, execute the sys.doc function:

salt '*' sys.doc

This will display a very large list of available functions and documentation on them.

Note

Module documentation is also available on the web.

These functions cover everything from shelling out, to package management, to manipulating database servers. They comprise a powerful system management API which is the backbone of Salt configuration management and many other aspects of Salt.

Note

Salt comes with many plugin systems. The functions that are available via the salt command are called Execution Modules.

Helpful Functions to Know

The cmd module contains functions to shell out on minions, such as cmd.run and cmd.run_all:

salt '*' cmd.run 'ls -l /etc'

The pkg functions automatically map local system package managers to the same salt functions. This means that pkg.install will install packages via yum on Red Hat based systems, apt on Debian systems, etc.:

salt '*' pkg.install vim

Note

Some custom Linux spins and derivatives of other distributions are not properly detected by Salt. If the above command returns an error message saying that pkg.install is not available, then you may need to override the pkg provider. This process is explained here.

The network.interfaces function will list all interfaces on a minion, along with their IP addresses, netmasks, MAC addresses, etc:

salt '*' network.interfaces
Changing the Output Format

The default output format used for most Salt commands is called the nested outputter, but there are several other outputters that can be used to change the way the output is displayed. For instance, the pprint outputter can be used to display the return data using Python's pprint module:

root@saltmaster:~# salt myminion grains.item pythonpath --out=pprint
{'myminion': {'pythonpath': ['/usr/lib64/python2.7',
                             '/usr/lib/python2.7/plat-linux2',
                             '/usr/lib64/python2.7/lib-tk',
                             '/usr/lib/python2.7/lib-tk',
                             '/usr/lib/python2.7/site-packages',
                             '/usr/lib/python2.7/site-packages/gst-0.10',
                             '/usr/lib/python2.7/site-packages/gtk-2.0']}}

The full list of Salt outputters, as well as example output, can be found here.

salt-call

The examples so far have described running commands from the Master using the salt command, but when troubleshooting it can be more beneficial to log in to the minion directly and use salt-call.

Doing so allows you to see the minion log messages specific to the command you are running (which are not part of the return data you see when running the command from the Master using salt), making it unnecessary to tail the minion log. More information on salt-call and how to use it can be found here.

Grains

Salt uses a system called Grains to build up static data about minions. This data includes information about the operating system that is running, CPU architecture and much more. The grains system is used throughout Salt to deliver platform data to many components and to users.

Grains can also be set statically, which makes it easy to assign values to minions for grouping and managing.

A common practice is to assign grains to minions to specify the role or roles of a minion. These static grains can be set in the minion configuration file or via the grains.setval function, as shown below.
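A minimal sketch of both approaches, with hypothetical role values (command-line arguments are parsed as YAML):

/etc/salt/minion:

grains:
  roles:
    - webserver
    - memcache

Or, from the master:

salt '*' grains.setval roles '[webserver, memcache]'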

Targeting

Salt allows for minions to be targeted based on a wide range of criteria. The default targeting system uses glob expressions (shell-style wildcards) to match minions. Hence, if there are minions named larry1, larry2, curly1, and curly2, a glob of larry* will match larry1 and larry2, and a glob of *1 will match larry1 and curly1.

Many targeting systems other than globs can be used. These systems include:

  • Regular Expressions -- target using PCRE-compliant regular expressions
  • Grains -- target based on grains data: Targeting with Grains
  • Pillar -- target based on pillar data: Targeting with Pillar
  • IP -- target based on IP address/subnet/range
  • Compound -- create logic to target based on multiple targets: Targeting with Compound
  • Nodegroup -- target with nodegroups: Targeting with Nodegroup

The concepts of targets are used on the command line with Salt, but they also function in many other areas, including the state system and the systems used for ACLs and user permissions.
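For instance, the alternative targeting systems are selected on the command line with flags; the minion names and grain values below are only illustrative:

salt -E 'larry[0-9]+' test.ping
salt -G 'os:Ubuntu' test.ping
salt -C 'G@os:Ubuntu and larry*' test.ping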

Passing in Arguments

Many of the functions available accept arguments which can be passed in on the command line:

salt '*' pkg.install vim

This example passes the argument vim to the pkg.install function. Since many functions can accept more complex input than just a string, the arguments are parsed through YAML, allowing for more complex data to be sent on the command line:

salt '*' test.echo 'foo: bar'

In this case Salt translates the string 'foo: bar' into the dictionary {'foo': 'bar'}.

Note

Any argument that contains a newline will not be parsed by YAML.
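Since arguments are parsed as YAML, more complex structures can also be passed. For example, a sketch using the pkgs argument of pkg.install to install multiple packages in one call:

salt '*' pkg.install pkgs='["vim", "curl"]'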

Salt States

Now that the basics are covered, the time has come to evaluate States. Salt States, or the State System, is the component of Salt made for configuration management.

The state system is already available with a basic Salt setup, no additional configuration is required. States can be set up immediately.

Note

Before diving into the state system, a brief overview of how states are constructed will make many of the concepts clearer. Salt states are based on data modeling and build on a low-level data structure that is used to execute each state function. More logical layers are then built on top of one another.

The high layers of the state system, which this tutorial will cover, consist of everything that needs to be known to use states. The two layers covered here are the SLS layer and the highest layer, the highstate.

Understanding the layers of data management in the State System will help with understanding states, but the lower layers never need to be used directly. Just as understanding how a compiler functions assists when learning a programming language, understanding what is going on under the hood of a configuration management system will prove to be a valuable asset.

The First SLS Formula

The state system is built on SLS formulas. These formulas are built out in files on Salt's file server. To make a very basic SLS formula, open up a file under /srv/salt named vim.sls. The following state ensures that vim is installed on any system to which the state is applied.

/srv/salt/vim.sls:

vim:
  pkg.installed

Now install vim on the minions by calling the SLS directly:

salt '*' state.sls vim

This command will invoke the state system and run the vim SLS.

Now, to beef up the vim SLS formula, a vimrc can be added:

/srv/salt/vim.sls:

vim:
  pkg.installed: []

/etc/vimrc:
  file.managed:
    - source: salt://vimrc
    - mode: 644
    - user: root
    - group: root

Now the desired vimrc needs to be copied into the Salt file server to /srv/salt/vimrc. In Salt, everything is a file, so no path redirection needs to be accounted for: the vimrc file is placed right next to the vim.sls file. The same command as above can be executed again, and the vim SLS formula will now also manage the file.

Note

Salt does not need to be restarted/reloaded or have the master manipulated in any way when changing SLS formulas. They are instantly available.

Adding Some Depth

Obviously, maintaining SLS formulas in a single directory at the root of the file server will not scale out to reasonably sized deployments. This is why more depth is required. Start by making an nginx formula the better way: create an nginx subdirectory and add an init.sls file:

/srv/salt/nginx/init.sls:

nginx:
  pkg.installed: []
  service.running:
    - require:
      - pkg: nginx

A few concepts are introduced in this SLS formula.

First is the service statement which ensures that the nginx service is running.

Of course, the nginx service can't be started unless the package is installed -- hence the require statement which sets up a dependency between the two.

The require statement makes sure that the required component is executed first and that it completes successfully.

Note

The require option belongs to a family of options called requisites. Requisites are a powerful component of Salt States; for more information on how requisites work and what is available, see: Requisites

Explicit evaluation ordering is also available in Salt: Ordering States

This new SLS formula has a special name -- init.sls. When an SLS formula is named init.sls, it inherits the name of the directory path that contains it. This formula can be referenced via the following command:

salt '*' state.sls nginx

Note

Reminder!

Just as one could call the test.ping or disk.usage execution modules, state.sls is simply another execution module. It takes the name of an SLS file as an argument.

Now that subdirectories can be used, the vim.sls formula can be cleaned up. To make things more flexible, move the vim.sls and vimrc into a new subdirectory called edit and change the vim.sls file to reflect the change:

/srv/salt/edit/vim.sls:

vim:
  pkg.installed

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
    - mode: 644
    - user: root
    - group: root

Only the source path to the vimrc file has changed. The formula is now referenced as edit.vim because it resides in the edit subdirectory. The edit subdirectory can now contain formulas for emacs, nano, joe, or any other editor that may need to be deployed.
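The relocated formula is then applied with the dotted path:

salt '*' state.sls edit.vim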

Next Reading

Two walk-throughs are specifically recommended at this point. First, a deeper run through States, followed by an explanation of Pillar.

  1. Starting States
  2. Pillar Walkthrough

An understanding of Pillar is extremely helpful in using States.

Getting Deeper Into States

Two more in-depth States tutorials exist, which delve much more deeply into States functionality.

  1. Thomas' original states tutorial, How Do I Use Salt States?, covers much more of what is needed to get off the ground with States.
  2. The States Tutorial also provides a fantastic introduction.

These tutorials cover much more in-depth information, including templating SLS formulas, etc.

So Much More!

This concludes the initial Salt walk-through, but there are many more things still to learn! The documents that follow cover important core aspects of Salt, and a few more tutorials are also available.

This still is only scratching the surface: many components, such as the reactor and event systems, extending Salt, and modular components, are not covered here. For an overview of all Salt features and documentation, look at the Table of Contents.

MinionFS Backend Walkthrough

New in version 2014.1.0.

Sometimes, you might need to propagate files that are generated on a minion. Salt already has a feature to send files from a minion to the master:

salt 'minion-id' cp.push /path/to/the/file

This command will store the file, including its full path, under the master's cachedir in minions/minion-id/files. With the default cachedir the example file above would be stored as /var/cache/salt/master/minions/minion-id/files/path/to/the/file.

Note

This walkthrough assumes basic knowledge of Salt and cp.push. To get up to speed, check out the Salt Walkthrough.

Since it is not a good idea to expose the whole cachedir, MinionFS should be used to send these files to other minions.

Simple Configuration

To use the minionfs backend only two configuration changes are required on the master. The fileserver_backend option needs to contain a value of minion and file_recv needs to be set to true:

fileserver_backend:
  - roots
  - minion

file_recv: True

These changes require a restart of the master. Afterwards, requests for the salt://minion-id/ path will serve files that have been pushed from minion-id to the master via cp.push.

Note

All of the files that are pushed to the master are going to be available to all of the minions. If this is not what you want, please remove minion from fileserver_backend in the master config file.

Note

Having directories with the same name as your minions in the root that can be accessed like salt://minion-id/ might cause confusion.

Commandline Example

Let's assume that we are going to generate SSH keys on a minion called minion-source and put the public part in ~/.ssh/authorized_keys of the root user of a minion called minion-destination.

First, let's make sure that /root/.ssh exists and has the right permissions:

[root@salt-master file]# salt '*' file.mkdir dir_path=/root/.ssh user=root group=root mode=700
minion-source:
    None
minion-destination:
    None

We create an RSA key pair without a passphrase [*]:

[root@salt-master file]# salt 'minion-source' cmd.run 'ssh-keygen -N "" -f /root/.ssh/id_rsa'
minion-source:
    Generating public/private rsa key pair.
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    9b:cd:1c:b9:c2:93:8e:ad:a3:52:a0:8b:0a:cc:d4:9b root@minion-source
    The key's randomart image is:
    +--[ RSA 2048]----+
    |                 |
    |                 |
    |                 |
    |  o        .     |
    | o o    S o      |
    |=   +  . B o     |
    |o+ E    B =      |
    |+ .   .+ o       |
    |o  ...ooo        |
    +-----------------+

and we send the public part to the master to be available to all minions:

[root@salt-master file]# salt 'minion-source' cp.push /root/.ssh/id_rsa.pub
minion-source:
    True

now it can be seen by everyone:

[root@salt-master file]# salt 'minion-destination' cp.list_master_dirs
minion-destination:
    - .
    - etc
    - minion-source/root
    - minion-source/root/.ssh

Let's copy that as the only authorized key to minion-destination:

[root@salt-master file]# salt 'minion-destination' cp.get_file salt://minion-source/root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
minion-destination:
    /root/.ssh/authorized_keys

Or we can use a more elegant and salty way to add an SSH key:

[root@salt-master file]# salt 'minion-destination' ssh.set_auth_key_from_file user=root source=salt://minion-source/root/.ssh/id_rsa.pub
minion-destination:
    new
[*] Yes, that was the actual key on my server, but the server has since been destroyed.

Automatic Updates / Frozen Deployments

New in version 0.10.3.d.

Salt has support for the Esky application freezing and update tool. This tool allows one to build a complete zipfile out of the salt scripts and all their dependencies -- including shared objects / DLLs.

Getting Started

To build frozen applications, a suitable build environment will be needed for each platform. You should probably set up a virtualenv in order to limit the scope of Q/A.

This process does work on Windows. See https://github.com/saltstack/salt-windows-install for details on installing Salt on Windows. Only the 32-bit Python and dependencies have been tested, but they have been tested on 64-bit Windows.

Install bbfreeze, and then esky from PyPI in order to enable the bdist_esky command in setup.py. Salt itself must also be installed, in addition to its dependencies.

Building and Freezing

Once you have your tools installed and the environment configured, use setup.py to prepare the distribution files.

python setup.py sdist
python setup.py bdist

Once the distribution files are in place, Esky can be used to traverse the module tree and pack all the scripts up into a redistributable.

python setup.py bdist_esky

There will be an appropriately versioned salt-VERSION.zip in dist/ if everything went smoothly.

Windows

C:\Python27\lib\site-packages\zmq will need to be added to the PATH variable. This helps bbfreeze find the zmq DLL so it can pack it up.

Using the Frozen Build

Unpack the zip file in the desired install location. Scripts like salt-minion and salt-call will be in the root of the zip file. The associated libraries and bootstrapping will be in the directories at the same level. (Check the Esky documentation for more information)

To support updating your minions in the wild, put the builds on a web server that the minions can reach. salt.modules.saltutil.update() will trigger an update and (optionally) a restart of the minion service under the new version.
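A minimal sketch, assuming the builds are hosted at a hypothetical URL configured via the minion's update_url option:

/etc/salt/minion:

update_url: http://updates.example.com/salt-builds

Then, from the master:

salt '*' saltutil.update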

Troubleshooting
A Windows minion isn't responding

The process dispatch on Windows is slower than it is on *nix. It may be necessary to add '-t 15' to salt commands to give minions plenty of time to return.

Windows and the Visual Studio Redist

The Visual C++ 2008 32-bit redistributable will need to be installed on all Windows minions. Esky has an option to pack the library into the zipfile, but OpenSSL does not seem to acknowledge the new location. If a no OPENSSL_Applink error appears on the console when trying to start a frozen minion, the redistributable is not installed.

Mixed Linux environments and Yum

The Yum Python module doesn't appear to be available on any of the standard Python package mirrors. If RHEL/CentOS systems need to be supported, the frozen build should be created on that platform to support all of the Linux nodes. Remember to build the virtualenv with --system-site-packages so that the yum module is included.
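For example, the build virtualenv might be created like this (the path is illustrative):

virtualenv --system-site-packages /opt/salt-build-env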

Automatic (Python) module discovery

Automatic (Python) module discovery does not work with the late-loaded scheme that Salt uses for (Salt) modules. Any misbehaving modules will need to be explicitly added to the freezer_includes in Salt's setup.py. Always check the zipped application to make sure that the necessary modules were included.

Multi Master Tutorial

As of Salt 0.16.0, the ability to connect minions to multiple masters has been made available. The multi-master system allows for redundancy of Salt masters and facilitates multiple points of communication out to minions. When using a multi-master setup, all masters are running hot, and any active master can be used to send commands out to the minions.

Note

If you need failover capabilities with multiple masters, there is also a Multi-Master-PKI setup available that uses a different topology: see the Multi-Master-PKI with Failover Tutorial.

In 0.16.0, the masters do not share any information: keys need to be accepted on both masters, and shared files need to be synchronized manually, or tools like the git fileserver backend can be used to ensure that the file_roots are kept consistent.

Summary of Steps
  1. Create a redundant master server
  2. Copy primary master key to redundant master
  3. Start redundant master
  4. Configure minions to connect to redundant master
  5. Restart minions
  6. Accept keys on redundant master
Prepping a Redundant Master

The first task is to prepare the redundant master. If the redundant master is already running, stop it. There is only one requirement when preparing a redundant master, which is that masters share the same private key. When the first master was created, the master's identifying key pair was generated and placed in the master's pki_dir. The default location of the master's key pair is /etc/salt/pki/master/. Take the private key, master.pem, and copy it to the same location on the redundant master. Do the same for the master's public key, master.pub. Assuming that no minions have yet been connected to the new redundant master, it is safe to delete any existing key in this location and replace it.

Note

There is no logical limit to the number of redundant masters that can be used.

Once the new key is in place, the redundant master can be safely started.

Configure Minions

Since minions need to be master-aware, the new master needs to be added to the minion configurations. Simply update the minion configurations to list all connected masters:

master:
  - saltmaster1.example.com
  - saltmaster2.example.com

Now the minion can be safely restarted.

Now the minions will check into the original master and also check into the new redundant master. Both masters are first-class and have rights to the minions.

Note

Minions can automatically detect failed masters and attempt to reconnect to them quickly. To enable this functionality, set master_alive_interval in the minion config and specify a number of seconds to poll the masters for connection status.

If this option is not set, minions will still reconnect to failed masters, but the first command sent after a master comes back up may be lost while the minion authenticates.
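For example, to poll the masters every 30 seconds (the interval value is illustrative):

master_alive_interval: 30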

Sharing Files Between Masters

Salt does not automatically share files between multiple masters. The following files should be shared between masters, or at least their synchronization should be strongly considered.

Minion Keys

Minion keys can be accepted the normal way using salt-key on both masters. Keys accepted, deleted, or rejected on one master will NOT be automatically managed on redundant masters; this needs to be taken care of by running salt-key on both masters or sharing the /etc/salt/pki/master/{minions,minions_pre,minions_rejected} directories between masters.

Note

While sharing the /etc/salt/pki/master directory will work, it is strongly discouraged, since allowing access to the master.pem key outside of Salt creates a SERIOUS security risk.

File_Roots

The file_roots contents should be kept consistent between masters. Otherwise state runs will not always be consistent on minions since instructions managed by one master will not agree with other masters.

The recommended way to sync these is to use a fileserver backend like gitfs or to keep these files on shared storage.

Pillar_Roots

Pillar roots should be given the same considerations as file_roots.

Master Configurations

While reasons may exist to maintain separate master configurations, it is wise to remember that each master maintains independent control over minions. Therefore, access controls should be in sync between masters unless a valid reason otherwise exists to keep them inconsistent.

These access control options include but are not limited to:

  • external_auth
  • client_acl
  • peer
  • peer_run

Multi-Master-PKI Tutorial With Failover

This tutorial will explain how to run a Salt environment where a single minion can have multiple masters and fail over between them if its current master fails.

The individual steps are:

  • setup the master(s) to sign its auth-replies

  • setup minion(s) to verify master-public-keys

  • enable multiple masters on minion(s)

  • enable master-check on minion(s)

    Please note that it is advised to have good knowledge of the Salt authentication and communication process to understand this tutorial. All of the settings described here go on top of the default authentication/communication process.

Motivation

The default behaviour of a salt-minion is to connect to a master and accept the master's public key. With each publication, the master sends its public key for the minion to check, and if this public key ever changes, the minion complains and exits. Practically this means that there can only be a single master at any given time.

Would it not be much nicer if the minion could have any number of masters (1:n) and jump to the next master if its current master died because of a network or hardware failure?

Note

There is also a Multi-Master Tutorial with a different approach and topology than this one that might suit your needs, or might even be better suited: Multi-Master Tutorial

It is also desirable to add some sort of authenticity check to the very first public key a minion receives from a master. Currently a minion takes the first master's public key for granted.

The Goal

Setup the master to sign the public key it sends to the minions and enable the minions to verify this signature for authenticity.

Prepping the master to sign its public key

For signing to work, both master and minion must have the signing and/or verification settings enabled. If the master signs the public key but the minion does not verify it, the minion will complain and exit. The same happens when the master does not sign but the minion tries to verify.

The easiest way to have the master sign its public key is to set

master_sign_pubkey: True

After restarting the salt-master service, the master will automatically generate a new key-pair

master_sign.pem
master_sign.pub

A custom name can be set for the signing key-pair by setting

master_sign_key_name: <name_without_suffix>

The master will then generate that key-pair upon restart and use it to create the public key's signature attached to the auth-reply.

The computation is done for every auth-request of a minion. If many minions authenticate very often, it is advised to use the conf_master:master_pubkey_signature and conf_master:master_use_pubkey_signature settings described below.

If multiple masters are in use and should sign their auth-replies, the signing key-pair master_sign.* has to be copied to each master. Otherwise a minion will fail to verify the master's public key when connecting to a different master than it did initially. That is because the public key's signature was created with a different signing key-pair.

Prepping the minion to verify received public keys

The minion must have the public key (and only that one!) available to be able to verify a signature it receives. That public key (which defaults to master_sign.pub) must be copied from the master to the minion's pki directory.

/etc/salt/pki/minion/master_sign.pub

DO NOT COPY THE master_sign.pem FILE. IT MUST STAY ON THE MASTER AND
ONLY THERE!

When that is done, enable the signature checking in the minion's configuration

verify_master_pubkey_sign: True

and restart the minion. For the first try, the minion should be run in manual debug mode.

$ salt-minion -l debug

Upon connecting to the master, the following lines should appear on the output:

[DEBUG   ] Attempting to authenticate with the Salt Master at 172.16.0.10
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] salt.crypt.verify_signature: Loading public key
[DEBUG   ] salt.crypt.verify_signature: Verifying signature
[DEBUG   ] Successfully verified signature of master public key with verification public key master_sign.pub
[INFO    ] Received signed and verified master pubkey from master 172.16.0.10
[DEBUG   ] Decrypting the current master AES key

If the signature verification fails, something went wrong and it will look like this

[DEBUG   ] Attempting to authenticate with the Salt Master at 172.16.0.10
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] salt.crypt.verify_signature: Loading public key
[DEBUG   ] salt.crypt.verify_signature: Verifying signature
[DEBUG   ] Failed to verify signature of public key
[CRITICAL] The Salt Master server's public key did not authenticate!

In a case like this, it should be checked that the verification pubkey (master_sign.pub) on the minion is the same as the one on the master.

Once the verification is successful, the minion can be started in daemon mode again.

For the paranoid among us, it's also possible to verify the public key whenever it is received from the master. That is, for every single auth-attempt, which can be quite frequent. For example, just the start of the minion will force the signature to be checked 6 times for various things like auth, mine, highstate, etc.

If that is desired, enable the setting

always_verify_signature: True
Multiple Masters For A Minion

Configuring multiple masters on a minion is done by specifying two settings:

  • a list of master addresses
  • what type of master setup is used
master:
    - 172.16.0.10
    - 172.16.0.11
    - 172.16.0.12
master_type: failover

This tells the minion that all the masters above are available for it to connect to. When started with this configuration, it will try the masters in the order they are defined. To randomize that order, set

master_shuffle: True

The master-list will then be shuffled before the first connection attempt.

The first master that accepts the minion is used by the minion. If the master does not yet know the minion, that counts as accepted and the minion stays on that master.

For the minion to be able to detect if it's still connected to its current master, enable the check for it

master_alive_interval: <seconds>

If the loss of the connection is detected, the minion will temporarily remove the failed master from the list and try one of the other masters defined (again shuffled if that is enabled).

Testing the setup

At least two running masters are needed to test the failover setup.

Both masters should be running and the minion should be running on the command line in debug mode

$ salt-minion -l debug

The minion will connect to the first master from its master list

[DEBUG   ] Attempting to authenticate with the Salt Master at 172.16.0.10
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] salt.crypt.verify_signature: Loading public key
[DEBUG   ] salt.crypt.verify_signature: Verifying signature
[DEBUG   ] Successfully verified signature of master public key with verification public key master_sign.pub
[INFO    ] Received signed and verified master pubkey from master 172.16.0.10
[DEBUG   ] Decrypting the current master AES key

Run test.ping from the master the minion is currently connected to in order to test connectivity.

If successful, that master should be turned off. A firewall rule denying the minion's packets will also do the trick.

Depending on the configured conf_minion:master_alive_interval, the minion will notice the loss of the connection and log it to its logfile.

[INFO    ] Connection to master 172.16.0.10 lost
[INFO    ] Trying to tune in to next master from master-list

The minion will then remove the current master from the list and try connecting to the next master

[INFO    ] Removing possibly failed master 172.16.0.10 from list of masters
[WARNING ] Master ip address changed from 172.16.0.10 to 172.16.0.11
[DEBUG   ] Attempting to authenticate with the Salt Master at 172.16.0.11

If everything is configured correctly, the new master's public key will be verified successfully

[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] salt.crypt.verify_signature: Loading public key
[DEBUG   ] salt.crypt.verify_signature: Verifying signature
[DEBUG   ] Successfully verified signature of master public key with verification public key master_sign.pub

the authentication with the new master is successful

[INFO    ] Received signed and verified master pubkey from master 172.16.0.11
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[INFO    ] Authentication with master successful!

and the minion can be pinged again from its new master.

Performance Tuning

With the setup described above, the master computes a signature for every auth-request of a minion. With many minions and many auth-requests, that can chew up quite a bit of CPU power.

To avoid that, the master can use a pre-created signature of its public-key. The signature is saved as a base64 encoded string which the master reads once when starting and attaches only that string to auth-replies.

Enabling this also gives paranoid users the possibility to have the signing key-pair on a different system than the actual salt-master and to create the public key's signature there, probably on a system with more restrictive firewall rules, without internet access, fewer users, etc.

That signature can be created with

$ salt-key --gen-signature

This will create a default signature file in the master pki-directory

/etc/salt/pki/master/master_pubkey_signature

It is a simple text-file with the binary-signature converted to base64.

If no signing-pair is present yet, this will auto-create the signing pair and the signature file in one call

$ salt-key --gen-signature --auto-create

Telling the master to use the pre-created signature is done with

master_use_pubkey_signature: True

That requires the file 'master_pubkey_signature' to be present in the master's pki directory with the correct signature.

If the signature file is named differently, its name can be set with

master_pubkey_signature: <filename>

With many masters and many public keys (default and signing), it is advised to use the salt-master's hostname in the signature file's name. Signatures can be easily confused because they do not provide any information about the key the signature was created from.
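A sketch of such a setup, with a hypothetical hostname embedded in the file name:

master_use_pubkey_signature: True
master_pubkey_signature: master_pubkey_signature_master1.example.com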

Verifying that everything works is done the same way as above.

How the signing and verification works

The default key-pair of the salt-master is

/etc/salt/pki/master/master.pem
/etc/salt/pki/master/master.pub

To be able to create a signature of a message (in this case a public-key), another key-pair has to be added to the setup. Its default name is:

master_sign.pem
master_sign.pub

The combination of the master.* and master_sign.* key-pairs gives the possibility of generating signatures. The signature of a given message is unique and can be verified if the public key of the signing key-pair is available to the recipient (the minion).

The signature of the master's public key in master.pub is computed with

master_sign.pem
master.pub
M2Crypto.EVP.sign_update()

This results in a binary signature which is converted to base64 and attached to the auth-reply sent to the minion.

With the signing-pair's public key available to the minion, the attached signature can be verified with

master_sign.pub
master.pub
M2Crypto's EVP.verify_update().

When running multiple masters, either the signing key-pair has to be present on all of them, or the master_pubkey_signature has to be pre-computed for each master individually (because they all have different public-keys).

DO NOT PUT THE SAME master.pub ON ALL MASTERS FOR EASE OF USE.

Preseed Minion with Accepted Key

In some situations, it is not convenient to wait for a minion to start before accepting its key on the master. For instance, you may want the minion to bootstrap itself as soon as it comes online. You may also want to let your developers provision new development machines on the fly.

See also

Many ways to preseed minion keys

Salt has other ways to generate and pre-accept minion keys in addition to the manual steps outlined below.

salt-cloud performs these same steps automatically when new cloud VMs are created (unless instructed not to).

salt-api exposes an HTTP call to Salt's REST API to generate and download the new minion keys as a tarball.

There is a general four-step process to do this:

  1. Generate the keys on the master:
root@saltmaster# salt-key --gen-keys=[key_name]

Pick a name for the key, such as the minion's id.

  2. Add the public key to the accepted minion folder:
root@saltmaster# cp key_name.pub /etc/salt/pki/master/minions/[minion_id]

It is necessary that the public key file has the same name as your minion id. This is how Salt matches minions with their keys. Also note that the pki folder could be in a different location, depending on your OS or if specified in the master config file.

  3. Distribute the minion keys.

There is no single method to get the keypair to your minion. The difficulty is finding a distribution method which is secure. For Amazon EC2 only, an AWS best practice is to use IAM Roles to pass credentials. (See blog post, http://blogs.aws.amazon.com/security/post/Tx610S2MLVZWEA/Using-IAM-roles-to-distribute-non-AWS-credentials-to-your-EC2-instances )

Security Warning

Since the minion key is already accepted on the master, distributing the private key poses a potential security risk. A malicious party will have access to your entire state tree and other sensitive data if they gain access to a preseeded minion key.

  4. Preseed the Minion with the keys

You will want to place the minion keys before starting the salt-minion daemon:

/etc/salt/pki/minion/minion.pem
/etc/salt/pki/minion/minion.pub

Once in place, you should be able to start salt-minion and run salt-call state.highstate or any other salt commands that require master authentication.

Salt Bootstrap

The Salt Bootstrap script allows for a user to install the Salt Minion or Master on a variety of system distributions and versions. This shell script known as bootstrap-salt.sh runs through a series of checks to determine the operating system type and version. It then installs the Salt binaries using the appropriate methods. The Salt Bootstrap script installs the minimum number of packages required to run Salt. This means that in the event you run the bootstrap to install via package, Git will not be installed. Installing the minimum number of packages helps ensure the script stays as lightweight as possible, assuming the user will install any other required packages after the Salt binaries are present on the system. The script source is available on GitHub: https://github.com/saltstack/salt-bootstrap

Supported Operating Systems
  • Amazon Linux 2012.09
  • Arch
  • CentOS 5/6
  • Debian 6.x/7.x/8 (git installations only)
  • Fedora 17/18
  • FreeBSD 9.1/9.2/10
  • Gentoo
  • Linaro
  • Linux Mint 13/14
  • OpenSUSE 12.x
  • Oracle Linux 5/6
  • Red Hat 5/6
  • Red Hat Enterprise 5/6
  • Scientific Linux 5/6
  • SmartOS
  • SuSE 11 SP1/11 SP2
  • Ubuntu 10.x/11.x/12.x/13.04/13.10
  • Elementary OS 0.2

Note

In the event you do not see your distribution or version available, please review the develop branch on GitHub as it may contain updates that are not present in the stable release: https://github.com/saltstack/salt-bootstrap/tree/develop

Example Usage

If you're looking for the one-liner to install Salt, please scroll to the bottom and use the instructions for Installing via an Insecure One-Liner.

Note

In every two-step example, you would be well-served to download the file and examine it to ensure that it does what you expect.

Using curl to install latest git:

curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh git develop

Using wget to install your distribution's stable packages:

wget -O install_salt.sh https://bootstrap.saltstack.com
sudo sh install_salt.sh

Install a specific version from git using wget:

wget -O install_salt.sh https://bootstrap.saltstack.com
sudo sh install_salt.sh -P git v0.16.4

If you already have python installed (version 2.6 or newer), then it's as easy as:

python -m urllib "https://bootstrap.saltstack.com" > install_salt.sh
sudo sh install_salt.sh git develop

All python versions should support the following one-liner:

python -c 'import urllib; print urllib.urlopen("https://bootstrap.saltstack.com").read()' > install_salt.sh
sudo sh install_salt.sh git develop

On a FreeBSD base system you usually don't have either of the above binaries available. You do have fetch available though:

fetch -o install_salt.sh https://bootstrap.saltstack.com
sudo sh install_salt.sh

If all you want is to install a salt-master using latest git:

curl -o install_salt.sh -L https://bootstrap.saltstack.com
sudo sh install_salt.sh -M -N git develop

If you want to install a specific release version (based on the git tags):

curl -o install_salt.sh -L https://bootstrap.saltstack.com
sudo sh install_salt.sh git v0.16.4

To install a specific branch from a git fork:

curl -o install_salt.sh -L https://bootstrap.saltstack.com
sudo sh install_salt.sh -g https://github.com/myuser/salt.git git mybranch
Installing via an Insecure One-Liner

The following examples illustrate how to install Salt via a one-liner.

Note

Warning! These methods do not involve a verification step and assume that the delivered file is trustworthy.

Examples

Installing the latest develop branch of Salt:

curl -L https://bootstrap.saltstack.com | sudo sh -s -- git develop

Any of the examples above which use two lines can be made to run as a single line with minor modifications.

Example Usage

The Salt Bootstrap script has a wide variety of options that can be passed as well as several ways of obtaining the bootstrap script itself.

For example, using curl to install your distribution's stable packages:

curl -L https://bootstrap.saltstack.com | sudo sh

Using wget to install your distribution's stable packages:

wget -O - https://bootstrap.saltstack.com | sudo sh

Installing the latest version available from git with curl:

curl -L https://bootstrap.saltstack.com | sudo sh -s -- git develop

Install a specific version from git using wget:

wget -O - https://bootstrap.saltstack.com | sh -s -- -P git v0.16.4

If you already have python installed (version 2.6 or newer), then it's as easy as:

python -m urllib "https://bootstrap.saltstack.com" | sudo sh -s -- git develop

All python versions should support the following one-liner:

python -c 'import urllib; print urllib.urlopen("https://bootstrap.saltstack.com").read()' | \
sudo sh -s -- git develop

On a FreeBSD base system you usually don't have either of the above binaries available. You do have fetch available though:

fetch -o - https://bootstrap.saltstack.com | sudo sh

If all you want is to install a salt-master using latest git:

curl -L https://bootstrap.saltstack.com | sudo sh -s -- -M -N git develop

If you want to install a specific release version (based on the git tags):

curl -L https://bootstrap.saltstack.com | sudo sh -s -- git v0.16.4

Downloading the develop branch (from here standard command line options may be passed):

wget https://bootstrap.saltstack.com/develop
Command Line Options

Here's a summary of the command line options:

$ sh bootstrap-salt.sh -h

  Usage :  bootstrap-salt.sh [options] <install-type> <install-type-args>

  Installation types:
    - stable (default)
    - daily  (ubuntu specific)
    - git

  Examples:
    $ bootstrap-salt.sh
    $ bootstrap-salt.sh stable
    $ bootstrap-salt.sh daily
    $ bootstrap-salt.sh git
    $ bootstrap-salt.sh git develop
    $ bootstrap-salt.sh git v0.17.0
    $ bootstrap-salt.sh git 8c3fadf15ec183e5ce8c63739850d543617e4357

  Options:
  -h  Display this message
  -v  Display script version
  -n  No colours.
  -D  Show debug output.
  -c  Temporary configuration directory
  -g  Salt repository URL. (default: git://github.com/saltstack/salt.git)
  -k  Temporary directory holding the minion keys which will pre-seed
      the master.
  -M  Also install salt-master
  -S  Also install salt-syndic
  -N  Do not install salt-minion
  -X  Do not start daemons after installation
  -C  Only run the configuration function. This option automatically
      bypasses any installation.
  -P  Allow pip based installations. On some distributions the required salt
      packages or its dependencies are not available as a package for that
      distribution. Using this flag allows the script to use pip as a last
      resort method. NOTE: This only works for functions which actually
      implement pip based installations.
  -F  Allow copied files to overwrite existing(config, init.d, etc)
  -U  If set, fully upgrade the system prior to bootstrapping salt
  -K  If set, keep the temporary files in the temporary directories specified
      with -c and -k.
  -I  If set, allow insecure connections while downloading any files. For
      example, pass '--no-check-certificate' to 'wget' or '--insecure' to 'curl'
  -A  Pass the salt-master DNS name or IP. This will be stored under
      ${BS_SALT_ETC_DIR}/minion.d/99-master-address.conf
  -i  Pass the salt-minion id. This will be stored under
      ${BS_SALT_ETC_DIR}/minion_id
  -L  Install the Apache Libcloud package if possible(required for salt-cloud)
  -p  Extra-package to install while installing salt dependencies. One package
      per -p flag. You're responsible for providing the proper package name.

Git Fileserver Backend Walkthrough

Note

This walkthrough assumes basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough.

The gitfs backend allows Salt to serve files from git repositories. It can be enabled by adding git to the fileserver_backend list, and configuring one or more repositories in gitfs_remotes.

Branches and tags become Salt fileserver environments.

Installing Dependencies

Beginning with version 2014.7.0, both pygit2 and Dulwich are supported as alternatives to GitPython. The desired provider can be configured using the gitfs_provider parameter in the master config file.

If gitfs_provider is not configured, then Salt will prefer pygit2 if a suitable version is available, followed by GitPython and Dulwich.

pygit2

The minimum supported version of pygit2 is 0.20.3. Availability for this version of pygit2 is still limited, though the SaltStack team is working to get compatible versions available for as many platforms as possible.

For the Fedora/EPEL versions which have a new enough version packaged, the following command would be used to install pygit2:

# yum install python-pygit2

Provided a valid version is packaged for Debian/Ubuntu (which is not currently the case), the package name would be the same, and the following command would be used to install it:

# apt-get install python-pygit2

If pygit2 is not packaged for the platform on which the Master is running, the pygit2 website has installation instructions here. Keep in mind however that following these instructions will install libgit2 and pygit2 without system packages. Additionally, keep in mind that SSH authentication in pygit2 requires libssh2 (not libssh) development libraries to be present before libgit2 is built. On some distros (Debian-based), pkg-config is also required to link libgit2 with libssh2.

GitPython

GitPython 0.3.0 or newer is required to use GitPython for gitfs. For RHEL-based Linux distros, a compatible version is available in EPEL, and can be easily installed on the master using yum:

# yum install GitPython

Ubuntu 14.04 LTS and Debian Wheezy (7.x) also have a compatible version packaged:

# apt-get install python-git

If your master is running an older version (such as Ubuntu 12.04 LTS or Debian Squeeze), then you will need to install GitPython using either pip or easy_install (it is recommended to use pip). Version 0.3.2.RC1 is now marked as the stable release in PyPI, so it should be a simple matter of running pip install GitPython (or easy_install GitPython) as root.

Warning

Keep in mind that if GitPython has been previously installed on the master using pip (even if it was subsequently uninstalled), then it may still exist in the build cache (typically /tmp/pip-build-root/GitPython) if the cache is not cleared after installation. The package in the build cache will override any requirement specifiers, so if you try upgrading to version 0.3.2.RC1 by running pip install 'GitPython==0.3.2.RC1' then it will ignore this and simply install the version from the cache directory. Therefore, it may be necessary to delete the GitPython directory from the build cache in order to ensure that the specified version is installed.

Dulwich

Dulwich 0.9.4 or newer is required to use Dulwich as a backend for gitfs.

Dulwich is available in EPEL, and can be easily installed on the master using yum:

# yum install python-dulwich

For APT-based distros such as Ubuntu and Debian:

# apt-get install python-dulwich

Important

If switching to Dulwich from GitPython/pygit2, or switching from GitPython/pygit2 to Dulwich, it is necessary to clear the gitfs cache to avoid unpredictable behavior. This is probably a good idea whenever switching to a new gitfs_provider, but it is less important when switching between GitPython and pygit2.

Beginning in version 2015.5.0, the gitfs cache can be easily cleared using the fileserver.clear_cache runner.

salt-run fileserver.clear_cache backend=git

If the Master is running an earlier version, then the cache can be cleared by removing the gitfs and file_lists/gitfs directories (both paths relative to the master cache directory, usually /var/cache/salt/master).

rm -rf /var/cache/salt/master{,/file_lists}/gitfs
Simple Configuration

To use the gitfs backend, only two configuration changes are required on the master:

  1. Include git in the fileserver_backend list in the master config file:

    fileserver_backend:
      - git
    
  2. Specify one or more git://, https://, file://, or ssh:// URLs in gitfs_remotes to configure which repositories to cache and search for requested files:

    gitfs_remotes:
      - https://github.com/saltstack-formulas/salt-formula.git
    

    SSH remotes can also be configured using scp-like syntax:

    gitfs_remotes:
      - git@github.com:user/repo.git
      - ssh://user@domain.tld/path/to/repo.git
    

    Information on how to authenticate to SSH remotes can be found here.

    Note

    Dulwich does not recognize ssh:// URLs, git+ssh:// must be used instead. Salt version 2015.5.0 and later will automatically add the git+ to the beginning of these URLs before fetching, but earlier Salt versions will fail to fetch unless the URL is specified using git+ssh://.

  3. Restart the master to load the new configuration.

Note

In a master/minion setup, files from a gitfs remote are cached once by the master, so minions do not need direct access to the git repository.

Multiple Remotes

The gitfs_remotes option accepts an ordered list of git remotes to cache and search, in listed order, for requested files.

A simple scenario illustrates this cascading lookup behavior:

If the gitfs_remotes option specifies three remotes:

gitfs_remotes:
  - git://github.com/example/first.git
  - https://github.com/example/second.git
  - file:///root/third

And each repository contains some files:

first.git:
    top.sls
    edit/vim.sls
    edit/vimrc
    nginx/init.sls

second.git:
    edit/dev_vimrc
    haproxy/init.sls

third:
    haproxy/haproxy.conf
    edit/dev_vimrc

Salt will attempt to look up the requested file from each gitfs remote repository in the order in which they are defined in the configuration. The git://github.com/example/first.git remote will be searched first. If the requested file is found, then it is served and no further searching is executed. For example:

  • A request for the file salt://haproxy/init.sls will be served from the https://github.com/example/second.git git repo.
  • A request for the file salt://haproxy/haproxy.conf will be served from the file:///root/third repo.

Note

This example is purposefully contrived to illustrate the behavior of the gitfs backend. This example should not be read as a recommended way to lay out files and git repos.

The file:// prefix denotes a git repository in a local directory. However, it will still use the given file:// URL as a remote, rather than copying the git repo to the salt cache. This means that any refs you want accessible must exist as local refs in the specified repo.

Warning

Salt versions prior to 2014.1.0 are not tolerant of changing the order of remotes or modifying the URI of existing remotes. In those versions, when modifying remotes it is a good idea to remove the gitfs cache directory (/var/cache/salt/master/gitfs) before restarting the salt-master service.

Per-remote Configuration Parameters

New in version 2014.7.0.

A number of master config parameters are global (that is, they apply to all configured gitfs remotes).

These parameters can now be overridden on a per-remote basis. This allows for a tremendous amount of customization. Here's some example usage:

gitfs_provider: pygit2
gitfs_base: develop

gitfs_remotes:
  - https://foo.com/foo.git
  - https://foo.com/bar.git:
    - root: salt
    - mountpoint: salt://foo/bar/baz
    - base: salt-base
  - http://foo.com/baz.git:
    - root: salt/states
    - user: joe
    - password: mysupersecretpassword
    - insecure_auth: True

Important

There are two important distinctions which should be noted for per-remote configuration:

  1. The URL of a remote which has per-remote configuration must be suffixed with a colon.
  2. Per-remote configuration parameters are named like the global versions, with the gitfs_ removed from the beginning.

In the example configuration above, the following is true:

  1. The first and third gitfs remotes will use the develop branch/tag as the base environment, while the second one will use the salt-base branch/tag as the base environment.
  2. The first remote will serve all files in the repository. The second remote will only serve files from the salt directory (and its subdirectories), while the third remote will only serve files from the salt/states directory (and its subdirectories).
  3. The files from the second remote will be located under salt://foo/bar/baz, while the files from the first and third remotes will be located under the root of the Salt fileserver namespace (salt://).
  4. The third remote overrides the default behavior of not authenticating to insecure (non-HTTPS) remotes.
Serving from a Subdirectory

The gitfs_root parameter allows files to be served from a subdirectory within the repository. This allows for only part of a repository to be exposed to the Salt fileserver.

Assume the below layout:

.gitignore
README.txt
foo/
foo/bar/
foo/bar/one.txt
foo/bar/two.txt
foo/bar/three.txt
foo/baz/
foo/baz/top.sls
foo/baz/edit/vim.sls
foo/baz/edit/vimrc
foo/baz/nginx/init.sls

The below configuration would serve only the files under foo/baz, ignoring the other files in the repository:

gitfs_remotes:
  - git://mydomain.com/stuff.git

gitfs_root: foo/baz

The root can also be configured on a per-remote basis.
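
For example, the configuration above could instead be expressed with a per-remote root:

gitfs_remotes:
  - git://mydomain.com/stuff.git:
    - root: foo/baz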

Mountpoints

New in version 2014.7.0.

The gitfs_mountpoint parameter will prepend the specified path to the files served from gitfs. This allows an existing repository to be used, rather than needing to reorganize a repository or design it around the layout of the Salt fileserver.

Before the addition of this feature, if a file being served up via gitfs was deeply nested within the root directory (for example, salt://webapps/foo/files/foo.conf), it would be necessary to ensure that the file was properly located in the remote repository, and that all of the parent directories were present (for example, the directories webapps/foo/files/ would need to exist at the root of the repository).

The below example would allow for a file foo.conf at the root of the repository to be served up from the Salt fileserver path salt://webapps/foo/files/foo.conf.

gitfs_remotes:
  - https://mydomain.com/stuff.git

gitfs_mountpoint: salt://webapps/foo/files

Mountpoints can also be configured on a per-remote basis.
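
For example, the configuration above could instead be expressed with a per-remote mountpoint:

gitfs_remotes:
  - https://mydomain.com/stuff.git:
    - mountpoint: salt://webapps/foo/files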

Using gitfs Alongside Other Backends

Sometimes it may make sense to use multiple backends; for instance, if sls files are stored in git but larger files are stored directly on the master.

The cascading lookup logic used for multiple remotes is also used with multiple backends. If the fileserver_backend option contains multiple backends:

fileserver_backend:
  - roots
  - git

Then the roots backend (the default backend, which serves files from /srv/salt) will be searched first for the requested file; then, if it is not found on the master, each configured git remote will be searched.

Branches, Environments, and Top Files

When using the gitfs backend, branches and tags will be mapped to environments, using the branch/tag name as an identifier.

There is one exception to this rule: the master branch is implicitly mapped to the base environment.

So, for a typical base, qa, dev setup, the following branches could be used:

master
qa
dev

top.sls files from different branches will be merged into one at runtime. Since this can lead to overly complex configurations, the recommended setup is to have a separate repository containing only the top.sls file, with just a single master branch.

To map a branch other than master as the base environment, use the gitfs_base parameter.

gitfs_base: salt-base

The base can also be configured on a per-remote basis.

Environment Whitelist/Blacklist

New in version 2014.7.0.

The gitfs_env_whitelist and gitfs_env_blacklist parameters allow for greater control over which branches/tags are exposed as fileserver environments. Exact matches, globs, and regular expressions are supported, and are evaluated in that order. If using a regular expression, ^ and $ must be omitted, and the expression must match the entire branch/tag.

gitfs_env_whitelist:
  - base
  - v1.*
  - 'mybranch\d+'

Note

v1.*, in this example, will match as both a glob and a regular expression (though it will be matched as a glob, since globs are evaluated before regular expressions).

The behavior of the blacklist/whitelist will differ depending on which combination of the two options is used:

  • If only gitfs_env_whitelist is used, then only branches/tags which match the whitelist will be available as environments
  • If only gitfs_env_blacklist is used, then the branches/tags which match the blacklist will not be available as environments
  • If both are used, then the branches/tags which match the whitelist, but do not match the blacklist, will be available as environments.
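
A blacklist is configured in the same way as the whitelist; for example (branch names here are hypothetical):

gitfs_env_blacklist:
  - dev
  - 'feature\d+'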
Authentication
pygit2

New in version 2014.7.0.

Both HTTPS and SSH authentication are supported as of version 0.20.3, which is the earliest version of pygit2 supported by Salt for gitfs.

Note

The examples below make use of per-remote configuration parameters, a feature new to Salt 2014.7.0. More information on these can be found here.

HTTPS

For HTTPS repositories which require authentication, the username and password can be provided like so:

gitfs_remotes:
  - https://domain.tld/myrepo.git:
    - user: git
    - password: mypassword

If the repository is served over HTTP instead of HTTPS, then Salt will by default refuse to authenticate to it. This behavior can be overridden by adding an insecure_auth parameter:

gitfs_remotes:
  - http://domain.tld/insecure_repo.git:
    - user: git
    - password: mypassword
    - insecure_auth: True
SSH

SSH repositories can be configured using the ssh:// protocol designation, or using scp-like syntax. So, the following two configurations are equivalent:

  • ssh://git@github.com/user/repo.git
  • git@github.com:user/repo.git

Both gitfs_pubkey and gitfs_privkey (or their per-remote counterparts) must be configured in order to authenticate to SSH-based repos. If the private key is protected with a passphrase, it can be configured using gitfs_passphrase (or simply passphrase if being configured per-remote). For example:

gitfs_remotes:
  - git@github.com:user/repo.git:
    - pubkey: /root/.ssh/id_rsa.pub
    - privkey: /root/.ssh/id_rsa
    - passphrase: myawesomepassphrase

Finally, the SSH host key must be added to the known_hosts file.

GitPython

With GitPython, only passphrase-less SSH public key authentication is supported. The auth parameters (pubkey, privkey, etc.) shown in the pygit2 authentication examples above do not work with GitPython.

gitfs_remotes:
  - ssh://git@github.com/example/salt-states.git

Since GitPython wraps the git CLI, the private key must be located in ~/.ssh/id_rsa for the user under which the Master is running, and should have permissions of 0600. Also, in the absence of a user in the repo URL, GitPython will (just as SSH does) attempt to log in as the current user (in other words, the user under which the Master is running, usually root).

If a different key needs to be used, then ~/.ssh/config can be configured to use the desired key. Information on how to do this can be found by viewing the manpage for ssh_config. Here's an example entry which can be added to ~/.ssh/config to use an alternate key for gitfs:

Host github.com
    IdentityFile /root/.ssh/id_rsa_gitfs

The Host parameter should be a hostname (or hostname glob) that matches the domain name of the git repository.

It is also necessary to add the SSH host key to the known_hosts file. The exception to this would be if strict host key checking is disabled, which can be done by adding StrictHostKeyChecking no to the entry in ~/.ssh/config:

Host github.com
    IdentityFile /root/.ssh/id_rsa_gitfs
    StrictHostKeyChecking no

However, this is generally regarded as insecure, and is not recommended.

Adding the SSH Host Key to the known_hosts File

To use SSH authentication, it is necessary to have the remote repository's SSH host key in the ~/.ssh/known_hosts file. If the master is also a minion, this can be done using the ssh.set_known_host function:

# salt mymaster ssh.set_known_host user=root hostname=github.com
mymaster:
    ----------
    new:
        ----------
        enc:
            ssh-rsa
        fingerprint:
            16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
        hostname:
            |1|OiefWWqOD4kwO3BhoIGa0loR5AA=|BIXVtmcTbPER+68HvXmceodDcfI=
        key:
            AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
    old:
        None
    status:
        updated

If not, then the easiest way to add the key is to su to the user (usually root) under which the salt-master runs and attempt to login to the server via SSH:

$ su
Password:
# ssh github.com
The authenticity of host 'github.com (192.30.252.128)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,192.30.252.128' (RSA) to the list of known hosts.
Permission denied (publickey).

It doesn't matter if the login was successful, as answering yes will write the fingerprint to the known_hosts file.

Verifying the Fingerprint

To verify that the correct fingerprint was added, it is a good idea to look it up. One way to do this is to use nmap:

$ nmap github.com --script ssh-hostkey

Starting Nmap 5.51 ( http://nmap.org ) at 2014-08-18 17:47 CDT
Nmap scan report for github.com (192.30.252.129)
Host is up (0.17s latency).
Not shown: 996 filtered ports
PORT     STATE SERVICE
22/tcp   open  ssh
| ssh-hostkey: 1024 ad:1c:08:a4:40:e3:6f:9c:f5:66:26:5d:4b:33:5d:8c (DSA)
|_2048 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 (RSA)
80/tcp   open  http
443/tcp  open  https
9418/tcp open  git

Nmap done: 1 IP address (1 host up) scanned in 28.78 seconds

Another way is to check one's own known_hosts file, using this one-liner:

$ ssh-keygen -l -f /dev/stdin <<<`ssh-keyscan -t rsa github.com 2>/dev/null` | awk '{print $2}'
16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
Refreshing gitfs Upon Push

By default, Salt updates the remote fileserver backends every 60 seconds. However, if it is desirable to refresh more quickly than that, the Reactor System can be used to signal the master to update the fileserver on each push, provided that the git server is also a Salt minion. There are three steps to this process:

  1. On the master, create a file /srv/reactor/update_fileserver.sls, with the following contents:

    update_fileserver:
      runner.fileserver.update
    
  2. Add the following reactor configuration to the master config file:

    reactor:
      - 'salt/fileserver/gitfs/update':
        - /srv/reactor/update_fileserver.sls
    
  3. On the git server, add a post-receive hook with the following contents:

    #!/usr/bin/env sh
    
    salt-call event.fire_master update salt/fileserver/gitfs/update
    

The update argument right after event.fire_master in this example can really be anything, as it represents the data being passed in the event, and the passed data is ignored by this reactor.

Similarly, the tag name salt/fileserver/gitfs/update can be replaced by anything, so long as the usage is consistent.

Using Git as an External Pillar Source

Git repositories can also be used to provide Pillar data, using the External Pillar system. Note that this is different from gitfs, and is not yet at feature parity with it.

To define a git external pillar, add a section like the following to the salt master config file:

ext_pillar:
  - git: <branch> <repo> [root=<gitroot>]

Changed in version 2014.7.0: The optional root parameter was added

The <branch> param is the branch containing the pillar SLS tree. The <repo> param is the URI for the repository. To add the master branch of the specified repo as an external pillar source:

ext_pillar:
  - git: master https://domain.com/pillar.git

Use the root parameter to use pillars from a subdirectory of a git repository:

ext_pillar:
  - git: master https://domain.com/pillar.git root=subdirectory

More information on the git external pillar can be found in the salt.pillar.git_pillar docs.

Why aren't my custom modules/states/etc. syncing to my Minions?

In versions 0.16.3 and older, when using the git fileserver backend, certain versions of GitPython may generate errors when fetching, which Salt fails to catch. While not fatal to the fetch process, these interrupt the fileserver update that takes place before custom types are synced, and thus interrupt the sync itself. Try disabling the git fileserver backend in the master config, restarting the master, and attempting the sync again.

This issue is worked around in Salt 0.16.4 and newer.

The Mac OS X (Mavericks) Developer Step By Step Guide To Salt Installation

This document provides a step-by-step guide to installing a Salt cluster consisting of one master and one minion running in a local VM hosted on Mac OS X.

Note

This guide is aimed at developers who wish to run Salt in a virtual machine. The official (Linux) walkthrough can be found here.

The 5 Cent Salt Intro

Since you're here, you've probably already heard about Salt, so you already know Salt lets you configure and run commands on hordes of servers easily. Here's a brief overview of a Salt cluster:

  • Salt works by having a "master" server sending commands to one or multiple "minion" servers [1]. The master server is the "command center". It is going to be the place where you store your configuration files, aka: "which server is the db, which is the web server, and what libraries and software they should have installed". The minions receive orders from the master. Minions are the servers actually performing work for your business.

  • Salt has two types of configuration files:

    1. the "salt communication channels" or "meta" or "config" configuration files (not official names): one for the master (usually is /etc/salt/master , on the master server), and one for minions (default is /etc/salt/minion or /etc/salt/minion.conf, on the minion servers). Those files are used to determine things like the Salt Master IP, port, Salt folder locations, etc.. If these are configured incorrectly, your minions will probably be unable to receive orders from the master, or the master will not know which software a given minion should install.

    2. the "business" or "service" configuration files (once again, not an official name): these are configuration files, ending with ".sls" extension, that describe which software should run on which server, along with particular configuration properties for the software that is being installed. These files should be created in the /srv/salt folder by default, but their location can be changed using ... /etc/salt/master configuration file!

Note

This tutorial contains a third important configuration file, not to be confused with the previous two: the virtual machine provisioning configuration file. This in itself is not specifically tied to Salt, but it also contains some Salt configuration. More on that in step 3. Also note that all configuration files are YAML files. So indentation matters.

[1] Salt also works in a "masterless" configuration, where a minion is autonomous (in which case Salt can be seen as a local configuration tool), or in a "multiple master" configuration. See the documentation for more on that.
Before Digging In, The Architecture Of The Salt Cluster
Salt Master

The "Salt master" server is going to be the Mac OS machine, directly. Commands will be run from a terminal app, so Salt will need to be installed on the Mac. This is going to be more convenient for toying around with configuration files.

Salt Minion

We'll only have one "Salt minion" server. It is going to be running on a Virtual Machine running on the Mac, using VirtualBox. It will run an Ubuntu distribution.

Step 1 - Configuring The Salt Master On Your Mac


Because Salt has a lot of dependencies that are not built into Mac OS X, we will use Homebrew to install Salt. Homebrew is a package manager for the Mac; it's great, use it (for this tutorial at least!). Some people spend a lot of time installing libs by hand to better understand dependencies, and then realize how useful a package manager is once they're configuring a brand new machine and have to do it all over again. It also lets you uninstall things easily.

Note

Brew is a Ruby program (Ruby is installed by default on your Mac). Brew downloads, compiles, and links software. The linking phase is when compiled software is deployed on your machine. It may conflict with manually installed software, especially in the /usr/local directory. It's OK; remove the manually installed version, then refresh the link by typing brew link 'packageName'. Brew has a brew doctor command that can help you troubleshoot. It's a great command, use it often. Brew requires the Xcode command line tools. When you run brew the first time, it asks you to install them if they're not already on your system. Brew installs software in /usr/local/bin (system bins are in /usr/bin). In order to use those bins, you need your $PATH to search there first. Brew tells you if your $PATH needs to be fixed.

Tip

Use the keyboard shortcut cmd + shift + period in the "open" Mac OS X dialog box to display hidden files and folders, such as .profile.

Install Homebrew

Install Homebrew by following the instructions at http://brew.sh/, or just type:

ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"

Now type the following commands in your terminal (you may want to type brew doctor after each to make sure everything's fine):

brew install python
brew install swig
brew install zmq

Note

zmq is ZeroMQ. It's a fantastic library used for server to server network communication and is at the core of Salt efficiency.

Install Salt

You should now have everything ready to launch this command:

pip install salt

Note

There should be no need for sudo pip install salt. Brew installed Python for your user, so you should have all the needed access. In case you would like to check, type which python to ensure that it's /usr/local/bin/python, and which pip, which should be /usr/local/bin/pip.

Now type python in a terminal, then import salt. There should be no errors. Now exit the Python interpreter using exit().

Create The Master Configuration

If the default /etc/salt/master configuration file was not created, copy-paste it from here: http://docs.saltstack.com/ref/configuration/examples.html#configuration-examples-master

Note

/etc/salt/master is a file, not a folder.

Salt Master configuration changes. The Salt master needs a few customizations to be able to run on Mac OS X:

sudo launchctl limit maxfiles 4096 8192

In the /etc/salt/master file, change max_open_files to 8192, or just add the line max_open_files: 8192 (no quotes) if it doesn't already exist.

You should now be able to launch the Salt master:

sudo salt-master --log-level=all

There should be no errors when running the above command.

Note

This command is supposed to run as a daemon, but for toying around, we'll keep it running in a terminal to monitor the activity.

Now that the master is set, let's configure a minion on a VM.

Step 2 - Configuring The Minion VM

The Salt minion is going to run in a virtual machine. There are a lot of software options that let you run virtual machines on a Mac, but for this tutorial we're going to use VirtualBox. In addition to VirtualBox, we will use Vagrant, which allows you to create the base VM configuration.

Vagrant lets you build ready-to-use VM images, starting from an OS image and customizing it using "provisioners". In our case, we'll use it to:

  • Download the base Ubuntu image
  • Install salt on that Ubuntu image (Salt is going to be the "provisioner" for the VM).
  • Launch the VM
  • SSH into the VM to debug
  • Stop the VM once you're done.
Install VirtualBox

Go get it here: https://www.virtualbox.org/wiki/Downloads (click on "VirtualBox for OS X hosts" => x86/amd64)

Install Vagrant

Go get it here: http://downloads.vagrantup.com/ and choose the latest version (1.3.5 at time of writing), then the .dmg file. Double-click to install it. Make sure the vagrant command is found when run in the terminal. Type vagrant. It should display a list of commands.

Create The Minion VM Folder

Create a folder in which you will store your minion's VM. In this tutorial, it's going to be a minion folder in the $home directory.

cd $home
mkdir minion
Initialize Vagrant

From the minion folder, type

vagrant init

This command creates a default Vagrantfile configuration file. This configuration file will be used to pass configuration parameters to the Salt provisioner in Step 3.

Import Precise64 Ubuntu Box
vagrant box add precise64 http://files.vagrantup.com/precise64.box

Note

This box is added at the global Vagrant level. You only need to do it once as each VM will use this same file.

Modify the Vagrantfile

Modify ./minion/Vagrantfile to use the precise64 box. Change the config.vm.box line to:

config.vm.box = "precise64"

Uncomment the line creating a host-only IP. This is the IP of your minion (you can change it to something else if that IP is already in use):

config.vm.network :private_network, ip: "192.168.33.10"

At this point you should have a VM that can run, although there won't be much in it. Let's check that.

Checking The VM

From the $home/minion folder type:

vagrant up

A log showing the VM booting will be displayed. Once it's done, you'll be back at the terminal prompt:

ping 192.168.33.10

The VM should respond to your ping request.

Now log into the VM in ssh using Vagrant again:

vagrant ssh

You should see the shell prompt change to something similar to vagrant@precise64:~$ meaning you're inside the VM. From there, enter the following:

ping 10.0.2.2

Note

That IP is the IP of your VM host (the Mac OS X machine). The number is a VirtualBox default and is displayed in the log after the vagrant ssh command. We'll use that IP to tell the minion where the Salt master is. Once you're done, end the ssh session by typing exit.

It's now time to connect the VM to the Salt master.

Step 3 - Connecting Master and Minion

Creating The Minion Configuration File

Create the /etc/salt/minion file. In that file, put the following lines, giving the ID for this minion, and the IP of the master:

master: 10.0.2.2
id: 'minion1'
file_client: remote

Minions authenticate with the master using keys. Keys are generated automatically if you don't provide them, and you can accept them on the master later on. However, this requires accepting the minion key every time the minion is destroyed or created (which could be quite often). A better way is to create those keys in advance, feed them to the minion, and authorize them once.

Preseed minion keys

From the minion folder on your Mac run:

sudo salt-key --gen-keys=minion1

This should create two files: minion1.pem, and minion1.pub. Since those files have been created using sudo, but will be used by vagrant, you need to change ownership:

sudo chown youruser:yourgroup minion1.pem
sudo chown youruser:yourgroup minion1.pub

Then copy the .pub file into the list of accepted minions:

sudo cp minion1.pub /etc/salt/pki/master/minions/minion1
Modify Vagrantfile to Use Salt Provisioner

Let's now modify the Vagrantfile used to provision the Salt VM. Add the following section in the Vagrantfile (note: it should be at the same indentation level as the other properties):

# salt-vagrant config
config.vm.provision :salt do |salt|
    salt.run_highstate = true
    salt.minion_config = "/etc/salt/minion"
    salt.minion_key = "./minion1.pem"
    salt.minion_pub = "./minion1.pub"
end

Now destroy the VM and recreate it from the minion folder:

vagrant destroy
vagrant up

If everything is fine you should see the following message:

"Bootstrapping Salt... (this may take a while)
Salt successfully configured and installed!"
Checking Master-Minion Communication

To make sure the master and minion are talking to each other, enter the following:

sudo salt '*' test.ping

You should see your minion answering the ping. It's now time to do some configuration.

Step 4 - Configure Services to Install On the Minion

In this step we'll use the Salt master to instruct our minion to install Nginx.

Checking the system's original state

First, make sure that an HTTP server is not installed on our minion. When opening a browser directed at http://192.168.33.10/, you should get an error saying the site cannot be reached.

Initialize the top.sls file

System configuration is done in the /srv/salt/top.sls file (and subfiles/folders), and is then applied by running the state.highstate command, which has the Salt master give orders so minions will update their instructions and run the associated commands.

First, create an empty file on your Salt master (the Mac OS X machine):

touch /srv/salt/top.sls

When the file is empty, or if no configuration is found for our minion, an error is reported when running the following command:

sudo salt 'minion1' state.highstate

This should return an error stating: "No Top file or external nodes data matches found".

Create The Nginx Configuration

Now it is finally time to get to the real meat of our server's configuration. For this tutorial, our minion will be treated as a web server that needs to have Nginx installed.

Insert the following lines into the /srv/salt/top.sls file (which should currently be empty):

base:
  'minion1':
    - bin.nginx

Now create a /srv/salt/bin/nginx.sls file containing the following:

nginx:
  pkg.installed:
    - name: nginx
  service.running:
    - enable: True
    - reload: True
Check Minion State

Finally run the state.highstate command again:

sudo salt 'minion1' state.highstate

You should see a log showing that the Nginx package has been installed and the service configured. To prove it, open your browser and navigate to http://192.168.33.10/; you should see the standard Nginx welcome page.

Congratulations!

Where To Go From Here

A full description of configuration management within Salt (sls files among other things) is available here: http://docs.saltstack.com/index.html#configuration-management

Writing Salt Tests

Note

THIS TUTORIAL IS A WORK IN PROGRESS

Salt comes with a powerful integration and unit test suite. The test suite allows for the fully automated run of integration and/or unit tests from a single interface. The integration tests are surprisingly easy to write and can be written to be either destructive or non-destructive.

Getting Set Up For Tests

To walk through adding an integration test, start by getting the latest development code and the test system from GitHub:

Note

The develop branch often has failing tests and should always be considered a staging area. For a checkout that tests should be running perfectly on, please check out a specific release tag (such as v2014.1.4).

git clone git@github.com:saltstack/salt.git
pip install git+https://github.com/saltstack/salt-testing.git#egg=SaltTesting

Now that a fresh checkout is available, run the test suite.
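
For example (a sketch; the runtests.py script under the tests directory is the suite's historical entry point, so check the repository for the current invocation and options):

cd salt
python tests/runtests.py --help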

Destructive vs Non-destructive

Since Salt is used to change the settings and behavior of systems, often the best approach to running tests is to make actual changes to an underlying system. This is where the concept of destructive integration tests comes into play. Tests can be written to alter the system they are running on. This capability is what fills in the gap needed to properly test aspects of system management like package installation.

To write a destructive test import and use the destructiveTest decorator for the test method:

import integration
from salttesting.helpers import destructiveTest

class PkgTest(integration.ModuleCase):
    @destructiveTest
    def test_pkg_install(self):
        ret = self.run_function('pkg.install', name='finch')
        self.assertSaltTrueReturn(ret)
        ret = self.run_function('pkg.purge', name='finch')
        self.assertSaltTrueReturn(ret)
Automated Test Runs

SaltStack maintains a Jenkins server which can be viewed at http://jenkins.saltstack.com. The tests executed from this Jenkins server create fresh virtual machines for each test run, then execute the destructive tests on the new clean virtual machine. This allows for the execution of tests across supported platforms.

HTTP Modules

This tutorial demonstrates using the various HTTP modules available in Salt. These modules wrap the Python tornado, urllib2, and requests libraries, extending them in a manner that is more consistent with Salt workflows.

The salt.utils.http Library

This library forms the core of the HTTP modules. Since it is designed to be used from the minion as an execution module, in addition to the master as a runner, it was abstracted into this multi-use library. This library can also be imported by 3rd-party programs wishing to take advantage of its extended functionality.

Core functionality of the execution, state, and runner modules is derived from this library, so common usages between them are described here. Documentation specific to each module is described below.

This library can be imported with:

import salt.utils.http
Configuring Libraries

This library can make use of tornado (which is required by Salt), urllib2 (which ships with Python), or requests (which can be installed separately). By default, tornado will be used. In order to switch to urllib2, set the following variable:

backend: urllib2

In order to switch to requests, set the following variable:

backend: requests

This can be set in the master or minion configuration file, or passed as an option directly to any of the http.query() functions.
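
For example, to select the requests backend for a single call:

salt.utils.http.query('http://example.com', backend='requests')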

salt.utils.http.query()

This function forms a basic query, but with some add-ons not present in the tornado, urllib2, and requests libraries. Not all functionality currently available in these libraries has been added, but may be added in future iterations.

A basic query can be performed by calling this function with no more than a single URL:

salt.utils.http.query('http://example.com')

By default the query will be performed with a GET method. The method can be overridden with the method argument:

salt.utils.http.query('http://example.com/delete/url', 'DELETE')

When using the POST method (and others, such as PUT), extra data is usually sent as well. This data can be sent directly, in whatever format is required by the remote server (XML, JSON, plain text, etc.).

salt.utils.http.query(
    'http://example.com/delete/url',
    method='POST',
    data=json.dumps(mydict)
)

Bear in mind that this data must be sent pre-formatted; this function will not format it for you. However, a templated file stored on the local system may be passed through, along with variables to populate it with. To pass through only the file (untemplated):

salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.xml'
)

To pass through a file that contains jinja + yaml templating (the default):

salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.jinja',
    data_render=True,
    template_data={'key1': 'value1', 'key2': 'value2'}
)

To pass through a file that contains mako templating:

salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.mako',
    data_render=True,
    data_renderer='mako',
    template_data={'key1': 'value1', 'key2': 'value2'}
)

Because this function uses Salt's own rendering system, any Salt renderer can be used. Because Salt's renderer requires __opts__ to be set, an opts dictionary should be passed in. If it is not, then the default __opts__ values for the node type (master or minion) will be used. Because this library is intended primarily for use by minions, the default node type is minion. However, this can be changed to master if necessary.

salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.jinja',
    data_render=True,
    template_data={'key1': 'value1', 'key2': 'value2'},
    opts=__opts__
)

salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.jinja',
    data_render=True,
    template_data={'key1': 'value1', 'key2': 'value2'},
    node='master'
)

Headers may also be passed through, either as a header_list, a header_dict, or a header_file. As with the data_file, the header_file may also be templated. Take note that because HTTP headers are normally syntactically-correct YAML, they will automatically be imported as a Python dict.

salt.utils.http.query(
    'http://example.com/delete/url',
    method='POST',
    header_file='/srv/salt/headers.jinja',
    header_render=True,
    header_renderer='jinja',
    template_data={'key1': 'value1', 'key2': 'value2'}
)

Because much of the data that would be templated between headers and data may be the same, the template_data is the same for both. Correcting possible variable name collisions is up to the user.

The query() function supports basic HTTP authentication. A username and password may be passed in as username and password, respectively.

salt.utils.http.query(
    'http://example.com',
    username='larry',
    password='5700g3543v4r',
)

Cookies are also supported, using Python's built-in cookielib. However, they are turned off by default. To turn cookies on, set cookies to True.

salt.utils.http.query(
    'http://example.com',
    cookies=True
)

By default cookies are stored in Salt's cache directory, normally /var/cache/salt, as a file called cookies.txt. However, this location may be changed with the cookie_jar argument:

salt.utils.http.query(
    'http://example.com',
    cookies=True,
    cookie_jar='/path/to/cookie_jar.txt'
)

By default, the format of the cookie jar is LWP (aka, lib-www-perl). This default was chosen because it is a human-readable text file. If desired, the format of the cookie jar can be set to Mozilla:

salt.utils.http.query(
    'http://example.com',
    cookies=True,
    cookie_jar='/path/to/cookie_jar.txt',
    cookie_format='mozilla'
)

Because Salt commands are normally one-off commands that are piped together, this library cannot normally behave as a browser would, with session cookies that persist across multiple HTTP requests. However, the session can be persisted in a separate cookie jar. The default filename for this file, inside Salt's cache directory, is cookies.session.p. This can also be changed.

salt.utils.http.query(
    'http://example.com',
    persist_session=True,
    session_cookie_jar='/path/to/jar.p'
)

The format of this file is msgpack, which is consistent with much of the rest of Salt's internal structure. Historically, the extension for this file is .p. There are no current plans to make this configurable.

Return Data

By default, query() will attempt to decode the return data. Because it was designed to be used with REST interfaces, it will attempt to decode the data received from the remote server. First it will check the Content-type header for references to XML. If it does not find any, it will look for references to JSON. If it does not find any, it will fall back to plain text, which will not be decoded.

JSON data is translated into a dict using Python's built-in json library. XML is translated using salt.utils.xml_util, which will use Python's built-in XML libraries to attempt to convert the XML into a dict. In order to force either JSON or XML decoding, the decode_type may be set:

salt.utils.http.query(
    'http://example.com',
    decode_type='xml'
)

Once translated, the return dict from query() will include a dict called dict.

If the data is not to be translated using one of these methods, decoding may be turned off.

salt.utils.http.query(
    'http://example.com',
    decode=False
)

If decoding is turned on, and references to JSON or XML cannot be found, then this module will default to plain text, and return the undecoded data as text (even if text is set to False; see below).

The query() function can return the HTTP status code, headers, and/or text as required. However, each must individually be turned on.

salt.utils.http.query(
    'http://example.com',
    status=True,
    headers=True,
    text=True
)

The return from these will be found in the return dict as status, headers and text, respectively.
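
A minimal sketch of consuming the status code from the return dict:

ret = salt.utils.http.query('http://example.com', status=True)
if ret.get('status') == 200:
    print('server responded OK')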

Writing Return Data to Files

It is possible to write either the return data or headers to files as soon as the response is received from the server, by specifying file locations via the text_out or headers_out arguments. text and headers do not need to be returned to the user in order to do this.

salt.utils.http.query(
    'http://example.com',
    text=False,
    headers=False,
    text_out='/path/to/url_download.txt',
    headers_out='/path/to/headers_download.txt',
)
SSL Verification

By default, this function will verify SSL certificates. However, for testing or debugging purposes, SSL verification can be turned off.

salt.utils.http.query(
    'https://example.com',
    ssl_verify=False,
)
CA Bundles

The requests library has its own method of detecting which CA (certificate authority) bundle file to use. Usually this is implemented by the packager for the specific operating system distribution that you are using. However, urllib2 requires a little more work under the hood. By default, Salt will try to auto-detect the location of this file. However, if it is not in an expected location, or a different path needs to be specified, it may be done so using the ca_bundle variable.

salt.utils.http.query(
    'https://example.com',
    ca_bundle='/path/to/ca_bundle.pem',
)
Updating CA Bundles

The update_ca_bundle() function can be used to update the bundle file at a specified location. If the target location is not specified, then it will attempt to auto-detect the location of the bundle file. If the URL to download the bundle from is not specified, a bundle will be downloaded from the cURL website.

CAUTION: The target and the source should always be specified! Failure to specify the target may result in the file being written to the wrong location on the local system. Failure to specify the source may cause the upstream URL to receive excess unnecessary traffic, and may cause a file to be downloaded which is hazardous or does not meet the needs of the user.

salt.utils.http.update_ca_bundle(
    target='/path/to/ca-bundle.crt',
    source='https://example.com/path/to/ca-bundle.crt',
    opts=__opts__,
)

The opts parameter should also always be specified. If it is, then the target and the source may be specified in the relevant configuration file (master or minion) as ca_bundle and ca_bundle_url, respectively.

ca_bundle: /path/to/ca-bundle.crt
ca_bundle_url: https://example.com/path/to/ca-bundle.crt

If Salt is unable to auto-detect the location of the CA bundle, it will raise an error.

The update_ca_bundle() function can also be passed a string or a list of strings which represent files on the local system, which should be appended (in the specified order) to the end of the CA bundle file. This is useful in environments where private certs need to be made available, and are not otherwise reasonable to add to the bundle file.

salt.utils.http.update_ca_bundle(
    opts=__opts__,
    merge_files=[
        '/etc/ssl/private_cert_1.pem',
        '/etc/ssl/private_cert_2.pem',
        '/etc/ssl/private_cert_3.pem',
    ]
)
Test Mode

This function may be run in test mode. This mode will perform all work up until the actual HTTP request. By default, instead of performing the request, an empty dict will be returned. Using this function with TRACE logging turned on will reveal the contents of the headers and POST data to be sent.

Rather than returning an empty dict, an alternate test_url may be passed in. If this is detected, then test mode will replace the url with the test_url, set test to True in the return data, and perform the rest of the requested operations as usual. This allows a custom, non-destructive URL to be used for testing when necessary.
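
For example, a sketch of test mode with a substitute URL (the test argument name is assumed from the behavior described; test_url is documented above, and the URLs are hypothetical):

salt.utils.http.query(
    'http://example.com/api',
    method='POST',
    test=True,
    test_url='http://test.example.com/api'
)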

Execution Module

The http execution module is a very thin wrapper around the salt.utils.http library. The opts can be passed through as well, but if they are not specified, the minion defaults will be used as necessary.

Because passing complete data structures from the command line can be tricky at best, and dangerous (in terms of execution injection attacks) at worst, the data_file and header_file arguments are likely to see more use here.

All methods for the library are available in the execution module, as kwargs.

salt myminion http.query http://example.com/restapi method=POST \
    username='larry' password='5700g3543v4r' headers=True text=True \
    status=True decode_type=xml data_render=True \
    header_file=/tmp/headers.txt data_file=/tmp/data.txt \
    header_render=True cookies=True persist_session=True
Runner Module

Like the execution module, the http runner module is a very thin wrapper around the salt.utils.http library. The only significant difference is that because runners execute on the master instead of a minion, a target is not required, and default opts will be derived from the master config, rather than the minion config.

All methods for the library are available in the runner module, as kwargs.

salt-run http.query http://example.com/restapi method=POST \
    username='larry' password='5700g3543v4r' headers=True text=True \
    status=True decode_type=xml data_render=True \
    header_file=/tmp/headers.txt data_file=/tmp/data.txt \
    header_render=True cookies=True persist_session=True
State Module

The state module is a wrapper around the runner module, which applies stateful logic to a query. All kwargs as listed above are specified as usual in state files, but two more kwargs are available to apply stateful logic. A required parameter is match, which specifies a pattern to look for in the return text. By default, this performs a string comparison, looking for the value of match in the return text. In Python terms this looks like:

if match in html_text:
    return True

If more complex pattern matching is required, a regular expression can be used by specifying a match_type. By default this is set to string, but it can be manually set to pcre instead. Please note that despite the name, this will use Python's re.search() rather than re.match().
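
In Python terms, the pcre match type behaves roughly like:

import re

if re.search(match, html_text):
    return True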

Therefore, the following states are valid:

http://example.com/restapi:
  http.query:
    - match: 'SUCCESS'
    - username: 'larry'
    - password: '5700g3543v4r'
    - data_render: True
    - header_file: /tmp/headers.txt
    - data_file: /tmp/data.txt
    - header_render: True
    - cookies: True
    - persist_session: True

http://example.com/restapi:
  http.query:
    - match_type: pcre
    - match: '(?i)succe[ss|ed]'
    - username: 'larry'
    - password: '5700g3543v4r'
    - data_render: True
    - header_file: /tmp/headers.txt
    - data_file: /tmp/data.txt
    - header_render: True
    - cookies: True
    - persist_session: True

In addition to, or instead of a match pattern, the status code for a URL can be checked. This is done using the status argument:

http://example.com/:
  http.query:
    - status: '200'

If both are specified, both will be checked, but if only one is True and the other is False, then False will be returned. In this case, the comments in the return data will contain information for troubleshooting.

Because this is a monitoring state, it will return extra data to code that expects it. This data will always include text and status. Optionally, headers and dict may also be requested by setting the headers and decode arguments to True, respectively.

LXC Management with Salt

Note

This walkthrough assumes basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough.

Warning

Some features are only currently available in the develop branch, and are new in the upcoming 2015.5.0 release. These new features will be clearly labeled. Even in the 2015.5 release, you will need the latest changeset of that stable branch for the salt-cloud functionality to work correctly.

Dependencies

Manipulation of LXC containers in Salt requires the minion to have an LXC version of at least 1.0 (an alpha or beta release of LXC 1.0 is acceptable). The following distributions are known to have new enough versions of LXC packaged:

  • RHEL/CentOS 6 and later (via EPEL)
  • Fedora (All non-EOL releases)
  • Debian 8.0 (Jessie)
  • Ubuntu 14.04 LTS and later (LXC templates are packaged separately as lxc-templates, it is recommended to also install this package)
  • openSUSE 13.2 and later
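
To check which LXC version a minion has, a quick sketch using cmd.run (lxc-info provides a --version flag):

salt myminion cmd.run 'lxc-info --version'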
Profiles

Profiles allow for a sort of shorthand for commonly-used configurations to be defined in the minion config file, grains, pillar, or the master config file. The profile is retrieved by Salt using the config.get function, which looks in those locations, in that order. This allows for profiles to be defined centrally in the master config file, with several options for overriding them (if necessary) on groups of minions or individual minions.

There are two types of profiles:

  • One for defining the parameters used in container creation/clone.
  • One for defining the container's network interface(s) settings.
Container Profiles

LXC container profiles are defined underneath the lxc.container_profile config option:

lxc.container_profile:
  centos:
    template: centos
    backing: lvm
    vgname: vg1
    lvname: lxclv
    size: 10G
  centos_big:
    template: centos
    backing: lvm
    vgname: vg1
    lvname: lxclv
    size: 20G

Profiles are retrieved using the config.get function, with the recurse merge strategy. This means that a profile can be defined at a lower level (for example, the master config file) and then parts of it can be overridden at a higher level (for example, in pillar data). Consider the following container profile data:

In the Master config file:

lxc.container_profile:
  centos:
    template: centos
    backing: lvm
    vgname: vg1
    lvname: lxclv
    size: 10G

In the Pillar data

lxc.container_profile:
  centos:
    size: 20G

Any minion with the above Pillar data would have the size parameter in the centos profile overridden to 20G, while those minions without the above Pillar data would have the 10G size value. This is another way of achieving the same result as the centos_big profile above, without having to define another whole profile that differs in just one value.
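
To verify the merged result on such a minion, the profile can be inspected with config.get, using its colon-delimited key syntax (a sketch):

salt myminion config.get lxc.container_profile:centos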

Note

In the 2014.7.x release cycle and earlier, container profiles are defined under lxc.profile. This parameter will still work in version 2015.5.0, but is deprecated and will be removed in a future release. Please note however that the profile merging feature described above will only work with profiles defined under lxc.container_profile, and only in versions 2015.5.0 and later.

Additionally, in version 2015.5.0 container profiles have been expanded to support passing template-specific CLI options to lxc.create. Below is a table describing the parameters which can be configured in container profiles:

Parameter      2015.5.0 and Newer    2014.7.x and Earlier
template [1]   Yes                   Yes
options [1]    Yes                   No
image [1]      Yes                   Yes
backing        Yes                   Yes
snapshot [2]   Yes                   Yes
lvname [1]     Yes                   Yes
fstype [1]     Yes                   Yes
size           Yes                   Yes

  [1] Parameter is only supported for container creation, and will be ignored if the profile is used when cloning a container.
  [2] Parameter is only supported for container cloning, and will be ignored if the profile is used when not cloning a container.
Network Profiles

LXC network profiles are defined underneath the lxc.network_profile config option. By default, the module uses a DHCP-based configuration and tries to guess a bridge to get connectivity.

Warning

On versions prior to 2015.5.2, you need to explicitly specify the network bridge.

lxc.network_profile:
  centos:
    eth0:
      link: br0
      type: veth
      flags: up
  ubuntu:
    eth0:
      link: lxcbr0
      type: veth
      flags: up

As with container profiles, network profiles are retrieved using the config.get function, with the recurse merge strategy. Consider the following network profile data:

In the Master config file:

lxc.network_profile:
  centos:
    eth0:
      link: br0
      type: veth
      flags: up

In the Pillar data

lxc.network_profile:
  centos:
    eth0:
      link: lxcbr0

Any minion with the above Pillar data would use the lxcbr0 interface as the bridge interface for any container configured using the centos network profile, while those minions without the above Pillar data would use the br0 interface for the same.

Note

In the 2014.7.x release cycle and earlier, network profiles are defined under lxc.nic. This parameter will still work in version 2015.5.0, but is deprecated and will be removed in a future release. Please note however that the profile merging feature described above will only work with profiles defined under lxc.network_profile, and only in versions 2015.5.0 and later.

The following are parameters which can be configured in network profiles. These will directly correspond to a parameter in an LXC configuration file (see man 5 lxc.container.conf).

  • type - Corresponds to lxc.network.type
  • link - Corresponds to lxc.network.link
  • flags - Corresponds to lxc.network.flags

Interface-specific options (MAC address, IPv4/IPv6, etc.) must be passed on a container-by-container basis, for instance using the nic_opts argument to lxc.create:

salt myminion lxc.create container1 profile=centos network_profile=centos nic_opts='{eth0: {ipv4: 10.0.0.20/24, gateway: 10.0.0.1}}'

Warning

The ipv4, ipv6, gateway, and link (bridge) settings in network profiles / nic_opts will only work if the container doesn't redefine the network configuration (for example in /etc/sysconfig/network-scripts/ifcfg-<interface_name> on RHEL/CentOS, or /etc/network/interfaces on Debian/Ubuntu/etc.). Use these with caution. The container images installed using the download template, for instance, typically are configured for eth0 to use DHCP, which will conflict with static IP addresses set at the container level.

Note

For LXC < 1.0.7 and DHCP support, set ipv4.gateway: 'auto' in your network profile, i.e.:

lxc.network_profile.nic:
  debian:
    eth0:
      link: lxcbr0
      ipv4.gateway: 'auto'
Old lxc support (<1.0.7)

With Salt 2015.5.2 and above, this setting is normally autoselected, but on earlier versions you'll need to configure your network profile to set lxc.network.ipv4.gateway to auto when using a classic IPv4 configuration.

Thus you'll need:

lxc.network_profile.foo:
  eth0:
    link: lxcbr0
    ipv4.gateway: auto
Tricky Network Setup Examples

This example covers how to make a container with both an internal IP and a public routable IP, wired on two veth pairs.

The second interface, which receives the public routable IP directly, can't be the first interface, which we reserve for private inter-LXC networking.

lxc.network_profile.foo:
  eth0: {gateway: null, bridge: lxcbr0}
  eth1:
    # replace that by your main interface
    'link': 'br0'
    'mac': '00:16:5b:01:24:e1'
    'gateway': '2.20.9.14'
    'ipv4': '2.20.9.1'
Creating a Container on the CLI
From a Template

LXC is commonly distributed with several template scripts in /usr/share/lxc/templates. Some distros may package these separately in an lxc-templates package, so make sure to check if this is the case.

There are LXC template scripts for several different operating systems, but some of them are designed to use tools specific to a given distribution. For instance, the ubuntu template uses debootstrap, the centos template uses yum, etc., making these templates impractical when a container for a different OS is desired.

The lxc.create function is used to create containers using a template script. To create a CentOS container named container1 on a CentOS minion named mycentosminion, using the centos LXC template, one can simply run the following command:

salt mycentosminion lxc.create container1 template=centos

For these instances, there is a download template which retrieves minimal container images for several different operating systems. To use this template, it is necessary to provide an options parameter when creating the container, with three values:

  1. dist - the Linux distribution (e.g. ubuntu or centos)
  2. release - the release name/version (e.g. trusty or 6)
  3. arch - CPU architecture (e.g. amd64 or i386)

The lxc.images function (new in version 2015.5.0) can be used to list the available images. Alternatively, the releases can be viewed on http://images.linuxcontainers.org/images/. The images are organized in such a way that the dist, release, and arch can be determined using the following URL format: http://images.linuxcontainers.org/images/dist/release/arch. For example, http://images.linuxcontainers.org/images/centos/6/amd64 would correspond to a dist of centos, a release of 6, and an arch of amd64.
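
For example, to list the images available to a minion:

salt myminion lxc.images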

Therefore, to use the download template to create a new 64-bit CentOS 6 container, the following command can be used:

salt myminion lxc.create container1 template=download options='{dist: centos, release: 6, arch: amd64}'

Note

These command-line options can be placed into a container profile, like so:

lxc.container_profile.cent6:
  template: download
  options:
    dist: centos
    release: 6
    arch: amd64

The options parameter is not supported in profiles for the 2014.7.x release cycle and earlier, so it would still need to be provided on the command-line.

Cloning an Existing Container

To clone a container, use the lxc.clone function:

salt myminion lxc.clone container2 orig=container1
Using a Container Image

While cloning is a good way to create new containers from a common base container, the source container that is being cloned needs to already exist on the minion. This makes deploying a common container across minions difficult. For this reason, Salt's lxc.create is capable of installing a container from a tar archive of another container's rootfs. To create an image of a container named cent6, run the following command as root:

tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs

Note

Before doing this, it is recommended that the container is stopped.

The resulting tarball can then be placed alongside the files in the salt fileserver and referenced using a salt:// URL. To create a container using an image, use the image parameter with lxc.create:

salt myminion lxc.create new-cent6 image=salt://path/to/cent6.tar.gz

Note

Making images of containers with LVM backing

For containers with LVM backing, the rootfs is not mounted, so it is necessary to mount it first before creating the tar archive. When a container is created using LVM backing, an empty rootfs dir is handily created within /var/lib/lxc/container_name, so this can be used as the mountpoint. The location of the logical volume for the container will be /dev/vgname/lvname, where vgname is the name of the volume group, and lvname is the name of the logical volume. Therefore, assuming a volume group of vg1, a logical volume of lxc-cent6, and a container name of cent6, the following commands can be used to create a tar archive of the rootfs:

mount /dev/vg1/lxc-cent6 /var/lib/lxc/cent6/rootfs
tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs
umount /var/lib/lxc/cent6/rootfs

Warning

One caveat of using this method of container creation is that /etc/hosts is left unmodified. This could cause confusion on some distros if salt-minion is later installed on the container, as the functions that determine the hostname take /etc/hosts into account.

Additionally, when creating a rootfs image, be sure to remove /etc/salt/minion_id and make sure that id is not defined in /etc/salt/minion, as this will cause similar issues.

Initializing a New Container as a Salt Minion

The above examples illustrate a few ways to create containers on the CLI, but often it is desirable to also have the new container run as a Minion. To do this, the lxc.init function can be used. This function will do the following:

  1. Create a new container
  2. Optionally set password and/or DNS
  3. Bootstrap the minion (using either salt-bootstrap or a custom command)

By default, the new container will be pointed at the same Salt Master as the host machine on which the container was created. It will then request to authenticate with the Master like any other bootstrapped Minion, at which point it can be accepted.

salt myminion lxc.init test1 profile=centos
salt-key -a test1

For even greater convenience, the LXC runner contains a runner function of the same name (lxc.init), which creates a keypair, seeds the new minion with it, and pre-accepts the key, allowing for the new Minion to be created and authorized in a single step:

salt-run lxc.init test1 host=myminion profile=centos
Running Commands Within a Container

For containers which are not running their own Minion, commands can be run within the container in a manner similar to using cmd.run. The means of doing this have been changed significantly in version 2015.5.0 (though the deprecated behavior will still be supported for a few releases). Both the old and new usage are documented below.

2015.5.0 and Newer

New functions have been added to mimic the behavior of the functions in the cmd module. Below is a table with the cmd functions and their lxc module equivalents:

Description                              cmd module       lxc module
Run a command and get all output         cmd.run          lxc.run
Run a command and get just stdout        cmd.run_stdout   lxc.run_stdout
Run a command and get just stderr        cmd.run_stderr   lxc.run_stderr
Run a command and get just the retcode   cmd.retcode      lxc.retcode
Run a command and get all information    cmd.run_all      lxc.run_all
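For example, to run a command in a container named web1 and return the full set of information, something like the following could be used:

salt myminion lxc.run_all web1 'ls -la /root'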
2014.7.x and Earlier

Earlier Salt releases use a single function (lxc.run_cmd) to run commands within containers. Whether stdout, stderr, etc. are returned depends on how the function is invoked.

To run a command and return the stdout:

salt myminion lxc.run_cmd web1 'tail /var/log/messages'

To run a command and return the stderr:

salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=True

To run a command and return the retcode:

salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=False

To run a command and return all information:

salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=True stderr=True
Container Management Using salt-cloud

Under the hood, Salt Cloud uses the Salt LXC runner and execution module to manage containers. Please refer to the Salt Cloud LXC chapter of the documentation.

Container Management Using States

Several states are being renamed or otherwise modified in version 2015.5.0. The information in this tutorial refers to the new states. For 2014.7.x and earlier, please refer to the documentation for the LXC states.

Ensuring a Container Is Present

To ensure the existence of a named container, use the lxc.present state. Here are some examples:

# Using a template
web1:
  lxc.present:
    - template: download
    - options:
        dist: centos
        release: 6
        arch: amd64

# Cloning
web2:
  lxc.present:
    - clone_from: web-base

# Using a rootfs image
web3:
  lxc.present:
    - image: salt://path/to/cent6.tar.gz

# Using profiles
web4:
  lxc.present:
    - profile: centos_web
    - network_profile: centos

Warning

The lxc.present state will not modify an existing container (in other words, it will not re-create the container). If an lxc.present state is run on an existing container, there will be no change and the state will return a True result.

The lxc.present state also includes an optional running parameter which can be used to ensure that a container is running/stopped, as shown in the sketch below. Note that there are also standalone lxc.running and lxc.stopped states which can be used for this purpose.
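For example, a minimal sketch combining lxc.present with the optional running parameter (profile names taken from the examples above):

web1:
  lxc.present:
    - profile: centos_web
    - network_profile: centos
    - running: True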

Ensuring a Container Does Not Exist

To ensure that a named container is not present, use the lxc.absent state. For example:

web1:
  lxc.absent
Ensuring a Container is Running/Stopped/Frozen

Containers can be in one of three states:

  • running - Container is running and active
  • frozen - Container is running, but all processes are blocked and the container is essentially non-active until the container is "unfrozen"
  • stopped - Container is not running

Salt has three states (lxc.running, lxc.frozen, and lxc.stopped) which can be used to ensure a container is in one of these states:

web1:
  lxc.running

# Restart the container if it was already running
web2:
  lxc.running:
    - restart: True

web3:
  lxc.stopped

# Explicitly kill all tasks in container instead of gracefully stopping
web4:
  lxc.stopped:
    - kill: True

web5:
  lxc.frozen

# If container is stopped, do not start it (in which case the state will fail)
web6:
  lxc.frozen:
    - start: False

Salt Virt

Salt as a Cloud Controller

In Salt 0.14.0, an advanced cloud control system was introduced, allowing private cloud VMs to be managed directly with Salt. This system is generally referred to as Salt Virt.

The Salt Virt system is already present within Salt itself; this means that besides setting up Salt, no additional Salt code needs to be deployed.

The main goal of Salt Virt is to facilitate a very fast and simple cloud that can scale and is fully featured. Salt Virt comes with the ability to set up and manage complex virtual machine networking, powerful image and disk management, and virtual machine migration with and without shared storage.

This means that Salt Virt can be used to create a cloud from a blade center and a SAN, but can also create a cloud out of a swarm of Linux desktops without a single shared storage system. Salt Virt can make clouds from truly commodity hardware, and can also harness the power of specialized hardware.

Setting up Hypervisors

The first step to set up the hypervisors involves getting the correct software installed and setting up the hypervisor network interfaces.

Installing Hypervisor Software

Salt Virt is made to be hypervisor agnostic but currently the only fully implemented hypervisor is KVM via libvirt.

The required software for a hypervisor is libvirt and kvm. For advanced features install libguestfs or qemu-nbd.

Note

Libguestfs and qemu-nbd allow virtual machine images to be mounted before startup and pre-seeded with configurations and a Salt Minion.

The following SLS will set up the needed software for a hypervisor and run the routines to set up the libvirt PKI keys.

Note

The package names and setup used here are Red Hat specific; different package names will be required for other platforms.

libvirt:
  pkg.installed: []
  file.managed:
    - name: /etc/sysconfig/libvirtd
    - contents: 'LIBVIRTD_ARGS="--listen"'
    - require:
      - pkg: libvirt
  libvirt.keys:
    - require:
      - pkg: libvirt
  service.running:
    - name: libvirtd
    - require:
      - pkg: libvirt
      - network: br0
      - libvirt: libvirt
    - watch:
      - file: libvirt

libvirt-python:
  pkg.installed: []

libguestfs:
  pkg.installed:
    - pkgs:
      - libguestfs
      - libguestfs-tools
Hypervisor Network Setup

The hypervisors will need to be running a network bridge to serve up network devices for virtual machines. This formula will set up a standard bridge on a hypervisor, connecting the bridge to eth0:

eth0:
  network.managed:
    - enabled: True
    - type: eth
    - bridge: br0

br0:
  network.managed:
    - enabled: True
    - type: bridge
    - proto: dhcp
    - require:
      - network: eth0
Virtual Machine Network Setup

Salt Virt comes with a system to model the network interfaces used by the deployed virtual machines; by default a single interface is created for the deployed virtual machine and is bridged to br0. To get going with the default networking setup, ensure that the bridge interface named br0 exists on the hypervisor and is bridged to an active network device.

Note

To use more advanced networking in Salt Virt, read the Salt Virt Networking document:

Salt Virt Networking

Libvirt State

One of the challenges of deploying a libvirt based cloud is the distribution of libvirt certificates. These certificates allow for virtual machine migration. Salt comes with a system used to auto deploy these certificates. Salt manages the signing authority key and generates keys for libvirt clients on the master, signs them with the certificate authority and uses pillar to distribute them. This is managed via the libvirt state. Simply execute this formula on the minion to ensure that the certificate is in place and up to date:

Note

The above formula includes the calls needed to set up libvirt keys.

libvirt_keys:
  libvirt.keys
Getting Virtual Machine Images Ready

Salt Virt requires that virtual machine images be provided, as these are not generated on the fly. Generating these virtual machine images differs greatly based on the underlying platform.

Virtual machine images can be manually created using KVM and running through the installer, but this process is not recommended since it is very manual and prone to errors.

Virtual Machine generation applications are available for many platforms:

vm-builder:

https://wiki.debian.org/VMBuilder

Once virtual machine images are available, the easiest way to make them available to Salt Virt is to place them in the Salt file server. Just copy an image into /srv/salt and it can now be used by Salt Virt.

For purposes of this demo, the file name centos.img will be used.
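For example, assuming the image was built at /tmp/centos.img (path illustrative), making it available is as simple as:

cp /tmp/centos.img /srv/salt/centos.img

It can then be referenced as salt://centos.img.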

Existing Virtual Machine Images

Many existing Linux distributions distribute virtual machine images which can be used with Salt Virt. Please be advised that NONE OF THESE IMAGES ARE SUPPORTED BY SALTSTACK.

CentOS

These images have been prepared for OpenNebula but should work without issue with Salt Virt; only the raw qcow image file is needed: http://wiki.centos.org/Cloud/OpenNebula

Fedora Linux

Images for Fedora Linux can be found here: http://fedoraproject.org/en/get-fedora#clouds

Ubuntu Linux

Images for Ubuntu Linux can be found here: http://cloud-images.ubuntu.com/

Using Salt Virt

With hypervisors set up and virtual machine images ready, Salt can start issuing cloud commands.

Start by running a Salt Virt hypervisor info command:

salt-run virt.hyper_info

This will query the running hypervisor stats and display information for all configured hypervisors. This command will also validate that the hypervisors are properly configured.

Now that hypervisors are available a virtual machine can be provisioned. The virt.init routine will create a new virtual machine:

salt-run virt.init centos1 2 512 salt://centos.img
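The positional arguments, as used above, are the name of the new virtual machine, the number of CPUs, the amount of memory in MB, and the image location:

# name: centos1, CPUs: 2, memory: 512 MB, image: salt://centos.img
salt-run virt.init centos1 2 512 salt://centos.img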

This command assumes that the CentOS virtual machine image is sitting in the root of the Salt fileserver. Salt Virt will now select a hypervisor to deploy the new virtual machine on and copy the virtual machine image down to the hypervisor.

Once the VM image has been copied down, the new virtual machine will be seeded. Seeding the VM involves setting pre-authenticated Salt keys on the new VM and, if needed, installing the Salt Minion on the new VM before it is started.

Note

The biggest bottleneck in starting VMs is when the Salt Minion needs to be installed. Making sure that the source VM images already have Salt installed will GREATLY speed up virtual machine deployment.

Now that the new VM has been prepared, it can be seen via the virt.query command:

salt-run virt.query

This command will return data about all of the hypervisors and respective virtual machines.

Now that the new VM is booted, it should have contacted the Salt Master. A test.ping will reveal if the new VM is running.
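For example, assuming the VM was named centos1 as above:

salt centos1 test.ping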

Migrating Virtual Machines

Salt Virt comes with full support for virtual machine migration, and using the libvirt state in the above formula makes migration possible.

A few things need to be available to support migration. Many operating systems turn on firewalls when originally set up; the firewall needs to be opened up to allow libvirt and kvm to cross-communicate and execute migration routines. On Red Hat based hypervisors in particular, port 16514 needs to be opened on the hypervisors:

iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 16514 -j ACCEPT

Note

More in-depth information regarding distribution specific firewall settings can be read in:

Opening the Firewall up for Salt

Salt also needs the virt.tunnel option to be turned on. This flag tells Salt to run migrations securely via the libvirt TLS tunnel and to use port 16514. Without virt.tunnel, libvirt tries to bind to random ports when running migrations. To turn on virt.tunnel, simply add it to the master config file:

virt.tunnel: True

Once the master config has been updated, restart the master and send out a call to the minions to refresh the pillar to pick up on the change:

salt \* saltutil.refresh_modules

Now, migration routines can be run! To migrate a VM, simply run the Salt Virt migrate routine:

salt-run virt.migrate centos <new hypervisor>
VNC Consoles

Salt Virt also sets up VNC consoles by default, allowing for remote visual consoles to be opened up. The information from a virt.query routine will display the VNC console port for each VM:

centos
  CPU: 2
  Memory: 524288
  State: running
  Graphics: vnc - hyper6:5900
  Disk - vda:
    Size: 2.0G
    File: /srv/salt-images/ubuntu2/system.qcow2
    File Format: qcow2
  Nic - ac:de:48:98:08:77:
    Source: br0
    Type: bridge

The line Graphics: vnc - hyper6:5900 holds the key. First, the named port (5900 in this case) will need to be opened in the hypervisor's firewall. Once the port is open, the console can be easily opened via vncviewer:

vncviewer hyper6:5900

By default there is no VNC security set up on these ports, so it is suggested to keep them firewalled and to mandate that SSH tunnels be used to access these VNC interfaces. Keep in mind that activity on a VNC interface can be viewed by any other user that accesses that same VNC interface, and any other user logging in can also operate alongside the logged-in user on the virtual machine.
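As a minimal sketch of the SSH tunnel approach, assuming the hypervisor is reachable over SSH as hyper6:

ssh -L 5900:localhost:5900 hyper6
vncviewer localhost:5900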

Conclusion

Now with Salt Virt running, new hypervisors can be seamlessly added just by running the above states on new bare metal machines, and these machines will be instantly available to Salt Virt.

Using Salt at scale

Using salt at scale

The focus of this tutorial will be building a Salt infrastructure for handling large numbers of minions. This will include tuning, topology, and best practices.

For instructions on how to install the Salt Master, please see: Installing saltstack

Note

This tutorial is intended for large installations. Although these same settings won't hurt smaller installations, they may not be worth the added complexity there.

When used with minions, the term 'many' refers to at least a thousand and 'a few' always means 500.

For simplicity, this tutorial will assume the standard ports used by Salt.

The Master

The most common problems on the salt-master are:

  1. too many minions authing at once
  2. too many minions re-authing at once
  3. too many minions re-connecting at once
  4. too many minions returning at once
  5. too few resources (CPU/HDD)

The first three are all "thundering herd" problems. To mitigate these issues, we must configure the minions to back off appropriately when the master is under heavy load.

The fourth is caused by masters with too few hardware resources in combination with a possible bug in ZeroMQ. At least that's what it looks like so far (Issue 118651, Issue 5948, Mail thread).

To fully understand each problem, it is important to understand how Salt works.

Very briefly, the saltmaster offers two services to the minions.

  • a job publisher on port 4505
  • an open port 4506 to receive the minions' returns

All minions are always connected to the publisher on port 4505 and only connect to the open return port 4506 if necessary. On an idle master, there will only be connections on port 4505.

Too many minions authing

When the minion service is first started up, it will connect to its master's publisher on port 4505. If too many minions are started at once, this can cause a "thundering herd". This can be avoided by not starting too many minions at once.

The connection itself usually isn't the culprit, the more likely cause of master-side issues is the authentication that the minion must do with the master. If the master is too heavily loaded to handle the auth request it will time it out. The minion will then wait acceptance_wait_time to retry. If acceptance_wait_time_max is set then the minion will increase its wait time by the acceptance_wait_time each subsequent retry until reaching acceptance_wait_time_max.
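For example, the following minion config values (illustrative, not a recommendation) would have the minion wait 10 seconds after a timed-out auth attempt, increasing the wait by 10 seconds on each subsequent retry up to a 60-second cap:

acceptance_wait_time: 10
acceptance_wait_time_max: 60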

Too many minions re-authing

This is most likely to happen in the testing phase, when all minion keys have already been accepted, the framework is being tested, and parameters are changed frequently in the master's configuration file.

In a few cases (master restart, removal of a minion key, etc.) the salt-master generates a new AES key to encrypt its publications with. The minions aren't notified of this, but will realize it on the next pub job they receive. When the minion receives such a job it will then re-auth with the master. Since Salt does minion-side filtering, this means that all the minions will re-auth on the next command published on the master -- causing another "thundering herd". This can be avoided by setting

random_reauth_delay: 60

in the minion's configuration file to a higher value, staggering the re-auth attempts. Increasing this value will of course increase the time it takes until all minions are reachable via Salt commands.

Too many minions re-connecting

By default the zmq socket will re-connect every 100ms which for some larger installations may be too quick. This will control how quickly the TCP session is re-established, but has no bearing on the auth load.

To tune the minion's socket reconnect attempts, there are a few values in the sample configuration file (default values shown):

recon_default: 100ms
recon_max: 5000
recon_randomize: True

  • recon_default: the default value the socket should use, i.e. 100ms
  • recon_max: the max value that the socket should use as a delay before trying to reconnect
  • recon_randomize: enables randomization between recon_default and recon_max

To tune these values to an existing environment, a few decisions have to be made:

  1. How long can one wait, before the minions should be online and reachable via salt?
  2. How many reconnects can the master handle without a syn flood?

These questions can not be answered generally. Their answers depend on the hardware and the administrator's requirements.

Here is an example scenario with the goal of having all minions reconnect within a 60-second time frame on a salt-master service restart.

recon_default: 1000
recon_max: 59000
recon_randomize: True

Each minion will have a randomized reconnect value between 'recon_default' and 'recon_default + recon_max', which in this example means between 1000ms and 60000ms (or between 1 and 60 seconds). The generated random-value will be doubled after each attempt to reconnect (ZeroMQ default behavior).

Let's say the generated random value is 11 seconds (or 11000ms).

reconnect 1: wait 11 seconds
reconnect 2: wait 22 seconds
reconnect 3: wait 33 seconds
reconnect 4: wait 44 seconds
reconnect 5: wait 55 seconds
reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
reconnect 7: wait 11 seconds
reconnect 8: wait 22 seconds
reconnect 9: wait 33 seconds
reconnect x: etc.

With a thousand minions this will mean:

1000/60 = ~16

roughly 16 connection attempts per second. These values should be altered to values that match your environment. Keep in mind, though, that the environment may grow over time and that more minions might raise the problem again.

Too many minions returning at once

This can also happen during the testing phase, if all minions are addressed at once with

$ salt '*' test.ping

it may cause thousands of minions to try to return their data to the salt-master's open port 4506 at once, creating a SYN flood if the master can't handle that many returns simultaneously.

This can easily be avoided with Salt's batch mode:

$ salt '*' test.ping -b 50

This will only address 50 minions at once while looping through all addressed minions.

Too few resources

The master's resources always have to match the environment. There is no way to give good advice without knowing the environment the master is supposed to run in. But here are some general tuning tips for different situations:

The master is CPU bound

Salt uses RSA key pairs on both the master's and minions' ends. Both generate 4096-bit key pairs on first start. While the key size for the master is currently not configurable, the minion's key size can be configured. For example, to use a 2048-bit key:

keysize: 2048

With thousands of decryptions, the amount of time that can be saved on the master's end should not be neglected. See Pull Request 9235 for reference on how much influence the key size can have.

Downsizing the salt-master's key is not that important, because the minions do not encrypt as many messages as the master does.

The master is disk IO bound

By default, the master saves every minion's return for every job in its job cache. The cache can then be used later to look up results for previous jobs. The default directory for this is:

cachedir: /var/cache/salt

and then in the /proc directory.

Each job return for every minion is saved in a single file. Over time this directory can grow quite large, depending on the number of published jobs. The amount of files and directories will scale with the number of jobs published and the retention time defined by

keep_jobs: 24

As a rough example: 250 jobs/day * 2000 minion returns = 500,000 files a day.

If no job history is needed, the job cache can be disabled:

job_cache: False

If the job cache is necessary there are (currently) 2 options:

  • ext_job_cache: this will have the minions store their return data directly into a returner (not sent through the master)
  • master_job_cache (New in 2014.7.0): this will make the master store the job data using a returner (instead of the local job cache on disk).
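For example, a minimal sketch of the master-side option (assuming the chosen returner, here mysql, is installed and configured):

master_job_cache: mysql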

Targeting Minions

Targeting minions means specifying which minions should run a command or execute a state, by matching against hostnames, system information, defined groups, or even combinations thereof.

For example, the command salt web1 apache.signal restart (to restart the Apache httpd server) specifies the machine web1 as the target, and the command will only be run on that one minion.

Similarly when using States, the following top file specifies that only the web1 minion should execute the contents of webserver.sls:

base:
  'web1':
    - webserver

There are many ways to target individual minions or groups of minions in Salt:

Matching the minion id

Each minion needs a unique identifier. By default when a minion starts for the first time it chooses its FQDN as that identifier. The minion id can be overridden via the minion's id configuration setting.

Tip

minion id and minion keys

The minion id is used to generate the minion's public/private keys and if it ever changes the master must then accept the new key as though the minion was a new host.
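For example, to set the minion id explicitly in /etc/salt/minion rather than relying on the FQDN:

id: web1.example.net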

Globbing

The default matching that Salt utilizes is shell-style globbing around the minion id. This also works for states in the top file.

Note

You must wrap salt calls that use globbing in single-quotes to prevent the shell from expanding the globs before Salt is invoked.

Match all minions:

salt '*' test.ping

Match all minions in the example.net domain or any of the example domains:

salt '*.example.net' test.ping
salt '*.example.*' test.ping

Match all the webN minions in the example.net domain (web1.example.net, web2.example.net, ..., webN.example.net):

salt 'web?.example.net' test.ping

Match the web1 through web5 minions:

salt 'web[1-5]' test.ping

Match the web1 and web3 minions:

salt 'web[1,3]' test.ping

Match the web-x, web-y, and web-z minions:

salt 'web-[x-z]' test.ping

Note

For additional targeting methods please review the compound matchers documentation.

Regular Expressions

Minions can be matched using Perl-compatible regular expressions (which is globbing on steroids and a ton of caffeine).

Match both web1-prod and web1-devel minions:

salt -E 'web1-(prod|devel)' test.ping

When using regular expressions in a State's top file, you must specify the matcher as the first option. The following example executes the contents of webserver.sls on the above-mentioned minions.

base:
  'web1-(prod|devel)':
    - match: pcre
    - webserver

Lists

At the most basic level, you can specify a flat list of minion IDs:

salt -L 'web1,web2,web3' test.ping

Grains

Salt comes with an interface to derive information about the underlying system. This is called the grains interface, because it presents salt with grains of information.

The grains interface is made available to Salt modules and components so that the right salt minion commands are automatically available on the right systems.

It is important to remember that grains are bits of information loaded when the salt minion starts, so this information is static. Grains are therefore best suited to unchanging data, such as the running kernel or the operating system.

Note

Grains resolve to lowercase letters. For example, FOO and foo target the same grain.

Match all CentOS minions:

salt -G 'os:CentOS' test.ping

Match all minions with 64-bit CPUs, and return number of CPU cores for each matching minion:

salt -G 'cpuarch:x86_64' grains.item num_cpus

Additionally, globs can be used in grain matches, and grains that are nested in a dictionary can be matched by adding a colon for each level that is traversed. For example, the following will match hosts that have a grain called ec2_tags, which itself is a dict with a key named environment, which has a value that contains the word production:

salt -G 'ec2_tags:environment:*production*' test.ping

Listing Grains

Available grains can be listed by using the 'grains.ls' function:

salt '*' grains.ls

Grains data can be listed by using the 'grains.items' function:

salt '*' grains.items

Grains in the Minion Config

Grains can also be statically assigned within the minion configuration file. Just add the option grains and pass options to it:

grains:
  roles:
    - webserver
    - memcache
  deployment: datacenter4
  cabinet: 13
  cab_u: 14-15

Then status data specific to your servers can be retrieved via Salt, or used inside of the State system for matching. It also makes targeting simple, in the case of the example above, based on specific data about your deployment.
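For example, minions carrying the roles grain defined above could then be targeted with:

salt -G 'roles:webserver' test.ping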

Grains in /etc/salt/grains

If you do not want to place your custom static grains in the minion config file, you can also put them in /etc/salt/grains on the minion. They are configured in the same way as in the above example, only without a top-level grains: key:

roles:
  - webserver
  - memcache
deployment: datacenter4
cabinet: 13
cab_u: 14-15

Matching Grains in the Top File

With correctly configured grains on the Minion, the top file used in Pillar or during Highstate can be made very efficient. For example, consider the following configuration:

'node_type:web':
  - match: grain
  - webserver

'node_type:postgres':
  - match: grain
  - database

'node_type:redis':
  - match: grain
  - redis

'node_type:lb':
  - match: grain
  - lb

For this example to work, you would need to have defined the grain node_type for the minions you wish to match. This simple example is nice, but too much of the code is similar. To go one step further, Jinja templating can be used to simplify the top file.

{% set the_node_type = salt['grains.get']('node_type', '') %}

{% if the_node_type %}
  'node_type:{{ the_node_type }}':
    - match: grain
    - {{ the_node_type }}
{% endif %}

Using Jinja templating, only one match entry needs to be defined.

Note

The example above uses the grains.get function to account for minions which do not have the node_type grain set.

Writing Grains

The grains interface is derived by executing all of the "public" functions found in the modules located in the grains package or the custom grains directory. The functions in these modules must return a Python dict, where the keys in the dict are the names of the grains and the values are the grain values.

Custom grains should be placed in a _grains directory located under the file_roots specified by the master config file. The default path would be /srv/salt/_grains. Custom grains will be distributed to the minions when state.highstate is run, or by executing the saltutil.sync_grains or saltutil.sync_all functions.

Grains are easy to write, and only need to return a dictionary. A common approach would be to code something similar to the following:

#!/usr/bin/env python
def yourfunction():
    # initialize a grains dictionary
    grains = {}
    # Some code for logic that sets grains like
    grains['yourcustomgrain'] = True
    grains['anothergrain'] = 'somevalue'
    return grains
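Once this module is placed in the _grains directory and synced, the new grains can be queried like any other grain:

salt '*' saltutil.sync_grains
salt '*' grains.item yourcustomgrain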

Before adding a grain to Salt, consider what the grain is and remember that grains need to be static data. If the data is something that is likely to change, consider using Pillar instead.

Warning

Custom grains will not be available in the top file until after the first highstate. To make custom grains available on a minion's first highstate, it is recommended to use this example to ensure that the custom grains are synced when the minion starts.

Precedence

Core grains can be overridden by custom grains. As there are several ways of defining custom grains, there is an order of precedence which should be kept in mind when defining them. The order of evaluation is as follows:

  1. Core grains.
  2. Custom grain modules in _grains directory, synced to minions.
  3. Custom grains in /etc/salt/grains.
  4. Custom grains in /etc/salt/minion.

Each successive evaluation overrides the previous ones, so any grains defined by custom grains modules synced to minions that have the same name as a core grain will override that core grain. Similarly, grains from /etc/salt/grains override both core grains and custom grain modules, and grains in /etc/salt/minion will override any grains of the same name.

Examples of Grains

The core module in the grains package is where the main grains are loaded by the Salt minion and provides the principal example of how to write grains:

https://github.com/saltstack/salt/blob/develop/salt/grains/core.py

Syncing Grains

Syncing grains can be done a number of ways. They are automatically synced when state.highstate is called, or (as noted above) the grains can be manually synced and reloaded by calling the saltutil.sync_grains or saltutil.sync_all functions.

Subnet/IP Address Matching

Minions can easily be matched based on IP address, or by subnet (using CIDR notation).

salt -S 192.168.40.20 test.ping
salt -S 10.0.0.0/24 test.ping

Ipcidr matching can also be used in compound matches:

salt -C 'S@10.0.0.0/24 and G@os:Debian' test.ping

It is also possible to use ipcidr matching in both pillar and state top files:

'172.16.0.0/12':
   - match: ipcidr
   - internal

Note

Only IPv4 matching is supported at this time.

Compound matchers

Compound matchers allow very granular minion targeting using any of Salt's matchers. The default matcher is a glob match, just as with CLI and top file matching. To match using anything other than a glob, prefix the match string with the appropriate letter from the table below, followed by an @ sign.

Letter   Delimiter   Match Type          Example
G        x           Grains glob         G@os:Ubuntu
E                    PCRE Minion ID      E@web\d+\.(dev|qa|prod)\.loc
P        x           Grains PCRE         P@os:(RedHat|Fedora|CentOS)
L                    List of minions     L@minion1.example.com,minion3.domain.com or bl*.domain.com
I        x           Pillar glob         I@pdata:foobar
J        x           Pillar PCRE         J@pdata:^(foo|bar)$
S                    Subnet/IP address   S@192.168.1.0/24 or S@192.168.1.100
R                    Range cluster       R@%foo.bar

Matchers can be joined using boolean and, or, and not operators.

For example, the following string matches all Debian minions with a hostname that begins with webserv, as well as any minions that have a hostname which matches the regular expression web-dc1-srv.*:

salt -C 'webserv* and G@os:Debian or E@web-dc1-srv.*' test.ping

That same example expressed in a top file looks like the following:

base:
  'webserv* and G@os:Debian or E@web-dc1-srv.*':
    - match: compound
    - webserver

New in version Beryllium.

Excluding a minion based on its ID is also possible:

salt -C 'not web-dc1-srv' test.ping

In versions prior to Beryllium, a leading not was not supported in compound matches. Instead, something like the following was required:

salt -C '* and not G@kernel:Darwin' test.ping

Excluding a minion based on its ID was also possible:

salt -C '* and not web-dc1-srv' test.ping

Precedence Matching

Matches can be grouped together with parentheses to explicitly declare precedence amongst groups.

salt -C '( ms-1 or G@id:ms-3 ) and G@id:ms-3' test.ping

Note

Be certain to note that spaces are required between the parentheses and targets. Failing to obey this rule may result in incorrect targeting!

Alternate Delimiters

New in version Beryllium.

Some matchers allow an optional delimiter character specified between the leading matcher character and the @ pattern separator character. This can be essential when the globbing or PCRE pattern may use the default delimiter character :. This avoids incorrect interpretation of the pattern as part of the grain or pillar data structure traversal.

salt -C 'J|@foo|bar|^foo:bar$ or J!@gitrepo!https://github.com:example/project.git' test.ping

Node groups

Nodegroups are declared using a compound target specification. The compound target documentation can be found here.

The nodegroups master config file parameter is used to define nodegroups. Here's an example nodegroup configuration within /etc/salt/master:

nodegroups:
  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
  group2: 'G@os:Debian and foo.domain.com'
  group3: 'G@os:Debian and N@group1'
  group4:
    - 'G@foo:bar'
    - 'or'
    - 'G@foo:baz'

Note

The L within group1 is matching a list of minions, while the G in group2 is matching specific grains. See the compound matchers documentation for more details.

New in version Beryllium.

Note

Nodegroups can reference other nodegroups as seen in group3. Ensure that you do not have circular references. Circular references will be detected and cause partial expansion with a logged error message.

New in version Beryllium.

Compound nodegroups can be either string values or lists of string values. When the nodegroup is a string value, it will be tokenized by splitting on whitespace. This may be a problem if whitespace is necessary as part of a pattern. When a nodegroup is a list of strings, tokenization will happen for each list element as a whole.

To match a nodegroup on the CLI, use the -N command-line option:

salt -N group1 test.ping

To match a nodegroup in your top file, make sure to put - match: nodegroup on the line directly following the nodegroup name.

base:
  group1:
    - match: nodegroup
    - webserver

Note

When adding or modifying nodegroups to a master configuration file, the master must be restarted for those changes to be fully recognized.

A limited amount of functionality, such as targeting with -N from the command line, may be available without a restart.

Batch Size

The -b (or --batch-size) option allows commands to be executed on only a specified number of minions at a time. Both percentages and finite numbers are supported.

salt '*' -b 10 test.ping

salt -G 'os:RedHat' --batch-size 25% apache.signal restart

This will run test.ping on only 10 of the targeted minions at a time, and then restart Apache on 25% of the minions matching os:RedHat at a time, working through them all until the task is complete. This makes jobs like rolling web server restarts behind a load balancer, or doing maintenance on BSD firewalls using carp, much easier with Salt.

The batch system maintains a window of running minions: if there are a total of 150 minions targeted and the batch size is 10, then the command is sent to 10 minions; when one minion returns, the command is sent to one additional minion, so that the job is constantly running on 10 minions.

SECO Range

SECO range is a cluster-based metadata store developed and maintained by Yahoo!

The Range project is hosted here:

https://github.com/ytoolshed/range

Learn more about range here:

https://github.com/ytoolshed/range/wiki/

Prerequisites

To utilize range support in Salt, a range server is required. Setting up a range server is outside the scope of this document. Apache modules are included in the range distribution.

With a working range server, cluster files must be defined. These files are written in YAML and define hosts contained inside a cluster. Full documentation on writing YAML range files is here:

https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec

Additionally, the Python seco range libraries must be installed on the salt master. One can verify that they have been installed correctly via the following command:

python -c 'import seco.range'

If no errors are returned, range is installed successfully on the salt master.

Preparing Salt

Range support must be enabled on the salt master by setting the hostname and port of the range server inside the master configuration file:

range_server: my.range.server.com:80

Following this, the master must be restarted for the change to have an effect.

Targeting with Range

Once a cluster has been defined, it can be targeted with a salt command by using the -R or --range flags.

For example, given the following range YAML file being served from a range server:

$ cat /etc/range/test.yaml
CLUSTER: host1..100.test.com
APPS:
  - frontend
  - backend
  - mysql

One might target host1 through host100 in the test.com domain with Salt as follows:

salt --range %test:CLUSTER test.ping

The following salt command would target three hosts: frontend, backend, and mysql:

salt --range %test:APPS test.ping

Storing Static Data in the Pillar

Pillar is an interface for Salt designed to offer global values that can be distributed to all minions. Pillar data is managed in a similar way as the Salt State Tree.

Pillar was added to Salt in version 0.9.8.

Note

Storing sensitive data

Unlike the state tree, pillar data is only available to the targeted minion specified by the matcher type. This makes it useful for storing sensitive data specific to a particular minion.

Declaring the Master Pillar

The Salt Master server maintains a pillar_roots setup that matches the structure of the file_roots used in the Salt file server. Like the Salt file server the pillar_roots option in the master config is based on environments mapping to directories. The pillar data is then mapped to minions based on matchers in a top file which is laid out in the same way as the state top file. Salt pillars can use the same matcher types as the standard top file.

The configuration for the pillar_roots in the master config file is identical in behavior and function to file_roots:

pillar_roots:
  base:
    - /srv/pillar

This example configuration declares that the base environment will be located in the /srv/pillar directory. It must not be in a subdirectory of the state tree.

The top file used matches the name of the top file used for States, and has the same structure:

/srv/pillar/top.sls

base:
  '*':
    - packages

In the above top file, it is declared that in the base environment, the glob matching all minions will have the pillar data found in the packages pillar available to it. Assuming the pillar_roots value of /srv/pillar taken from above, the packages pillar would be located at /srv/pillar/packages.sls.

Another example shows how to use other standard top matching types to deliver specific salt pillar data to minions with different properties.

Here is an example using the grains matcher to target pillars to minions by their os grain:

dev:
  'os:Debian':
    - match: grain
    - servers

/srv/pillar/packages.sls

{% if grains['os'] == 'RedHat' %}
apache: httpd
git: git
{% elif grains['os'] == 'Debian' %}
apache: apache2
git: git-core
{% endif %}

company: Foo Industries

The above pillar sets two key/value pairs. If a minion is running RedHat, then the apache key is set to httpd and the git key is set to the value of git. If the minion is running Debian, those values are changed to apache2 and git-core respectively. All minions that have this pillar targeted to them via a top file will have the key of company with a value of Foo Industries.

Consequently, this data can be used from within modules, renderers, State SLS files, and more via the shared pillar dict:

apache:
  pkg.installed:
    - name: {{ pillar['apache'] }}
git:
  pkg.installed:
    - name: {{ pillar['git'] }}

Finally, the above states can utilize the values provided to them via Pillar. All pillar values targeted to a minion are available via the 'pillar' dictionary. As seen in the above example, Jinja substitution can then be utilized to access the keys and values in the Pillar dictionary.

Note that key/value pairs cannot be listed directly in top.sls. Instead, target a minion to a pillar file and then list the keys and values in that pillar. Here is an example top file that illustrates this point:

base:
  '*':
     - common_pillar

And the actual pillar file at '/srv/pillar/common_pillar.sls':

foo: bar
boo: baz

Pillar namespace flattened

The separate pillar files all share the same namespace. Given a top.sls of:

base:
  '*':
    - packages
    - services

a packages.sls file of:

bind: bind9

and a services.sls file of:

bind: named

Then a request for the bind pillar will only return named; the bind9 value is not available. It is better to structure your pillar files with more hierarchy. For example, your packages.sls file could look like:

packages:
  bind: bind9

Pillar Namespace Merges

With some care, the pillar namespace can merge content from multiple pillar files under a single key, so long as conflicts are avoided as described above.

For example, if the above example were modified as follows, the values are merged below a single key:

base:
  '*':
    - packages
    - services

And a packages.sls file like:

bind:
  package-name: bind9
  version: 9.9.5

And a services.sls file like:

bind:
  port: 53
  listen-on: any

The resulting pillar will be as follows:

$ salt-call pillar.get bind
local:
    ----------
    listen-on:
        any
    package-name:
        bind9
    port:
        53
    version:
        9.9.5

Note

Remember: conflicting keys will be overwritten in a non-deterministic manner!

Including Other Pillars

New in version 0.16.0.

Pillar SLS files may include other pillar files, similar to State files. Two syntaxes are available for this purpose. The simple form includes the additional pillar as if it were part of the same file:

include:
  - users

The full include form allows two additional options -- passing default values to the templating engine for the included pillar file as well as an optional key under which to nest the results of the included pillar:

include:
  - users:
      defaults:
          sudo: ['bob', 'paul']
      key: users

With this form, the included file (users.sls) will be nested within the 'users' key of the compiled pillar. Additionally, the 'sudo' value will be available as a template variable to users.sls.

Viewing Minion Pillar

Once the pillar is set up, the data can be viewed on the minion via the pillar module. The pillar module comes with two functions: pillar.items and pillar.raw. pillar.items will return a freshly reloaded pillar, while pillar.raw will return the current pillar without a refresh:

salt '*' pillar.items

Note

Prior to version 0.16.2, this function is named pillar.data. This function name is still supported for backwards compatibility.

Pillar "get" Function

New in version 0.14.0.

The pillar.get function works much the same way as the get method of a Python dict, but with an enhancement: nested dict components can be extracted using a : delimiter.

If a structure like this is in pillar:

foo:
  bar:
    baz: qux

Extracting it from the raw pillar in an sls formula or file template is done this way:

{{ pillar['foo']['bar']['baz'] }}

Now, with the new pillar.get function the data can be safely gathered and a default can be set, allowing the template to fall back if the value is not available:

{{ salt['pillar.get']('foo:bar:baz', 'qux') }}

This makes handling nested structures much easier.
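
The colon-delimited lookup is easy to picture as plain dictionary traversal. The following minimal Python sketch (the traverse function is ours, for illustration only, and not part of Salt) mirrors what pillar.get does with its key and default arguments:

def traverse(data, key, default=None, delimiter=':'):
    # Walk nested dicts one path segment at a time, as pillar.get does
    for segment in key.split(delimiter):
        if isinstance(data, dict) and segment in data:
            data = data[segment]
        else:
            return default
    return data

pillar = {'foo': {'bar': {'baz': 'qux'}}}
print(traverse(pillar, 'foo:bar:baz', 'default'))   # qux
print(traverse(pillar, 'foo:missing', 'default'))   # default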

Note

pillar.get() vs salt['pillar.get']()

It should be noted that within templating, the pillar variable is just a dictionary. This means that calling pillar.get() inside of a template will just use the default dictionary .get() function which does not include the extra : delimiter functionality. It must be called using the above syntax (salt['pillar.get']('foo:bar:baz', 'qux')) to get the salt function, instead of the default dictionary behavior.

Refreshing Pillar Data

When pillar data is changed on the master the minions need to refresh the data locally. This is done with the saltutil.refresh_pillar function.

salt '*' saltutil.refresh_pillar

This function triggers the minion to asynchronously refresh the pillar and will always return None.
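
The same refresh can be triggered from Python through the LocalClient API. A minimal sketch, assuming it is run as the same system user the master runs as:

import salt.client

client = salt.client.LocalClient()
# Ask every minion to refresh its pillar; the call returns None per minion
ret = client.cmd('*', 'saltutil.refresh_pillar')
print(ret)  # e.g. {'minion1': None, 'minion2': None}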

Targeting with Pillar

Pillar data can be used when targeting minions. This allows for ultimate control and flexibility when targeting minions.

salt -I 'somekey:specialvalue' test.ping

Like with Grains, it is possible to use globbing as well as match nested values in Pillar, by adding colons for each level that is being traversed. The below example would match minions with a pillar named foo, which is a dict containing a key bar, with a value beginning with baz:

salt -I 'foo:bar:baz*' test.ping
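
The -I flag corresponds to the pillar matcher in the Python API as well. A minimal sketch using LocalClient (run as the same user as the master):

import salt.client

client = salt.client.LocalClient()
# Equivalent to: salt -I 'foo:bar:baz*' test.ping
ret = client.cmd('foo:bar:baz*', 'test.ping', expr_form='pillar')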

Set Pillar Data at the Command Line

Pillar data can be set at the command line like the following example:

salt '*' state.highstate pillar='{"cheese": "spam"}'

This will create a dict with a key of 'cheese' and a value of 'spam'. A list can be created like this:

salt '*' state.highstate pillar='["cheese", "milk", "bread"]'

Master Config In Pillar

For convenience, the data stored in the master configuration file is made available in all minions' pillars. This makes global configuration of services and systems very easy, but may not be desired if sensitive data is stored in the master configuration.

To disable the master config from being added to the pillar set pillar_opts to False:

pillar_opts: False

Minion Config in Pillar

Minion configuration options can be set via pillar. Any option to be modified should be at the first level of the pillar, in the same way the option would be set in the minion config file. For example, to configure the MySQL root password to be used by the MySQL Salt execution module, set the following pillar variable:

mysql.pass: hardtoguesspassword

Master Provided Pillar Error

By default if there is an error rendering a pillar, the detailed error is hidden and replaced with:

Rendering SLS 'my.sls' failed. Please see master log for details.

The detailed error is withheld because it could contain templating data that would give the minion information it shouldn't have, such as a password!

To have the master provide the detailed error that could potentially carry protected data set pillar_safe_render_error to False:

pillar_safe_render_error: False

Reactor System

Salt version 0.11.0 introduced the reactor system. The premise behind the reactor system is that with Salt's events and the ability to execute commands, a logic engine could be put in place to allow events to trigger actions, or more accurately, reactions.

This system binds sls files to event tags on the master. These sls files then define reactions. This means that the reactor system has two parts. First, the reactor option needs to be set in the master configuration file. The reactor option allows for event tags to be associated with sls reaction files. Second, these reaction files use highdata (like the state system) to define reactions to be executed.

Event System

A basic understanding of the event system is required to understand reactors. The event system is a local ZeroMQ PUB interface which fires salt events. This event bus is an open system used for sending information notifying Salt and other systems about operations.

The event system fires events with a very specific structure. Every event has a tag. Event tags allow for fast top-level filtering of events. In addition to the tag, each event has a data structure. This data structure is a dict, which contains information about the event.

Mapping Events to Reactor SLS Files

Reactor SLS files and event tags are associated in the master config file. By default this is /etc/salt/master, or /etc/salt/master.d/reactor.conf.

New in version 2014.7.0: Added Reactor support for salt:// file paths.

In the master config section 'reactor:' is a list of event tags to be matched, and each event tag has a list of reactor SLS files to be run.

reactor:                            # Master config section "reactor"

  - 'salt/minion/*/start':          # Match tag "salt/minion/*/start"
    - /srv/reactor/start.sls        # Things to do when a minion starts
    - /srv/reactor/monitor.sls      # Other things to do

  - 'salt/cloud/*/destroyed':       # Globs can be used to match tags
    - /srv/reactor/destroy/*.sls    # Globs can be used to match file names

  - 'myco/custom/event/tag':        # React to custom event tags
    - salt://reactor/mycustom.sls   # Put reactor files under file_roots

Reactor sls files are similar to state and pillar sls files. They are by default yaml + Jinja templates and are passed familiar context variables.

They differ because of the addition of the tag and data variables.

  • The tag variable is just the tag in the fired event.
  • The data variable is the event's data dict.

Here is a simple reactor sls:

{% if data['id'] == 'mysql1' %}
highstate_run:
  local.state.highstate:
    - tgt: mysql1
{% endif %}

This simple reactor file uses Jinja to further refine the reaction to be made. If the id in the event data is mysql1 (in other words, if the name of the minion is mysql1) then the following reaction is defined. The same data structure and compiler used for the state system is used for the reactor system. The only difference is that the data is matched up to the salt command API and the runner system. In this example, a command is published to the mysql1 minion with a function of state.highstate. Similarly, a runner can be called:

{% if data['data']['overstate'] == 'refresh' %}
overstate_run:
  runner.state.over
{% endif %}

This example will execute the state.over runner and initiate an overstate execution.

Fire an event

To fire an event from a minion call event.send

salt-call event.send 'foo' '{overstate: refresh}'

After this is called, any reactor sls files matching event tag foo will execute with {{ data['data']['overstate'] }} equal to 'refresh'.
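
The same event can be fired from a Python script running on the minion. A minimal sketch, assuming salt.client.Caller is available in your Salt version (Caller wraps minion-side execution functions the way salt-call does):

import salt.client

caller = salt.client.Caller()
# Equivalent to: salt-call event.send 'foo' '{overstate: refresh}'
caller.cmd('event.send', 'foo', {'overstate': 'refresh'})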

See salt.modules.event for more information.

Knowing what event is being fired

The best way to see exactly what events are fired and what data is available in each event is to use the state.event runner.

Example usage:

salt-run state.event pretty=True

Example output:

salt/job/20150213001905721678/new       {
    "_stamp": "2015-02-13T00:19:05.724583",
    "arg": [],
    "fun": "test.ping",
    "jid": "20150213001905721678",
    "minions": [
        "jerry"
    ],
    "tgt": "*",
    "tgt_type": "glob",
    "user": "root"
}
salt/job/20150213001910749506/ret/jerry {
    "_stamp": "2015-02-13T00:19:11.136730",
    "cmd": "_return",
    "fun": "saltutil.find_job",
    "fun_args": [
        "20150213001905721678"
    ],
    "id": "jerry",
    "jid": "20150213001910749506",
    "retcode": 0,
    "return": {},
    "success": true
}

Debugging the Reactor

The best window into the Reactor is to run the master in the foreground with debug logging enabled. The output will include when the master sees the event, what the master does in response to that event, and it will also include the rendered SLS file (or any errors generated while rendering the SLS file).

  1. Stop the master.

  2. Start the master manually:

    salt-master -l debug
    
  3. Look for log entries in the form:

    [DEBUG   ] Gathering reactors for tag foo/bar
    [DEBUG   ] Compiling reactions for tag foo/bar
    [DEBUG   ] Rendered data from file: /path/to/the/reactor_file.sls:
    <... Rendered output appears here. ...>
    

    The rendered output is the result of the Jinja parsing and is a good way to view the result of referencing Jinja variables. If the result is empty then Jinja produced an empty result and the Reactor will ignore it.

Understanding the Structure of Reactor Formulas

I.e., when to use `arg` and `kwarg` and when to specify the function arguments directly.

While the reactor system uses the same basic data structure as the state system, the functions that will be called using that data structure are different functions than are called via Salt's state system. The Reactor can call Runner modules using the runner prefix, Wheel modules using the wheel prefix, and can also cause minions to run Execution modules using the local prefix.

Changed in version 2014.7.0: The cmd prefix was renamed to local for consistency with other parts of Salt. A backward-compatible alias was added for cmd.

The Reactor runs on the master and calls functions that exist on the master. In the case of Runner and Wheel functions the Reactor can just call those functions directly since they exist on the master and are run on the master.

In the case of functions that exist on minions and are run on minions, the Reactor still needs to call a function on the master in order to send the necessary data to the minion so the minion can execute that function.

The Reactor calls functions exposed in Salt's Python API, and thus the structure of Reactor files very transparently reflects the signatures of those functions.

Calling Execution modules on Minions

The Reactor sends commands down to minions in the exact same way Salt's CLI interface does. It calls a function locally on the master that sends the name of the function as well as a list of any arguments and a dictionary of any keyword arguments that the minion should use to execute that function.

Specifically, the Reactor calls the async version of this function. You can see that function has 'arg' and 'kwarg' parameters which are both values that are sent down to the minion.

Executing remote commands maps to the LocalClient interface, which is used by the salt command. This interface more specifically maps to the cmd_async method inside of the LocalClient class. This means that the arguments passed are being passed to the cmd_async method, not the remote method. A field beginning with local indicates that the LocalClient subsystem will be used. The result is that, to execute a remote command, a reactor formula would look like this:

clean_tmp:
  local.cmd.run:
    - tgt: '*'
    - arg:
      - rm -rf /tmp/*

The arg option takes a list of arguments as they would be presented on the command line, so the above declaration is the same as running this salt command:

salt '*' cmd.run 'rm -rf /tmp/*'

Use the expr_form argument to specify a matcher:

clean_tmp:
  local.cmd.run:
    - tgt: 'os:Ubuntu'
    - expr_form: grain
    - arg:
      - rm -rf /tmp/*

Compound matchers can be used in the same way:

clean_tmp:
  local.cmd.run:
    - tgt: 'G@roles:hbase_master'
    - expr_form: compound
    - arg:
      - rm -rf /tmp/*

Any other parameters in the LocalClient().cmd() method can be specified as well.
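
To make the mapping concrete, the reactor declaration above corresponds roughly to the following LocalClient call (a sketch, not code the Reactor literally runs):

import salt.client

client = salt.client.LocalClient()
# What local.cmd.run with tgt, expr_form, and arg boils down to
client.cmd('os:Ubuntu', 'cmd.run', arg=['rm -rf /tmp/*'], expr_form='grain')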

Calling Runner modules and Wheel modules

Calling Runner modules and wheel modules from the Reactor uses a more direct syntax since the function is being executed locally instead of sending a command to a remote system to be executed there. There are no 'arg' or 'kwarg' parameters (unless the Runner function or Wheel function accepts a parameter with either of those names).

For example:

clear_the_grains_cache_for_all_minions:
  runner.cache.clear_grains

If the runner takes arguments then they can be specified as well:

spin_up_more_web_machines:
  runner.cloud.profile:
    - prof: centos_6
    - instances:
      - web11       # These VM names would be generated via Jinja in a
      - web12       # real-world example.

Passing event data to Minions or Orchestrate as Pillar

An interesting trick to pass data from the Reactor script to state.highstate or state.sls is to pass it as inline Pillar data since both functions take a keyword argument named pillar.

The following example uses Salt's Reactor to listen for the event that is fired when the key for a new minion is accepted on the master using salt-key.

/etc/salt/master.d/reactor.conf:

reactor:
  - 'salt/key':
    - /srv/salt/haproxy/react_new_minion.sls

The Reactor then fires a state.sls command targeted to the HAProxy servers and passes the ID of the new minion from the event to the state file via inline Pillar.

/srv/salt/haproxy/react_new_minion.sls:

{% if data['act'] == 'accept' and data['id'].startswith('web') %}
add_new_minion_to_pool:
  local.state.sls:
    - tgt: 'haproxy*'
    - arg:
      - haproxy.refresh_pool
    - kwarg:
        pillar:
          new_minion: {{ data['id'] }}
{% endif %}

The above command is equivalent to the following command at the CLI:

salt 'haproxy*' state.sls haproxy.refresh_pool 'pillar={new_minion: minionid}'

This works with Orchestrate files as well:

call_some_orchestrate_file:
  runner.state.orchestrate:
    - mods: some_orchestrate_file
    - pillar:
        stuff: things

Which is equivalent to the following command at the CLI:

salt-run state.orchestrate some_orchestrate_file pillar='{stuff: things}'

Finally, that data is available in the state file using the normal Pillar lookup syntax. The following example is grabbing web server names and IP addresses from Salt Mine. If this state is invoked from the Reactor then the custom Pillar value from above will be available and the new minion will be added to the pool but with the disabled flag so that HAProxy won't yet direct traffic to it.

/srv/salt/haproxy/refresh_pool.sls:

{% set new_minion = salt['pillar.get']('new_minion') %}

listen web *:80
    balance source
    {% for server,ip in salt['mine.get']('web*', 'network.interfaces', ['eth0']).items() %}
    {% if server == new_minion %}
    server {{ server }} {{ ip }}:80 disabled
    {% else %}
    server {{ server }} {{ ip }}:80 check
    {% endif %}
    {% endfor %}

A Complete Example

In this example, we're going to assume that we have a group of servers that will come online at random and need to have keys automatically accepted. We'll also add that we don't want all servers being automatically accepted. For this example, we'll assume that all hosts that have an id that starts with 'ink' will be automatically accepted and have state.highstate executed. On top of this, we're going to add that a host coming up that was replaced (meaning a new key) will also be accepted.

Our master configuration will be rather simple. All minions that attempt to authenticate will match the tag of salt/auth. When it comes to the minion key being accepted, we get a more refined tag that includes the minion id, which we can use for matching.

/etc/salt/master.d/reactor.conf:

reactor:
  - 'salt/auth':
    - /srv/reactor/auth-pending.sls
  - 'salt/minion/ink*/start':
    - /srv/reactor/auth-complete.sls

In this sls file, we say that if the key was rejected we will delete the key on the master and then also tell the master to ssh in to the minion and tell it to restart the minion, since a minion process will die if the key is rejected.

We also say that if the key is pending and the id starts with ink we will accept the key. A minion that is waiting on a pending key will retry authentication every ten seconds by default.

/srv/reactor/auth-pending.sls:

{# Ink server failed to authenticate -- remove accepted key #}
{% if not data['result'] and data['id'].startswith('ink') %}
minion_remove:
  wheel.key.delete:
    - match: {{ data['id'] }}
minion_rejoin:
  local.cmd.run:
    - tgt: salt-master.domain.tld
    - arg:
      - ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "{{ data['id'] }}" 'sleep 10 && /etc/init.d/salt-minion restart'
{% endif %}

{# Ink server is sending new key -- accept this key #}
{% if 'act' in data and data['act'] == 'pend' and data['id'].startswith('ink') %}
minion_add:
  wheel.key.accept:
    - match: {{ data['id'] }}
{% endif %}

No if statements are needed here because we already limited this action to just Ink servers in the master configuration.

/srv/reactor/auth-complete.sls:

{# When an Ink server connects, run state.highstate. #}
highstate_run:
  local.state.highstate:
    - tgt: {{ data['id'] }}
    - ret: smtp_return

The above will also return the highstate result data using the smtp_return returner. The returner needs to be configured on the minion for this to work. See salt.returners.smtp_return documentation for that.

Syncing Custom Types on Minion Start

Salt will sync all custom types (by running a saltutil.sync_all) on every highstate. However, there is a chicken-and-egg issue where, on the initial highstate, a minion will not yet have these custom types synced when the top file is first compiled. This can be worked around with a simple reactor which watches for minion_start events, which each minion fires when it first starts up and connects to the master.

On the master, create /srv/reactor/sync_grains.sls with the following contents:

sync_grains:
  local.saltutil.sync_grains:
    - tgt: {{ data['id'] }}

And in the master config file, add the following reactor configuration:

reactor:
  - 'minion_start':
    - /srv/reactor/sync_grains.sls

This will cause the master to instruct each minion to sync its custom grains when it starts, making these grains available when the initial highstate is executed.

Other types can be synced by replacing local.saltutil.sync_grains with local.saltutil.sync_modules, local.saltutil.sync_all, or whatever else suits the intended use case.

The Salt Mine

The Salt Mine is used to collect arbitrary data from minions and store it on the master. This data is then made available to all minions via the salt.modules.mine module.

The data is gathered on the minion and sent back to the master where only the most recent data is maintained (if long term data is required use returners or the external job cache).

Mine Functions

To enable the Salt Mine the mine_functions option needs to be applied to a minion. This option can be applied via the minion's configuration file, or the minion's Pillar. The mine_functions option dictates what functions are being executed and allows for arguments to be passed in. If no arguments are passed, an empty list must be added:

mine_functions:
  test.ping: []
  network.ip_addrs:
    interface: eth0
    cidr: '10.0.0.0/8'
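
Once gathered, mine data can be fetched from any minion with mine.get. A minimal sketch using the Python API (the minion ID minion1 is a placeholder):

import salt.client

client = salt.client.LocalClient()
# Fetch the mined eth0 addresses of all 'web*' minions, as seen by minion1
addrs = client.cmd('minion1', 'mine.get', arg=['web*', 'network.ip_addrs'])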

Mine Functions Aliases

Function aliases can be used to provide friendly names, usage intentions or to allow multiple calls of the same function with different arguments. There is a different syntax for passing positional and key-value arguments. Mixing positional and key-value arguments is not supported.

New in version 2014.7.

mine_functions:
  network.ip_addrs: [eth0]
  networkplus.internal_ip_addrs: []
  internal_ip_addrs:
    mine_function: network.ip_addrs
    cidr: 192.168.0.0/16
  ip_list:
    - mine_function: grains.get
    - ip_interfaces

Mine Interval

The Salt Mine functions are executed when the minion starts and at a given interval by the scheduler. The default interval is every 60 minutes and can be adjusted for the minion via the mine_interval option:

mine_interval: 60

Mine in Salt-SSH

As of the 2015.5.0 release of salt, salt-ssh supports mine.get.

Because the minions cannot provide their own mine_functions configuration, we retrieve the args for specified mine functions in one of three places, searched in the following order:

  1. Roster data
  2. Pillar
  3. Master config

The mine_functions are formatted exactly the same as in normal salt, just stored in a different location. Here is an example of a flat roster containing mine_functions:

test:
  host: 104.237.131.248
  user: root
  mine_functions:
    cmd.run: ['echo "hello!"']
    network.ip_addrs:
      interface: eth0

Note

Because of the differences in the architecture of salt-ssh, mine.get calls are somewhat inefficient. Salt must make a new salt-ssh call to each of the minions in question to retrieve the requested data, much like a publish call. However, unlike publish, it must run the requested function as a wrapper function, so we can retrieve the function args from the pillar of the minion in question. This results in a non-trivial delay in retrieving the requested data.

Example

One way to use data from Salt Mine is in a State. The values can be retrieved via Jinja and used in the SLS file. The following example is a partial HAProxy configuration file and pulls IP addresses from all minions with the "web" grain to add them to the pool of load balanced servers.

/srv/pillar/top.sls:

base:
  'G@roles:web':
    - web

/srv/pillar/web.sls:

mine_functions:
  network.ip_addrs: [eth0]

/etc/salt/minion.d/mine.conf:

mine_interval: 5

/srv/salt/haproxy.sls:

haproxy_config:
  file.managed:
    - name: /etc/haproxy/config
    - source: salt://haproxy_config
    - template: jinja

/srv/salt/haproxy_config:

<...file contents snipped...>

{% for server, addrs in salt['mine.get']('roles:web', 'network.ip_addrs', expr_form='grain').items() %}
server {{ server }} {{ addrs[0] }}:80 check
{% endfor %}

<...file contents snipped...>

External Authentication System

Salt's External Authentication System (eAuth) allows for Salt to pass through command authorization to any external authentication system, such as PAM or LDAP.

Note

eAuth using the PAM external auth system requires salt-master to be run as root as this system needs root access to check authentication.

Access Control System

The external authentication system allows for specific users to be granted access to execute specific functions on specific minions. Access is configured in the master configuration file and uses the access control system:

external_auth:
  pam:
    thatch:
      - 'web*':
        - test.*
        - network.*
    steve:
      - .*

The above configuration allows the user thatch to execute functions in the test and network modules on the minions that match the web* target. User steve is given unrestricted access to minion commands.

Note

The PAM module does not allow authenticating as root.

To allow access to wheel modules or runner modules the following @ syntax must be used:

external_auth:
  pam:
    thatch:
      - '@wheel'   # to allow access to all wheel modules
      - '@runner'  # to allow access to all runner modules
      - '@jobs'    # to allow access to the jobs runner and/or wheel module

Note

The runner/wheel markup is different, since there are no minions to scope the ACL to.

Note

Globs will not match wheel or runners! They must be explicitly allowed with @wheel or @runner.

The external authentication system can then be used from the command-line by any user on the same system as the master with the -a option:

$ salt -a pam web\* test.ping

The system will ask the user for the credentials required by the authentication system and then publish the command.

To apply permissions to a group of users in an external authentication system, append a % to the ID:

external_auth:
  pam:
    admins%:
      - '*':
        - 'pkg.*'

Tokens

With external authentication alone, the authentication credentials will be required with every call to Salt. This can be alleviated with Salt tokens.

Tokens are short term authorizations and can be easily created by just adding a -T option when authenticating:

$ salt -T -a pam web\* test.ping

Now a token will be created that has an expiration of 12 hours (by default). This token is stored in a file named salt_token in the active user's home directory.

Once the token is created, it is sent with all subsequent communications. User authentication does not need to be entered again until the token expires.

Token expiration time can be set in the Salt master config file.

LDAP and Active Directory

Note

LDAP usage requires that you have installed python-ldap.

Salt supports both user and group authentication for LDAP (and Active Directory accessed via its LDAP interface).

LDAP configuration happens in the Salt master configuration file.

Server configuration values and their defaults:

auth.ldap.server: localhost
auth.ldap.port: 389
auth.ldap.tls: False
auth.ldap.scope: 2
auth.ldap.uri: ''
auth.ldap.no_verify: False
auth.ldap.anonymous: False
auth.ldap.groupou: 'Groups'
auth.ldap.groupclass: 'posixGroup'
auth.ldap.accountattributename: 'memberUid'

# These are only for Active Directory
auth.ldap.activedirectory: False
auth.ldap.persontype: 'person'

Salt also needs to know which Base DN to search for users and groups and the DN to bind to:

auth.ldap.basedn: dc=saltstack,dc=com
auth.ldap.binddn: cn=admin,dc=saltstack,dc=com

To bind to a DN, a password is required:

auth.ldap.bindpw: mypassword

Salt uses a filter to find the DN associated with a user. Salt substitutes the {{ username }} value for the username when querying LDAP:

auth.ldap.filter: uid={{ username }}

For OpenLDAP, to determine group membership, one can specify an OU that contains group data. This is prepended to the basedn to create a search path. The results are then filtered against auth.ldap.groupclass (default posixGroup) and the account's name attribute, auth.ldap.accountattributename (default memberUid).

auth.ldap.groupou: Groups

Active Directory handles group membership differently, and does not utilize the groupou configuration variable. AD needs the following options in the master config:

auth.ldap.activedirectory: True
auth.ldap.filter: sAMAccountName={{username}}
auth.ldap.accountattributename: sAMAccountName
auth.ldap.groupclass: group
auth.ldap.persontype: person

To determine group membership in AD, the username and password entered when LDAP is requested as the eAuth mechanism on the command line are used to bind to AD's LDAP interface. If this bind fails, the user is denied access regardless of group membership. Next, the distinguishedName of the user is looked up with the following LDAP search:

(&(<value of auth.ldap.accountattributename>={{username}})
  (objectClass=<value of auth.ldap.persontype>)
)

This should return a distinguishedName that we can use to filter for group membership. Then the following LDAP query is executed:

(&(member=<distinguishedName from search above>)
  (objectClass=<value of auth.ldap.groupclass>)
)
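
For reference, the two searches above translate into python-ldap calls roughly like the following sketch (the server, base DN, and credentials are placeholders, not Salt defaults):

import ldap

conn = ldap.initialize('ldap://ad.example.com:389')
# Bind with the credentials entered at the CLI when eauth is 'ldap'
conn.simple_bind_s('user@example.com', 'secret')

# 1. Look up the user's distinguishedName
user_filter = '(&(sAMAccountName=user)(objectClass=person))'
result = conn.search_s('dc=example,dc=com', ldap.SCOPE_SUBTREE,
                       user_filter, ['distinguishedName'])
user_dn = result[0][0]

# 2. Find the groups that list that DN as a member
group_filter = '(&(member=%s)(objectClass=group))' % user_dn
groups = conn.search_s('dc=example,dc=com', ldap.SCOPE_SUBTREE,
                       group_filter, ['cn'])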

Once authenticated, LDAP users are granted access in external_auth under the ldap section like any other eAuth users:

external_auth:
  ldap:
    test_ldap_user:
      - '*':
        - test.ping

To configure an LDAP group, append a % to the ID:

external_auth:
  ldap:
    test_ldap_group%:
      - '*':
        - test.echo

Access Control System

New in version 0.10.4.

Salt maintains a standard system used to grant granular control to non-administrative users to execute Salt commands. The access control system has been applied to all systems used to configure access to non-administrative control interfaces in Salt. These interfaces include the peer system, the external auth system, and the client ACL system.

The access control system mandates a standard configuration syntax used in all three of the aforementioned systems. While this adds functionality to the configuration in 0.10.4, it does not negate the old configuration.

Now specific functions can be opened up to specific minions from specific users in the case of external auth and client ACLs, and for specific minions in the case of the peer system.

The access controls are manifested using matchers in these configurations:

client_acl:
  fred:
    - web\*:
      - pkg.list_pkgs
      - test.*
      - apache.*

In the above example, fred is able to send commands only to minions which match the specified glob target. This can be expanded to include other functions for other minions based on standard targets.

external_auth:
  pam:
    dave:
      - test.ping
      - mongo\*:
        - network.*
      - log\*:
        - network.*
        - pkg.*
      - 'G@os:RedHat':
        - kmod.*
    steve:
      - .*

The above allows for all minions to be hit by test.ping by dave, and adds a few functions that dave can execute on other minions. It also allows steve unrestricted access to salt commands.

Job Management

New in version 0.9.7.

Since Salt executes jobs on many systems at once, it needs to be able to manage the jobs running on all of those systems.

The Minion proc System

Salt Minions maintain a proc directory in the Salt cachedir. The proc directory contains files named after the executed job IDs. These files hold information about the jobs currently running on the minion and allow those jobs to be looked up. With a default configuration, the proc directory is located at /var/cache/salt/proc.
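
A quick way to see what the proc directory holds is to list it on a minion. A minimal sketch using the default path:

import os

proc_dir = '/var/cache/salt/proc'  # default; derived from the minion cachedir
# Each entry is named after a job ID and describes a currently running job
for jid in os.listdir(proc_dir):
    print(jid)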

Functions in the saltutil Module

Salt 0.9.7 introduced a few new functions to the saltutil module for managing jobs. These functions are:

  1. running Returns the data of all running jobs that are found in the proc directory.
  2. find_job Returns specific data about a certain job based on job id.
  3. signal_job Allows for a given jid to be sent a signal.
  4. term_job Sends a termination signal (SIGTERM, 15) to the process controlling the specified job.
  5. kill_job Sends a kill signal (SIGKILL, 9) to the process controlling the specified job.

These functions make up the core of the back end used to manage jobs at the minion level.
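
For example, saltutil.running can be invoked across all minions from the Python API to collect the raw data these functions expose (a sketch, run as the master's user):

import salt.client

client = salt.client.LocalClient()
# Gather running-job data from every minion's proc directory
for minion, jobs in client.cmd('*', 'saltutil.running').items():
    print(minion, jobs)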

The jobs Runner

A convenience runner front end and reporting system has been added as well. The jobs runner contains functions to make viewing data easier and cleaner.

The jobs runner contains a number of functions...

active

The active function runs saltutil.running on all minions and formats the return data about all running jobs in a much more usable and compact format. The active function will also compare jobs that have returned and jobs that are still running, making it easier to see what systems have completed a job and what systems are still being waited on.

# salt-run jobs.active

lookup_jid

When jobs are executed the return data is sent back to the master and cached. By default it is cached for 24 hours, but this can be configured via the keep_jobs option in the master configuration. Using the lookup_jid runner will display the same return data that the initial job invocation with the salt command would display.

# salt-run jobs.lookup_jid <job id number>

list_jobs

Before finding a historic job, it may be necessary to find the job ID. list_jobs will parse the cached execution data and display all of the job data for jobs that have already returned, in full or in part.

# salt-run jobs.list_jobs

Scheduling Jobs

In Salt versions greater than 0.12.0, the scheduling system allows incremental executions on minions or the master. The schedule system exposes the execution of any execution function on minions or any runner on the master.

Scheduling is enabled via the schedule option in either the master or minion config files, or via a minion's pillar data. Schedules implemented via pillar data only require a refresh of the minion's pillar data, for example by using saltutil.refresh_pillar. Schedules implemented in the master or minion config require a restart of the application for the schedule to take effect.

Note

The scheduler executes different functions on the master and minions. When running on the master the functions reference runner functions, when running on the minion the functions specify execution functions.

A scheduled run has no output on the minion unless the config is set to info level or higher. Refer to minion logging settings.

Specify maxrunning to ensure that there are no more than N copies of a particular routine running. Use this for jobs that may be long-running and could step on each other or otherwise double execute. The default for maxrunning is 1.

States are executed on the minion, as all states are. You can pass positional arguments and provide a yaml dict of named arguments.

schedule:
  job1:
    function: state.sls
    seconds: 3600
    args:
      - httpd
    kwargs:
      test: True

This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour).

schedule:
  job1:
    function: state.sls
    seconds: 3600
    args:
      - httpd
    kwargs:
      test: True
    splay: 15

This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour), splaying the time between 0 and 15 seconds.

schedule:
  job1:
    function: state.sls
    seconds: 3600
    args:
      - httpd
    kwargs:
      test: True
    splay:
      start: 10
      end: 15

This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour), splaying the time between 10 and 15 seconds.

New in version 2014.7.0.

Frequency of jobs can also be specified using date strings supported by the python dateutil library. This requires python-dateutil to be installed on the minion.

schedule:
  job1:
    function: state.sls
    args:
      - httpd
    kwargs:
      test: True
    when: 5:00pm

This will schedule the command: state.sls httpd test=True at 5:00pm minion localtime.

schedule:
  job1:
    function: state.sls
    args:
      - httpd
    kwargs:
      test: True
    when:
        - Monday 5:00pm
        - Tuesday 3:00pm
        - Wednesday 5:00pm
        - Thursday 3:00pm
        - Friday 5:00pm

This will schedule the command: state.sls httpd test=True at 5pm on Monday, Wednesday, and Friday, and 3pm on Tuesday and Thursday.

schedule:
  job1:
    function: state.sls
    seconds: 3600
    args:
      - httpd
    kwargs:
      test: True
    range:
        start: 8:00am
        end: 5:00pm

This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) between the hours of 8am and 5pm. The range parameter must be a dictionary with the date strings using the dateutil format. This requires python-dateutil to be installed on the minion.

New in version 2014.7.0.

The scheduler also supports ensuring that there are no more than N copies of a particular routine running. Use this for jobs that may be long-running and could step on each other or pile up in case of an infrastructure outage.

The default for maxrunning is 1.

schedule:
  long_running_job:
      function: big_file_transfer
      jid_include: True

States

schedule:
  log-loadavg:
    function: cmd.run
    seconds: 3660
    args:
      - 'logger -t salt < /proc/loadavg'
    kwargs:
      stateful: False
      shell: /bin/sh

Highstates

To set up a highstate to run on a minion every 60 minutes set this in the minion config or pillar:

schedule:
  highstate:
    function: state.highstate
    minutes: 60

Time intervals can be specified as seconds, minutes, hours, or days.

Runners

Runner executions can also be specified on the master within the master configuration file:

schedule:
  overstate:
    function: state.over
    seconds: 35
    minutes: 30
    hours: 3

The above configuration will execute the state.over runner every 3 hours, 30 minutes and 35 seconds, or every 12,635 seconds.

Scheduler With Returner

The scheduler is also useful for tasks like gathering monitoring data about a minion. This schedule option will gather status data and send it to a MySQL returner database:

schedule:
  uptime:
    function: status.uptime
    seconds: 60
    returner: mysql
  meminfo:
    function: status.meminfo
    minutes: 5
    returner: mysql

Since specifying the returner repeatedly can be tiresome, the schedule_returner option is available to specify one or a list of global returners to be used by the minions when scheduling.

Managing the Job Cache

The Salt Master maintains a job cache of all job executions which can be queried via the jobs runner. The way this job cache is managed is very pluggable via Salt's underlying returner interface.

Default Job Cache

A number of options are available when configuring the job cache. The default caching system uses local storage on the Salt Master and can be found in the job cache directory (on Linux systems this is typically /var/cache/salt/master/jobs). The default caching system is suitable for most deployments as it does not typically require any further configuration or management.

The default job cache is a temporary cache and jobs will be stored for 24 hours. If the default cache needs to store jobs for a different period, the time can be adjusted by changing the keep_jobs parameter in the Salt Master configuration file. The value is measured in hours:

keep_jobs: 24

External Job Cache Options

Many deployments may wish to use an external database to maintain a long term register of executed jobs. Salt comes with two main mechanisms to do this: the master job cache and the external job cache. The difference is how the external data store is accessed.

Master Job Cache

New in version 2014.7.

The master job cache setting makes the built in job cache on the master modular. This system allows for the default cache to be swapped out by the Salt returner system. To configure the master job cache, set up an external returner database based on the instructions included with each returner and then simply add the following configuration to the master configuration file:

master_job_cache: mysql

External Job Cache

The external job cache setting instructs the minions to directly contact the data store. This scenario is helpful when the data store needs to be made available to the minions. This can be an effective way to share historic data across an infrastructure as data can be retrieved from the external job cache via the ret execution module.

To configure the external job cache, set up a returner database in the manner described in the specific returner documentation. Ensure that the returner database is accessible from the minions, and set the ext_job_cache setting in the master configuration file:

ext_job_cache: redis

Storing Data in Other Databases

The SDB interface is designed to store and retrieve data that, unlike pillars and grains, is not necessarily minion-specific. The initial design goal was to allow passwords to be stored in a secure database, such as one managed by the keyring package, rather than as plain-text files. However, as a generic database interface, it could conceptually be used for a number of other purposes.

SDB was added to Salt in version 2014.7.0. SDB is currently experimental, and should probably not be used in production.

SDB Configuration

In order to use the SDB interface, a configuration profile must be set up in either the master or minion configuration file. The configuration stanza includes the name/ID that the profile will be referred to as, a driver setting, and any other arguments that are necessary for the SDB module that will be used. For instance, a profile called mykeyring, which uses the system service in the keyring module, would look like:

mykeyring:
  driver: keyring
  service: system

It is recommended to keep the name of the profile simple, as it is used in the SDB URI as well.

SDB URIs

SDB is designed to make small database queries (hence the name, SDB) using a compact URL. This allows users to reference a database value quickly inside a number of Salt configuration areas, without a lot of overhead. The basic format of an SDB URI is:

sdb://<profile>/<args>

The profile refers to the configuration profile defined in either the master or the minion configuration file. The args are specific to the module referred to in the profile, but will typically only need to refer to the key of a key/value pair inside the database. This is because the profile itself should define as many other parameters as possible.

For example, a profile might be set up to reference credentials for a specific OpenStack account. The profile might look like:

kevinopenstack:
  driver: keyring
  service: salt.cloud.openstack.kevin

And the URI used to reference the password might look like:

sdb://kevinopenstack/password
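
Wherever an sdb:// URI is accepted, the value is resolved at runtime. The value can also be fetched explicitly through the sdb execution module; a minimal sketch from the Python API (minion1 is a placeholder for a minion that has the profile configured):

import salt.client

client = salt.client.LocalClient()
# Resolve the URI on the minion holding the 'kevinopenstack' profile
ret = client.cmd('minion1', 'sdb.get', arg=['sdb://kevinopenstack/password'])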

Writing SDB Modules

There is currently one function that MUST exist in any SDB module (get()) and one that MAY exist (set_()). If a set_() function is used, a __func_alias__ dictionary MUST be declared in the module as well:

__func_alias__ = {
    'set_': 'set',
}

This is because set is a Python built-in, and therefore functions should not be created which are called set(). The __func_alias__ functionality is provided via Salt's loader interfaces, and allows legally-named functions to be referred to using names that would otherwise be unwise to use.

The get() function is required, as it will be called via functions in other areas of the code which make use of the sdb:// URI. For example, the config.get function in the config execution module uses this function.

The set_() function may be provided, but is not required, as some sources may be read-only, or may be otherwise unwise to access via a URI (for instance, because of SQL injection attacks).

A simple example of an SDB module is salt/sdb/keyring_db.py, as it provides basic examples of most, if not all, of the types of functionality that are available not only for SDB modules, but for Salt modules in general.
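
Putting the pieces together, a skeletal SDB module might look like the following toy sketch (the file name my_sdb.py and the in-memory backend are purely illustrative, not a real driver):

# my_sdb.py -- hypothetical SDB module backed by a throwaway dict

__func_alias__ = {
    'set_': 'set',
}

_store = {}  # illustration only; a real module would talk to a database


def get(key, profile=None):
    # Required: called when an sdb:// URI is read
    return _store.get(key)


def set_(key, value, profile=None):
    # Optional: called when an sdb:// URI is written to
    _store[key] = value
    return value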

Salt Event System

The Salt Event System is used to fire off events enabling third party applications or external processes to react to behavior within Salt.

The event system is composed of two primary components:

  • The event sockets, which publish events.
  • The event library, which can listen to events and send events into the salt system.

Event types

Salt Master Events

These events are fired on the Salt Master event bus. This list is not comprehensive.

Authentication events
salt/auth

Fired when a minion performs an authentication check with the master.

Variables:
  • id -- The minion ID.
  • act -- The current status of the minion key: accept, pend, reject.
  • pub -- The minion public key.

Note

Minions fire auth events on a fairly regular basis for a number of reasons. Writing reactors to respond to events through the auth cycle can lead to infinite reactor event loops (minion tries to auth, reactor responds by doing something that generates another auth event, minion sends auth event, etc.). Consider reacting to salt/key or salt/minion/<MID>/start or firing a custom event tag instead.

Start events
salt/minion/<MID>/start

Fired every time a minion connects to the Salt master.

Variables: id -- The minion ID.
Key events
salt/key

Fired when accepting and rejecting minions keys on the Salt master.

Variables:
  • id -- The minion ID.
  • act -- The new status of the minion key: accept, pend, reject.

Warning

If a master is in auto_accept mode, salt/key events will not be fired when the keys are accepted. In addition, pre-seeding keys (like happens through Salt-Cloud) will not cause firing of these events.

Job events
salt/job/<JID>/new

Fired as a new job is sent out to minions.

Variables:
  • jid -- The job ID.
  • tgt -- The target of the job: *, a minion ID, G@os_family:RedHat, etc.
  • tgt_type -- The type of targeting used: glob, grain, compound, etc.
  • fun -- The function to run on minions: test.ping, network.interfaces, etc.
  • arg -- A list of arguments to pass to the function that will be called.
  • minions -- A list of minion IDs that Salt expects will return data for this job.
  • user -- The name of the user that ran the command as defined in Salt's Client ACL or external auth.
salt/job/<JID>/ret/<MID>

Fired each time a minion returns data for a job.

Variables:
  • id -- The minion ID.
  • jid -- The job ID.
  • retcode -- The return code for the job.
  • fun -- The function the minion ran. E.g., test.ping.
  • return -- The data returned from the execution module.
Presence events
salt/presence/present

Events fired on a regular interval about currently connected, newly connected, or recently disconnected minions. Requires the presence_events setting to be enabled.

Variables: present -- A list of minions that are currently connected to the Salt master.
salt/presence/change

Fired when the Presence system detects new minions connect or disconnect.

Variables:
  • new -- A list of minions that have connected since the last presence event.
  • lost -- A list of minions that have disconnected since the last presence event.
Cloud Events

Unlike other Master events, salt-cloud events are not fired on behalf of a Salt Minion. Instead, salt-cloud events are fired on behalf of a VM. This is because the minion-to-be may not yet exist to fire events, or may have already been destroyed.

This behavior is reflected by the name variable in the event data for salt-cloud events as compared to the id variable for Salt Minion-triggered events.

salt/cloud/<VM NAME>/creating

Fired when salt-cloud starts the VM creation process.

Variables:
  • name -- the name of the VM being created.
  • event -- description of the event.
  • provider -- the cloud provider of the VM being created.
  • profile -- the cloud profile for the VM being created.
salt/cloud/<VM NAME>/deploying

Fired when the VM is available and salt-cloud begins deploying Salt to the new VM.

Variables:
  • name -- the name of the VM being created.
  • event -- description of the event.
  • kwargs -- options available as the deploy script is invoked: conf_file, deploy_command, display_ssh_output, host, keep_tmp, key_filename, make_minion, minion_conf, name, parallel, preseed_minion_keys, script, script_args, script_env, sock_dir, start_action, sudo, tmp_dir, tty, username
salt/cloud/<VM NAME>/requesting

Fired when salt-cloud sends the request to create a new VM.

Variables:
  • event -- description of the event.
  • location -- the location of the VM being requested.
  • kwargs -- options available as the VM is being requested: Action, ImageId, InstanceType, KeyName, MaxCount, MinCount, SecurityGroup.1
salt/cloud/<VM NAME>/querying

Fired when salt-cloud queries data for a new instance.

Variables:
  • event -- description of the event.
  • instance_id -- the ID of the new VM.
salt/cloud/<VM NAME>/tagging

Fired when salt-cloud tags a new instance.

Variables:
  • event -- description of the event.
  • tags -- tags being set on the new instance.
salt/cloud/<VM NAME>/waiting_for_ssh

Fired while the salt-cloud deploy process is waiting for ssh to become available on the new instance.

Variables:
  • event -- description of the event.
  • ip_address -- IP address of the new instance.
salt/cloud/<VM NAME>/deploy_script

Fired once the deploy script is finished.

Variables: event -- description of the event.
salt/cloud/<VM NAME>/created

Fired once the new instance has been fully created.

Variables:
  • name -- the name of the VM being created.
  • event -- description of the event.
  • instance_id -- the ID of the new instance.
  • provider -- the cloud provider of the VM being created.
  • profile -- the cloud profile for the VM being created.
salt/cloud/<VM NAME>/destroying

Fired when salt-cloud requests the destruction of an instance.

Variables:
  • name -- the name of the VM being destroyed.
  • event -- description of the event.
  • instance_id -- the ID of the instance.
salt/cloud/<VM NAME>/destroyed

Fired when an instance has been destroyed.

Variables:
  • name -- the name of the VM that was destroyed.
  • event -- description of the event.
  • instance_id -- the ID of the instance.

Listening for Events

Salt's Event Bus is used heavily within Salt, and it is also written to integrate heavily with existing tooling and scripts. There are a variety of ways to consume it.

From the CLI

The quickest way to watch the event bus is by calling the state.event runner:

salt-run state.event pretty=True

That runner is designed to interact with the event bus from external tools and shell scripts. See the documentation for more examples.

Remotely via the REST API

Salt's event bus can be consumed as an HTTP stream from external tools or services via salt.netapi.rest_cherrypy.app.Events.

curl -SsNk https://salt-api.example.com:8000/events?token=05A3

From Python

Python scripts can access the event bus only as the same system user that Salt is running as.

The event system is accessed via the event library. To listen to events, a SaltEvent object needs to be created and then the get_event function needs to be run. The SaltEvent object needs to know the location where the Salt Unix sockets are kept. In the configuration this is the sock_dir option. The sock_dir option defaults to "/var/run/salt/master" on most systems.

The following code will check for a single event:

import salt.config
import salt.utils.event

opts = salt.config.client_config('/etc/salt/master')

event = salt.utils.event.get_event(
        'master',
        sock_dir=opts['sock_dir'],
        transport=opts['transport'],
        opts=opts)

data = event.get_event()

Events also carry a "tag". Tags allow events to be filtered by prefix. By default all events will be returned. If only authentication events are desired, then pass the tag "salt/auth".

The get_event method has a default poll time of 5 seconds. To change this, pass the wait option.

The following example will only listen for auth events and will wait for 10 seconds instead of the default 5.

data = event.get_event(wait=10, tag='salt/auth')

To retrieve the tag as well as the event data, pass full=True:

evdata = event.get_event(wait=10, tag='salt/job', full=True)

tag, data = evdata['tag'], evdata['data']

Instead of looking for a single event, the iter_events method can be used to make a generator which will continually yield salt events.

The iter_events method also accepts a tag but not a wait time:

for data in event.iter_events(tag='salt/auth'):
    print(data)

And finally, event tags can be globbed, as they can be in the Reactor, using the fnmatch library.

import fnmatch

import salt.config
import salt.utils.event

opts = salt.config.client_config('/etc/salt/master')

sevent = salt.utils.event.get_event(
        'master',
        sock_dir=opts['sock_dir'],
        transport=opts['transport'],
        opts=opts)

while True:
    ret = sevent.get_event(full=True)
    if ret is None:
        continue

    if fnmatch.fnmatch(ret['tag'], 'salt/job/*/ret/*'):
        do_something_with_job_return(ret['data'])

Firing Events

It is possible to fire events on either the minion's local bus or to fire events intended for the master.

To fire a local event from the minion on the command line, call the event.fire execution function:

salt-call event.fire '{"data": "message to be sent in the event"}' 'tag'

To fire an event to be sent up to the master from the minion, call the event.send execution function. Remember, YAML can be used at the CLI in function arguments:

salt-call event.send 'myco/mytag/success' '{success: True, message: "It works!"}'

If a process is listening on the minion, it may be useful for a user on the master to fire an event to it:

# Job on minion
import salt.utils.event

event = salt.utils.event.MinionEvent(**__opts__)

for evdata in event.iter_events(tag='customtag/'):
    return evdata  # do your processing here...

The event can then be fired at that minion from the master:

salt minionname event.fire '{"data": "message for the minion"}' 'customtag/african/unladen'

Firing Events from Python

From Salt execution modules

Events can be very useful when writing execution modules, in order to inform various processes on the master when a certain task has taken place. This is easily done using the normal cross-calling syntax:

# /srv/salt/_modules/my_custom_module.py

def do_something():
    '''
    Do something and fire an event to the master when finished

    CLI Example::

        salt '*' my_custom_module.do_something
    '''
    # do something!
    __salt__['event.send']('myco/my_custom_module/finished', {
        'finished': True,
        'message': "The something is finished!",
    })

From Custom Python Scripts

Firing events from custom Python code is quite simple and mirrors how it is done at the CLI:

import salt.client

caller = salt.client.Caller()

caller.sminion.functions['event.send'](
    'myco/myevent/success',
    {
        'success': True,
        'message': "It works!",
    }
)

Beacons

The beacon system allows the minion to hook into system processes and continually translate external events into salt events on the event bus. The primary example of this is the inotify beacon. This beacon uses inotify to watch configured files or directories on the minion for changes, creation, deletion, etc.

This allows for the changes to be sent up to the master where the reactor can respond to changes.

Configuring The Beacons

The beacon system, like many others in Salt, can be configured via the minion pillar, grains, or local config file:

beacons:
  inotify:
    /etc/httpd/conf.d: {}
    /opt: {}

Optionally, a beacon can be run on an interval other than the default loop_interval, which is typically set to 1 second.

To run a beacon every 5 seconds, for example, provide an interval argument to a beacon.

beacons:
  inotify:
    /etc/httpd/conf.d: {}
    /opt: {}
    interval: 5
  load:
    - 1m:
      - 0.0
      - 2.0
    - 5m:
      - 0.0
      - 1.5
    - 15m:
      - 0.1
      - 1.0
    - interval: 10

Writing Beacon Plugins

Beacon plugins use the standard Salt loader system, meaning that many of the constructs from other plugin systems hold true, such as the __virtual__ function.

The important function in the beacon plugin is the beacon function. When the beacon is configured to run, this function will be executed repeatedly by the minion. The beacon function therefore cannot block and should be as lightweight as possible. The beacon also must return a list of dicts; each dict in the list will be translated into an event on the master.

Please see the inotify beacon as an example.

The beacon Function

The beacons system will look for a function named beacon in the module. If this function is not present then the beacon will not be fired. This function is called on a regular basis and defaults to being called on every iteration of the minion, which can be tens to hundreds of times a second. This means that the beacon function cannot block and should not be CPU or IO intensive.

The beacon function will be passed in the configuration for the executed beacon. This makes it easy to establish a flexible configuration for each called beacon. This is also the preferred way to ingest the beacon's configuration as it allows for the configuration to be dynamically updated while the minion is running by configuring the beacon in the minion's pillar.

The Beacon Return

The information returned from the beacon is expected to follow a predefined structure. The returned value needs to be a list of dictionaries (standard Python dictionaries are preferred; no ordered dicts are needed).

The dictionaries represent individual events to be fired on the minion and master event buses. Each dict is a single event. The dict can contain any arbitrary keys but the 'tag' key will be extracted and added to the tag of the fired event.

The return data structure would look something like this:

[{'changes': ['/foo/bar'], 'tag': 'foo'},
 {'changes': ['/foo/baz'], 'tag': 'bar'}]
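
Putting these pieces together, here is a minimal sketch of a beacon module (the module name, configuration shape, and event payloads are hypothetical; see the real inotify beacon for a production example):

# /srv/salt/_beacons/watchdir.py -- a minimal, hypothetical beacon sketch
import os


def __virtual__():
    # Load this beacon under the name 'watchdir'
    return 'watchdir'


def beacon(config):
    '''
    Called on each beacon interval with this beacon's configuration.
    Must not block, and must return a list of dicts; each dict
    becomes one event fired on the minion and master event buses.
    '''
    events = []
    for path in config:
        if path == 'interval':
            # Skip the scheduling option; it is not a watched path
            continue
        if not os.path.exists(path):
            # The 'tag' key is appended to the fired event's tag
            events.append({'tag': path, 'change': 'missing'})
    return events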

Calling Execution Modules

Execution modules are still the preferred location for all work and system interaction to happen in Salt. For this reason the __salt__ variable is available inside the beacon.

Please be careful when calling functions in __salt__; while this is the preferred means of executing complicated routines in Salt, not all of the execution modules have been written with beacons in mind. Watch out for execution modules that may be CPU intensive or IO bound. Please feel free to add new execution modules and functions to back specific beacons.

Running Custom Master Processes

In addition to the processes that the Salt Master automatically spawns, it is possible to configure it to start additional custom processes.

This is useful if a dedicated process is needed that should run throughout the life of the Salt Master. For periodic independent tasks, a scheduled runner may be more appropriate.

Processes started in this way will be restarted if they die and will be killed when the Salt Master is shut down.

Example Configuration

Processes are declared in the master config file with the ext_processes option. Processes will be started in the order they are declared.

ext_processes:
  - mymodule.TestProcess
  - mymodule.AnotherProcess

Example Process Class

# Import python libs
import time
import logging
from multiprocessing import Process

# Import Salt libs
from salt.utils.event import SaltEvent


log = logging.getLogger(__name__)


class TestProcess(Process):
    def __init__(self, opts):
        Process.__init__(self)
        self.opts = opts

    def run(self):
        self.event = SaltEvent('master', self.opts['sock_dir'])
        i = 0

        while True:
            self.event.fire_event({'iteration': i}, 'ext_processes/test{0}'.format(i))
            i += 1
            time.sleep(60)

Salt Syndic

The Salt Syndic interface is a powerful tool which allows for the construction of Salt command topologies. A basic Salt setup has a Salt Master commanding a group of Salt Minions. The Syndic interface is a special passthrough minion: it runs on a master and connects to another master, and the master that the Syndic minion is listening to can then control the minions attached to the master running the syndic.

Support for many layouts is not meant to promote the use of any single topology, but to allow a more flexible method of controlling many systems.

Configuring the Syndic

Since the Syndic only needs to be attached to a higher level master, the configuration is very simple. On a master that is running a syndic to connect to a higher level master, the syndic_master option needs to be set in the master config file. The syndic_master option contains the hostname or IP address of the master server that can control the master that the syndic is running on.

The master that the syndic connects to sees the syndic as an ordinary minion, and treats it as such. The higher level master will need to accept the syndic's minion key like any other minion. This master will also need to set the order_masters value in the configuration to True. The order_masters option on the higher level master is very important: to control a syndic, extra information needs to be sent with the publications, and the order_masters option makes sure that this extra data is sent out.

To sum up, two configuration options are needed on the master side: syndic_master on the master running the syndic, and order_masters on the higher level master.
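
As a minimal sketch, the two config files might look like this (the address is hypothetical):

# /etc/salt/master on the master running salt-syndic:
syndic_master: 10.10.0.1

# /etc/salt/master on the higher level master:
order_masters: True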

Each Syndic must provide its own file_roots directory. Files will not be automatically transferred from the higher level master.

Running the Syndic

The Syndic is a separate daemon that needs to be started on the master that is controlled by a higher master. Starting the Syndic daemon is the same as starting the other Salt daemons.

# salt-syndic

Note

If you have an exceptionally large infrastructure or many layers of syndics, you may find that the CLI doesn't wait long enough for the syndics to return their events. If you think this is the case, you can set the syndic_wait value in the upper master config. The default value is 5 seconds, and should work for the majority of deployments.

Topology

The salt-syndic is little more than a command and event forwarder. When a command is issued from a higher-level master, it will be received by the configured syndics on lower-level masters and propagated to their minions, and to other syndics that are bound to them further down in the hierarchy. When events and job return data are generated by minions, they are aggregated back, through the same syndic(s), to the master which issued the command.

The master sitting at the top of the hierarchy (the Master of Masters) will not be running the salt-syndic daemon. It will have the salt-master daemon running, and optionally, the salt-minion daemon. Each syndic connected to an upper-level master will have both the salt-master and the salt-syndic daemon running, and optionally, the salt-minion daemon.

Nodes on the lowest points of the hierarchy (minions which do not propagate data to another level) will only have the salt-minion daemon running. There is no need for either salt-master or salt-syndic to be running on a standard minion.

Syndic and the CLI

In order for the high-level master to return information from minions that are below the syndic(s), the CLI requires a short wait time in order to allow the syndic(s) to gather responses from their minions. This value is defined in the syndic_wait option and has a default of five seconds.

While it is possible to run a syndic without a minion installed on the same machine, it is recommended to install one for a faster CLI response time. Without a minion installed on the syndic, the effective CLI timeout increases significantly - about three-fold. With a minion installed on the syndic, the CLI timeout resides at the value defined in syndic_wait.

Note

To reduce the amount of time the CLI waits for minions to respond, install a minion on the syndic or tune the value of the syndic_wait configuration.

Salt Proxy Minion Documentation

Proxy minions are a developing Salt feature that enables controlling devices that, for whatever reason, cannot run a standard salt-minion. Examples include network gear that has an API but runs a proprietary OS, devices with limited CPU or memory, or devices that could run a minion, but for security reasons, will not.

Proxy minions are not an "out of the box" feature. Because there are an infinite number of controllable devices, you will most likely have to write the interface yourself. Fortunately, this is only as difficult as the actual interface to the proxied device. Devices that have an existing Python module (PyUSB for example) would be relatively simple to interface. Code to control a device that has an HTML REST-based interface should be easy. Code to control your typical housecat would be excellent source material for a PhD thesis.

Salt proxy-minions provide the 'plumbing' that allows device enumeration and discovery, control, status, remote execution, and state management.

Getting Started

The following diagram may be helpful in understanding the structure of a Salt installation that includes proxy-minions:

[Diagram: proxy_minions.png - the structure of a Salt installation that includes proxy-minions]

The key thing to remember is the left-most section of the diagram. Salt's nature is to have a minion connect to a master, so that the master may control the minion. However, for proxy minions, the target device cannot run a minion, and thus must rely on a separate minion to fire up the proxy-minion and make the initial and persistent connection.

After the proxy minion is started and initiates its connection to the 'dumb' device, it connects back to the salt-master and ceases to be affiliated in any way with the minion that started it.

To create support for a proxied device one needs to create four things:

  1. The proxytype connection class (located in salt/proxy).
  2. The grains support code (located in salt/grains).
  3. Salt modules specific to the controlled device.
  4. Salt states specific to the controlled device.

Configuration parameters on the master

Proxy minions require no configuration parameters in /etc/salt/master.

Salt's Pillar system is ideally suited for configuring proxy-minions. Proxies can either be designated via a pillar file in pillar_roots, or through an external pillar. External pillars afford the opportunity for interfacing with a configuration management system, database, or other knowledgeable system that may already contain all the details of proxy targets. To use static files in pillar_roots, pattern your files after the following examples, which are based on the diagram above:

/srv/pillar/top.sls

base:
  minioncontroller1:
    - networkswitches
  minioncontroller2:
    - reallydumbdevices
  minioncontroller3:
    - smsgateway

/srv/pillar/networkswitches.sls

proxy:
  dumbdevice1:
    proxytype: networkswitch
    host: 172.23.23.5
    username: root
    passwd: letmein
  dumbdevice2:
    proxytype: networkswitch
    host: 172.23.23.6
    username: root
    passwd: letmein
  dumbdevice3:
    proxytype: networkswitch
    host: 172.23.23.7
    username: root
    passwd: letmein

/srv/pillar/reallydumbdevices.sls

proxy:
  dumbdevice4:
    proxytype: i2c_lightshow
    i2c_address: 1
  dumbdevice5:
    proxytype: i2c_lightshow
    i2c_address: 2
  dumbdevice6:
    proxytype: 433mhz_wireless

/srv/pillar/smsgateway.sls

proxy:
  minioncontroller3:
    dumbdevice7:
      proxytype: sms_serial
      deventry: /dev/tty04

Note that the contents of each minioncontroller key may differ widely based on the type of device that the proxy-minion is managing.

In the above example

  • dumbdevices 1, 2, and 3 are network switches that have a management interface available at a particular IP address.
  • dumbdevices 4 and 5 are very low-level devices controlled over an i2c bus. In this case the devices are physically connected to machine 'minioncontroller2', and are addressable on the i2c bus at their respective i2c addresses.
  • dumbdevice6 is a 433 MHz wireless transmitter, also physically connected to minioncontroller2.
  • dumbdevice7 is an SMS gateway connected to machine minioncontroller3 via a serial port.

Because of the way pillar works, each of the salt-minions that fork off the proxy minions will only see the keys specific to the proxies it will be handling. In other words, from the above example, only minioncontroller1 will see the connection information for dumbdevices 1, 2, and 3. Minioncontroller2 will see configuration data for dumbdevices 4, 5, and 6, and minioncontroller3 will be privy to dumbdevice7.

Also, in general, proxy-minions are lightweight, so the machines that run them could conceivably control a large number of devices. The example above is just to illustrate that it is possible for the proxy services to be spread across many machines if necessary, or intentionally run on machines that need to control devices because of some physical interface (e.g. i2c and serial above). Another reason to divide proxy services might be security. In more secure environments only certain machines may have a network path to certain devices.

Now our salt-minions know if they are supposed to spawn a proxy-minion process to control a particular device. That proxy-minion process will initiate a connection back to the master to enable control.

Proxytypes

A proxytype is a Python class called 'Proxyconn' that encapsulates all the code necessary to interface with a device. Proxytypes are located inside the salt.proxy module. At a minimum a proxytype object must implement the following methods:

proxytype(self): Returns a string with the name of the proxy type.

proxyconn(self, **kwargs): Provides the primary way to connect and communicate with the device. Some proxyconns instantiate a particular object that opens a network connection to a device and leaves the connection open for communication. Others simply abstract a serial connection or even implement endpoints to communicate via REST over HTTP.

id(self, opts): Returns a unique, unchanging id for the controlled device. This is the "name" of the device, and is used by the salt-master for targeting and key authentication.

Optionally, the class may define a shutdown(self, opts) method if the controlled device should be informed when the minion goes away cleanly.

It is highly recommended that the test.ping execution module also be defined for a proxytype. The code for ping should contact the controlled device and make sure it is really available.

Here is an example proxytype used to interface to Juniper Networks devices that run the Junos operating system. Note the additional library requirements--most of the "hard part" of talking to these devices is handled by the jnpr.junos, jnpr.junos.utils, and jnpr.junos.cfg modules.

# Import python libs
import logging
import os

# Import third party libs; set a flag that __virtual__ could check
try:
    import jnpr.junos
    import jnpr.junos.utils
    import jnpr.junos.cfg
    HAS_JUNOS = True
except ImportError:
    HAS_JUNOS = False

class Proxyconn(object):


    def __init__(self, details):
        self.conn = jnpr.junos.Device(user=details['username'], host=details['host'], password=details['passwd'])
        self.conn.open()
        self.conn.bind(cu=jnpr.junos.cfg.Resource)


    def proxytype(self):
        return 'junos'


    def id(self, opts):
        return self.conn.facts['hostname']


    def ping(self):
        return self.conn.connected


    def shutdown(self, opts):

        print('Proxy module {} shutting down!!'.format(opts['id']))
        try:
            self.conn.close()
        except Exception:
            pass

Grains

Grains are data about minions. Most proxied devices will have a paltry amount of data as compared to a typical Linux server. Because proxy-minions are started by a regular minion, they inherit a sizeable number of grain settings which can be useful, especially when targeting (PYTHONPATH, for example).

All proxy minions set a grain called 'proxy'. If it is present, you know the minion is controlling another device. To add more grains to your proxy minion for a particular device, create a file in salt/grains named [proxytype].py and place inside it the different functions that need to be run to collect the data you are interested in.
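
Here is a minimal sketch (the proxytype name and grain values are hypothetical; a real grains module would query the device for its data):

# salt/grains/junos.py -- hypothetical proxy grains sketch

__proxyenabled__ = ['junos']


def location():
    # Each function returns a dict that is merged into the minion's grains
    return {'location': 'datacenter-1'}


def os_family():
    return {'os_family': 'junos'}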

The __proxyenabled__ directive

Salt states and execution modules, by and large, cannot "automatically" work with proxied devices. Execution modules like pkg or sqlite3 have no meaning on a network switch or a housecat. For a state/execution module to be available to a proxy-minion, the __proxyenabled__ variable must be defined in the module as an array containing the names of all the proxytypes that this module can support. The array can contain the special value * to indicate that the module supports all proxies.

If no __proxyenabled__ variable is defined, then by default, the state/execution module is unavailable to any proxy.
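
For example, the declaration might look like this at the top of a module (the proxytype names are hypothetical):

# Support only these two proxytypes:
__proxyenabled__ = ['junos', 'networkswitch']

# Or support all proxytypes:
__proxyenabled__ = ['*']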

Here is an excerpt from a module that was modified to support proxy-minions:

def ping():

    if 'proxyobject' in __opts__:
        if hasattr(__opts__['proxyobject'], 'ping'):
            return __opts__['proxyobject'].ping()
        else:
            return False
    else:
        return True

And then in salt.proxy.junos we find

def ping(self):

    return self.connected

The Junos API layer lacks the ability to do a traditional 'ping', so the example simply checks the connection object field that indicates whether the SSH connection was successfully made to the device.

The RAET Transport

Note

The RAET transport is in very early development; it is functional, but no promises are yet made as to its reliability or security. The encryption used has been audited and our tests show that RAET is reliable. With this said, we are still conducting more security audits and pushing the reliability. This document outlines the encryption used in RAET.

New in version 2014.7.0.

The Reliable Asynchronous Event Transport, or RAET, is an alternative transport medium developed specifically with Salt in mind. It has been developed to allow queuing to happen in the application layer and comes with socket layer encryption. It also abstracts a great deal of control over the socket layer and makes it easy to bubble up errors and exceptions.

RAET also offers very powerful message routing capabilities, allowing for messages to be routed between processes on a single machine all the way up to processes on multiple machines. Messages can also be restricted, allowing processes to be sent messages of specific types only from specific sources, so that trust can be established.

Using RAET in Salt

Using RAET in Salt is easy: the main difference is that the core dependencies change. Instead of pycrypto, M2Crypto, ZeroMQ, and PYZMQ, the packages libsodium, libnacl, ioflo, and raet are required. Encryption is handled very cleanly by libnacl, while the queueing and flow control are handled by ioflo. Distribution packages are forthcoming, but libsodium can be easily installed from source, and many distributions do ship packages for it. The libnacl and ioflo packages can be easily installed from pypi; distribution packages are in the works.

Once the new dependencies are installed, the 2014.7 release or higher of Salt needs to be installed.

Once installed, modify the configuration files for the minion and master to set the transport to raet:

/etc/salt/master:

transport: raet

/etc/salt/minion:

transport: raet

Now start Salt as it would normally be started; the minion will connect to the master and share long term keys, which can then in turn be managed via salt-key. Remote execution and salt states will function in the same way as with Salt over ZeroMQ.

Limitations

The 2014.7 release of RAET is not complete! The Syndic and Multi Master have not been completed yet and these are slated for completion in the 2015.5.0 release.

Also, Salt-Raet allows for more control over the client, but these hooks have not been implemented yet; therefore the client still uses the same system as the ZeroMQ client. This means that the extra reliability that RAET exposes has not yet been implemented in the CLI client.

Why?

Customer and User Request

Why make an alternative transport for Salt? There are many reasons, but the primary motivation came from customer requests. Many large companies came with requests to run Salt over an alternative transport; the reasoning was varied, from performance and scaling improvements to licensing concerns. These customers have partnered with SaltStack to make RAET a reality.

More Capabilities

RAET has been designed to allow Salt to have greater communication capabilities. It has been designed to allow for development into features which our ZeroMQ topologies can't match.

Many of the proposed features are still under development and will be announced as they enter proof of concept phases, but these features include salt-fuse (a filesystem over salt), salt-vt (a parallel, API-driven shell over the salt transport), and many others.

RAET Reliability

RAET is reliable, hence the name (Reliable Asynchronous Event Transport).

The concern posed by some over RAET reliability is based on the fact that RAET uses UDP instead of TCP, and UDP does not have built-in reliability.

RAET itself implements the needed reliability layers that are not natively present in UDP; this allows RAET to dynamically optimize packet delivery in a way that keeps it both reliable and asynchronous.

RAET and ZeroMQ

When using RAET, ZeroMQ is not required; RAET is a complete networking replacement. It is noteworthy that RAET is not a ZeroMQ replacement in a general sense; the ZeroMQ constructs are not reproduced in RAET, but are instead implemented in such a way that is specific to Salt's needs.

RAET is primarily an async communication layer over truly async connections, defaulting to UDP. ZeroMQ is over TCP and abstracts async constructs within the socket layer.

Salt is not dropping ZeroMQ support and has no immediate plans to do so.

Encryption

RAET uses Dan Bernstein's NACL encryption libraries and CurveCP handshake. The libnacl python binding binds to both libsodium and tweetnacl to execute the underlying cryptography. This allows us to completely rely on an externally developed cryptography system.

For more information on libsodium and CurveCP please see: http://doc.libsodium.org/ http://curvecp.org/

Programming Intro

Raet Programming Introduction

Windows Software Repository

The Salt Windows Software Repository provides a package manager and software repository similar to what is provided by yum and apt on Linux.

It permits the installation of software using the installers on remote Windows machines. In many senses, the operation is similar to that of the other package managers Salt is aware of:

  • the pkg.installed and similar states work on Windows (see the example after this list).
  • the pkg.install and similar module functions work on Windows.
  • each Windows machine needs to have pkg.refresh_db executed against it to pick up the latest version of the package database.
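
For instance, a minimal state using the Firefox definition shown later in this document might look like this (a sketch; the state id is hypothetical):

install_firefox:
  pkg.installed:
    - name: Firefox
    - version: 17.0.1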

High level differences to yum and apt are:

  • The repository metadata (sls files) is hosted through either salt or git.
  • Packages can be downloaded from within the salt repository, a git repository, or from HTTP(S) or FTP URLs.
  • No dependencies are managed. Dependencies between packages need to be managed manually.

Operation

The install state/module function of the Windows package manager works roughly as follows:

  1. Execute pkg.list_pkgs and store the result
  2. Check if any action needs to be taken. (i.e. compare required package and version against pkg.list_pkgs results)
  3. If so, run the installer command.
  4. Execute pkg.list_pkgs and compare to the result stored from before installation.
  5. Success/Failure/Changes will be reported based on the differences between the original and final pkg.list_pkgs results.

If there are any problems in using the package manager, it is likely due to the data in your sls files not matching the difference between the pre and post pkg.list_pkgs results.

Usage

By default, the Windows software repository is found at /srv/salt/win/repo. This can be changed in the master config file (default location is /etc/salt/master) by modifying the win_repo variable. Each piece of software should have its own directory which contains the installers and a package definition file. This package definition file is a YAML file named init.sls.
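
For example, to relocate the repository, one might set (the path is hypothetical):

# /etc/salt/master
win_repo: /srv/salt/windows/repo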

The package definition file should look similar to this example for Firefox: /srv/salt/win/repo/firefox/init.sls

Firefox:
  17.0.1:
    installer: 'salt://win/repo/firefox/English/Firefox Setup 17.0.1.exe'
    full_name: Mozilla Firefox 17.0.1 (x86 en-US)
    locale: en_US
    reboot: False
    install_flags: ' -ms'
    uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
    uninstall_flags: ' /S'
  16.0.2:
    installer: 'salt://win/repo/firefox/English/Firefox Setup 16.0.2.exe'
    full_name: Mozilla Firefox 16.0.2 (x86 en-US)
    locale: en_US
    reboot: False
    install_flags: ' -ms'
    uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
    uninstall_flags: ' /S'
  15.0.1:
    installer: 'salt://win/repo/firefox/English/Firefox Setup 15.0.1.exe'
    full_name: Mozilla Firefox 15.0.1 (x86 en-US)
    locale: en_US
    reboot: False
    install_flags: ' -ms'
    uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
    uninstall_flags: ' /S'

More examples can be found here: https://github.com/saltstack/salt-winrepo

The version number and full_name need to match the output from pkg.list_pkgs so that the status can be verified when running highstate. Note: it is still possible to successfully install packages using pkg.install even if they don't match, which can make this hard to troubleshoot.

salt 'test-2008' pkg.list_pkgs
test-2008:
    ----------
    7-Zip 9.20 (x64 edition):
        9.20.00.0
    Microsoft .NET Framework 4 Client Profile:
        4.0.30319,4.0.30319
    Microsoft .NET Framework 4 Extended:
        4.0.30319,4.0.30319
    Microsoft Visual C++ 2008 Redistributable - x64 9.0.21022:
        9.0.21022
    Mozilla Firefox 17.0.1 (x86 en-US):
        17.0.1
    Mozilla Maintenance Service:
        17.0.1
    NSClient++ (x64):
        0.3.8.76
    Notepad++:
        6.4.2
    Salt Minion 0.16.0:
        0.16.0

If any of these preinstalled packages already exist in winrepo, their full_name entries will be automatically renamed to their package names during the next update (running highstate or installing another package).

test-2008:
    ----------
    7zip:
        9.20.00.0
    Microsoft .NET Framework 4 Client Profile:
        4.0.30319,4.0.30319
    Microsoft .NET Framework 4 Extended:
        4.0.30319,4.0.30319
    Microsoft Visual C++ 2008 Redistributable - x64 9.0.21022:
        9.0.21022
    Mozilla Maintenance Service:
        17.0.1
    Notepad++:
        6.4.2
    Salt Minion 0.16.0:
        0.16.0
    firefox:
        17.0.1
    nsclient:
        0.3.9.328

Add msiexec: True if using an MSI installer requiring the use of msiexec /i to install and msiexec /x to uninstall.

The install_flags and uninstall_flags are flags passed to the software installer to cause it to perform a silent install. These can often be found by adding /? or /h when running the installer from the command line. A great resource for finding these silent install flags is the WPKG project's wiki. For example, here is a definition that uses an MSI installer:

7zip:
  9.20.00.0:
    installer: salt://win/repo/7zip/7z920-x64.msi
    full_name: 7-Zip 9.20 (x64 edition)
    reboot: False
    install_flags: ' /q '
    msiexec: True
    uninstaller: salt://win/repo/7zip/7z920-x64.msi
    uninstall_flags: ' /qn'

Add cache_dir: True when the installer requires multiple source files. The directory containing the installer file will be recursively cached on the minion. Only applies to salt: installer URLs.

sqlexpress:
  12.0.2000.8:
    installer: 'salt://win/repo/sqlexpress/setup.exe'
    full_name: Microsoft SQL Server 2014 Setup (English)
    reboot: False
    install_flags: ' /ACTION=install /IACCEPTSQLSERVERLICENSETERMS /Q'
    cache_dir: True

Generate Repo Cache File

Once the sls file has been created, generate the repository cache file with the winrepo runner:

salt-run winrepo.genrepo

Then update the repository cache file on your minions, exactly how it's done for the Linux package managers:

salt '*' pkg.refresh_db

Install Windows Software

Now you can query the available version of Firefox using the Salt pkg module.

salt '*' pkg.available_version Firefox

{'Firefox': {'15.0.1': 'Mozilla Firefox 15.0.1 (x86 en-US)',
                 '16.0.2': 'Mozilla Firefox 16.0.2 (x86 en-US)',
                 '17.0.1': 'Mozilla Firefox 17.0.1 (x86 en-US)'}}

As you can see, there are three versions of Firefox available for installation. You can refer to a software package by its name or by its full_name surrounded by single quotes.

salt '*' pkg.install 'Firefox'

The above line will install the latest version of Firefox.

salt '*' pkg.install 'Firefox' version=16.0.2

The above line will install version 16.0.2 of Firefox.

If a different version of the package is already installed it will be replaced with the version in winrepo (only if the package itself supports live updating).

You can also specify the full name:

salt '*' pkg.install 'Mozilla Firefox 17.0.1 (x86 en-US)'

Uninstall Windows Software

Uninstall software using the pkg module:

salt '*' pkg.remove 'Firefox'

salt '*' pkg.purge 'Firefox'

pkg.purge just executes pkg.remove on Windows. At some point in the future pkg.purge may direct the installer to remove all configs and settings for software packages that support that option.

Standalone Minion Salt Windows Repo Module

In order to facilitate managing a Salt Windows software repo with Salt on a Standalone Minion on Windows, a new module named winrepo has been added to Salt. winrepo matches what is available in the salt runner and allows you to manage the Windows software repo contents. Example: salt '*' winrepo.genrepo

Git Hosted Repo

Windows software package definitions can also be hosted in one or more git repositories. The default repo is one hosted on GitHub.com by SaltStack, Inc., which includes package definitions for open source software. This repo points to the HTTP or FTP locations of the installer files. Anyone is welcome to send a pull request to this repo to add new package definitions. Browse the repo here: https://github.com/saltstack/salt-winrepo

Configure which git repos the master can search for package definitions by modifying or extending the win_gitrepos configuration option list in the master config.
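
For example (the second URL is hypothetical):

# /etc/salt/master
win_gitrepos:
  - 'https://github.com/saltstack/salt-winrepo.git'
  - 'https://github.com/mycompany/my-winrepo.git'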

Check out each git repo in win_gitrepos, compile your package repository cache, and then refresh each minion's package cache:

salt-run winrepo.update_git_repos
salt-run winrepo.genrepo
salt '*' pkg.refresh_db

Troubleshooting

Incorrect name/version

If the package seems to install properly but salt reports a failure, then it is likely you have a version or full_name mismatch.

Check the exact full_name and version used by the package. Use pkg.list_pkgs to check that the names and version exactly match what is installed.

Changes to sls files not being picked up

Ensure you have (re)generated the repository cache file and then updated the repository cache on the relevant minions:

salt-run winrepo.genrepo
salt 'MINION' pkg.refresh_db

Packages management under Windows 2003

On Windows Server 2003, you need to install the optional Windows component "WMI Windows Installer Provider" to get a full list of installed packages. Without it, salt-minion can't report some installed software.

Windows-specific Behaviour

Salt is capable of managing Windows systems; however, due to various differences between the operating systems, there are some things you need to keep in mind.

This document will contain any quirks that apply across Salt or generally across multiple module functions. Any Windows-specific behavior for particular module functions will be documented in the module function documentation. Therefore this document should be read in conjunction with the module function documentation.

Group parameter for files

Salt was originally written for managing Unix-based systems, and therefore the file module functions were designed around that security model. Rather than trying to shoehorn that model onto Windows, Salt ignores these parameters and makes non-applicable module functions unavailable instead.

One of the commonly ignored parameters is the group parameter for managing files. Under Windows, while files do have a 'primary group' property, this is rarely used. It generally has no bearing on permissions unless intentionally configured and is most commonly used to provide Unix compatibility (e.g. Services For Unix, NFS services).

Because of this, any file module functions that typically require a group do not under Windows. Attempts to directly use file module functions that operate on the group (e.g. file.chgrp) will return a pseudo-value and cause a log message to appear. No group parameters will be acted on.

If you do want to access and change the 'primary group' property and understand the implications, use the file.get_pgid or file.get_pgroup functions or the pgroup parameter on the file.chown module function.
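
For example, from the master (the minion id and path are hypothetical):

salt 'winminion' file.get_pgroup 'C:\Temp\example.txt'
salt 'winminion' file.chown 'C:\Temp\example.txt' 'Administrator' pgroup="'None'"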

Dealing with case-insensitive but case-preserving names

Windows is case-insensitive but case-preserving: it keeps the case of names, and it is this preserved form that is returned from system functions. This causes some issues with Salt because it assumes case-sensitive names. These issues generally occur in the state functions and can cause bizarre-looking errors.

To avoid such issues, always pretend Windows is case-sensitive and use the right case for names, e.g. specify user=Administrator instead of user=administrator.

Follow issue 11801 for any changes to this behavior.

Dealing with various username forms

Salt does not understand the various forms that Windows usernames can come in; e.g. username, mydomain\username, and username@mydomain.tld can all refer to the same user. In fact, Salt generally only considers the raw username value, i.e. the username without the domain or host information.

Using these alternative forms will likely confuse Salt and cause odd errors to happen. Use only the raw username value in the correct case to avoid problems.

Follow issue 11801 for any changes to this behavior.

Specifying the None group

Each Windows system has a built-in _None_ group. This is the default 'primary group' for files for users not in a domain environment.

Unfortunately, the word _None_ has special meaning in Python - it is a special value indicating 'nothing', similar to null or nil in other languages.

To specify the None group, it must be specified in quotes, e.g. salt '*' file.chpgrp C:\path\to\file "'None'".

Modifying security properties (ACLs) on files

There is no support in Salt for modifying ACLs, and therefore no support for changing file permissions, besides modifying the owner/user.

Salt Cloud

Getting Started

Install Salt Cloud

Salt Cloud is now part of Salt proper. It was merged in as of Salt version 2014.1.0.

On Ubuntu, install Salt Cloud by using the following commands:

sudo add-apt-repository ppa:saltstack/salt
sudo apt-get update
sudo apt-get install salt-cloud

If using Salt Cloud on OS X, curl-ca-bundle must be installed. Presently, this package is not available via brew, but it is available using MacPorts:

sudo port install curl-ca-bundle

Salt Cloud depends on apache-libcloud. Libcloud can be installed via pip with pip install apache-libcloud.

Installing Salt Cloud for development

Installing Salt for development enables Salt Cloud development as well; just make sure apache-libcloud is installed as per the above paragraph.

See these instructions: Installing Salt for development.

Using Salt Cloud

Salt Cloud basic usage

Salt Cloud needs at least one configured provider and profile to be functional.

Creating a VM

To create a VM with salt cloud, use the following command:

salt-cloud -p <profile> name_of_vm

Assuming there is a profile configured as follows:

fedora_rackspace:
    provider: rackspace
    image: Fedora 17
    size: 256 server
    script: bootstrap-salt

Then, the command to create a new VM named fedora_http_01 is:

salt-cloud -p fedora_rackspace fedora_http_01

Destroying a VM

To destroy a VM that was created by salt-cloud, use the following command:

salt-cloud -d name_of_vm

For example, to delete the VM created in the above example, use:

salt-cloud -d fedora_http_01

VM Profiles

Salt Cloud designates virtual machines inside the profile configuration file. The profile configuration file defaults to /etc/salt/cloud.profiles and is a YAML configuration. The syntax for declaring profiles is simple:

fedora_rackspace:
    provider: rackspace
    image: Fedora 17
    size: 256 server
    script: bootstrap-salt

It should be noted that the script option defaults to bootstrap-salt, and does not normally need to be specified. Further examples in this document will not show the script option.

A few key pieces of information need to be declared and can change based on the public cloud provider. A number of additional parameters can also be inserted:

centos_rackspace:
    provider: rackspace
    image: CentOS 6.2
    size: 1024 server
    minion:
        master: salt.example.com
        append_domain: webs.example.com
        grains:
            role: webserver

The image must be selected from available images. Similarly, sizes must be selected from the list of sizes. To get a list of available images and sizes, use the following commands:

salt-cloud --list-images openstack
salt-cloud --list-sizes openstack

Some parameters can be specified in the main Salt cloud configuration file and then are applied to all cloud profiles. For instance if only a single cloud provider is being used then the provider option can be declared in the Salt cloud configuration file.

Multiple Configuration Files

In addition to /etc/salt/cloud.profiles, profiles can also be specified in any file matching cloud.profiles.d/*.conf, which is a sub-directory relative to the profiles configuration file (with the above configuration file as an example, /etc/salt/cloud.profiles.d/*.conf). This allows for more extensible configuration, and plays nicely with various configuration management tools as well as version control systems.
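
For example, the profile above could equally live in a file such as this (the filename is hypothetical):

# /etc/salt/cloud.profiles.d/rackspace.conf
fedora_rackspace:
    provider: rackspace
    image: Fedora 17
    size: 256 server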

Larger Example

rhel_ec2:
    provider: ec2
    provider: ec2
    image: ami-e565ba8c
    size: t1.micro
    minion:
        cheese: edam

ubuntu_ec2:
    provider: ec2
    image: ami-7e2da54e
    size: t1.micro
    minion:
        cheese: edam

ubuntu_rackspace:
    provider: rackspace
    image: Ubuntu 12.04 LTS
    size: 256 server
    minion:
        cheese: edam

fedora_rackspace:
    provider: rackspace
    image: Fedora 17
    size: 256 server
    minion:
        cheese: edam

cent_linode:
    provider: linode
    image: CentOS 6.2 64bit
    size: Linode 512

cent_gogrid:
    provider: gogrid
    image: 12834
    size: 512MB

cent_joyent:
    provider: joyent
    image: centos-6
    size: Small 1GB

Cloud Map File

A number of options exist when creating virtual machines. They can be managed directly from profiles and the command line execution, or a more complex map file can be created. The map file allows for a number of virtual machines to be created and associated with specific profiles.

Map files have a simple format: specify a profile and then a list of virtual machines to make from said profile:

fedora_small:
  - web1
  - web2
  - web3
fedora_high:
  - redis1
  - redis2
  - redis3
cent_high:
  - riak1
  - riak2
  - riak3

This map file can then be called to roll out all of these virtual machines. Map files are called from the salt-cloud command with the -m option:

$ salt-cloud -m /path/to/mapfile

Remember that, as with direct profile provisioning, the -P option can be passed to create the virtual machines in parallel:

$ salt-cloud -m /path/to/mapfile -P

A map file can also be used to enforce the total state of a cloud deployment by using the --hard option. When using the hard option, any VMs that exist but are not specified in the map file will be destroyed:

$ salt-cloud -m /path/to/mapfile -P -H

Be careful with this argument; it is very dangerous! In fact, it is so dangerous that in order to use it, you must explicitly enable it in the main configuration file.

enable_hard_maps: True

A map file can include grains and minion configuration options:

fedora_small:
  - web1:
      minion:
        log_level: debug
      grains:
        cheese: tasty
        omelet: du fromage
  - web2:
      minion:
        log_level: warn
      grains:
        cheese: more tasty
        omelet: with peppers

A map file may also be used with the various query options:

$ salt-cloud -m /path/to/mapfile -Q
{'ec2': {'web1': {'id': 'i-e6aqfegb',
                     'image': None,
                     'private_ips': [],
                     'public_ips': [],
                     'size': None,
                     'state': 0}},
         'web2': {'Absent'}}

...or with the delete option:

$ salt-cloud -m /path/to/mapfile -d
The following virtual machines are set to be destroyed:
  web1
  web2

Proceed? [N/y]

Warning

Specifying Nodes with Maps on the Command Line

Specifying the name of a node or nodes with the maps options on the command line is not supported. This is especially important to remember when using --destroy with maps; salt-cloud will ignore any arguments passed in which are not directly relevant to the map file. When using --destroy with a map, every node in the map file will be deleted! Maps don't provide any useful information for destroying individual nodes, and should not be used to destroy a subset of a map.

Setting up New Salt Masters

Bootstrapping a new master in the map is as simple as:

fedora_small:
  - web1:
      make_master: True
  - web2
  - web3

Notice that ALL bootstrapped minions from the map will answer to the newly created salt-master.

To make any of the bootstrapped minions answer to the bootstrapping salt-master instead of the newly created salt-master, configure them as in this example:

fedora_small:
  - web1:
      make_master: True
      minion:
        master: <the local master ip address>
        local_master: True
  - web2
  - web3

The above says that the minion running on the newly created salt-master responds to the local master, i.e. the master used to bootstrap these VMs.

Another example:

fedora_small:
  - web1:
      make_master: True
  - web2
  - web3:
      minion:
        master: <the local master ip address>
        local_master: True

The above example makes the web3 minion answer to the local master, not the newly created master.

Cloud Actions

Once a VM has been created, there are a number of actions that can be performed on it. The "reboot" action can be used across all providers, but all other actions are specific to the cloud provider. In order to perform an action, you may specify it from the command line, including the name(s) of the VM to perform the action on:

$ salt-cloud -a reboot vm_name
$ salt-cloud -a reboot vm1 vm2 vm3

Or you may specify a map which includes all VMs to perform the action on:

$ salt-cloud -a reboot -m /path/to/mapfile

The following is a list of actions currently supported by salt-cloud:

all providers:
    - reboot
ec2:
    - start
    - stop
joyent:
    - stop

Another useful reference for viewing more salt-cloud actions is the Salt Cloud Feature Matrix.

Cloud Functions

Cloud functions work much the same way as cloud actions, except that they don't perform an operation on a specific instance, and so do not need a machine name to be specified. However, since they perform an operation on a specific cloud provider, that provider must be specified.

$ salt-cloud -f show_image ec2 image=ami-fd20ad94

There are three universal salt-cloud functions that are extremely useful for gathering information about instances on a provider basis:

  • list_nodes: Returns some general information about the instances for the given provider.
  • list_nodes_full: Returns all information about the instances for the given provider.
  • list_nodes_select: Returns select information about the instances for the given provider.

$ salt-cloud -f list_nodes linode
$ salt-cloud -f list_nodes_full linode
$ salt-cloud -f list_nodes_select linode

Another useful reference for viewing salt-cloud functions is the Salt Cloud Feature Matrix.

Core Configuration

A number of core configuration options and some options that are global to the VM profiles can be set in the cloud configuration file. By default this file is located at /etc/salt/cloud.

Thread Pool Size

When salt cloud is operating in parallel mode via the -P argument, you can control the thread pool size by specifying the pool_size parameter with a positive integer value.

By default, the thread pool size will be set to the number of VMs that salt cloud is operating on.

pool_size: 10

Minion Configuration

The default minion configuration is set up in this file. Minions created by salt-cloud derive their configuration from this file. Almost all parameters found in Configuring the Salt Minion can be used here.

minion:
  master: saltmaster.example.com

In particular, this is the place to specify the location of the salt master and its listening port, if the port is not set to the default.
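
For example, to point newly created minions at a master listening on a non-default port (the values are hypothetical):

minion:
  master: saltmaster.example.com
  master_port: 4507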

Cloud Configuration Syntax

The data specific to interacting with public clouds is set up here.

Cloud provider configuration syntax can live in several places. The first is in /etc/salt/cloud:

# /etc/salt/cloud
providers:
  my-aws-migrated-config:
    id: HJGRYCILJLKJYG
    key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
    keyname: test
    securitygroup: quick-start
    private_key: /root/test.pem
    provider: aws

Cloud provider configuration data can also be housed in /etc/salt/cloud.providers or any file matching /etc/salt/cloud.providers.d/*.conf. All files in any of these locations will be parsed for cloud provider data.

Using the example configuration above:

# /etc/salt/cloud.providers
# or could be /etc/salt/cloud.providers.d/*.conf
my-aws-config:
  id: HJGRYCILJLKJYG
  key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
  keyname: test
  securitygroup: quick-start
  private_key: /root/test.pem
  provider: aws

Note

Salt Cloud provider configurations within /etc/salt/cloud.providers.d/ should not specify the providers starting key.

It is also possible to have multiple cloud configuration blocks within the same alias block. For example:

production-config:
  - id: HJGRYCILJLKJYG
    key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
    keyname: test
    securitygroup: quick-start
    private_key: /root/test.pem
    provider: aws

  - user: example_user
    apikey: 123984bjjas87034
    provider: rackspace

However, using this configuration method requires a change in profile configuration blocks. The provider alias needs to have the provider key value appended, as in the following example:

rhel_aws_dev:
  provider: production-config:aws
  image: ami-e565ba8c
  size: t1.micro

rhel_aws_prod:
  provider: production-config:aws
  image: ami-e565ba8c
  size: High-CPU Extra Large Instance

database_prod:
  provider: production-config:rackspace
  image: Ubuntu 12.04 LTS
  size: 256 server

Notice that because of the multiple entries, one has to be explicit about the provider alias and name, from the above example, production-config:aws.

This data interacts with the salt-cloud binary's --list-locations, --list-images, and --list-sizes options, each of which needs a cloud provider as an argument. The argument used should be the configured cloud provider alias. If the provider alias has multiple entries, <provider-alias>:<provider-name> should be used.
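
For example, using the multiple-entry alias from above:

salt-cloud --list-images production-config:aws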

To allow for a more extensible configuration, --providers-config, which defaults to /etc/salt/cloud.providers, was added to the cli parser. It allows for the providers' configuration to be added on a per-file basis.

Pillar Configuration

It is possible to configure cloud providers using pillars. This is only used when inside the cloud module. You can set up a variable called cloud that contains your profile and provider, to pass that information to the cloud servers instead of having to copy the full configuration to every minion. In your pillar file, you would use something like this:

cloud:
  ssh_key_name: saltstack
  ssh_key_file: /root/.ssh/id_rsa
  update_cachedir: True
  diff_cache_events: True
  change_password: True

  providers:
    my-nova:
      identity_url: https://identity.api.rackspacecloud.com/v2.0/
      compute_region: IAD
      user: myuser
      api_key: apikey
      tenant: 123456
      provider: nova

    my-openstack:
      identity_url: https://identity.api.rackspacecloud.com/v2.0/tokens
      user: user2
      apikey: apikey2
      tenant: 654321
      compute_region: DFW
      provider: openstack
      compute_name: cloudServersOpenStack

  profiles:
    ubuntu-nova:
      provider: my-nova
      size: performance1-8
      image: bb02b1a3-bc77-4d17-ab5b-421d89850fca
      script_args: git develop

    ubuntu-openstack:
      provider: my-openstack
      size: performance1-8
      image: bb02b1a3-bc77-4d17-ab5b-421d89850fca
      script_args: git develop

Cloud Configurations

Scaleway

To use Salt Cloud with Scaleway, you need to get an access key and an API token. API tokens are unique identifiers associated with your Scaleway account. To retrieve your access key and API token, log in to the Scaleway control panel, open the pull-down menu on your account name, and click on the "My Credentials" link.

If you do not have an API token, you can create one by clicking the "Create New Token" button in the right corner.

my-scaleway-config:
  access_key: 15cf404d-4560-41b1-9a0c-21c3d5c4ff1f
  token: a7347ec8-5de1-4024-a5e3-24b77d1ba91d
  provider: scaleway

Note

In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-scaleway-config.

Rackspace

Rackspace cloud requires two configuration options: a user and an apikey:

my-rackspace-config:
  user: example_user
  apikey: 123984bjjas87034
  provider: rackspace

Note

In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-rackspace-config.

Amazon AWS

A number of configuration options are required for Amazon AWS, including id, key, keyname, securitygroup, and private_key:

my-aws-quick-start:
  id: HJGRYCILJLKJYG
  key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
  keyname: test
  securitygroup: quick-start
  private_key: /root/test.pem
  provider: aws

my-aws-default:
  id: HJGRYCILJLKJYG
  key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
  keyname: test
  securitygroup: default
  private_key: /root/test.pem
  provider: aws

Note

In the cloud profile that uses this provider configuration, the syntax for the provider required field would be either provider: my-aws-quick-start or provider: my-aws-default.

Linode

Linode requires a single API key, but the default root password also needs to be set:

my-linode-config:
  apikey: asldkgfakl;sdfjsjaslfjaklsdjf;askldjfaaklsjdfhasldsadfghdkf
  password: F00barbaz
  ssh_pubkey: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKHEOLLbeXgaqRQT9NBAopVz366SdYc0KKX33vAnq+2R user@host
  ssh_key_file: ~/.ssh/id_ed25519
  provider: linode

The password needs to be 8 characters long and contain lowercase letters, uppercase letters, and numbers.

Note

In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-linode-config.

Joyent Cloud

The Joyent cloud requires three configuration parameters: the username and password that are used to log into the Joyent system, and the location of the private SSH key associated with the Joyent account. The SSH key is needed to send the provisioning commands up to the freshly created virtual machine.

my-joyent-config:
  user: fred
  password: saltybacon
  private_key: /root/joyent.pem
  provider: joyent

Note

In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-joyent-config.

GoGrid

To use Salt Cloud with GoGrid, log into the GoGrid web interface and create an API key. Do this by clicking on "My Account" and then going to the API Keys tab.

The apikey and the sharedsecret configuration parameters need to be set in the configuration file to enable interfacing with GoGrid:

my-gogrid-config:
  apikey: asdff7896asdh789
  sharedsecret: saltybacon
  provider: gogrid

Note

In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-gogrid-config.

OpenStack

OpenStack configuration differs between providers, and at the moment several options need to be specified. This module has been officially tested against the HP and the Rackspace implementations, and some examples are provided for both.

# For HP
my-openstack-hp-config:
  identity_url: 'https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/'
  compute_name: Compute
  compute_region: 'az-1.region-a.geo-1'
  tenant: myuser-tenant1
  user: myuser
  ssh_key_name: mykey
  ssh_key_file: '/etc/salt/hpcloud/mykey.pem'
  password: mypass
  provider: openstack

# For Rackspace
my-openstack-rackspace-config:
  identity_url: 'https://identity.api.rackspacecloud.com/v2.0/tokens'
  compute_name: cloudServersOpenStack
  protocol: ipv4
  compute_region: DFW
  user: myuser
  tenant: 5555555
  password: mypass
  provider: openstack

If you have an API key for your provider, it may be specified instead of a password:

my-openstack-hp-config:
  apikey: 901d3f579h23c8v73q9

my-openstack-rackspace-config:
  apikey: 901d3f579h23c8v73q9

Note

In the cloud profile that uses this provider configuration, the syntax for the provider required field would be either provider: my-openstack-hp-config or provider: my-openstack-rackspace-config.

You will certainly need to configure the user, tenant, and either password or apikey.

If your OpenStack instances only have private IP addresses, and a CIDR range of private addresses is not reachable from the salt-master, you can configure Salt to ignore that range:

my-openstack-config:
  ignore_cidr: 192.168.0.0/16

For an in-house OpenStack Essex installation, libcloud needs the service_type:

my-openstack-config:
  identity_url: 'http://control.openstack.example.org:5000/v2.0/'
  compute_name: Compute Service
  service_type: compute
DigitalOcean

Using Salt for DigitalOcean requires a personal_access_token, as shown in the example below. This can be found in the DigitalOcean web interface, in the "Apps & API" section.

my-digitalocean-config:
  provider: digital_ocean
  personal_access_token: xxx
  location: New York 1

Note

In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-digitalocean-config.

Parallels

Using Salt with Parallels requires a user, password and URL. These can be obtained from your cloud provider.

my-parallels-config:
  user: myuser
  password: xyzzy
  url: https://api.cloud.xmission.com:4465/paci/v1.0/
  provider: parallels

Note

In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-parallels-config.

Proxmox

Using Salt with Proxmox requires a user, password, and URL. These can be obtained from your cloud provider. Both PAM and PVE users can be used.

my-proxmox-config:
  provider: proxmox
  user: saltcloud@pve
  password: xyzzy
  url: your.proxmox.host

Note

In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-proxmox-config.

LXC

The lxc driver uses Saltify to install Salt and attach the LXC container as a new LXC minion. Under the hood, the container is managed over SSH, just as Saltify manages bare-metal machines. You can also destroy those containers via this driver.

devhost10-lxc:
  target: devhost10
  provider: lxc

And in the map file:

devhost10-lxc:
  provider: devhost10-lxc
  from_container: ubuntu
  backing: lvm
  sudo: True
  size: 3g
  ip: 10.0.3.9
  minion:
    master: 10.5.0.1
    master_port: 4506
  lxc_conf:
    - lxc.utsname: superlxc

Note

In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: devhost10-lxc.

Saltify

The Saltify driver is a new, experimental driver for installing Salt on existing machines (virtual or bare metal). Because it does not use an actual cloud provider, it needs no configuration in the main cloud config file. However, it does still require a profile to be set up, and is most useful when used inside a map file. The key parameters to be set are ssh_host, ssh_username, and either ssh_keyfile or ssh_password. These may all be set in either the profile or the map. An example configuration might use the following in cloud.profiles:

make_salty:
  provider: saltify

And in the map file:

make_salty:
  - myinstance:
    ssh_host: 54.262.11.38
    ssh_username: ubuntu
    ssh_keyfile: '/etc/salt/mysshkey.pem'
    sudo: True

Note

In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: make_salty.

Extending Profiles and Cloud Providers Configuration

As of 0.8.7, the option to extend both the profiles and cloud providers configuration and avoid duplication was added. The extends feature works on the current profiles configuration but, regarding the cloud providers configuration, only works with the new syntax and its respective configuration files, i.e. /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/*.conf.

Note

Extending cloud profiles and providers is not recursive. For example, a profile that is extended by a second profile is possible, but the second profile cannot be extended by a third profile.

Also, if a profile (or provider) is extending another profile and each contains a list of values, the lists from the extending profile will override the list from the original profile. The lists are not merged together.
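
A minimal sketch of that override behavior, using hypothetical profile names:

base-dev:
  provider: my-ec2-config
  securitygroup:
    - default
    - web

locked-down:
  extends: base-dev
  # The list below REPLACES the inherited one entirely; the resulting
  # profile contains only ssh-only, not default or web.
  securitygroup:
    - ssh-only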

Extending Profiles

Here is some example usage of extends with profiles. Consider /etc/salt/cloud.profiles containing:

development-instances:
  provider: my-ec2-config
  size: t1.micro
  ssh_username: ec2_user
  securitygroup:
    - default
  deploy: False

Amazon-Linux-AMI-2012.09-64bit:
  image: ami-54cf5c3d
  extends: development-instances

Fedora-17:
  image: ami-08d97e61
  extends: development-instances

CentOS-5:
  provider: my-aws-config
  image: ami-09b61d60
  extends: development-instances

The above configuration, once parsed, would generate the following profiles data:

[{'deploy': False,
  'image': 'ami-08d97e61',
  'profile': 'Fedora-17',
  'provider': 'my-ec2-config',
  'securitygroup': ['default'],
  'size': 't1.micro',
  'ssh_username': 'ec2_user'},
 {'deploy': False,
  'image': 'ami-09b61d60',
  'profile': 'CentOS-5',
  'provider': 'my-aws-config',
  'securitygroup': ['default'],
  'size': 't1.micro',
  'ssh_username': 'ec2_user'},
 {'deploy': False,
  'image': 'ami-54cf5c3d',
  'profile': 'Amazon-Linux-AMI-2012.09-64bit',
  'provider': 'my-ec2-config',
  'securitygroup': ['default'],
  'size': 't1.micro',
  'ssh_username': 'ec2_user'},
 {'deploy': False,
  'profile': 'development-instances',
  'provider': 'my-ec2-config',
  'securitygroup': ['default'],
  'size': 't1.micro',
  'ssh_username': 'ec2_user'}]

Pretty cool, right?

Extending Providers

Here is some example usage of extends within the cloud providers configuration. Consider /etc/salt/cloud.providers containing:

my-develop-envs:
  - id: HJGRYCILJLKJYG
    key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
    keyname: test
    securitygroup: quick-start
    private_key: /root/test.pem
    location: ap-southeast-1
    availability_zone: ap-southeast-1b
    provider: aws

  - user: myuser@mycorp.com
    password: mypass
    ssh_key_name: mykey
    ssh_key_file: '/etc/salt/ibm/mykey.pem'
    location: Raleigh
    provider: ibmsce


my-productions-envs:
  - extends: my-develop-envs:ibmsce
    user: my-production-user@mycorp.com
    location: us-east-1
    availability_zone: us-east-1

The above configuration, once parsed, would generate the following providers data:

'providers': {
    'my-develop-envs': [
        {'availability_zone': 'ap-southeast-1b',
         'id': 'HJGRYCILJLKJYG',
         'key': 'kdjgfsgm;woormgl/aserigjksjdhasdfgn',
         'keyname': 'test',
         'location': 'ap-southeast-1',
         'private_key': '/root/test.pem',
         'provider': 'aws',
         'securitygroup': 'quick-start'
        },
        {'location': 'Raleigh',
         'password': 'mypass',
         'provider': 'ibmsce',
         'ssh_key_file': '/etc/salt/ibm/mykey.pem',
         'ssh_key_name': 'mykey',
         'user': 'myuser@mycorp.com'
        }
    ],
    'my-productions-envs': [
        {'availability_zone': 'us-east-1',
         'location': 'us-east-1',
         'password': 'mypass',
         'provider': 'ibmsce',
         'ssh_key_file': '/etc/salt/ibm/mykey.pem',
         'ssh_key_name': 'mykey',
         'user': 'my-production-user@mycorp.com'
        }
    ]
}

Windows Configuration

Spinning up Windows Minions

It is possible to use Salt Cloud to spin up Windows instances, and then install Salt on them. This functionality is available on all cloud providers that are supported by Salt Cloud. However, it may not necessarily be available on all Windows images.

Requirements

Salt Cloud makes use of impacket and winexe to set up the Windows Salt Minion installer.

impacket is usually available as either the impacket or the python-impacket package, depending on the distribution. More information on impacket can be found at the project home.

winexe is less commonly available in distribution-specific repositories. However, it is currently being built for various distributions in third-party channels.

Optionally, WinRM can be used instead of winexe if the Python module pywinrm is available and WinRM is supported on the target Windows version. Information on pywinrm can be found at the project home.

Additionally, a copy of the Salt Minion Windows installer must be present on the system on which Salt Cloud is running. This installer may be downloaded from saltstack.com.

Firewall Settings

Because Salt Cloud makes use of smbclient and winexe, port 445 must be open on the target image. This port is not generally open by default on a standard Windows distribution, and care must be taken to use an image in which this port is open, or in which the Windows firewall is disabled.

If supported by the cloud provider, a PowerShell script may be used to open up this port automatically, using the cloud provider's userdata. The following script would open up port 445, and apply the changes:

<powershell>
New-NetFirewallRule -Name "SMB445" -DisplayName "SMB445" -Protocol TCP -LocalPort 445
Set-Item (dir wsman:\localhost\Listener\*\Port -Recurse).pspath 445 -Force
Restart-Service winrm
</powershell>

For EC2, this script may be saved as a file, and specified in the provider or profile configuration as userdata_file. For instance:

userdata_file: /etc/salt/windows-firewall.ps1
Configuration

Configuration is set as usual, with some extra configuration settings. The location of the Windows installer on the machine that Salt Cloud is running on must be specified. This may be done in any of the regular configuration files (main, providers, profiles, maps). For example:

Setting the installer in /etc/salt/cloud.providers:

my-softlayer:
  provider: softlayer
  user: MYUSER1138
  apikey: 'e3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9'
  minion:
    master: saltmaster.example.com
  win_installer: /root/Salt-Minion-2014.7.0-AMD64-Setup.exe
  win_username: Administrator
  win_password: letmein
  smb_port: 445

The default Windows user is Administrator, and the default Windows password is blank.

If WinRM is to be used, use_winrm needs to be set to True.
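
For example, building on the provider configuration above (a minimal sketch; only the last line is new):

my-softlayer:
  # ... settings from the example above ...
  use_winrm: True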

Auto-Generated Passwords on EC2

On EC2, when the win_password is set to auto, Salt Cloud will query EC2 for an auto-generated password. This password is expected to take at least 4 minutes to generate, adding additional time to the deploy process.

When the EC2 API is queried for the auto-generated password, it will be returned in a message encrypted with the specified keyname. This requires that the appropriate private_key file is also specified. Such a profile configuration might look like:

windows-server-2012:
  provider: my-ec2-config
  image: ami-c49c0dac
  size: m1.small
  securitygroup: windows
  keyname: mykey
  private_key: /root/mykey.pem
  userdata_file: /etc/salt/windows-firewall.ps1
  win_installer: /root/Salt-Minion-2014.7.0-AMD64-Setup.exe
  win_username: Administrator
  win_password: auto

Cloud Provider Specifics

Getting Started With Aliyun ECS

The Aliyun ECS (Elastic Compute Service) is one of the most popular public cloud providers in China. This cloud provider can be used to manage Aliyun instances using salt-cloud.

http://www.aliyun.com/

Dependencies

This driver requires the Python requests library to be installed.

Configuration

Using Salt for Aliyun ECS requires an Aliyun access key ID and key secret. These can be found in the Aliyun web interface, in the "User Center" section, under the "My Service" tab.

# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.

my-aliyun-config:
  # aliyun Access Key ID
  id: wDGEwGregedg3435gDgxd
  # aliyun Access Key Secret
  key: GDd45t43RDBTrkkkg43934t34qT43t4dgegerGEgg
  location: cn-qingdao
  provider: aliyun
Profiles
Cloud Profiles

Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:

aliyun_centos:
    provider: my-aliyun-config
    size: ecs.t1.small
    location: cn-qingdao
    securitygroup: G1989096784427999
    image: centos6u3_64_20G_aliaegis_20130816.vhd

Sizes can be obtained using the --list-sizes option for the salt-cloud command:

# salt-cloud --list-sizes my-aliyun-config
my-aliyun-config:
    ----------
    aliyun:
        ----------
        ecs.c1.large:
            ----------
            CpuCoreCount:
                8
            InstanceTypeId:
                ecs.c1.large
            MemorySize:
                16.0

...SNIP...

Images can be obtained using the --list-images option for the salt-cloud command:

# salt-cloud --list-images my-aliyun-config
my-aliyun-config:
    ----------
    aliyun:
        ----------
        centos5u8_64_20G_aliaegis_20131231.vhd:
            ----------
            Architecture:
                x86_64
            Description:

            ImageId:
                centos5u8_64_20G_aliaegis_20131231.vhd
            ImageName:
                CentOS 5.8 64位
            ImageOwnerAlias:
                system
            ImageVersion:
                1.0
            OSName:
                CentOS  5.8 64位
            Platform:
                CENTOS5
            Size:
                20
            Visibility:
                public
...SNIP...

Locations can be obtained using the --list-locations option for the salt-cloud command:

# salt-cloud --list-locations my-aliyun-config
my-aliyun-config:
    ----------
    aliyun:
        ----------
        cn-beijing:
            ----------
            LocalName:
                北京
            RegionId:
                cn-beijing
        cn-hangzhou:
            ----------
            LocalName:
                杭州
            RegionId:
                cn-hangzhou
        cn-hongkong:
            ----------
            LocalName:
                香港
            RegionId:
                cn-hongkong
        cn-qingdao:
            ----------
            LocalName:
                青岛
            RegionId:
                cn-qingdao

Security groups can be obtained using the -f list_securitygroup option for the salt-cloud command:

# salt-cloud --location=cn-qingdao -f list_securitygroup my-aliyun-config
my-aliyun-config:
    ----------
    aliyun:
        ----------
        G1989096784427999:
            ----------
            Description:
                G1989096784427999
            SecurityGroupId:
                G1989096784427999

Note

Aliyun ECS REST API documentation is available from Aliyun ECS API.

Getting Started With Azure

New in version 2014.1.0.

Azure is a cloud service by Microsoft providing virtual machines, SQL services, media services, and more. This document describes how to use Salt Cloud to create a virtual machine on Azure, with Salt installed.

More information about Azure is located at http://www.windowsazure.com/.

Dependencies
  • The Azure Python SDK.
  • A Microsoft Azure account
  • OpenSSL (to generate the certificates)
  • Salt
Configuration

Set up the provider config at /etc/salt/cloud.providers.d/azure.conf:

# Note: This example is for /etc/salt/cloud.providers.d/azure.conf

my-azure-config:
  provider: azure
  subscription_id: 3287abc8-f98a-c678-3bde-326766fd3617
  certificate_path: /etc/salt/azure.pem

  # Set up the location of the salt master
  #
  minion:
    master: saltmaster.example.com

  # Optional
  management_host: management.core.windows.net

The certificate used must be generated by the user. OpenSSL can be used to create the management certificates. Two certificates are needed: a .cer file, which is uploaded to Azure, and a .pem file, which is stored locally.

To create the .pem file, execute the following command:

openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout /etc/salt/azure.pem -out /etc/salt/azure.pem

To create the .cer file, execute the following command:

openssl x509 -inform pem -in /etc/salt/azure.pem -outform der -out /etc/salt/azure.cer

After creating these files, the .cer file will need to be uploaded to Azure via the "Upload a Management Certificate" action of the "Management Certificates" tab within the "Settings" section of the management portal.

Optionally, a management_host may be configured, if necessary for the region.

Cloud Profiles

Set up an initial profile at /etc/salt/cloud.profiles:

azure-ubuntu:
  provider: my-azure-config
  image: 'b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-12_04_3-LTS-amd64-server-20131003-en-us-30GB'
  size: Small
  location: 'East US'
  ssh_username: azureuser
  ssh_password: verybadpass
  slot: production
  media_link: 'http://portalvhdabcdefghijklmn.blob.core.windows.net/vhds'

These options are described in more detail below. Once configured, the profile can be realized with a salt command:

salt-cloud -p azure-ubuntu newinstance

This will create a Salt minion instance named newinstance in Azure. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.

Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:

salt newinstance test.ping
Profile Options

The following options are currently available for Azure.

provider

The name of the provider as configured in /etc/salt/cloud.providers.d/azure.conf.

image

The name of the image to use to create a VM. Available images can be viewed using the following command:

salt-cloud --list-images my-azure-config
size

The name of the size to use to create a VM. Available sizes can be viewed using the following command:

salt-cloud --list-sizes my-azure-config
location

The name of the location to create a VM in. Available locations can be viewed using the following command:

salt-cloud --list-locations my-azure-config
affinity_group

The name of the affinity group to create a VM in. Either a location or an affinity_group may be specified, but not both. See Affinity Groups below.

ssh_username

The user to use to log into the newly-created VM to install Salt.

ssh_password

The password to use to log into the newly-created VM to install Salt.

slot

The environment to which the hosted service is deployed. Valid values are staging or production. When set to production, the resulting URL of the new VM will be <vm_name>.cloudapp.net. When set to staging, the resulting URL will contain a generated hash instead.

service_name

The name of the service in which to create the VM. If this is not specified, then a service will be created with the same name as the VM.

Show Instance

This action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance.

salt-cloud -a show_instance myinstance
Destroying VMs

There are certain options which can be specified in the global cloud configuration file (usually /etc/salt/cloud) which affect Salt Cloud's behavior when a VM is destroyed.

cleanup_disks

New in version Beryllium.

Default is False. When set to True, Salt Cloud will wait for the VM to be destroyed, then attempt to destroy the main disk that is associated with the VM.

cleanup_vhds

New in version Beryllium.

Default is False. Requires cleanup_disks to be set to True. When also set to True, Salt Cloud will ask Azure to delete the VHD associated with the disk that is also destroyed.

cleanup_services

New in version Beryllium.

Default is False. Requires cleanup_disks to be set to True. When also set to True, Salt Cloud will wait for the disk to be destroyed, then attempt to remove the service that is associated with the VM. Because the disk belongs to the service, the disk must be destroyed before the service can be.
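
A minimal sketch of /etc/salt/cloud enabling all three cleanup behaviors; note that cleanup_vhds and cleanup_services both require cleanup_disks to be True:

cleanup_disks: True
cleanup_vhds: True
cleanup_services: True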

Managing Hosted Services

New in version Beryllium.

An account can have one or more hosted services. A hosted service is required in order to create a VM. However, as mentioned above, if a hosted service is not specified when a VM is created, then one will automatically be created with the same name as the VM. The following functions are also available.

create_service

Create a hosted service. The following options are available.

name

Required. The name of the hosted service to create.

label

Required. A label to apply to the hosted service.

description

Optional. A longer description of the hosted service.

location

Required, if affinity_group is not set. The location in which to create the hosted service. Either the location or the affinity_group must be set, but not both.

affinity_group

Required, if location is not set. The affinity group in which to create the hosted service. Either the location or the affinity_group must be set, but not both.

extended_properties

Optional. Dictionary containing name/value pairs of hosted service properties. You can have a maximum of 50 extended property name/value pairs. The maximum length of the Name element is 64 characters, only alphanumeric characters and underscores are valid in the Name, and the name must start with a letter. The value has a maximum length of 255 characters.

CLI Example

The following example illustrates creating a hosted service.

salt-cloud -f create_service my-azure name=my-service label=my-service location='West US'
show_service

Return details about a specific hosted service. Can also be called with get_service.

salt-cloud -f show_service my-azure name=my-service
list_services

List all hosted services associated with the subscription.

salt-cloud -f list_services my-azure-config
delete_service

Delete a specific hosted service.

salt-cloud -f delete_service my-azure name=my-service
Managing Storage Accounts

New in version Beryllium.

Salt Cloud can manage storage accounts associated with the account. The following functions are available. Functions marked as deprecated are marked as such as per the SDK documentation, but are still included for completeness with the SDK.

create_storage

Create a storage account. The following options are supported.

name

Required. The name of the storage account to create.

label

Required. A label to apply to the storage account.

description

Optional. A longer description of the storage account.

location

Required, if affinity_group is not set. The location in which to create the storage account. Either the location or the affinity_group must be set, but not both.

affinity_group

Required, if location is not set. The affinity group in which to create the storage account. Either the location or the affinity_group must be set, but not both.

extended_properties

Optional. Dictionary containing name/value pairs of storage account properties. You can have a maximum of 50 extended property name/value pairs. The maximum length of the Name element is 64 characters, only alphanumeric characters and underscores are valid in the Name, and the name must start with a letter. The value has a maximum length of 255 characters.

geo_replication_enabled

Deprecated. Replaced by the account_type parameter.

account_type

Specifies whether the account supports locally-redundant storage, geo-redundant storage, zone-redundant storage, or read access geo-redundant storage. Possible values are:

  • Standard_LRS
  • Standard_ZRS
  • Standard_GRS
  • Standard_RAGRS
CLI Example

The following example illustrates creating a storage account.

salt-cloud -f create_storage my-azure name=my-storage label=my-storage location='West US'
list_storage

List all storage accounts associated with the subscription.

salt-cloud -f list_storage my-azure-config
show_storage

Return details about a specific storage account. Can also be called with get_storage.

salt-cloud -f show_storage my-azure name=my-storage
update_storage

Update details concerning a storage account. Any of the options available in create_storage can be used, but the name cannot be changed.

salt-cloud -f update_storage my-azure name=my-storage label=my-storage
delete_storage

Delete a specific storage account.

salt-cloud -f delete_storage my-azure name=my-storage
show_storage_keys

Returns the primary and secondary access keys for the specified storage account.

salt-cloud -f show_storage_keys my-azure name=my-storage
regenerate_storage_keys

Regenerate storage account keys. Requires a key_type ("primary" or "secondary") to be specified.

salt-cloud -f regenerate_storage_keys my-azure name=my-storage key_type=primary
Managing Disks

New in version Beryllium.

When a VM is created, a disk will also be created for it. The following functions are available for managing disks. Functions marked as deprecated are marked as such as per the SDK documentation, but are still included for completeness with the SDK.

show_disk

Return details about a specific disk. Can also be called with get_disk.

salt-cloud -f show_disk my-azure name=my-disk
list_disks

List all disks associated with the account.

salt-cloud -f list_disks my-azure
update_disk

Update details for a disk. The following options are available.

name

Required. The name of the disk to update.

has_operating_system

Deprecated.

label

Required. The label for the disk.

media_link

Deprecated. The location of the disk in the account, including the storage container that it is in. This should not need to be changed.

new_name

Deprecated. If renaming the disk, the new name.

os

Deprecated.

CLI Example

The following example illustrates updating a disk.

salt-cloud -f update_disk my-azure name=my-disk label=my-disk
delete_disk

Delete a specific disk.

salt-cloud -f delete_disk my-azure name=my-disk
Managing Service Certificates

New in version Beryllium.

Stored at the cloud service level, these certificates are used by your deployed services. For more information on service certificates, see the Microsoft Azure documentation.

The following functions are available.

list_service_certificates

List service certificates associated with the account.

salt-cloud -f list_service_certificates my-azure
show_service_certificate

Show the data for a specific service certificate associated with the account. The name, thumbprint, and thumbalgorithm can be obtained from list_service_certificates. Can also be called with get_service_certificate.

salt-cloud -f show_service_certificate my-azure name=my_service_certificate \
    thumbalgorithm=sha1 thumbprint=0123456789ABCDEF
add_service_certificate

Add a service certificate to the account. This requires that a certificate already exists, which is then added to the account. For more information on creating the certificate itself, see the Microsoft Azure documentation.

The following options are available.

name

Required. The name of the hosted service that the certificate will belong to.

data

Required. The base-64 encoded form of the pfx file.

certificate_format

Required. The service certificate format. The only supported value is pfx.

password

The certificate password.

salt-cloud -f add_service_certificate my-azure name=my-cert \
    data='...CERT_DATA...' certificate_format=pfx password=verybadpass
delete_service_certificate

Delete a service certificate from the account. The name, thumbprint, and thumbalgorithm can be obtained from list_service_certificates.

salt-cloud -f delete_service_certificate my-azure \
    name=my_service_certificate \
    thumbalgorithm=sha1 thumbprint=0123456789ABCDEF
Managing Management Certificates

New in version Beryllium.

An Azure management certificate is an X.509 v3 certificate used to authenticate an agent, such as Visual Studio Tools for Windows Azure or a client application that uses the Service Management API, acting on behalf of the subscription owner to manage subscription resources. Azure management certificates are uploaded to Azure and stored at the subscription level. The management certificate store can hold up to 100 certificates per subscription. These certificates are used to authenticate your Windows Azure deployment.

For more information on management certificates, see the Microsoft Azure documentation.

The following functions are available.

list_management_certificates

List management certificates associated with the account.

salt-cloud -f list_management_certificates my-azure
show_management_certificate

Show the data for a specific management certificate associated with the account. The name, thumbprint, and thumbalgorithm can be obtained from list_management_certificates. Can also be called with get_management_certificate.

salt-cloud -f show_management_certificate my-azure name=my_management_certificate \
    thumbalgorithm=sha1 thumbprint=0123456789ABCDEF
add_management_certificate

Management certificates must have a key length of at least 2048 bits and should reside in the Personal certificate store. When the certificate is installed on the client, it should contain the private key of the certificate. To upload the certificate to the Microsoft Azure Management Portal, you must export it as a .cer format file that does not contain the private key. For more information on creating management certificates, see the Microsoft Azure documentation.

The following options are available.

public_key

A base64 representation of the management certificate public key.

thumbprint

The thumb print that uniquely identifies the management certificate.

data

The certificate's raw data in base-64 encoded .cer format.

salt-cloud -f add_management_certificate my-azure public_key='...PUBKEY...' \
    thumbprint=0123456789ABCDEF data='...CERT_DATA...'
delete_management_certificate

Delete a management certificate from the account. The thumbprint can be obtained from list_management_certificates.

salt-cloud -f delete_management_certificate my-azure thumbprint=0123456789ABCDEF
Virtual Network Management

New in version Beryllium.

The following are functions for managing virtual networks.

list_virtual_networks

List virtual networks associated with the account.

salt-cloud -f list_virtual_networks my-azure service=myservice deployment=mydeployment
Managing Input Endpoints

New in version Beryllium.

Input endpoints are used to manage port access for roles. Because endpoints cannot be managed by the Azure Python SDK, Salt Cloud uses the API directly. With versions of Python before 2.7.9, the requests-python package needs to be installed in order for this to work. Additionally, the following needs to be set in the master's configuration file:

requests_lib: True

The following functions are available.

list_input_endpoints

List input endpoints associated with the deployment

salt-cloud -f list_input_endpoints my-azure service=myservice deployment=mydeployment
show_input_endpoint

Show an input endpoint associated with the deployment

salt-cloud -f show_input_endpoint my-azure service=myservice \
    deployment=mydeployment name=SSH
add_input_endpoint

Add an input endpoint to the deployment. Please note that there may be a delay before the changes show up. The following options are available.

service

Required. The name of the hosted service which the VM belongs to.

deployment

Required. The name of the deployment that the VM belongs to. If the VM was created with Salt Cloud, the deployment name probably matches the VM name.

role

Required. The name of the role that the VM belongs to. If the VM was created with Salt Cloud, the role name probably matches the VM name.

name

Required. The name of the input endpoint. This typically matches the port that the endpoint is set to. For instance, port 22 would be called SSH.

port

Required. The public (Internet-facing) port that is used for the endpoint.

local_port

Optional. The private port on the VM itself that will be matched with the port. This is typically the same as the port. If this value is not specified, it will be copied from port.

protocol

Required. Either tcp or udp.

enable_direct_server_return

Optional. If an internal load balancer exists in the account, it can be used with a direct server return. The default value is False. Please see the Microsoft Azure documentation for an explanation of this option.

timeout_for_tcp_idle_connection

Optional. The default value is 4. Please see the Microsoft Azure documentation for an explanation of this option.

CLI Example

The following example illustrates adding an input endpoint.

salt-cloud -f add_input_endpoint my-azure service=myservice \
    deployment=mydeployment role=myrole name=HTTP local_port=80 \
    port=80 protocol=tcp enable_direct_server_return=False \
    timeout_for_tcp_idle_connection=4
update_input_endpoint

Updates the details for a specific input endpoint. All options from add_input_endpoint are supported.

salt-cloud -f update_input_endpoint my-azure service=myservice \
    deployment=mydeployment role=myrole name=HTTP local_port=80 \
    port=80 protocol=tcp enable_direct_server_return=False \
    timeout_for_tcp_idle_connection=4
delete_input_endpoint

Delete an input endpoint from the deployment. Please note that there may be a delay before the changes show up. The following items are required.

CLI Example

The following example illustrates deleting an input endpoint.

service

The name of the hosted service which the VM belongs to.

deployment

The name of the deployment that the VM belongs to. If the VM was created with Salt Cloud, the deployment name probably matches the VM name.

role

The name of the role that the VM belongs to. If the VM was created with Salt Cloud, the role name probably matches the VM name.

name

The name of the input endpoint. This typically matches the port that the endpoint is set to. For instance, port 22 would be called SSH.

salt-cloud -f delete_input_endpoint my-azure service=myservice \
    deployment=mydeployment role=myrole name=HTTP
Managing Affinity Groups

New in version Beryllium.

Affinity groups allow you to group your Azure services to optimize performance. All services and VMs within an affinity group will be located in the same region. For more information on affinity groups, see the Microsoft Azure documentation.

The following functions are available.

list_affinity_groups

List affinity groups associated with the account

salt-cloud -f list_affinity_groups my-azure
show_affinity_group

Show an affinity group associated with the account

salt-cloud -f show_affinity_group my-azure name=my_affinity_group
create_affinity_group

Create a new affinity group. The following options are supported.

name

Required. The name of the new affinity group.

location

Required. The region in which the affinity group lives.

label

Required. A label describing the new affinity group.

description

Optional. A longer description of the affinity group.

salt-cloud -f create_affinity_group my-azure name=my_affinity_group \
   label=my-affinity-group location='West US'
update_affinity_group

Update an affinity group's properties

salt-cloud -f update_affinity_group my-azure name=my_group label=my_group
delete_affinity_group

Delete a specific affinity group associated with the account

salt-cloud -f delete_affinity_group my-azure name=my_affinity_group
Managing Blob Storage

New in version Beryllium.

Azure storage containers and their contents can be managed with Salt Cloud. This is not as elegant as using one of the other available clients in Windows, but it benefits Linux and Unix users, as there are fewer options available on those platforms.

Blob Storage Configuration

Blob storage must be configured differently than the standard Azure configuration. Both a storage_account and a storage_key must be specified either through the Azure provider configuration (in addition to the other Azure configuration) or via the command line.

storage_account: mystorage
storage_key: ffhj334fDSGFEGDFGFDewr34fwfsFSDFwe==
storage_account

This is one of the storage accounts that is available via the list_storage function.

storage_key

Both a primary and a secondary storage_key can be obtained by running the show_storage_keys function. Either key may be used.
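
For example, the storage_key for the configuration sketch above could be retrieved with show_storage_keys and then copied into the provider configuration:

salt-cloud -f show_storage_keys my-azure name=mystorage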

Blob Functions

The following functions are made available through Salt Cloud for managing blob storage.

make_blob_url

Creates the URL to access a blob

salt-cloud -f make_blob_url my-azure container=mycontainer blob=myblob
container

Name of the container.

blob

Name of the blob.

account

Name of the storage account. If not specified, derives the host base from the provider configuration.

protocol

Protocol to use: 'http' or 'https'. If not specified, derives the host base from the provider configuration.

host_base

Live host base URL. If not specified, derives the host base from the provider configuration.

list_storage_containers

List containers associated with the storage account

salt-cloud -f list_storage_containers my-azure
create_storage_container

Create a storage container

salt-cloud -f create_storage_container my-azure name=mycontainer
name

Name of container to create.

meta_name_values

Optional. A dict with name_value pairs to associate with the container as metadata. Example: {'Category': 'test'}

blob_public_access

Optional. Possible values include: container, blob

fail_on_exist

Specify whether to throw an exception if the container already exists.

show_storage_container

Show a container associated with the storage account

salt-cloud -f show_storage_container my-azure name=myservice
name

Name of container to show.

show_storage_container_metadata

Show a storage container's metadata

salt-cloud -f show_storage_container_metadata my-azure name=myservice
name

Name of container to show.

lease_id

If specified, show_storage_container_metadata only succeeds if the container's lease is active and matches this ID.

set_storage_container_metadata

Set a storage container's metadata

salt-cloud -f set_storage_container_metadata my-azure name=mycontainer \
    x_ms_meta_name_values='{"my_name": "my_value"}'
name

Name of existing container.

meta_name_values

A dict containing name, value pairs for metadata. Example: {'category': 'test'}

lease_id

If specified, set_storage_container_metadata only succeeds if the container's lease is active and matches this ID.

show_storage_container_acl

Show a storage container's acl

salt-cloud -f show_storage_container_acl my-azure name=myservice
name

Name of existing container.

lease_id

If specified, show_storage_container_acl only succeeds if the container's lease is active and matches this ID.

set_storage_container_acl

Set a storage container's acl

salt-cloud -f set_storage_container_acl my-azure name=mycontainer
name

Name of existing container.

signed_identifiers

SignedIdentifiers instance

blob_public_access

Optional. Possible values include: container, blob

lease_id

If specified, set_storage_container_acl only succeeds if the container's lease is active and matches this ID.

delete_storage_container

Delete a container associated with the storage account

salt-cloud -f delete_storage_container my-azure name=mycontainer
name

Name of container to delete.

fail_not_exist

Specify whether to throw an exception when the container does not exist.

lease_id

If specified, delete_storage_container only succeeds if the container's lease is active and matches this ID.

lease_storage_container

Lease a container associated with the storage account

salt-cloud -f lease_storage_container my-azure name=mycontainer
name

Name of container to lease.

lease_action

Required. Possible values: acquire|renew|release|break|change

lease_id

Required if the container has an active lease.

lease_duration

Specifies the duration of the lease, in seconds, or negative one (-1) for a lease that never expires. A non-infinite lease can be between 15 and 60 seconds. A lease duration cannot be changed using renew or change. For backwards compatibility, the default is 60, and the value is only used on an acquire operation.

lease_break_period

Optional. For a break operation, this is the proposed duration of seconds that the lease should continue before it is broken, between 0 and 60 seconds. This break period is only used if it is shorter than the time remaining on the lease. If longer, the time remaining on the lease is used. A new lease will not be available before the break period has expired, but the lease may be held for longer than the break period. If this header does not appear with a break operation, a fixed-duration lease breaks after the remaining lease period elapses, and an infinite lease breaks immediately.

proposed_lease_id

Optional for acquire, required for change. Proposed lease ID, in a GUID string format.
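
As an illustration, a hypothetical acquire operation with a 60 second lease might look like:

salt-cloud -f lease_storage_container my-azure name=mycontainer \
    lease_action=acquire lease_duration=60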

list_blobs

List blobs associated with the container

salt-cloud -f list_blobs my-azure container=mycontainer
container

The name of the storage container

prefix

Optional. Filters the results to return only blobs whose names begin with the specified prefix.

marker

Optional. A string value that identifies the portion of the list to be returned with the next list operation. The operation returns a marker value within the response body if the list returned was not complete. The marker value may then be used in a subsequent call to request the next set of list items. The marker value is opaque to the client.

maxresults

Optional. Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxresults or specifies a value greater than 5,000, the server will return up to 5,000 items. Setting maxresults to a value less than or equal to zero results in error response code 400 (Bad Request).

include

Optional. Specifies one or more datasets to include in the response. To specify more than one of these options on the URI, you must separate each option with a comma. Valid values are:

snapshots:
    Specifies that snapshots should be included in the
    enumeration. Snapshots are listed from oldest to newest in
    the response.
metadata:
    Specifies that blob metadata be returned in the response.
uncommittedblobs:
    Specifies that blobs for which blocks have been uploaded,
    but which have not been committed using Put Block List
    (REST API), be included in the response.
copy:
    Version 2012-02-12 and newer. Specifies that metadata
    related to any current or previous Copy Blob operation
    should be included in the response.
delimiter

Optional. When the request includes this parameter, the operation returns a BlobPrefix element in the response body that acts as a placeholder for all blobs whose names begin with the same substring up to the appearance of the delimiter character. The delimiter may be a single character or a string.
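
Putting several of these parameters together, a hypothetical invocation might look like the following; the prefix value is an assumed example:

salt-cloud -f list_blobs my-azure container=mycontainer \
    prefix=backups include=snapshots,metadata maxresults=100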

show_blob_service_properties

Show a blob's service properties

salt-cloud -f show_blob_service_properties my-azure
set_blob_service_properties

Sets the properties of a storage account's Blob service, including Windows Azure Storage Analytics. You can also use this operation to set the default request version for all incoming requests that do not have a version specified.

salt-cloud -f set_blob_service_properties my-azure
properties

A StorageServiceProperties object.

timeout

Optional. The timeout parameter is expressed in seconds.

show_blob_properties

Returns all user-defined metadata, standard HTTP properties, and system properties for the blob.

salt-cloud -f show_blob_properties my-azure container=mycontainer blob=myblob
container

Name of existing container.

blob

Name of existing blob.

lease_id

Required if the blob has an active lease.

set_blob_properties

Set a blob's properties

salt-cloud -f set_blob_properties my-azure
container

Name of existing container.

blob

Name of existing blob.

blob_cache_control

Optional. Modifies the cache control string for the blob.

blob_content_type

Optional. Sets the blob's content type.

blob_content_md5

Optional. Sets the blob's MD5 hash.

blob_content_encoding

Optional. Sets the blob's content encoding.

blob_content_language

Optional. Sets the blob's content language.

lease_id

Required if the blob has an active lease.

blob_content_disposition

Optional. Sets the blob's Content-Disposition header. The Content-Disposition response header field conveys additional information about how to process the response payload, and also can be used to attach additional metadata. For example, if set to attachment, it indicates that the user-agent should not display the response, but instead show a Save As dialog with a filename other than the blob name specified.

put_blob

Upload a blob

salt-cloud -f put_blob my-azure container=base name=top.sls blob_path=/srv/salt/top.sls
salt-cloud -f put_blob my-azure container=base name=content.txt blob_content='Some content'
container

Name of existing container.

name

Name of existing blob.

blob_path

The path on the local machine of the file to upload as a blob. Either this or blob_content must be specified.

blob_content

The actual content to be uploaded as a blob. Either this or blob_path must be specified.

cache_control

Optional. The Blob service stores this value but does not use or modify it.

content_language

Optional. Specifies the natural languages used by this resource.

content_md5

Optional. An MD5 hash of the blob content. This hash is used to verify the integrity of the blob during transport. When this header is specified, the storage service checks the hash that has arrived with the one that was sent. If the two hashes do not match, the operation will fail with error code 400 (Bad Request).

blob_content_type

Optional. Set the blob's content type.

blob_content_encoding

Optional. Set the blob's content encoding.

blob_content_language

Optional. Set the blob's content language.

blob_content_md5

Optional. Set the blob's MD5 hash.

blob_cache_control

Optional. Sets the blob's cache control.

meta_name_values

A dict containing name, value for metadata.

lease_id

Required if the blob has an active lease.

get_blob

Download a blob

salt-cloud -f get_blob my-azure container=base name=top.sls local_path=/srv/salt/top.sls
salt-cloud -f get_blob my-azure container=base name=content.txt return_content=True
container

Name of existing container.

name

Name of existing blob.

local_path

The path on the local machine to download the blob to. Either this or return_content must be specified.

return_content

Whether or not to return the content directly from the blob. If specified, must be True or False. Either this or the local_path must be specified.

snapshot

Optional. The snapshot parameter is an opaque DateTime value that, when present, specifies the blob snapshot to retrieve.

lease_id

Required if the blob has an active lease.

progress_callback

Callback for progress with signature function(current, total), where current is the number of bytes transferred so far and total is the size of the blob.

max_connections

Maximum number of parallel connections to use when the blob size exceeds 64MB. Set to 1 to download the blob chunks sequentially. Set to 2 or more to download the blob chunks in parallel. This uses more system resources but will download faster.

max_retries

Number of times to retry download of blob chunk if an error occurs.

retry_wait

Sleep time in secs between retries.
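
Combining the download-tuning options above, a hypothetical invocation for a large blob might look like the following; bigfile.bin is an assumed name:

salt-cloud -f get_blob my-azure container=base name=bigfile.bin \
    local_path=/tmp/bigfile.bin max_connections=4 max_retries=5 retry_wait=2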

Getting Started With DigitalOcean

DigitalOcean is a public cloud provider that specializes in Linux instances.

Configuration

Using Salt for DigitalOcean requires a personal_access_token, an ssh_key_file, and at least one SSH key name in ssh_key_names. More ssh_key_names can be added by separating each key with a comma. The personal_access_token can be found in the DigitalOcean web interface in the "Apps & API" section. The SSH key name can be found under the "SSH Keys" section.

# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.

my-digitalocean-config:
  provider: digital_ocean
  personal_access_token: xxx
  ssh_key_file: /path/to/ssh/key/file
  ssh_key_names: my-key-name,my-key-name-2
  location: New York 1
Profiles
Cloud Profiles

Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:

digitalocean-ubuntu:
    provider: my-digitalocean-config
    image: Ubuntu 14.04 x32
    size: 512MB
    location: New York 1
    private_networking: True
    backups_enabled: True
    ipv6: True

Locations can be obtained using the --list-locations option for the salt-cloud command:

# salt-cloud --list-locations my-digitalocean-config
my-digitalocean-config:
    ----------
    digital_ocean:
        ----------
        Amsterdam 1:
            ----------
            available:
                False
            features:
                [u'backups']
            name:
                Amsterdam 1
            sizes:
                []
            slug:
                ams1
...SNIP...

Sizes can be obtained using the --list-sizes option for the salt-cloud command:

# salt-cloud --list-sizes my-digitalocean-config
my-digitalocean-config:
    ----------
    digital_ocean:
        ----------
        512MB:
            ----------
            cost_per_hour:
                0.00744
            cost_per_month:
                5.0
            cpu:
                1
            disk:
                20
            id:
                66
            memory:
                512
            name:
                512MB
            slug:
                None
...SNIP...

Images can be obtained using the --list-images option for the salt-cloud command:

# salt-cloud --list-images my-digitalocean-config
my-digitalocean-config:
    ----------
    digital_ocean:
        ----------
        Arch Linux 2013.05 x64:
            ----------
            distribution:
                Arch Linux
            id:
                350424
            name:
                Arch Linux 2013.05 x64
            public:
                True
            slug:
                None
...SNIP...

Note

DigitalOcean's concept of Applications is nothing more than a pre-configured instance (same as a normal Droplet). You will find examples such as Docker 0.7 Ubuntu 13.04 x64 and Wordpress on Ubuntu 12.10 when using the --list-images option. These names can be used just like the rest of the standard instances when specifying an image in the cloud profile configuration.

Note

If your domain's DNS is managed with DigitalOcean, you can automatically create A-records for newly created droplets. Use create_dns_record: True in your config to enable this. Add delete_dns_record: True to also delete records when a droplet is destroyed.
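
A sketch of the provider configuration above with both DNS options enabled:

my-digitalocean-config:
  provider: digital_ocean
  personal_access_token: xxx
  ssh_key_file: /path/to/ssh/key/file
  ssh_key_names: my-key-name,my-key-name-2
  location: New York 1
  create_dns_record: True
  delete_dns_record: True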

Note

Additional documentation is available from DigitalOcean.

Getting Started With AWS EC2

Amazon EC2 is a very widely used public cloud platform and one of the core platforms Salt Cloud has been built to support.

Previously, the suggested provider for AWS EC2 was the aws provider. This has been deprecated in favor of the ec2 provider. Configuration using the old aws provider will still function, but that driver is no longer in active development.

Dependencies

This driver requires the Python requests library to be installed.

Configuration

The following example illustrates some of the options that can be set. These parameters are discussed in more detail below.

# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.

my-ec2-southeast-public-ips:
  # Set up the location of the salt master
  #
  minion:
    master: saltmaster.example.com

  # Set up grains information, which will be common for all nodes
  # using this provider
  grains:
    node_type: broker
    release: 1.0.1

  # Specify whether to use public or private IP for deploy script.
  #
  # Valid options are:
  #     private_ips - The salt-cloud command is run inside the EC2
  #     public_ips - The salt-cloud command is run outside of EC2
  #
  ssh_interface: public_ips

  # Optionally configure the Windows credential validation number of
  # retries and delay between retries.  This defaults to 10 retries
  # with a one second delay between retries
  win_deploy_auth_retries: 10
  win_deploy_auth_retry_delay: 1

  # Set the EC2 access credentials (see below)
  #
  id: 'use-instance-role-credentials'
  key: 'use-instance-role-credentials'

  # Make sure this key is owned by root with permissions 0400.
  #
  private_key: /etc/salt/my_test_key.pem
  keyname: my_test_key
  securitygroup: default

  # Optionally configure default region
  # Use salt-cloud --list-locations <provider> to obtain valid regions
  #
  location: ap-southeast-1
  availability_zone: ap-southeast-1b

  # Configure which user to use to run the deploy script. This setting is
  # dependent upon the AMI that is used to deploy. It is usually safer to
  # configure this individually in a profile, than globally. Typical users
  # are:
  #
  # Amazon Linux -> ec2-user
  # RHEL         -> ec2-user
  # CentOS       -> ec2-user
  # Ubuntu       -> ubuntu
  #
  ssh_username: ec2-user

  # Optionally add an IAM profile
  iam_profile: 'arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfile'

  provider: ec2


my-ec2-southeast-private-ips:
  # Set up the location of the salt master
  #
  minion:
    master: saltmaster.example.com

  # Specify whether to use public or private IP for deploy script.
  #
  # Valid options are:
  #     private_ips - The salt-master is also hosted with EC2
  #     public_ips - The salt-master is hosted outside of EC2
  #
  ssh_interface: private_ips

  # Optionally configure the Windows credential validation number of
  # retries and delay between retries.  This defaults to 10 retries
  # with a one second delay between retries
  win_deploy_auth_retries: 10
  win_deploy_auth_retry_delay: 1

  # Set the EC2 access credentials (see below)
  #
  id: 'use-instance-role-credentials'
  key: 'use-instance-role-credentials'

  # Make sure this key is owned by root with permissions 0400.
  #
  private_key: /etc/salt/my_test_key.pem
  keyname: my_test_key

  # This one should NOT be specified if VPC was not configured in AWS to be
  # the default. It might cause an error message saying that network
  # interfaces and instance-level security groups may not be specified
  # on the same request.
  #
  securitygroup: default

  # Optionally configure default region
  #
  location: ap-southeast-1
  availability_zone: ap-southeast-1b

  # Configure which user to use to run the deploy script. This setting is
  # dependent upon the AMI that is used to deploy. It is usually safer to
  # configure this individually in a profile, than globally. Typical users
  # are:
  #
  # Amazon Linux -> ec2-user
  # RHEL         -> ec2-user
  # CentOS       -> ec2-user
  # Ubuntu       -> ubuntu
  #
  ssh_username: ec2-user

  # Optionally add an IAM profile
  iam_profile: 'my other profile name'

  provider: ec2
Access Credentials

The id and key settings may be found in the Security Credentials area of the AWS Account page:

https://portal.aws.amazon.com/gp/aws/securityCredentials

Both are located in the Access Credentials area of the page, under the Access Keys tab. The id setting is labeled Access Key ID, and the key setting is labeled Secret Access Key.

Note: if either id or key is set to 'use-instance-role-credentials' it is assumed that Salt is running on an AWS instance, and the instance role credentials will be retrieved and used. Since both the id and key are required parameters for the AWS ec2 provider, it is recommended to set both to 'use-instance-role-credentials' for this functionality.

A "static" and "permanent" Access Key ID and Secret Key can be specified, but this is not recommended. Instance role keys are rotated on a regular basis, and are the recommended method of specifying AWS credentials.

Windows Deploy Timeouts

For Windows instances, it may take longer than normal for the instance to be ready. In these circumstances, the provider configuration can be configured with a win_deploy_auth_retries and/or a win_deploy_auth_retry_delay setting, which default to 10 retries and a one second delay between retries. These retries and timeouts relate to validating the Administrator password once AWS provides the credentials via the AWS API.
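
For example, a slow-booting Windows AMI could be given more time by raising these values in the provider configuration (the numbers here are purely illustrative):

my-ec2-config:
  win_deploy_auth_retries: 30
  win_deploy_auth_retry_delay: 5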

Key Pairs

In order to create an instance with Salt installed and configured, a key pair will need to be created. This can be done in the EC2 Management Console, in the Key Pairs area. These key pairs are unique to a specific region. Keys in the us-east-1 region can be configured at:

https://console.aws.amazon.com/ec2/home?region=us-east-1#s=KeyPairs

Keys in the us-west-1 region can be configured at

https://console.aws.amazon.com/ec2/home?region=us-west-1#s=KeyPairs

...and so on. When creating a key pair, the browser will prompt to download a pem file. This file must be placed in a directory accessible by Salt Cloud, with permissions set to either 0400 or 0600.
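
For example, assuming the key was downloaded to /etc/salt/my_test_key.pem (the path used in the provider examples above), its permissions could be set with:

# chmod 0400 /etc/salt/my_test_key.pem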

Security Groups

An instance on EC2 needs to belong to a security group. Like key pairs, these are unique to a specific region. These are also configured in the EC2 Management Console. Security groups for the us-east-1 region can be configured at:

https://console.aws.amazon.com/ec2/home?region=us-east-1#s=SecurityGroups

...and so on.

A security group defines firewall rules which an instance will adhere to. If the salt-master is configured outside of EC2, the security group must open the SSH port (usually port 22) in order for Salt Cloud to install Salt.

IAM Profile

Amazon EC2 instances support the concept of an instance profile, which is a logical container for the IAM role. At the time that you launch an EC2 instance, you can associate the instance with an instance profile, which in turn corresponds to the IAM role. Any software that runs on the EC2 instance is able to access AWS using the permissions associated with the IAM role.

Scaffolding the profile is a 2-step configuration process:

  1. Configure an IAM Role from the IAM Management Console.

  2. Attach this role to a new profile. It can be done with the AWS CLI:

    > aws iam create-instance-profile --instance-profile-name PROFILE_NAME
    > aws iam add-role-to-instance-profile --instance-profile-name PROFILE_NAME --role-name ROLE_NAME
    

Once the profile is created, you can use the PROFILE_NAME to configure your cloud profiles.
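
As a sketch, reusing the provider and AMI from the examples above (the profile name base_ec2_iam is illustrative, and PROFILE_NAME stands in for the name created with the AWS CLI):

base_ec2_iam:
  provider: my-ec2-southeast-public-ips
  image: ami-e565ba8c
  size: t1.micro
  ssh_username: ec2-user
  iam_profile: PROFILE_NAME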

Cloud Profiles

Set up an initial profile at /etc/salt/cloud.profiles:

base_ec2_private:
  provider: my-ec2-southeast-private-ips
  image: ami-e565ba8c
  size: t1.micro
  ssh_username: ec2-user

base_ec2_public:
  provider: my-ec2-southeast-public-ips
  image: ami-e565ba8c
  size: t1.micro
  ssh_username: ec2-user

base_ec2_db:
  provider: my-ec2-southeast-public-ips
  image: ami-e565ba8c
  size: m1.xlarge
  ssh_username: ec2-user
  volumes:
    - { size: 10, device: /dev/sdf }
    - { size: 10, device: /dev/sdg, type: io1, iops: 1000 }
    - { size: 10, device: /dev/sdh, type: io1, iops: 1000 }
  # optionally add tags to profile:
  tag: {'Environment': 'production', 'Role': 'database'}
  # force grains to sync after install
  sync_after_install: grains

base_ec2_vpc:
  provider: my-ec2-southeast-public-ips
  image: ami-a73264ce
  size: m1.xlarge
  ssh_username: ec2-user
  script:  /etc/salt/cloud.deploy.d/user_data.sh
  network_interfaces:
    - DeviceIndex: 0
      PrivateIpAddresses:
        - Primary: True
      #auto assign public ip (not EIP)
      AssociatePublicIpAddress: True
      SubnetId: subnet-813d4bbf
      SecurityGroupId:
        - sg-750af413
  volumes:
    - { size: 10, device: /dev/sdf }
    - { size: 10, device: /dev/sdg, type: io1, iops: 1000 }
    - { size: 10, device: /dev/sdh, type: io1, iops: 1000 }
  del_root_vol_on_destroy: True
  del_all_vol_on_destroy: True
  tag: {'Environment': 'production', 'Role': 'database'}
  sync_after_install: grains

The profile can now be realized with a salt command:

# salt-cloud -p base_ec2_public ami.example.com
# salt-cloud -p base_ec2_private ami.example.com

This will create an instance named ami.example.com in EC2. The minion that is installed on this instance will have an id of ami.example.com. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.

Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:

# salt 'ami.example.com' test.ping
Required Settings

The following settings are always required for EC2:

# Set the EC2 login data
my-ec2-config:
  id: HJGRYCILJLKJYG
  key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
  keyname: test
  securitygroup: quick-start
  private_key: /root/test.pem
  provider: ec2
Optional Settings

EC2 allows a location to be set for servers to be deployed in. Availability zones exist inside regions, and may be added to increase specificity.

my-ec2-config:
  # Optionally configure default region
  location: ap-southeast-1
  availability_zone: ap-southeast-1b

EC2 instances can have a public or private IP, or both. When an instance is deployed, Salt Cloud needs to log into it via SSH to run the deploy script. By default, the public IP will be used for this. If the salt-cloud command is run from another EC2 instance, the private IP should be used.

my-ec2-config:
  # Specify whether to use public or private IP for deploy script
  # private_ips or public_ips
  ssh_interface: public_ips

Many EC2 instances do not allow remote access to the root user by default. Instead, another user must be used to run the deploy script using sudo. Some common usernames include ec2-user (for Amazon Linux), ubuntu (for Ubuntu instances), admin (official Debian) and bitnami (for images provided by Bitnami).

my-ec2-config:
  # Configure which user to use to run the deploy script
  ssh_username: ec2-user

Multiple usernames can be provided, in which case Salt Cloud will attempt to guess the correct username. This is mostly useful in the main configuration file:

my-ec2-config:
  ssh_username:
    - ec2-user
    - ubuntu
    - admin
    - bitnami

Multiple security groups can also be specified in the same fashion:

my-ec2-config:
  securitygroup:
    - default
    - extra

Your instances may optionally make use of EC2 Spot Instances. The following example will request that spot instances be used and your maximum bid will be $0.10. Keep in mind that different spot prices may be needed based on the current value of the various EC2 instance sizes. You can check current and past spot instance pricing via the EC2 API or AWS Console.

my-ec2-config:
  spot_config:
    spot_price: 0.10

By default, the spot instance type is set to 'one-time', meaning it will be launched and, if it's ever terminated for whatever reason, it will not be recreated. If you would like your spot instances to be relaunched after a termination (by you or by AWS), set the type to 'persistent'.
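
A minimal sketch of such a configuration, assuming the type key is set alongside spot_price inside spot_config:

my-ec2-config:
  spot_config:
    spot_price: 0.10
    type: persistent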

NOTE: Spot instances are a great way to save a bit of money, but you do run the risk of losing your spot instances if the current price for the instance size goes above your maximum bid.

The following parameters may be set in the cloud configuration file to control various aspects of the spot instance launching:

  • wait_for_spot_timeout: seconds to wait before giving up on spot instance launch (default=600)
  • wait_for_spot_interval: seconds to wait in between polling requests to determine if a spot instance is available (default=30)
  • wait_for_spot_interval_multiplier: a multiplier to add to the interval in between requests, which is useful if AWS is throttling your requests (default=1)
  • wait_for_spot_max_failures: maximum number of failures before giving up on launching your spot instance (default=10)

If you find that you're being throttled by AWS while polling for spot instances, you can set the following in your core cloud configuration file, which will double the polling interval after each request to AWS.

wait_for_spot_interval: 1
wait_for_spot_interval_multiplier: 2

See the AWS Spot Instances documentation for more information.

Block device mappings enable you to specify additional EBS volumes or instance store volumes when the instance is launched. This setting is also available on each cloud profile. Note that the number of instance stores varies by instance type. If more mappings are provided than are supported by the instance type, mappings will be created in the order provided and additional mappings will be ignored. Consult the AWS documentation for a listing of the available instance stores, and device names.

my-ec2-config:
  block_device_mappings:
    - DeviceName: /dev/sdb
      VirtualName: ephemeral0
    - DeviceName: /dev/sdc
      VirtualName: ephemeral1

You can also use block device mappings to change the size of the root device at provisioning time. For example, assuming the root device is '/dev/sda', you can set its size to 100G by using the following configuration.

my-ec2-config:
  block_device_mappings:
    - DeviceName: /dev/sda
      Ebs.VolumeSize: 100
      Ebs.VolumeType: gp2
      Ebs.SnapshotId: dummy0

Existing EBS volumes may also be attached (not created) to your instances, or you can create new EBS volumes based on EBS snapshots. To simply attach an existing volume, use the volume_id parameter.

device: /dev/xvdj
volume_id: vol-12345abcd

Or, to create a volume from an EBS snapshot, use the snapshot parameter.

device: /dev/xvdj
snapshot: snap-abcd12345

Note that volume_id will take precedence over the snapshot parameter.
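
In a profile, these parameters would typically appear as entries under the volumes setting shown earlier; a sketch, assuming that form (the device names here are illustrative):

base_ec2_db:
  ...
  volumes:
    - { device: /dev/xvdj, volume_id: vol-12345abcd }
    - { device: /dev/xvdk, snapshot: snap-abcd12345 }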

Tags can be set once an instance has been launched.

my-ec2-config:
  tag:
    tag0: value
    tag1: value
Modify EC2 Tags

One of the features of EC2 is the ability to tag resources. In fact, under the hood, the names given to EC2 instances by salt-cloud are actually just stored as a tag called Name. Salt Cloud has the ability to manage these tags:

salt-cloud -a get_tags mymachine
salt-cloud -a set_tags mymachine tag1=somestuff tag2='Other stuff'
salt-cloud -a del_tags mymachine tag1,tag2,tag3

It is possible to manage tags on any resource in EC2 with a Resource ID, not just instances:

salt-cloud -f get_tags my_ec2 resource_id=af5467ba
salt-cloud -f set_tags my_ec2 resource_id=af5467ba tag1=somestuff
salt-cloud -f del_tags my_ec2 resource_id=af5467ba tag1,tag2,tag3
Rename EC2 Instances

As mentioned above, EC2 instances are named via a tag. However, renaming an instance by renaming its tag will cause the salt keys to mismatch. A rename function exists which renames both the instance, and the salt keys.

salt-cloud -a rename mymachine newname=yourmachine
EC2 Termination Protection

EC2 allows the user to enable and disable termination protection on a specific instance. An instance with this protection enabled cannot be destroyed.

salt-cloud -a enable_term_protect mymachine
salt-cloud -a disable_term_protect mymachine
Rename on Destroy

When instances on EC2 are destroyed, there will be a lag between the time that the action is sent, and the time that Amazon cleans up the instance. During this time, the instance still retains a Name tag, which will cause a collision if the creation of an instance with the same name is attempted before the cleanup occurs. In order to avoid such collisions, Salt Cloud can be configured to rename instances when they are destroyed. The new name will look something like:

myinstance-DEL20f5b8ad4eb64ed88f2c428df80a1a0c

In order to enable this, add a rename_on_destroy line to the main configuration file:

my-ec2-config:
  rename_on_destroy: True
Listing Images

Normally, images can be queried on a cloud provider by passing the --list-images argument to Salt Cloud. This still holds true for EC2:

salt-cloud --list-images my-ec2-config

However, the full list of images on EC2 is extremely large, and querying all of the available images may cause Salt Cloud to behave as if frozen. Therefore, the default behavior of this option may be modified by adding an owner argument to the provider configuration:

owner: aws-marketplace

The possible values for this setting are amazon, aws-marketplace, self, <AWS account ID> or all. The default setting is amazon. Take note that all and aws-marketplace may cause Salt Cloud to appear as if it is freezing, as it tries to handle the large amount of data.

It is also possible to perform this query using different settings without modifying the configuration files. To do this, call the avail_images function directly:

salt-cloud -f avail_images my-ec2-config owner=aws-marketplace
EC2 Images

The following are lists of available AMI images, generally sorted by OS. These lists are on 3rd-party websites, and are not managed by Salt Stack in any way. They are provided here as a reference for those who are interested, and contain no warranty (express or implied) from anyone affiliated with Salt Stack. Most of them have never been used, much less tested, by the Salt Stack team.

show_image

This is a function that describes an AMI on EC2. This will give insight as to the defaults that will be applied to an instance using a particular AMI.

$ salt-cloud -f show_image ec2 image=ami-fd20ad94
show_instance

This action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance.

$ salt-cloud -a show_instance myinstance
ebs_optimized

This argument enables switching of the EbsOptimized setting, which defaults to 'false'. It indicates whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal Amazon EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS-optimized instance.

This setting can be added to the profile or map file for an instance.

If set to True, this setting will enable an instance to be EbsOptimized:

ebs_optimized: True

This can also be set as a cloud provider setting in the EC2 cloud configuration:

my-ec2-config:
  ebs_optimized: True
del_root_vol_on_destroy

This argument overrides the default DeleteOnTermination setting in the AMI for the EBS root volumes for an instance. Many AMIs contain 'false' as a default, resulting in orphaned volumes in the EC2 account, which may unknowingly be charged to the account. This setting can be added to the profile or map file for an instance.

If set, this setting will apply to the root EBS volume:

del_root_vol_on_destroy: True

This can also be set as a cloud provider setting in the EC2 cloud configuration:

my-ec2-config:
  del_root_vol_on_destroy: True
del_all_vols_on_destroy

This argument overrides the default DeleteOnTermination setting in the AMI for the non-root EBS volumes for an instance. Many AMIs contain 'false' as a default, resulting in orphaned volumes in the EC2 account, which may unknowingly be charged to the account. This setting can be added to the profile or map file for an instance.

If set, this setting will apply to any (non-root) volumes that were created by salt-cloud using the 'volumes' setting.

The volumes will not be deleted under the following conditions:
  • If a volume is detached before terminating the instance
  • If a volume is created without this setting and attached to the instance

del_all_vols_on_destroy: True

This can also be set as a cloud provider setting in the EC2 cloud configuration:

my-ec2-config:
  del_all_vols_on_destroy: True

The setting for this may be changed on all volumes of an existing instance using one of the following commands:

salt-cloud -a delvol_on_destroy myinstance
salt-cloud -a keepvol_on_destroy myinstance
salt-cloud -a show_delvol_on_destroy myinstance

The setting for this may be changed on a volume on an existing instance using one of the following commands:

salt-cloud -a delvol_on_destroy myinstance device=/dev/sda1
salt-cloud -a delvol_on_destroy myinstance volume_id=vol-1a2b3c4d
salt-cloud -a keepvol_on_destroy myinstance device=/dev/sda1
salt-cloud -a keepvol_on_destroy myinstance volume_id=vol-1a2b3c4d
salt-cloud -a show_delvol_on_destroy myinstance device=/dev/sda1
salt-cloud -a show_delvol_on_destroy myinstance volume_id=vol-1a2b3c4d
EC2 Termination Protection

EC2 allows the user to enable and disable termination protection on a specific instance. An instance with this protection enabled cannot be destroyed. The EC2 driver adds a show_term_protect action to the regular EC2 functionality.

salt-cloud -a show_term_protect mymachine
salt-cloud -a enable_term_protect mymachine
salt-cloud -a disable_term_protect mymachine
Alternate Endpoint

Normally, EC2 endpoints are built using the region and the service_url. The resulting endpoint would follow this pattern:

ec2.<region>.<service_url>

This results in an endpoint that looks like:

ec2.us-east-1.amazonaws.com

There are other projects that support an EC2 compatibility layer, which this scheme does not account for. This can be overridden by specifying the endpoint directly in the main cloud configuration file:

my-ec2-config:
  endpoint: myendpoint.example.com:1138/services/Cloud
Volume Management

The EC2 driver has several functions and actions for management of EBS volumes.

Creating Volumes

A volume may be created, independent of an instance. A zone must be specified. A size (in GiB) or a snapshot may be specified. If neither is given, a default size of 10 GiB will be used. If a snapshot is given, the size of the snapshot will be used.

salt-cloud -f create_volume ec2 zone=us-east-1b
salt-cloud -f create_volume ec2 zone=us-east-1b size=10
salt-cloud -f create_volume ec2 zone=us-east-1b snapshot=snap12345678
salt-cloud -f create_volume ec2 size=10 type=standard
salt-cloud -f create_volume ec2 size=10 type=io1 iops=1000
Attaching Volumes

Unattached volumes may be attached to an instance. The following values are required: name or instance_id, volume_id, and device.

salt-cloud -a attach_volume myinstance volume_id=vol-12345 device=/dev/sdb1
Show a Volume

The details about an existing volume may be retrieved.

salt-cloud -a show_volume myinstance volume_id=vol-12345
salt-cloud -f show_volume ec2 volume_id=vol-12345
Detaching Volumes

An existing volume may be detached from an instance.

salt-cloud -a detach_volume myinstance volume_id=vol-12345
Deleting Volumes

A volume that is not attached to an instance may be deleted.

salt-cloud -f delete_volume ec2 volume_id=vol-12345
Managing Key Pairs

The EC2 driver has the ability to manage key pairs.

Creating a Key Pair

A key pair is required in order to create an instance. When creating a key pair with this function, the return data will contain a copy of the private key. This private key is not stored by Amazon, will not be obtainable past this point, and should be stored immediately.

salt-cloud -f create_keypair ec2 keyname=mykeypair
Show a Key Pair

This function will show the details related to a key pair, not including the private key itself (which is not stored by Amazon).

salt-cloud -f show_keypair ec2 keyname=mykeypair
Delete a Key Pair

This function removes the key pair from Amazon.

salt-cloud -f delete_keypair ec2 keyname=mykeypair
Launching instances into a VPC
Simple launching into a VPC

In the Amazon web interface, identify the ID of the subnet into which your image should be created. Then, edit your cloud.profiles file like so:

profile-id:
  provider: provider-name
  subnetid: subnet-XXXXXXXX
  image: ami-XXXXXXXX
  size: m1.medium
  ssh_username: ubuntu
  securitygroupid:
    - sg-XXXXXXXX
Specifying interface properties

New in version 2014.7.0.

Launching into a VPC allows you to specify more complex configurations for the network interfaces of your virtual machines, for example:

profile-id:
  provider: provider-name
  image: ami-XXXXXXXX
  size: m1.medium
  ssh_username: ubuntu

  # Do not include either 'subnetid' or 'securitygroupid' here if you are
  # going to manually specify interface configuration
  #
  network_interfaces:
    - DeviceIndex: 0
      SubnetId: subnet-XXXXXXXX
      SecurityGroupId:
        - sg-XXXXXXXX

      # Uncomment this line if you would like to set an explicit private
      # IP address for the ec2 instance
      #
      # PrivateIpAddress: 192.168.1.66

      # Uncomment this to associate an existing Elastic IP Address with
      # this network interface:
      #
      # associate_eip: eni-XXXXXXXX

      # You can allocate more than one IP address to an interface. Use the
      # 'ip addr list' command to see them.
      #
      # SecondaryPrivateIpAddressCount: 2

      # Uncomment this to allocate a new Elastic IP Address to this
      # interface (will be associated with the primary private ip address
      # of the interface)
      #
      # allocate_new_eip: True

      # Uncomment this instead to allocate a new Elastic IP Address to
      # both the primary private ip address and each of the secondary ones
      #
      # allocate_new_eips: True

Note that it is an error to assign a 'subnetid' or 'securitygroupid' to a profile where the interfaces are manually configured like this. These are both really properties of each network interface, not of the machine itself.

Getting Started With GoGrid

GoGrid is a public cloud provider supporting Linux and Windows.

Dependencies
  • Libcloud >= 0.13.2
Configuration

To use Salt Cloud with GoGrid log into the GoGrid web interface and create an API key. Do this by clicking on "My Account" and then going to the API Keys tab.

The apikey and the sharedsecret configuration parameters need to be set in the configuration file to enable interfacing with GoGrid:

# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.

my-gogrid-config:
  provider: gogrid
  apikey: asdff7896asdh789
  sharedsecret: saltybacon
Profiles
Cloud Profiles

Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:

gogrid_512:
  provider: my-gogrid-config
  size: 512MB
  image: CentOS 6.2 (64-bit) w/ None

Sizes can be obtained using the --list-sizes option for the salt-cloud command:

# salt-cloud --list-sizes my-gogrid-config
my-gogrid-config:
    ----------
    gogrid:
        ----------
        512MB:
            ----------
            bandwidth:
                None
            disk:
                30
            driver:
            get_uuid:
            id:
                512MB
            name:
                512MB
            price:
                0.095
            ram:
                512
            uuid:
                bde1e4d7c3a643536e42a35142c7caac34b060e9
...SNIP...

Images can be obtained using the --list-images option for the salt-cloud command:

# salt-cloud --list-images my-gogrid-config
my-gogrid-config:
    ----------
    gogrid:
        ----------
        CentOS 6.4 (64-bit) w/ None:
            ----------
            driver:
            extra:
                ----------
            get_uuid:
            id:
                18094
            name:
                CentOS 6.4 (64-bit) w/ None
            uuid:
                bfd4055389919e01aa6261828a96cf54c8dcc2c4
...SNIP...

Getting Started With Google Compute Engine

Google Compute Engine (GCE) is Google's infrastructure-as-a-service offering, which lets you run your large-scale computing workloads on virtual machines. This document covers how to use Salt Cloud to provision and manage your virtual machines hosted within Google's infrastructure.

You can find out more about GCE and other Google Cloud Platform services at https://cloud.google.com.

Dependencies
  • Libcloud >= 0.14.0-beta3
  • PyCrypto >= 2.1
  • A Google Cloud Platform account with Compute Engine enabled
  • A registered Service Account for authorization
  • Oh, and obviously you'll need salt
Google Compute Engine Setup
  1. Sign up for Google Cloud Platform

    Go to https://cloud.google.com and use your Google account to sign up for Google Cloud Platform and complete the guided instructions.

  2. Create a Project

    Next, go to the console at https://cloud.google.com/console and create a new Project. Make sure to select your new Project if you are not automatically directed to the Project.

    Projects are a way of grouping together related users, services, and billing. You may opt to create multiple Projects and the remaining instructions will need to be completed for each Project if you wish to use GCE and Salt Cloud to manage your virtual machines.

  3. Enable the Google Compute Engine service

    In your Project, either just click Compute Engine to the left, or go to the APIs & auth section and APIs link and enable the Google Compute Engine service.

  4. Create a Service Account

    To set up authorization, navigate to APIs & auth section and then the Credentials link and click the CREATE NEW CLIENT ID button. Select Service Account and click the Create Client ID button. This will automatically download a .json file, which should be ignored. Look for a new Service Account section in the page and record the generated email address for the matching key/fingerprint. The email address will be used in the service_account_email_address of the /etc/salt/cloud file.

  5. Key Format

    In the new Service Account section, click Generate new P12 key, which will automatically download a .p12 private key file. The .p12 private key needs to be converted to a format compatible with libcloud. This new Google-generated private key was encrypted using notasecret as a passphrase. Use the following command to convert the key, and record the location of the converted private key for use in the service_account_private_key setting of the /etc/salt/cloud file:

    openssl pkcs12 -in ORIG.p12 -passin pass:notasecret \
    -nodes -nocerts | openssl rsa -out NEW.pem
    
Configuration

Set up the cloud config at /etc/salt/cloud:

# Note: This example is for /etc/salt/cloud

providers:
  gce-config:
    # Set up the Project name and Service Account authorization
    #
    project: "your-project-id"
    service_account_email_address: "123-a5gt@developer.gserviceaccount.com"
    service_account_private_key: "/path/to/your/NEW.pem"

    # Set up the location of the salt master
    #
    minion:
      master: saltmaster.example.com

    # Set up grains information, which will be common for all nodes
    # using this provider
    grains:
      node_type: broker
      release: 1.0.1

    provider: gce

Note

The value provided for project must not contain underscores or spaces and is labeled as "Project ID" on the Google Developers Console.

Cloud Profiles

Set up an initial profile at /etc/salt/cloud.profiles:

all_settings:
  image: centos-6
  size: n1-standard-1
  location: europe-west1-b
  network: default
  tags: '["one", "two", "three"]'
  metadata: '{"one": "1", "2": "two"}'
  use_persistent_disk: True
  delete_boot_pd: False
  deploy: True
  make_master: False
  provider: gce-config

The profile can now be realized with a salt command:

salt-cloud -p all_settings gce-instance

This will create a salt minion instance named gce-instance in GCE. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.

Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:

salt 'gce-instance' test.ping
GCE Specific Settings

Consult the sample profile below for more information about GCE specific settings. Some of them are mandatory and are properly labeled below but typically also include a hard-coded default.

all_settings:

  # Image is used to define what Operating System image should be used
  # for the instance.  Examples are Debian 7 (wheezy) and CentOS 6.
  #
  # MANDATORY
  #
  image: centos-6

  # A 'size', in GCE terms, refers to the instance's 'machine type'.  See
  # the on-line documentation for a complete list of GCE machine types.
  #
  # MANDATORY
  #
  size: n1-standard-1

  # A 'location', in GCE terms, refers to the instance's 'zone'.  GCE
  # has the notion of both Regions (e.g. us-central1, europe-west1, etc)
  # and Zones (e.g. us-central1-a, us-central1-b, etc).
  #
  # MANDATORY
  #
  location: europe-west1-b

  # Use this setting to define the network resource for the instance.
  # All GCE projects contain a network named 'default' but it's possible
  # to use this setting to create instances belonging to a different
  # network resource.
  #
  network: default

  # GCE supports instance/network tags and this setting allows you to
  # set custom tags.  It should be a list of strings and must be
  # parse-able by the python ast.literal_eval() function to convert it
  # to a python list.
  #
  tags: '["one", "two", "three"]'

  # GCE supports instance metadata and this setting allows you to
  # set custom metadata.  It should be a hash of key/value strings and
  # parse-able by the python ast.literal_eval() function to convert it
  # to a python dictionary.
  #
  metadata: '{"one": "1", "2": "two"}'

  # Use this setting to ensure that when new instances are created,
  # they will use a persistent disk to preserve data between instance
  # terminations and re-creations.
  #
  use_persistent_disk: True

  # In the event that you wish the boot persistent disk to be permanently
  # deleted when you destroy an instance, set delete_boot_pd to True.
  #
  delete_boot_pd: False

  # Specify whether to use public or private IP for deploy script.
  # Valid options are:
  #     private_ips - The salt-master is also hosted with GCE
  #     public_ips - The salt-master is hosted outside of GCE
  ssh_interface: public_ips

  # Per instance setting: Use a named fixed IP address for this host.
  # Valid options are:
  #     ephemeral - The host will use a GCE ephemeral IP
  #     None - No external IP will be configured on this host.
  # Optionally, pass the name of a GCE address to use a fixed IP address.
  # If the address does not already exist, it will be created.
  external_ip: "ephemeral"

GCE instances do not allow remote access to the root user by default. Instead, another user must be used to run the deploy script using sudo. Append something like this to /etc/salt/cloud.profiles:

all_settings:
    ...

    # SSH to GCE instances as gceuser
    ssh_username: gceuser

    # Use the local private SSH key file located here
    ssh_keyfile: /etc/cloud/google_compute_engine

If you have not already used this SSH key to log in to instances in this GCE project, you will also need to add the public key to your project's metadata at https://cloud.google.com/console. You could also add it via the metadata setting:

all_settings:
    ...

    metadata: '{"one": "1", "2": "two",
                "sshKeys": "gceuser:ssh-rsa <Your SSH Public Key> gceuser@host"}'
Single instance details

This action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance.

salt-cloud -a show_instance myinstance
Destroy, persistent disks, and metadata

As noted in the provider configuration, it's possible to force the boot persistent disk to be deleted when you destroy the instance. The way that this has been implemented is to use the instance metadata to record the cloud profile used when creating the instance. When destroy is called, if the instance contains a salt-cloud-profile key, its value is used to reference the matching profile to determine if delete_boot_pd is set to True.

Be aware that any GCE instances created with salt cloud will contain this custom salt-cloud-profile metadata entry.
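
For example, a profile (the name here is illustrative) that opts in to deleting the boot persistent disk, followed by the commands to create and later destroy an instance built from it:

pd_cleanup:
  image: centos-6
  size: n1-standard-1
  location: europe-west1-b
  use_persistent_disk: True
  delete_boot_pd: True
  provider: gce-config

salt-cloud -p pd_cleanup pd-instance
salt-cloud -d pd-instance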

List various resources

It's also possible to list several GCE resources similar to what can be done with other providers. The following commands can be used to list GCE zones (locations), machine types (sizes), and images.

salt-cloud --list-locations gce
salt-cloud --list-sizes gce
salt-cloud --list-images gce
Persistent Disk

The Compute Engine provider supplies functions via salt-cloud to manage your Persistent Disks. You can create and destroy disks, as well as attach and detach them from running instances.

Create

When creating a disk, you can create an empty disk and specify its size (in GB), or specify either an 'image' or 'snapshot'.

salt-cloud -f create_disk gce disk_name=pd location=us-central1-b size=200
Delete

Deleting a disk only requires the name of the disk to delete:

salt-cloud -f delete_disk gce disk_name=old-backup
Attach

Attaching a disk to an existing instance is really an 'action' and requires both an instance name and disk name. It's possible to use this action to create bootable persistent disks if necessary. Compute Engine also supports attaching a persistent disk in READ_ONLY mode to multiple instances at the same time (but then it cannot be attached in READ_WRITE to any instance).

salt-cloud -a attach_disk myinstance disk_name=pd mode=READ_WRITE boot=yes
Detach

Detaching a disk is also an action against an instance and only requires the name of the disk. Note that this does not safely sync and unmount the disk from the instance. To ensure no data loss, you must first make sure the disk is unmounted from the instance.

salt-cloud -a detach_disk myinstance disk_name=pd
Show disk

It's also possible to look up the details for an existing disk with either a function or an action.

salt-cloud -a show_disk myinstance disk_name=pd
salt-cloud -f show_disk gce disk_name=pd
Create snapshot

You can take a snapshot of an existing disk's content. The snapshot can then in turn be used to create other persistent disks. Note that to prevent data corruption, it is strongly suggested that you unmount the disk prior to taking a snapshot. You must name the snapshot and provide the name of the disk.

salt-cloud -f create_snapshot gce name=backup-20140226 disk_name=pd
Delete snapshot

You can delete a snapshot when it's no longer needed by specifying the name of the snapshot.

salt-cloud -f delete_snapshot gce name=backup-20140226
Show snapshot

Use this function to look up information about the snapshot.

salt-cloud -f show_snapshot gce name=backup-20140226
Networking

Compute Engine supports multiple private networks per project. Instances within a private network can easily communicate with each other via an internal DNS service that resolves instance names. Instances within a private network can also communicate with each other directly, without needing special routing or firewall rules, even if they span different regions/zones.

Networks also support custom firewall rules. By default, traffic between instances on the same private network is open to all ports and protocols. Inbound SSH traffic (port 22) is also allowed but all other inbound traffic is blocked.

Create network

New networks require a name and CIDR range. New instances can be created and added to this network by setting the network name during create. It is not possible to add existing instances to, or remove them from, a network.

salt-cloud -f create_network gce name=mynet cidr=10.10.10.0/24
Destroy network

Destroy a network by specifying the name. Make sure that there are no instances associated with the network prior to deleting it or you'll have a bad day.

salt-cloud -f delete_network gce name=mynet
Show network

Specify the network name to view information about the network.

salt-cloud -f show_network gce name=mynet
Create address

Create a new named static IP address in a region.

salt-cloud -f create_address gce name=my-fixed-ip region=us-central1
Delete address

Delete an existing named fixed IP address.

salt-cloud -f delete_address gce name=my-fixed-ip region=us-central1
Show address

View details on a named address.

salt-cloud -f show_address gce name=my-fixed-ip region=us-central1
Create firewall

You'll need to create custom firewall rules if you want to allow traffic other than what is described above. For instance, if you run a web service on your instances, you'll need to explicitly allow HTTP and/or SSL traffic. The firewall rule must have a name, and it will use the 'default' network unless otherwise specified with a 'network' attribute. Firewalls also support instance tags for source/destination.

salt-cloud -f create_fwrule gce name=web allow=tcp:80,tcp:443,icmp
Delete firewall

Deleting a firewall rule will prevent any previously allowed traffic for the named firewall rule.

salt-cloud -f delete_fwrule gce name=web
Show firewall

Use this function to review an existing firewall rule's information.

salt-cloud -f show_fwrule gce name=web
Load Balancer

Compute Engine possesses a load-balancer feature for splitting traffic across multiple instances. Please reference the documentation for a more complete description.

The load-balancer functionality is slightly different than that described in Google's documentation. The concept of TargetPool and ForwardingRule are consolidated in salt-cloud/libcloud. HTTP Health Checks are optional.

HTTP Health Check

HTTP Health Checks can be used as a means to toggle load-balancing across instance members, or to detect if an HTTP site is functioning. A common use-case is to set up a health check URL and if you want to toggle traffic on/off to an instance, you can temporarily have it return a non-200 response. A non-200 response to the load-balancer's health check will keep the LB from sending any new traffic to the "down" instance. Once the instance's health check URL begins returning 200-responses, the LB will again start to send traffic to it. Review Compute Engine's documentation for allowable parameters. You can use the following salt-cloud functions to manage your HTTP health checks.

salt-cloud -f create_hc gce name=myhc path=/ port=80
salt-cloud -f delete_hc gce name=myhc
salt-cloud -f show_hc gce name=myhc
Load-balancer

Creating a new load-balancer requires a name, region, port range, and list of members. There are other optional parameters for the protocol and a list of health checks. Deleting or showing details about the LB only requires the name.

salt-cloud -f create_lb gce name=lb region=... ports=80 members=w1,w2,w3
salt-cloud -f delete_lb gce name=lb
salt-cloud -f show_lb gce name=lb

You can also create a load balancer using a named fixed IP address by specifying the name of the address. If the address does not exist yet it will be created.

salt-cloud -f create_lb gce name=my-lb region=us-central1 ports=234 members=s1,s2,s3 address=my-lb-ip
Attach and Detach LB

It is possible to attach or detach an instance from an existing load-balancer. Both the instance and load-balancer must exist before using these functions.

salt-cloud -f attach_lb gce name=lb member=w4
salt-cloud -f detach_lb gce name=lb member=oops

Getting Started With HP Cloud

HP Cloud is a major public cloud platform and uses the libcloud openstack driver. The current version of OpenStack that HP Cloud uses is Havana. When an instance is booted, it must have a floating IP added to it in order to connect to it; an example below adds context to this statement.

Set up a cloud provider configuration file

To use the openstack driver for HP Cloud, set up the cloud provider configuration file as in the example shown below:

/etc/salt/cloud.providers.d/hpcloud.conf:

hpcloud-config:
  # Set the location of the salt-master
  #
  minion:
    master: saltmaster.example.com

  # Configure HP Cloud using the OpenStack plugin
  #
  identity_url: https://region-b.geo-1.identity.hpcloudsvc.com:35357/v2.0/tokens
  compute_name: Compute
  protocol: ipv4

  # Set the compute region:
  #
  compute_region: region-b.geo-1

  # Configure HP Cloud authentication credentials
  #
  user: myname
  tenant: myname-project1
  password: xxxxxxxxx

  # keys to allow connection to the instance launched
  #
  ssh_key_name: yourkey
  ssh_key_file: /path/to/key/yourkey.priv

  provider: openstack

The examples that follow use the openstack driver.

Compute Region

Originally, HP Cloud, in its OpenStack Essex version (1.0), had 3 availability zones in one region, US West (region-a.geo-1), each of which behaved as a region.

This has since changed; the current OpenStack Havana version of HP Cloud (1.1) has simplified this, and there are now two regions to choose from:

region-a.geo-1 -> US West
region-b.geo-1 -> US East
Authentication

The user is the same user as is used to log into the HP Cloud management UI. The tenant can be found in the upper left under "Project/Region/Scope". It is often named the same as user albeit with a -project1 appended. The password is of course what you created your account with. The management UI also has other information such as being able to select US East or US West.

Set up a cloud profile config file

The profile shown below is a known working profile for an Ubuntu instance. The profile configuration file is stored in the following location:

/etc/salt/cloud.profiles.d/hp_ae1_ubuntu.conf:

hp_ae1_ubuntu:
    provider: hp_ae1
    image: 9302692b-b787-4b52-a3a6-daebb79cb498
    ignore_cidr: 10.0.0.1/24
    networks:
      - floating: Ext-Net
    size: standard.small
    ssh_key_file: /root/keys/test.key
    ssh_key_name: test
    ssh_username: ubuntu

Some important things about the example above:

  • The image parameter can use either the image name or image ID, which you can obtain by running the command in the example below (in this case, US East):
# salt-cloud --list-images hp_ae1
  • The parameter ignore_cidr specifies a range of addresses to ignore when trying to connect to the instance. In this case, it's the range of IP addresses used for the private IP of the instance.
  • The parameter networks is very important to include. In previous versions of Salt Cloud, this is what made it possible for salt-cloud to be able to attach a floating IP to the instance in order to connect to the instance and set up the minion. The current version of salt-cloud doesn't require it, though having it is of no harm either. Newer versions of salt-cloud will use this, and without it, will attempt to find a list of floating IP addresses to use regardless.
  • The ssh_key_file and ssh_key_name are the keys that will make it possible to connect to the instance to set up the minion.
  • The ssh_username parameter, in this case ubuntu to match the image used, makes it possible not only to log in but also to install the minion.
Launch an instance

To instantiate a machine based on this profile (example):

# salt-cloud -p hp_ae1_ubuntu ubuntu_instance_1

After several minutes, this will create an instance named ubuntu_instance_1 running in HP Cloud in the US East region and will set up the minion and then return information about the instance once completed.

Manage the instance

Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:

# salt ubuntu_instance_1 test.ping
SSH to the instance

Additionally, the instance can be accessed via SSH using the floating IP assigned to it:

# ssh ubuntu@<floating ip>
Using a private IP

Alternatively, in the cloud profile, using the private IP to log into the instance to set up the minion is another option, particularly if salt-cloud is running within the cloud on an instance that is on the same network as all the other instances (minions).

The example below is a modified version of the previous example. Note the use of ssh_interface:

hp_ae1_ubuntu:
    provider: hp_ae1
    image: 9302692b-b787-4b52-a3a6-daebb79cb498
    size: standard.small
    ssh_key_file: /root/keys/test.key
    ssh_key_name: test
    ssh_username: ubuntu
    ssh_interface: private_ips

With this setup, salt-cloud will use the private IP address to ssh into the instance and set up the salt-minion.

Getting Started With Joyent

Joyent is a public cloud provider supporting SmartOS, Linux, FreeBSD, and Windows.

Dependencies

This driver requires the Python requests library to be installed.

Configuration

The Joyent cloud requires three configuration parameters: the user name and password that are used to log into the Joyent system, and the location of the private SSH key associated with the Joyent account. The SSH key is needed to send the provisioning commands up to the freshly created virtual machine.

# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.

my-joyent-config:
    provider: joyent
    user: fred
    password: saltybacon
    private_key: /root/mykey.pem
    keyname: mykey
Profiles
Cloud Profiles

Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:

joyent_512:
  provider: my-joyent-config
  size: Extra Small 512 MB
  image: Arch Linux 2013.06

Sizes can be obtained using the --list-sizes option for the salt-cloud command:

# salt-cloud --list-sizes my-joyent-config
my-joyent-config:
    ----------
    joyent:
        ----------
        Extra Small 512 MB:
            ----------
            default:
                false
            disk:
                15360
            id:
                Extra Small 512 MB
            memory:
                512
            name:
                Extra Small 512 MB
            swap:
                1024
            vcpus:
                1
...SNIP...

Images can be obtained using the --list-images option for the salt-cloud command:

# salt-cloud --list-images my-joyent-config
my-joyent-config:
    ----------
    joyent:
        ----------
        base:
            ----------
            description:
                A 32-bit SmartOS image with just essential packages
                installed. Ideal for users who are comfortable with setting
                up their own environment and tools.
            disabled:
                False
            files:
                ----------
                - compression:
                    bzip2
                - sha1:
                    40cdc6457c237cf6306103c74b5f45f5bf2d9bbe
                - size:
                    82492182
            name:
                base
            os:
                smartos
            owner:
                352971aa-31ba-496c-9ade-a379feaecd52
            public:
                True
...SNIP...
SmartDataCenter

This driver can also be used with the Joyent SmartDataCenter project. More details can be found at:

Using SDC requires that an api_host_suffix is set. The default value for this is .api.joyentcloud.com. All characters, including the leading ., should be included:

api_host_suffix: .api.myhostname.com
Miscellaneous Configuration

The following configuration items can be set in either provider or profile configuration files.

use_ssl

When set to True (the default), attach https:// to any URL that does not already have http:// or https:// included at the beginning. The best practice is to leave the protocol out of the URL, and use this setting to manage it.

verify_ssl

When set to True (the default), the underlying web library will verify the SSL certificate. This should only be set to False for debugging.
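
A sketch of how these might look in the provider configuration shown earlier (verification is disabled here only for debugging purposes):

my-joyent-config:
  use_ssl: True
  verify_ssl: False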

Getting Started With LXC

The LXC module is designed to install Salt in an LXC container on a controlled and possibly remote minion.

In other words, Salt will connect to a minion, then from that minion:

  • Provision and configure a container for networking access

  • Use Salt's LXC modules to deploy salt inside the container and re-attach it to the master.

Limitations
  • You can only act on one minion and one provider at a time.
  • Listing images must be targeted to a particular LXC provider (nothing will be output with all), as shown in the example below.
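
For example, listing images must name a specific LXC provider, such as the devhost10-lxc provider defined in the provider configuration section below:

salt-cloud --list-images devhost10-lxc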

Warning

On versions prior to 2015.5.2, you need to explicitly specify the network bridge.

Operation

Salt's LXC support uses lxc.init via the lxc.cloud_init_interface and seeds the minion via seed.mkconfig.

You can provide those LXC VMs with a profile and a network profile, just as if you were using the minion module directly.

Order of operation:

  • Create the LXC container on the desired minion (clone or template)
  • Change LXC config options (if any need to be changed)
  • Start container
  • Change base passwords if any
  • Change base DNS configuration if necessary
  • Wait for LXC container to be up and ready for ssh
  • Test the SSH connection and bail out on error
  • Upload deploy script and seeds, then re-attach the minion.
Provider configuration

Here is a simple provider configuration:

# Note: This example goes in /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.
devhost10-lxc:
  target: devhost10
  provider: lxc
Profile configuration

Please read LXC Management with Salt before anything else, especially the Profiles section.

Here are the options to configure your containers:

target
Host minion id to install the lxc Container into
lxc_profile
Name of the profile or inline options for the LXC vm creation/cloning, please see Container Profiles.
network_profile
Name of the profile or inline options for the LXC vm network settings, please see Network Profiles.
nic_opts

Totally optional. Per-interface, new-style configuration option mappings which will override any profile default option:

eth0: {'mac': '00:16:3e:01:29:40',
       'gateway': None, (default)
       'link': 'br0', (default)
       'netmask': '', (default)
       'ip': '22.1.4.25'}
password
password for root and sysadmin users
dnsservers
List of DNS servers to use. This is optional.
minion
minion configuration (see Minion Configuration in Salt Cloud)
bootstrap_shell
shell for the bootstrapping script (default: /bin/sh)
script
defaults to salt-bootstrap
script_args

Arguments which are given to the bootstrap script. The {0} placeholder will be replaced by the path which contains the minion config and key files, e.g.:

script_args="-c {0}"

Using profiles:

# Note: This example would go in /etc/salt/cloud.profiles or any file in the
# /etc/salt/cloud.profiles.d/ directory.
devhost10-lxc:
  provider: devhost10-lxc
  lxc_profile: foo
  network_profile: bar
  minion:
    master: 10.5.0.1
    master_port: 4506

Using inline profiles (e.g. to override the network bridge):

devhost11-lxc:
  provider: devhost10-lxc
  lxc_profile:
    clone_from: foo
  network_profile:
    eth0:
      link: lxcbr0
  minion:
    master: 10.5.0.1
    master_port: 4506

Template instead of a clone:

devhost11-lxc:
  provider: devhost10-lxc
  lxc_profile:
    template: ubuntu
  network_profile:
    eth0:
      link: lxcbr0
  minion:
    master: 10.5.0.1
    master_port: 4506

Static ip:

# Note: This example would go in /etc/salt/cloud.profiles or any file in the
# /etc/salt/cloud.profiles.d/ directory.
devhost10-lxc:
  provider: devhost10-lxc
  nic_opts:
    eth0:
      ipv4: 10.0.3.9
  minion:
    master: 10.5.0.1
    master_port: 4506

DHCP:

# Note: This example would go in /etc/salt/cloud.profiles or any file in the
# /etc/salt/cloud.profiles.d/ directory.
devhost10-lxc:
  provider: devhost10-lxc
  minion:
    master: 10.5.0.1
    master_port: 4506
Driver Support
  • Container creation
  • Image listing (LXC templates)
  • Running container information (IP addresses, etc.)

Getting Started With Linode

Linode is a public cloud provider with a focus on Linux instances.

Dependencies
  • linode-python >= 1.1.1

OR

  • Libcloud >= 0.13.2

This driver supports accessing Linode via linode-python or Apache Libcloud. Linode-python is recommended; it is more full-featured than Libcloud. In particular, using linode-python enables stopping, starting, and cloning machines.

Driver selection is automatic. If linode-python is present it will be used. If it is absent, salt-cloud will fall back to Libcloud. If neither are present salt-cloud will abort.

NOTE: linode-python 1.1.1 or later is recommended. Earlier versions of linode-python should work but leak sensitive information into the debug logs.

Linode-python can be downloaded from https://github.com/tjfontaine/linode-python or installed via pip.
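
For example, to install it with pip:

pip install linode-python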

Configuration

Linode requires a single API key, but the default root password for new instances also needs to be set:

# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.

my-linode-config:
  apikey: asldkgfakl;sdfjsjaslfjaklsdjf;askldjfaaklsjdfhasldsadfghdkf
  password: F00barbaz
  ssh_pubkey: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKHEOLLbeXgaqRQT9NBAopVz366SdYc0KKX33vAnq+2R user@host
  ssh_key_file: ~/.ssh/id_ed25519
  provider: linode

The password needs to be 8 characters and contain lowercase, uppercase, and numbers.

Profiles
Cloud Profiles

Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:

linode_1024:
  provider: my-linode-config
  size: Linode 1024
  image: Arch Linux 2013.06

Sizes can be obtained using the --list-sizes option for the salt-cloud command:

# salt-cloud --list-sizes my-linode-config
my-linode-config:
    ----------
    linode:
        ----------
        Linode 1024:
            ----------
            bandwidth:
                2000
            disk:
                49152
            driver:
            get_uuid:
            id:
                1
            name:
                Linode 1024
            price:
                20.0
            ram:
                1024
            uuid:
                03e18728ce4629e2ac07c9cbb48afffb8cb499c4
...SNIP...

Images can be obtained using the --list-images option for the salt-cloud command:

# salt-cloud --list-images my-linode-config
my-linode-config:
    ----------
    linode:
        ----------
        Arch Linux 2013.06:
            ----------
            driver:
            extra:
                ----------
                64bit:
                    1
                pvops:
                    1
            get_uuid:
            id:
                112
            name:
                Arch Linux 2013.06
            uuid:
                8457f92eaffc92b7666b6734a96ad7abe1a8a6dd
...SNIP...
Cloning

When salt-cloud accesses Linode via linode-python it can clone machines.

It is safest to clone a stopped machine. To stop a machine, run:

salt-cloud -a stop machine_to_clone

To create a new machine based on another machine, add an entry to your linode cloud profile that looks like this:

li-clone:
  provider: linode
  clonefrom: machine_to_clone
  script_args: -C

Then run salt-cloud as normal, specifying -p li-clone. The profile name can be anything; it doesn't have to be li-clone.

Clonefrom: is the name of an existing machine in Linode from which to clone. Script_args: -C is necessary to avoid re-deploying Salt via salt-bootstrap. -C will just re-deploy keys so the new minion will not have a duplicate key or minion_id on the master.

Getting Started With OpenStack

OpenStack is one of the most popular cloud projects. It's an open source project to build public and/or private clouds. You can use Salt Cloud to launch OpenStack instances.

Dependencies
  • Libcloud >= 0.13.2
Configuration
  • Using the new format, set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/openstack.conf:
my-openstack-config:
  # Set the location of the salt-master
  #
  minion:
    master: saltmaster.example.com

  # Configure the OpenStack driver
  #
  identity_url: http://identity.youopenstack.com/v2.0/tokens
  compute_name: nova
  protocol: ipv4

  compute_region: RegionOne

  # Configure Openstack authentication credentials
  #
  user: myname
  password: 123456
  # tenant is the project name
  tenant: myproject

  provider: openstack

  # skip SSL certificate validation (default false)
  insecure: false
Using nova client to get information from OpenStack

One of the best ways to get information about OpenStack is using the novaclient python package (available in pypi as python-novaclient). The client configuration is a set of environment variables that you can get from the Dashboard. Log in and then go to Project -> Access & security -> API Access and download the "OpenStack RC file". Then:

source /path/to/your/rcfile
nova credentials
nova endpoints

In the nova endpoints output you can see the information about compute_region and compute_name.

Compute Region

It depends on the OpenStack cluster that you are using. Please have a look at the previous sections.

Authentication

The user and password are the same credentials used to log into the OpenStack Dashboard.

Profiles

Here is an example of a profile:

openstack_512:
  provider: my-openstack-config
  size: m1.tiny
  image: cirros-0.3.1-x86_64-uec
  ssh_key_file: /tmp/test.pem
  ssh_key_name: test
  ssh_interface: private_ips

The following list explains some of the important properties.

size
can be one of the options listed in the output of nova flavor-list.
image
can be one of the options listed in the output of nova image-list.
ssh_key_file
The SSH private key that salt-cloud uses to SSH into the VM after it is first booted, in order to execute a command or script. This private key's corresponding public key must be the OpenStack public key inserted into the authorized_keys file of the VM's root user account.
ssh_key_name
The name of the OpenStack SSH public key that is inserted into the authorized_keys file of the VM's root user account. Prior to using this public key, you must use OpenStack commands or the Horizon web UI to load that key into the tenant's account. Note that this OpenStack tenant must be the one you defined in the cloud provider.
ssh_interface
This option allows you to create a VM without a public IP. If this option is omitted and the VM does not have a public IP, then salt-cloud waits for a certain period of time and then destroys the VM.

For more information concerning cloud profiles, see here.

change_password

If no ssh_key_file is provided, and the server already exists, change_password will use the api to change the root password of the server so that it can be bootstrapped.

change_password: True
userdata_file

Use userdata_file to specify the userdata file to upload for use with cloud-init if available.

userdata_file: /etc/salt/cloud-init/packages.yml
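
The userdata file itself is ordinary cloud-init configuration. As a minimal sketch, the packages.yml referenced above might contain something like the following (the package names are illustrative):

#cloud-config
packages:
  - git
  - htop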

Getting Started With Parallels

Parallels Cloud Server is a product by Parallels that delivers a cloud hosting solution. The PARALLELS module for Salt Cloud enables you to manage instances hosted by a provider using PCS. Further information can be found at:

http://www.parallels.com/products/pcs/

  • Using the old format, set up the cloud configuration at /etc/salt/cloud:
# Set up the location of the salt master
#
minion:
    master: saltmaster.example.com

# Set the PARALLELS access credentials (see below)
#
PARALLELS.user: myuser
PARALLELS.password: badpass

# Set the access URL for your PARALLELS provider
#
PARALLELS.url: https://api.cloud.xmission.com:4465/paci/v1.0/
  • Using the new format, set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/parallels.conf:
my-parallels-config:
  # Set up the location of the salt master
  #
  minion:
    master: saltmaster.example.com

  # Set the PARALLELS access credentials (see below)
  #
  user: myuser
  password: badpass

  # Set the access URL for your PARALLELS provider
  #
  url: https://api.cloud.xmission.com:4465/paci/v1.0/
  provider: parallels
Access Credentials

The user, password, and url will be provided to you by your cloud provider. These are all required in order for the PARALLELS driver to work.

Cloud Profiles

Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/parallels.conf:

  • Using the old cloud configuration format:
parallels-ubuntu:
    provider: parallels
    image: ubuntu-12.04-x86_64
  • Using the new cloud configuration format and the cloud configuration example from above:
parallels-ubuntu:
    provider: my-parallels-config
    image: ubuntu-12.04-x86_64

The profile can be realized now with a salt command:

# salt-cloud -p parallels-ubuntu myubuntu

This will create an instance named myubuntu on the cloud provider. The minion that is installed on this instance will have an id of myubuntu. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.

Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:

# salt myubuntu test.ping
Required Settings

The following settings are always required for PARALLELS:

  • Using the old cloud configuration format:
PARALLELS.user: myuser
PARALLELS.password: badpass
PARALLELS.url: https://api.cloud.xmission.com:4465/paci/v1.0/
  • Using the new cloud configuration format:
my-parallels-config:
  user: myuser
  password: badpass
  url: https://api.cloud.xmission.com:4465/paci/v1.0/
  provider: parallels
Optional Settings

Unlike other cloud providers in Salt Cloud, Parallels does not utilize a size setting. This is because Parallels allows the end-user to specify a more detailed configuration for their instances than is allowed by many other cloud providers. The following options are available to be used in a profile, with their default settings listed.

# Description of the instance. Defaults to the instance name.
desc: <instance_name>

# How many CPU cores, and how fast they are (in MHz)
cpu_number: 1
cpu_power: 1000

# How many megabytes of RAM
ram: 256

# Bandwidth available, in kbps
bandwidth: 100

# How many public IPs will be assigned to this instance
ip_num: 1

# Size of the instance disk (in GiB)
disk_size: 10

# Username and password
ssh_username: root
password: <value from PARALLELS.password>

# The name of the image, from ``salt-cloud --list-images parallels``
image: ubuntu-12.04-x86_64

Getting Started With Proxmox

Proxmox Virtual Environment is a complete server virtualization management solution, based on KVM virtualization and OpenVZ containers. Further information can be found at:

http://www.proxmox.org/

Dependencies
  • IPy >= 0.81
  • requests >= 2.2.1

Please note: This module allows you to create both OpenVZ containers and KVM virtual machines, but Salt will only be installed when the VM is an OpenVZ container rather than a KVM virtual machine.

  • Set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/proxmox.conf:
my-proxmox-config:
  # Set up the location of the salt master
  #
  minion:
    master: saltmaster.example.com

  # Set the PROXMOX access credentials (see below)
  #
  user: myuser@pve
  password: badpass

  # Set the access URL for your PROXMOX provider
  #
  url: your.proxmox.host
  provider: proxmox
Access Credentials

The user, password, and url will be provided to you by your cloud provider. These are all required in order for the PROXMOX driver to work.

Cloud Profiles

Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/proxmox.conf:

  • Configure a profile to be used:
proxmox-ubuntu:
    provider: proxmox
    image: local:vztmpl/ubuntu-12.04-standard_12.04-1_amd64.tar.gz
    technology: openvz
    host: myvmhost
    ip_address: 192.168.100.155
    password: topsecret

The profile can be realized now with a salt command:

# salt-cloud -p proxmox-ubuntu myubuntu

This will create an instance named myubuntu on the cloud provider. The minion that is installed on this instance will have a hostname of myubuntu. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.

Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:

# salt myubuntu test.ping
Required Settings

The following settings are always required for PROXMOX:

  • Using the new cloud configuration format:
my-proxmox-config:
  provider: proxmox
  user: saltcloud@pve
  password: xyzzy
  url: your.proxmox.host
Optional Settings

Unlike other cloud providers in Salt Cloud, Proxmox does not utilize a size setting. This is because Proxmox allows the end-user to specify a more detailed configuration for their instances than is allowed by many other cloud providers. The following options are available to be used in a profile, with their default settings listed.

# Description of the instance.
desc: <instance_name>

# How many CPU cores, and how fast they are (in MHz)
cpus: 1
cpuunits: 1000

# How many megabytes of RAM
memory: 256

# How much swap space in MB
swap: 256

# Whether to auto boot the vm after the host reboots
onboot: 1

# Size of the instance disk (in GiB)
disk: 10

# Host to create this vm on
host: myvmhost

# Nameservers. Defaults to host
nameserver: 8.8.8.8 8.8.4.4

# Username and password
ssh_username: root
password: <value from PROXMOX.password>

# The name of the image, from ``salt-cloud --list-images proxmox``
image: local:vztmpl/ubuntu-12.04-standard_12.04-1_amd64.tar.gz

Getting Started With Rackspace

Rackspace is a major public cloud platform which may be configured using either the rackspace or the openstack driver, depending on your needs.

Please note that the rackspace driver is only intended for 1st gen instances, aka, "the old cloud" at Rackspace. It is required for 1st gen instances, but will not work with OpenStack-based instances. Unless you explicitly have a reason to use it, it is highly recommended that you use the openstack driver instead.

Dependencies
  • Libcloud >= 0.13.2
Configuration
To use the openstack driver (recommended), set up the cloud configuration at
/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/rackspace.conf:
my-rackspace-config:
  # Set the location of the salt-master
  #
  minion:
    master: saltmaster.example.com

  # Configure Rackspace using the OpenStack plugin
  #
  identity_url: 'https://identity.api.rackspacecloud.com/v2.0/tokens'
  compute_name: cloudServersOpenStack
  protocol: ipv4

  # Set the compute region:
  #
  compute_region: DFW

  # Configure Rackspace authentication credentials
  #
  user: myname
  tenant: 123456
  apikey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

  provider: openstack
To use the rackspace driver, set up the cloud configuration at
/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/rackspace.conf:
my-rackspace-config:
  provider: rackspace
  # The Rackspace login user
  user: fred
  # The Rackspace user's apikey
  apikey: 901d3f579h23c8v73q9

The settings that follow are for using Rackspace with the openstack driver, and will not work with the rackspace driver.

Compute Region

Rackspace currently has six compute regions which may be used:

DFW -> Dallas/Fort Worth
ORD -> Chicago
SYD -> Sydney
LON -> London
IAD -> Northern Virginia
HKG -> Hong Kong

Note: Currently the LON region is only available with a UK account, and UK accounts cannot access other regions.

Authentication

The user is the same user as is used to log into the Rackspace Control Panel. The tenant and apikey can be found in the API Keys area of the Control Panel. The apikey will be labeled as API Key (and may need to be generated), and tenant will be labeled as Cloud Account Number.

An initial profile can be configured in /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/rackspace.conf:

openstack_512:
    provider: my-rackspace-config
    size: 512 MB Standard
    image: Ubuntu 12.04 LTS (Precise Pangolin)

To instantiate a machine based on this profile:

# salt-cloud -p openstack_512 myinstance

This will create a virtual machine at Rackspace with the name myinstance. This operation may take several minutes to complete, depending on the current load at the Rackspace data center.

Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:

# salt myinstance test.ping
RackConnect Environments

Rackspace offers a hybrid hosting configuration option called RackConnect that allows you to use a physical firewall appliance with your cloud servers. When this service is in use, the public_ip assigned by nova will be replaced by a NAT IP on the firewall. For salt-cloud to work properly it must use the newly assigned "access IP" instead of the nova-assigned public IP. You can enable that capability by adding this to your profiles:

openstack_512:
    provider: my-openstack-config
    size: 512 MB Standard
    image: Ubuntu 12.04 LTS (Precise Pangolin)
    rackconnect: True
Managed Cloud Environments

Rackspace offers a managed service level of hosting. As part of the managed service level you have the ability to choose from base or LAMP installations on cloud server images. The post-build process for both the base and the LAMP installations uses Chef to install things such as the cloud monitoring agent and the cloud backup agent. It also takes care of installing the LAMP stack if selected. In order to prevent the post-installation process from stomping on the bootstrapping, you can add the below to your profiles.

openstack_512:
    provider: my-rackspace-config
    size: 512 MB Standard
    image: Ubuntu 12.04 LTS (Precise Pangolin)
    managedcloud: True
First and Next Generation Images

Rackspace provides two sets of virtual machine images: first and next generation. As of 0.8.9, salt-cloud will default to using the next generation images. To force the use of first generation images, add the following to the profile configuration:

FreeBSD-9.0-512:
  provider: my-rackspace-config
  size: 512 MB Standard
  image: FreeBSD 9.0
  force_first_gen: True
Private Subnets

By default salt-cloud will not add Rackspace private networks to new servers. To attach a private network to a server instantiated by salt-cloud, add the following section to the provider file (typically /etc/salt/cloud.providers.d/rackspace.conf):

networks:
  - fixed:
    # This is the private network
    - private-network-id
    # This is Rackspace's "PublicNet"
    - 00000000-0000-0000-0000-000000000000
    # This is Rackspace's "ServiceNet"
    - 11111111-1111-1111-1111-111111111111

To get the Rackspace private network ID, go to Networking, Networks and hover over the private network name.

The order of the networks in the above code block does not map to the order of the ethernet devices on newly created servers. The public IP will always be first (eth0), followed by ServiceNet (eth1), and then private networks.

Enabling the private network per above gives the option of using the private subnet for all master-minion communication, including the bootstrap install of salt-minion. To enable the minion to use the private subnet, update the master: line in the minion: section of the providers file. To configure the master to only listen on the private subnet IP, update the interface: line in the /etc/salt/master file to be the private subnet IP of the salt master.
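
As a sketch, assuming the master's private subnet IP is 192.168.100.10 (a hypothetical address), the two changes look like this:

# In the providers file (e.g. /etc/salt/cloud.providers.d/rackspace.conf):
minion:
  master: 192.168.100.10

# In /etc/salt/master on the salt master:
interface: 192.168.100.10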

Getting Started With Scaleway

Scaleway is the first IaaS provider worldwide to offer an ARM-based cloud. It's the ideal platform for horizontal scaling with BareMetal SSD servers. The solution provides on-demand resources: SSD storage, movable IPs, images, security groups, and an Object Storage solution. https://scaleway.com

Configuration

Using Salt for Scaleway requires an access key and an API token. API tokens are unique identifiers associated with your Scaleway account. To retrieve your access key and API token, log in to the Scaleway control panel, open the pull-down menu on your account name, and click on the "My Credentials" link.

If you do not have an API token, you can create one by clicking the "Create New Token" button in the right corner.

# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.

my-scaleway-config:
  access_key: 15cf404d-4560-41b1-9a0c-21c3d5c4ff1f
  token: a7347ec8-5de1-4024-a5e3-24b77d1ba91d
  provider: scaleway
Profiles
Cloud Profiles

Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:

scaleway-ubuntu:
    provider: my-scaleway-config
    image: Ubuntu Trusty (14.04 LTS)

Images can be obtained using the --list-images option for the salt-cloud command:

# salt-cloud --list-images my-scaleway-config
my-scaleway-config:
  ----------
  scaleway:
      ----------
      069fd876-eb04-44ab-a9cd-47e2fa3e5309:
          ----------
          arch:
              arm
          creation_date:
              2015-03-12T09:35:45.764477+00:00
          default_bootscript:
              {u'kernel': {u'dtb': u'', u'title': u'Pimouss 3.2.34-30-std', u'id': u'cfda4308-cd6f-4e51-9744-905fc0da370f', u'path': u'kernel/pimouss-uImage-3.2.34-30-std'}, u'title': u'3.2.34-std #30 (stable)', u'id': u'c5af0215-2516-4316-befc-5da1cfad609c', u'initrd': {u'path': u'initrd/c1-uInitrd', u'id': u'1be14b1b-e24c-48e5-b0b6-7ba452e42b92', u'title': u'C1 initrd'}, u'bootcmdargs': {u'id': u'd22c4dde-e5a4-47ad-abb9-d23b54d542ff', u'value': u'ip=dhcp boot=local root=/dev/nbd0 USE_XNBD=1 nbd.max_parts=8'}, u'organization': u'11111111-1111-4111-8111-111111111111', u'public': True}
          extra_volumes:
              []
          id:
              069fd876-eb04-44ab-a9cd-47e2fa3e5309
          modification_date:
              2015-04-24T12:02:16.820256+00:00
          name:
              Ubuntu Vivid (15.04)
          organization:
              a283af0b-d13e-42e1-a43f-855ffbf281ab
          public:
              True
          root_volume:
              {u'name': u'distrib-ubuntu-vivid-2015-03-12_10:32-snapshot', u'id': u'a6d02e63-8dee-4bce-b627-b21730f35a05', u'volume_type': u'l_ssd', u'size': 50000000000L}
...

Execute a query and return all information about the nodes running on configured cloud providers using the -F option for the salt-cloud command:

# salt-cloud -F
[INFO    ] salt-cloud starting
[INFO    ] Starting new HTTPS connection (1): api.scaleway.com
my-scaleway-config:
  ----------
  scaleway:
      ----------
      salt-manager:
          ----------
          creation_date:
              2015-06-03T08:17:38.818068+00:00
          hostname:
              salt-manager
...

Note

Additional documentation about Scaleway can be found at https://www.scaleway.com/docs.

Getting Started With SoftLayer

SoftLayer is a public cloud provider and a baremetal hardware hosting provider.

Dependencies

The SoftLayer driver for Salt Cloud requires the softlayer package, which is available at PyPI:

https://pypi.python.org/pypi/SoftLayer

This package can be installed using pip or easy_install:

# pip install softlayer
# easy_install softlayer
Configuration

Set up the cloud config at /etc/salt/cloud.providers:

# Note: These examples are for /etc/salt/cloud.providers

  my-softlayer:
    # Set up the location of the salt master
    minion:
      master: saltmaster.example.com

    # Set the SoftLayer access credentials (see below)
    user: MYUSER1138
    apikey: 'e3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9'

    provider: softlayer


  my-softlayer-hw:
    # Set up the location of the salt master
    minion:
      master: saltmaster.example.com

    # Set the SoftLayer access credentials (see below)
    user: MYUSER1138
    apikey: 'e3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9'

    provider: softlayer_hw
Access Credentials

The user setting is the same user as is used to log into the SoftLayer Administration area. The apikey setting is found inside the Admin area after logging in:

  • Hover over the Administrative menu item.
  • Click the API Access link.
  • The apikey is located next to the user setting.
Profiles
Cloud Profiles

Set up an initial profile at /etc/salt/cloud.profiles:

base_softlayer_ubuntu:
  provider: my-softlayer
  image: UBUNTU_LATEST
  cpu_number: 1
  ram: 1024
  disk_size: 100
  local_disk: True
  hourly_billing: True
  domain: example.com
  location: sjc01
  # Optional
  max_net_speed: 1000
  private_vlan: 396
  private_network: True
  private_ssh: True
  # May be used _instead_of_ image
  global_identifier: 320d8be5-46c0-dead-cafe-13e3c51

Most of the above items are required; optional items are specified below.

image

Images to build an instance can be found using the --list-images option:

# salt-cloud --list-images my-softlayer

The setting used will be labeled as template.

cpu_number

This is the number of CPU cores that will be used for this instance. This number may be dependent upon the image that is used. For instance:

Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (1 - 4 Core):
    ----------
    name:
        Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (1 - 4 Core)
    template:
        REDHAT_6_64
Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (5 - 100 Core):
    ----------
    name:
        Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (5 - 100 Core)
    template:
        REDHAT_6_64

Note that the template (meaning, the image option) for both of these is the same, but the names suggest how many CPU cores are supported.

ram

This is the amount of memory, in megabytes, that will be allocated to this instance.

disk_size

The amount of disk space that will be allocated to this image, in megabytes.

local_disk

When true, the disks for the computing instance will be provisioned on the host on which it runs; otherwise, SAN disks will be provisioned.

hourly_billing

When true, the computing instance will be billed on hourly usage; otherwise, it will be billed on a monthly basis.

domain

The domain name that will be used in the FQDN (Fully Qualified Domain Name) for this instance. The domain setting will be used in conjunction with the instance name to form the FQDN.

location

Locations to build an instance can be found using the --list-locations option:

# salt-cloud --list-locations my-softlayer
max_net_speed

Specifies the connection speed for the instance's network components. This setting is optional. By default, this is set to 10.

public_vlan

If it is necessary for an instance to be created within a specific frontend VLAN, the ID for that VLAN can be specified in either the provider or profile configuration.

This ID can be queried using the list_vlans function, as described below. This setting is optional.

private_vlan

If it is necessary for an instance to be created within a specific backend VLAN, the ID for that VLAN can be specified in either the provider or profile configuration.

This ID can be queried using the list_vlans function, as described below. This setting is optional.

private_network

If a server is to only be used internally, meaning it does not have a public VLAN associated with it, this value would be set to True. This setting is optional. The default is False.

private_ssh

Whether to run the deploy script on the server using the public IP address or the private IP address. If set to True, Salt Cloud will attempt to SSH into the new server using the private IP address. The default is False. This setting is optional.

global_identifier

When creating an instance using a custom template, this option is set to the corresponding value obtained using the list_custom_images function. This option will not be used if an image is set, and if an image is not set, it is required.

The profile can be realized now with a salt command:

# salt-cloud -p base_softlayer_ubuntu myserver

Using the above configuration, this will create myserver.example.com.

Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:

# salt 'myserver.example.com' test.ping
Cloud Profiles

Set up an initial profile at /etc/salt/cloud.profiles:

base_softlayer_hw_centos:
  provider: my-softlayer-hw
  # CentOS 6.0 - Minimal Install (64 bit)
  image: 13963
  # 2 x 2.0 GHz Core Bare Metal Instance - 2 GB Ram
  size: 1921
  # 250GB SATA II
  hdd: 19
  # San Jose 01
  location: 168642
  domain: example.com
  # Optional
  vlan: 396
  port_speed: 273
  bandwidth: 248

Most of the above items are required; optional items are specified below.

image

Images to build an instance can be found using the --list-images option:

# salt-cloud --list-images my-softlayer-hw

A list of ids and names will be provided. The name will describe the operating system and architecture. The id will be the setting to be used in the profile.

size

Sizes to build an instance can be found using the --list-sizes option:

# salt-cloud --list-sizes my-softlayer-hw

A list of ids and names will be provided. The name will describe the speed and quantity of CPU cores, and the amount of memory that the hardware will contain. The id will be the setting to be used in the profile.

hdd

There are currently two sizes of hard disk drive (HDD) that are available for hardware instances on SoftLayer:

19: 250GB SATA II
1267: 500GB SATA II

The hdd setting in the profile will be either 19 or 1267. Other sizes may be added in the future.

location

Locations to build an instance can be found using the --list-locations option:

# salt-cloud --list-locations my-softlayer-hw

A list of ids and names will be provided. The name will describe the location in human terms. The id will be the setting to be used in the profile.

domain

The domain name that will be used in the FQDN (Fully Qualified Domain Name) for this instance. The domain setting will be used in conjunction with the instance name to form the FQDN.

vlan

If it is necessary for an instance to be created within a specific VLAN, the ID for that VLAN can be specified in either the provider or profile configuration.

This ID can be queried using the list_vlans function, as described below.

port_speed

Specifies the speed for the instance's network port. This setting refers to an ID within the SoftLayer API, which sets the port speed. This setting is optional. The default is 273, or, 100 Mbps Public & Private Networks. The following settings are available:

  • 273: 100 Mbps Public & Private Networks
  • 274: 1 Gbps Public & Private Networks
  • 21509: 10 Mbps Dual Public & Private Networks (up to 20 Mbps)
  • 21513: 100 Mbps Dual Public & Private Networks (up to 200 Mbps)
  • 2314: 1 Gbps Dual Public & Private Networks (up to 2 Gbps)
  • 272: 10 Mbps Public & Private Networks
bandwidth

Specifies the network bandwidth available for the instance. This setting refers to an ID within the SoftLayer API, which sets the bandwidth. This setting is optional. The default is 248, or, 5000 GB Bandwidth. The following settings are available:

  • 248: 5000 GB Bandwidth
  • 129: 6000 GB Bandwidth
  • 130: 8000 GB Bandwidth
  • 131: 10000 GB Bandwidth
  • 36: Unlimited Bandwidth (10 Mbps Uplink)
  • 125: Unlimited Bandwidth (100 Mbps Uplink)
Actions

The following actions are currently supported by the SoftLayer Salt Cloud driver.

show_instance

This action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance.

$ salt-cloud -a show_instance myinstance
Functions

The following functions are currently supported by the SoftLayer Salt Cloud driver.

list_vlans

This function lists all VLANs associated with the account, and all known data from the SoftLayer API concerning those VLANs.

$ salt-cloud -f list_vlans my-softlayer
$ salt-cloud -f list_vlans my-softlayer-hw

The id returned in this list is necessary for the vlan option when creating an instance.

list_custom_images

This function lists any custom templates associated with the account that can be used to create a new instance.

$ salt-cloud -f list_custom_images my-softlayer

The globalIdentifier returned in this list is necessary for the global_identifier option when creating an image using a custom template.
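
For example, a profile built from a custom template might look like the following sketch, which reuses the global_identifier value from the earlier profile example:

base_softlayer_custom:
  provider: my-softlayer
  cpu_number: 1
  ram: 1024
  disk_size: 100
  local_disk: True
  hourly_billing: True
  domain: example.com
  location: sjc01
  # Used instead of image
  global_identifier: 320d8be5-46c0-dead-cafe-13e3c51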

Optional Products for SoftLayer HW

The softlayer_hw provider supports the ability to add optional products, which are supported by SoftLayer's API. These products each have an ID associated with them that can be passed into Salt Cloud with the optional_products option:

softlayer_hw_test:
  provider: my-softlayer-hw
  # CentOS 6.0 - Minimal Install (64 bit)
  image: 13963
  # 2 x 2.0 GHz Core Bare Metal Instance - 2 GB Ram
  size: 1921
  # 250GB SATA II
  hdd: 19
  # San Jose 01
  location: 168642
  domain: example.com
  optional_products:
    # MySQL for Linux
    - id: 28
    # Business Continuance Insurance
    - id: 104

These values can be manually obtained by looking at the source of an order page on the SoftLayer web interface. For convenience, many of these values are listed here:

Public Secondary IP Addresses
  • 22: 4 Public IP Addresses
  • 23: 8 Public IP Addresses
Primary IPv6 Addresses
  • 17129: 1 IPv6 Address
Public Static IPv6 Addresses
  • 1481: /64 Block Static Public IPv6 Addresses
OS-Specific Addon
  • 17139: XenServer Advanced for XenServer 6.x
  • 17141: XenServer Enterprise for XenServer 6.x
  • 2334: XenServer Advanced for XenServer 5.6
  • 2335: XenServer Enterprise for XenServer 5.6
  • 13915: Microsoft WebMatrix
  • 21276: VMware vCenter 5.1 Standard
Control Panel Software
  • 121: cPanel/WHM with Fantastico and RVskin
  • 20778: Parallels Plesk Panel 11 (Linux) 100 Domain w/ Power Pack
  • 20786: Parallels Plesk Panel 11 (Windows) 100 Domain w/ Power Pack
  • 20787: Parallels Plesk Panel 11 (Linux) Unlimited Domain w/ Power Pack
  • 20792: Parallels Plesk Panel 11 (Windows) Unlimited Domain w/ Power Pack
  • 2340: Parallels Plesk Panel 10 (Linux) 100 Domain w/ Power Pack
  • 2339: Parallels Plesk Panel 10 (Linux) Unlimited Domain w/ Power Pack
  • 13704: Parallels Plesk Panel 10 (Windows) Unlimited Domain w/ Power Pack
Database Software
  • 29: MySQL 5.0 for Windows
  • 28: MySQL for Linux
  • 21501: Riak 1.x
  • 20893: MongoDB
  • 30: Microsoft SQL Server 2005 Express
  • 92: Microsoft SQL Server 2005 Workgroup
  • 90: Microsoft SQL Server 2005 Standard
  • 94: Microsoft SQL Server 2005 Enterprise
  • 1330: Microsoft SQL Server 2008 Express
  • 1340: Microsoft SQL Server 2008 Web
  • 1337: Microsoft SQL Server 2008 Workgroup
  • 1334: Microsoft SQL Server 2008 Standard
  • 1331: Microsoft SQL Server 2008 Enterprise
  • 2179: Microsoft SQL Server 2008 Express R2
  • 2173: Microsoft SQL Server 2008 Web R2
  • 2183: Microsoft SQL Server 2008 Workgroup R2
  • 2180: Microsoft SQL Server 2008 Standard R2
  • 2176: Microsoft SQL Server 2008 Enterprise R2
Anti-Virus & Spyware Protection
  • 594: McAfee VirusScan Anti-Virus - Windows
  • 414: McAfee Total Protection - Windows
Insurance
  • 104: Business Continuance Insurance
Monitoring
  • 55: Host Ping
  • 56: Host Ping and TCP Service Monitoring
Notification
  • 57: Email and Ticket
Advanced Monitoring
  • 2302: Monitoring Package - Basic
  • 2303: Monitoring Package - Advanced
  • 2304: Monitoring Package - Premium Application
Response
  • 58: Automated Notification
  • 59: Automated Reboot from Monitoring
  • 60: 24x7x365 NOC Monitoring, Notification, and Response
Intrusion Detection & Protection
  • 413: McAfee Host Intrusion Protection w/Reporting
Hardware & Software Firewalls
  • 411: APF Software Firewall for Linux
  • 894: Microsoft Windows Firewall
  • 410: 10Mbps Hardware Firewall
  • 409: 100Mbps Hardware Firewall
  • 408: 1000Mbps Hardware Firewall

Getting Started with VEXXHOST

VEXXHOST is a Canadian cloud computing provider based in Montreal which uses the libcloud OpenStack driver. VEXXHOST currently runs the Havana release of OpenStack. When provisioning new instances, they automatically get a public IP and private IP address. Therefore, you do not need to assign a floating IP to access your instance once it's booted.

Cloud Provider Configuration

To use the openstack driver for the VEXXHOST public cloud, you will need to set up the cloud provider configuration file as in the example below:

/etc/salt/cloud.providers.d/vexxhost.conf:

vexxhost:
  # Set the location of the salt-master
  #
  minion:
    master: saltmaster.example.com

  # Configure VEXXHOST using the OpenStack plugin
  #
  identity_url: http://auth.api.thenebulacloud.com:5000/v2.0/tokens
  compute_name: nova

  # Set the compute region:
  #
  compute_region: na-yul-nhs1

  # Configure VEXXHOST authentication credentials
  #
  user: your-tenant-id
  password: your-api-key
  tenant: your-tenant-name

  # keys to allow connection to the instance launched
  #
  ssh_key_name: yourkey
  ssh_key_file: /path/to/key/yourkey.priv

  provider: openstack
Authentication

All of the authentication fields that you need can be found by logging into your VEXXHOST customer center. Once you've logged in, you will need to click on "CloudConsole" and then click on "API Credentials".

Cloud Profile Configuration

In order to get the correct image UUID and the instance type to use in the cloud profile, you can run the following commands, respectively:

# salt-cloud --list-images vexxhost
# salt-cloud --list-sizes vexxhost

Once you have that, you can go ahead and create a new cloud profile. This profile will build an Ubuntu 12.04 LTS nb.2G instance.

/etc/salt/cloud.profiles.d/vh_ubuntu1204_2G.conf:

vh_ubuntu1204_2G:
    provider: vexxhost
    image: 4051139f-750d-4d72-8ef0-074f2ccc7e5a
    size: nb.2G
Provision an instance

To create an instance based on the sample profile that we created above, you can run the following salt-cloud command.

# salt-cloud -p vh_ubuntu1204_2G vh_instance1

Typically, instances are provisioned in under 30 seconds on the VEXXHOST public cloud. After the instance provisions, it will be set up as a minion and then return all the instance information once it's complete.

Once the instance has been set up, you can test connectivity to it by running the following command:

# salt vh_instance1 test.ping

You can now continue to provision new instances and they will all automatically be set up as minions of the master you've defined in the configuration file.

Getting Started With VMware

New in version Beryllium.

Author: Nitin Madhok <nmadhok@clemson.edu>

The VMware cloud module allows you to manage VMware ESX, ESXi, and vCenter.

Dependencies

The vmware module for Salt Cloud requires the pyVmomi package, which is available at PyPI:

https://pypi.python.org/pypi/pyvmomi

This package can be installed using pip or easy_install:

pip install pyvmomi
easy_install pyvmomi
Configuration

The VMware cloud module needs the vCenter URL, username and password to be set up in the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/vmware.conf:

my-vmware-config:
  provider: vmware
  user: "DOMAIN\user"
  password: "verybadpass"
  url: "vcenter01.domain.com"

vmware-vcenter02:
  provider: vmware
  user: "DOMAIN\user"
  password: "verybadpass"
  url: "vcenter02.domain.com"

vmware-vcenter03:
  provider: vmware
  user: "DOMAIN\user"
  password: "verybadpass"
  url: "vcenter03.domain.com"
  protocol: "http"
  port: 80

Note

Optionally, protocol and port can be specified if the vCenter server is not using the defaults. Default is protocol: https and port: 443.

Profiles

Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/vmware.conf:

vmware-centos6.5:
  provider: vmware-vcenter01
  clonefrom: test-vm

  ## Optional arguments
  num_cpus: 4
  memory: 8192
  devices:
    cd:
      CD/DVD drive 1:
        device_type: datastore_iso_file
        iso_path: "[nap004-1] vmimages/tools-isoimages/linux.iso"
      CD/DVD drive 2:
        device_type: client_device
        mode: atapi
      CD/DVD drive 3:
        device_type: client_device
        mode: passthrough
    disk:
      Hard disk 1:
        size: 30
      Hard disk 2:
        size: 20
      Hard disk 3:
        size: 5
    network:
      Network adapter 1:
        name: 10.20.30-400-Test
        switch_type: standard
        ip: 10.20.30.123
        gateway: [10.20.30.110]
        subnet_mask: 255.255.255.128
        domain: mycompany.com
      Network adapter 2:
        name: 10.30.40-500-Dev-DHCP
        adapter_type: e1000
        switch_type: distributed
      Network adapter 3:
        name: 10.40.50-600-Prod
        adapter_type: vmxnet3
        switch_type: distributed
        ip: 10.40.50.123
        gateway: [10.40.50.110]
        subnet_mask: 255.255.255.128
        domain: mycompany.com
    scsi:
      SCSI controller 1:
        type: lsilogic
      SCSI controller 2:
        type: lsilogic_sas
        bus_sharing: virtual
      SCSI controller 3:
        type: paravirtual
        bus_sharing: physical

  domain: mycompany.com
  dns_servers:
    - 123.127.255.240
    - 123.127.255.241
    - 123.127.255.242

  # If cloning from template, either resourcepool or cluster MUST be specified!
  resourcepool: Resources
  cluster: Prod

  datastore: HUGE-DATASTORE-Cluster
  folder: Development
  datacenter: DC1
  host: c4212n-002.domain.com
  template: False
  power_on: True
  extra_config:
    mem.hotadd: 'yes'
    guestinfo.foo: bar
    guestinfo.domain: foobar.com
    guestinfo.customVariable: customValue

  deploy: True
  private_key: /root/.ssh/mykey.pem
  ssh_username: cloud-user
  password: veryVeryBadPassword
  minion:
    master: 123.127.193.105

  file_map:
    /path/to/local/custom/script: /path/to/remote/script
    /path/to/local/file: /path/to/remote/file
    /srv/salt/yum/epel.repo: /etc/yum.repos.d/epel.repo
provider
Enter the name that was specified when the cloud provider config was created.
clonefrom
Enter the name of the VM/template to clone from.
num_cpus
Enter the number of vCPUS you want the VM/template to have. If not specified, the current VM/template's vCPU count is used.
memory
Enter memory (in MB) you want the VM/template to have. If not specified, the current VM/template's memory size is used.
devices

Enter the device specifications here. Currently, the following devices can be created or reconfigured:

cd

Enter the CD/DVD drive specification here. If the CD/DVD drive doesn't exist, it will be created with the specified configuration. If the CD/DVD drive already exists, it will be reconfigured with the specifications. The following options can be specified per CD/DVD drive:

device_type
Specify how the CD/DVD drive should be used. Currently supported types are client_device and datastore_iso_file. Default is device_type: client_device
iso_path
Enter the path to the iso file present on the datastore only if device_type: datastore_iso_file. The syntax to specify this is iso_path: "[datastoreName] vmimages/tools-isoimages/linux.iso". This field is ignored if device_type: client_device
mode
Enter the mode of connection only if device_type: client_device. Currently supported modes are passthrough and atapi. This field is ignored if device_type: datastore_iso_file. Default is mode: passthrough
disk
Enter the disk specification here. If the hard disk doesn't exist, it will be created with the provided size. If the hard disk already exists, it will be expanded if the provided size is greater than the current size of the disk.
network

Enter the network adapter specification here. If the network adapter doesn't exist, a new network adapter will be created with the specified network name, type and other configuration. If the network adapter already exists, it will be reconfigured with the specifications. The following additional options can be specified per network adapter (See example above):

name
Enter the network name you want the network adapter to be mapped to.
adapter_type
Enter the network adapter type you want to create. Currently supported types are vmxnet, vmxnet2, vmxnet3, e1000 and e1000e. If no type is specified, by default vmxnet3 will be used.
switch_type
Enter the type of switch to use. This decides whether to use a standard switch network or a distributed virtual portgroup. Currently supported types are standard for standard portgroups and distributed for distributed virtual portgroups.
ip
Enter the static IP you want the network adapter to be mapped to. If the network specified is DHCP enabled, you do not have to specify this.
gateway
Enter the gateway for the network as a list. If the network specified is DHCP enabled, you do not have to specify this.
subnet_mask
Enter the subnet mask for the network. If the network specified is DHCP enabled, you do not have to specify this.
domain
Enter the domain to be used with the network adapter. If the network specified is DHCP enabled, you do not have to specify this.
scsi

Enter the SCSI adapter specification here. If the SCSI adapter doesn't exist, a new SCSI adapter will be created of the specified type. If the SCSI adapter already exists, it will be reconfigured with the specifications. The following additional options can be specified per SCSI adapter:

type
Enter the SCSI adapter type you want to create. Currently supported types are lsilogic, lsilogic_sas and paravirtual. Type must be specified when creating a new SCSI adapter.
bus_sharing

Specify this if sharing of virtual disks between virtual machines is desired. The following can be specified:

virtual
Virtual disks can be shared between virtual machines on the same server.
physical
Virtual disks can be shared between virtual machines on any server.
no
Virtual disks cannot be shared between virtual machines.
domain
Enter the global domain name to be used for DNS. If not specified and if the VM name is a FQDN, domain is set to the domain from the VM name. Default is local.
dns_servers
Enter the list of DNS servers to use in order of priority.
resourcepool

Enter the name of the resourcepool to which the new virtual machine should be attached. This determines what compute resources will be available to the clone.

Note

  • For a clone operation from a virtual machine, it will use the same resourcepool as the original virtual machine unless specified.
  • For a clone operation from a template to a virtual machine, specifying either this or cluster is required. If both are specified, the resourcepool value will be used.
  • For a clone operation to a template, this argument is ignored.
cluster

Enter the name of the cluster whose resource pool the new virtual machine should be attached to.

Note

  • For a clone operation from a virtual machine, it will use the same cluster's resourcepool as the original virtual machine unless specified.
  • For a clone operation from a template to a virtual machine, specifying either this or resourcepool is required. If both are specified, the resourcepool value will be used.
  • For a clone operation to a template, this argument is ignored.
datastore

Enter the name of the datastore or the datastore cluster where the virtual machine should be located on physical storage. If not specified, the current datastore is used.

Note

  • If you specify a datastore cluster name, DRS Storage recommendation is automatically applied.
  • If you specify a datastore name, DRS Storage recommendation is disabled.
folder

Enter the name of the folder that will contain the new virtual machine.

Note

  • For a clone operation from a VM/template, the new VM/template will be added to the same folder that the original VM/template belongs to unless specified.
  • If both folder and datacenter are specified, the folder value will be used.
datacenter

Enter the name of the datacenter that will contain the new virtual machine.

Note

  • For a clone operation from a VM/template, the new VM/template will be added to the same folder that the original VM/template belongs to unless specified.
  • If both folder and datacenter are specified, the folder value will be used.
host

Enter the name of the target host where the virtual machine should be registered.

If not specified:

Note

  • If resource pool is not specified, current host is used.
  • If resource pool is specified, and the target pool represents a stand-alone host, the host is used.
  • If resource pool is specified, and the target pool represents a DRS-enabled cluster, a host selected by DRS is used.
  • If resource pool is specified and the target pool represents a cluster without DRS enabled, an InvalidArgument exception will be thrown.
template
Specifies whether the new virtual machine should be marked as a template or not. Default is template: False.
power_on
Specifies whether the new virtual machine should be powered on or not. If template: True is set, this field is ignored. Default is power_on: True.
extra_config
Specifies the additional configuration information for the virtual machine. This describes a set of modifications to the additional options. If the key is already present, it will be reset with the new value provided. Otherwise, a new option is added. Keys with empty values will be removed.
deploy
Specifies if salt should be installed on the newly created VM. Default is True so salt will be installed using the bootstrap script. If template: True or power_on: False is set, this field is ignored and salt will not be installed.
private_key
Specify the path to the private key to use to be able to ssh to the VM.
ssh_username
Specify the username to use in order to ssh to the VM. Default is root
password
Specify a password to use in order to ssh to the VM. If private_key is specified, you do not need to specify this.
minion
Specify custom minion configuration you want the salt minion to have. A good example would be to specify the master as the IP/DNS name of the master.
file_map
Specify file/files you want to copy to the VM before the bootstrap script is run and salt is installed. A good example of using this would be if you need to put custom repo files on the server in case your server will be in a private network and cannot reach external networks.

Miscellaneous Options

Miscellaneous Salt Cloud Options

This page describes various miscellaneous options available in Salt Cloud.

Deploy Script Arguments

Custom deploy scripts are unlikely to need custom arguments to be passed to them, but salt-bootstrap has been extended quite a bit, and this may be necessary. script_args can be specified in either the profile or the map file, to pass arguments to the deploy script:

ec2-amazon:
    provider: ec2
    image: ami-1624987f
    size: t1.micro
    ssh_username: ec2-user
    script: bootstrap-salt
    script_args: -c /tmp/

This has also been tested to work with pipes, if needed:

script_args: | head
Selecting the File Transport

By default, Salt Cloud uses SFTP to transfer files to Linux hosts. However, if SFTP is not available, or specific SCP functionality is needed, Salt Cloud can be configured to use SCP instead.

# SFTP is the default:
file_transport: sftp

# To use SCP instead:
file_transport: scp
Sync After Install

Salt allows users to create custom modules, grains, and states which can be synchronised to minions to extend Salt with further functionality.

This option will inform Salt Cloud to synchronise your custom modules, grains, states, or all of these to the minion just after it has been created. For this to happen, the following line needs to be added to the main cloud configuration file:

sync_after_install: all

The available options for this setting are:

modules
grains
states
all
Setting up New Salt Masters

It has become increasingly common for users to set up multi-hierarchical infrastructures using Salt Cloud. This sometimes involves setting up an instance to be a master in addition to a minion. With that in mind, you can now lay down master configuration on a machine by specifying master options in the profile or map file.

make_master: True

This will cause Salt Cloud to generate master keys for the instance, and tell salt-bootstrap to install the salt-master package, in addition to the salt-minion package.

The default master configuration is usually appropriate for most users, and will not be changed unless specific master configuration has been added to the profile or map:

master:
    user: root
    interface: 0.0.0.0
Delete SSH Keys

When Salt Cloud deploys an instance, the SSH pub key for the instance is added to the known_hosts file for the user that ran the salt-cloud command. When an instance is destroyed, a cloud provider generally recycles the IP address for the instance. When Salt Cloud attempts to deploy an instance using a recycled IP address that has previously been accessed from the same machine, the old key in the known_hosts file will cause a conflict.

In order to mitigate this issue, Salt Cloud can be configured to remove old keys from the known_hosts file when destroying the node. In order to do this, the following line needs to be added to the main cloud configuration file:

delete_sshkeys: True
Keeping /tmp/ Files

When Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt-bootstrap to put in place. After the script has run, they are deleted. To keep these files around (mostly for debugging purposes), the --keep-tmp option can be added:

salt-cloud -p myprofile mymachine --keep-tmp

For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable).

Hide Output From Minion Install

By default Salt Cloud will stream the output from the minion deploy script directly to STDOUT. Although this can be very useful, in certain cases you may wish to switch this off. The following config option is there to enable or disable this output:

display_ssh_output: False
Connection Timeout

There are several stages when deploying Salt where Salt Cloud needs to wait for something to happen: the VM getting its IP address, the VM's SSH port becoming available, etc.

If you find that the Salt Cloud defaults are not enough and your deployment fails because Salt Cloud did not wait long enough, there are some settings you can tweak.

Note

All values should be provided in seconds

You can tweak these settings globally, per cloud provider, or even per profile definition.

wait_for_ip_timeout

The amount of time Salt Cloud should wait for a VM to start and get an IP back from the cloud provider. Default: varies by cloud provider (between 5 and 25 minutes).

wait_for_ip_interval

The amount of time Salt Cloud should sleep while querying for the VM's IP. Default: varies by cloud provider (between 0.5 and 10 seconds).

ssh_connect_timeout

The amount of time Salt Cloud should wait for a successful SSH connection to the VM. Default: varies by cloud provider (between 5 and 15 minutes)

wait_for_passwd_timeout

The amount of time until an ssh connection can be established via password or ssh key. Default: varies by cloud provider (mostly 15 seconds)

wait_for_passwd_maxtries

The number of attempts to connect to the VM before giving up. Default: 15 attempts

wait_for_fun_timeout

Some cloud drivers (namely SoftLayer and SoftLayer-HW) check for an available IP or a successful SSH connection using a function. This is the amount of time Salt Cloud should retry such functions before failing. Default: 15 minutes.

wait_for_spot_timeout

The amount of time Salt Cloud should wait for an EC2 Spot instance to become available. This setting is only available for the EC2 cloud driver. Default: 10 minutes
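
For example, a profile for a slow-booting instance might tune several of these at once. The following sketch reuses the EC2 profile shown earlier in this document; all timeout values are illustrative:

ec2-amazon:
    provider: ec2
    image: ami-1624987f
    size: t1.micro
    ssh_username: ec2-user
    # Illustrative values, in seconds (maxtries is a count)
    wait_for_ip_timeout: 1500
    wait_for_ip_interval: 10
    ssh_connect_timeout: 900
    wait_for_passwd_maxtries: 30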

Salt Cloud Cache

Salt Cloud can maintain a cache of node data, for supported providers. The following options manage this functionality.

update_cachedir

On supported cloud providers, whether or not to maintain a cache of nodes returned from a --full-query. The data will be stored in msgpack format under <SALT_CACHEDIR>/cloud/active/<DRIVER>/<PROVIDER>/<NODE_NAME>.p. This setting can be True or False.

diff_cache_events

When the cloud cachedir is being managed, if differences are encountered between the data that is returned live from the cloud provider and the data in the cache, fire events which describe the changes. This setting can be True or False.
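
A minimal sketch of enabling both options in the main cloud configuration file (/etc/salt/cloud):

update_cachedir: True
diff_cache_events: True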

Some of these events will contain data which describe a node. Because some of the fields returned may contain sensitive data, the cache_event_strip_fields configuration option exists to strip those fields from the event return.

cache_event_strip_fields:
  - password
  - priv_key

The following are events that can be fired based on this data.

salt/cloud/minionid/cache_node_new

A new node was found on the cloud provider which was not listed in the cloud cachedir. A dict describing the new node will be contained in the event.

salt/cloud/minionid/cache_node_missing

A node that was previously listed in the cloud cachedir is no longer available on the cloud provider.

salt/cloud/minionid/cache_node_diff

One or more pieces of data in the cloud cachedir have changed on the cloud provider. A dict containing both the old and the new data will be contained in the event.
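
As a sketch, these events can be consumed like any other Salt events, for example via the reactor system in the master configuration; the reactor SLS path here is hypothetical:

reactor:
  - 'salt/cloud/*/cache_node_missing':
    - /srv/reactor/node_missing.sls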

SSH Known Hosts

Normally when bootstrapping a VM, salt-cloud will ignore the SSH host key. This is because it does not know what the host key is before starting (because it doesn't exist yet). If strict host key checking is turned on without the key in the known_hosts file, then the host will never be available, and cannot be bootstrapped.

If a provider is able to determine the host key before trying to bootstrap it, that provider's driver can add it to the known_hosts file, and then turn on strict host key checking. This can be set up in the main cloud configuration file (normally /etc/salt/cloud) or in the provider-specific configuration file:

known_hosts_file: /path/to/.ssh/known_hosts

If this is not set, it will default to /dev/null, and strict host key checking will be turned off.

It is highly recommended that this option is not set, unless the user has verified that the provider supports this functionality, and that the image being used is capable of providing the necessary information. At this time, only the EC2 driver supports this functionality.

SSH Agent

New in version 2015.5.0.

If the ssh key is not stored on the server salt-cloud is being run on, set ssh_agent, and salt-cloud will use the forwarded ssh-agent to authenticate.

ssh_agent: True
File Map Upload

New in version 2014.7.0.

The file_map option allows an arbitrary group of files to be uploaded to the target system before running the deploy script. This functionality requires that a provider use salt.utils.cloud.bootstrap(), which is currently limited to the ec2, gce, openstack and nova drivers.

The file_map can be configured globally in /etc/salt/cloud, or in any cloud provider or profile file. For example, to upload an extra package or a custom deploy script, a cloud profile using file_map might look like:

ubuntu14:
  provider: ec2-config
  image: ami-98aa1cf0
  size: t1.micro
  ssh_username: root
  securitygroup: default
  file_map:
    /local/path/to/custom/script: /remote/path/to/use/custom/script
    /local/path/to/package: /remote/path/to/store/package

Troubleshooting Steps

Troubleshooting Salt Cloud

This page describes various steps for troubleshooting problems that may arise while using Salt Cloud.

Virtual Machines Are Created, But Do Not Respond

Are TCP ports 4505 and 4506 open on the master? This is easy to overlook on new masters. Information on how to open firewall ports on various platforms can be found here.

Generic Troubleshooting Steps

This section describes a set of instructions that are useful to a large number of situations, and are likely to solve most issues that arise.

Version Compatibility

One of the most common issues that Salt Cloud users run into is import errors. These are often caused by version compatibility issues with Salt.

Salt 0.16.x works with Salt Cloud 0.8.9 or greater.

Salt 0.17.x requires Salt Cloud 0.8.11.

Releases after 0.17.x (0.18 or greater) should not encounter issues as Salt Cloud has been merged into Salt itself.

Debug Mode

Frequently, running Salt Cloud in debug mode will reveal information about a deployment which would otherwise not be obvious:

salt-cloud -p myprofile myinstance -l debug

Keep in mind that a number of messages will appear that look at first like errors, but are in fact intended to give developers factual information to assist in debugging. A number of messages that appear will be for cloud providers that you do not have configured; in these cases, the message usually is intended to confirm that they are not configured.

Salt Bootstrap

By default, Salt Cloud uses the Salt Bootstrap script to provision instances.

This script is packaged with Salt Cloud, but may be updated without updating the Salt package:

salt-cloud -u
The Bootstrap Log

If the default deploy script was used, there should be a file in the /tmp/ directory called bootstrap-salt.log. This file contains the full output from the deployment, including any errors that may have occurred.

Keeping Temp Files

Salt Cloud uploads minion-specific files to instances once they are available via SSH, and then executes a deploy script to put them into the correct place and install Salt. The --keep-tmp option will instruct Salt Cloud not to remove those files when finished with them, so that the user may inspect them for problems:

salt-cloud -p myprofile myinstance --keep-tmp

By default, Salt Cloud will create a directory on the target instance called /tmp/.saltcloud/. This directory should be owned by the user that is to execute the deploy script, and should have permissions of 0700.

Most cloud providers are configured to use root as the default initial user for deployment, and as such, this directory and all files in it should be owned by the root user.

The /tmp/.saltcloud/ directory should contain the following files:

  • A deploy.sh script. This script should have permissions of 0755.
  • A .pem and .pub key named after the minion. The .pem file should have permissions of 0600. Ensure that the .pem and .pub files have been properly copied to the /etc/salt/pki/minion/ directory.
  • A file called minion. This file should have been copied to the /etc/salt/ directory.
  • Optionally, a file called grains. This file, if present, should have been copied to the /etc/salt/ directory.
Unprivileged Primary Users

Some providers, most notably EC2, are configured with a different primary user. Some common examples are ec2-user, ubuntu, fedora, and bitnami. In these cases, the /tmp/.saltcloud/ directory and all files in it should be owned by this user.

Some providers, such as EC2, are configured to not require these users to provide a password when using the sudo command. Because it is more secure to require sudo users to provide a password, other providers are configured that way.

If this instance is required to provide a password, it needs to be configured in Salt Cloud. A password for sudo to use may be added to either the provider configuration or the profile configuration:

sudo_password: mypassword
/tmp/ is Mounted as noexec

It is more secure to mount the /tmp/ directory with a noexec option. This is uncommon on most cloud providers, but very common in private environments. To see if the /tmp/ directory is mounted this way, run the following command:

mount | grep tmp

If the output of this command includes a line that looks like this, then the /tmp/ directory is mounted as noexec:

tmpfs on /tmp type tmpfs (rw,noexec)

If this is the case, then the deploy_command will need to be changed in order to run the deploy script through the sh command, rather than trying to execute it directly. This may be specified in either the provider or the profile config:

deploy_command: sh /tmp/.saltcloud/deploy.sh

Please note that by default, Salt Cloud will place its files in a directory called /tmp/.saltcloud/. This may also be changed in the provider or profile configuration:

tmp_dir: /tmp/.saltcloud/

If this directory is changed, then the deploy_command needs to be changed in order to reflect the tmp_dir configuration.

Executing the Deploy Script Manually

If all of the files needed for deployment were successfully uploaded to the correct locations, and contain the correct permissions and ownerships, the deploy script may be executed manually in order to check for other issues:

cd /tmp/.saltcloud/
./deploy.sh

Extending Salt Cloud

Writing Cloud Provider Modules

Salt Cloud runs on a module system similar to the main Salt project. The modules inside saltcloud exist in the salt/cloud/clouds directory of the salt source.

There are two basic types of cloud modules. If a cloud provider is supported by libcloud, then using it is the fastest route to getting a module written. The Apache Libcloud project is located at:

http://libcloud.apache.org/

Not every cloud provider is supported by libcloud. Additionally, not every feature in a supported cloud provider is necessarily supported by libcloud. In either of these cases, a module can be created which does not rely on libcloud.

All Modules

The following functions are required by all modules, whether or not they are based on libcloud.

The __virtual__() Function

This function determines whether or not to make this cloud module available upon execution. Most often, it uses get_configured_provider() to determine if the necessary configuration has been set up. It may also check for necessary imports, to decide whether to load the module. In most cases, it will return a True or False value. If the name of the driver used does not match the filename, then that name should be returned instead of True. An example of this may be seen in the Azure module:

https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/msazure.py
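
As a minimal sketch only, a __virtual__() for a hypothetical driver named mycloud (whose name matches its filename) might look like this:

def __virtual__():
    '''
    Only load this module if a provider using this driver is configured.
    '''
    if get_configured_provider() is False:
        return False
    return True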

The get_configured_provider() Function

This function uses config.is_provider_configured() to determine whether all required information for this driver has been configured. The last value in the list of required settings should be followed by a comma.
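
For example, a sketch for the same hypothetical mycloud driver, assuming a single required apikey setting (note the trailing comma after the last required value):

def get_configured_provider():
    '''
    Return the first configured instance of this driver.
    '''
    return config.is_provider_configured(
        __opts__,
        __active_provider_name__ or 'mycloud',
        ('apikey',)
    )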

Libcloud Based Modules

Writing a cloud module based on libcloud has two major advantages. First of all, much of the work has already been done by the libcloud project. Second, most of the functions necessary to Salt have already been added to the Salt Cloud project.

The create() Function

The most important function that does need to be manually written is the create() function. This is what is used to request a virtual machine to be created by the cloud provider, wait for it to become available, and then (optionally) log in and install Salt on it.

A good example to follow for writing a cloud provider module based on libcloud is the module provided for Linode:

https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/linode.py

The basic flow of a create() function is as follows:

  • Send a request to the cloud provider to create a virtual machine.
  • Wait for the virtual machine to become available.
  • Generate kwargs to be used to deploy Salt.
  • Log into the virtual machine and deploy Salt.
  • Return a data structure that describes the newly-created virtual machine.

At various points throughout this function, events may be fired on the Salt event bus. Four of these events, which are described below, are required. Other events may be added by the user, where appropriate.

When the create() function is called, it is passed a data structure called vm_. This dict contains a composite of information describing the virtual machine to be created. A dict called __opts__ is also provided by Salt, which contains the options used to run Salt Cloud, as well as a set of configuration and environment variables.

The first thing the create() function must do is fire an event stating that it has started the create process. This event is tagged salt/cloud/<vm name>/creating. The payload contains the names of the VM, profile and provider.
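
A sketch of how a driver might fire this event using salt.utils.cloud.fire_event() follows; the exact signature has varied between Salt versions, so treat this as illustrative rather than definitive:

salt.utils.cloud.fire_event(
    'event',
    'starting create',
    'salt/cloud/{0}/creating'.format(vm_['name']),
    {
        'name': vm_['name'],
        'profile': vm_['profile'],
        'provider': vm_['provider'],
    },
)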

A set of kwargs is then usually created, to describe the parameters required by the cloud provider to request the virtual machine.

An event is then fired to state that a virtual machine is about to be requested. It is tagged as salt/cloud/<vm name>/requesting. The payload contains most or all of the parameters that will be sent to the cloud provider. Any private information (such as passwords) should not be sent in the event.

After a request is made, a set of deploy kwargs will be generated. These will be used to install Salt on the target machine. Windows options are supported at this point, and should be generated, even if the cloud provider does not currently support Windows. This will save time in the future if the provider does eventually decide to support Windows.

An event is then fired to state that the deploy process is about to begin. This event is tagged salt/cloud/<vm name>/deploying. The payload for the event will contain a set of deploy kwargs, useful for debugging purposes. Any private data, including passwords and keys (including public keys), should be stripped from the deploy kwargs before the event is fired.

If any Windows options have been passed in, the salt.utils.cloud.deploy_windows() function will be called. Otherwise, it will be assumed that the target is a Linux or Unix machine, and the salt.utils.cloud.deploy_script() will be called.

Both of these functions will wait for the target machine to become available, then for the necessary port to accept connections, and then for a successful login that can be used to install Salt. Minion configuration and keys will then be uploaded to a temporary directory on the target by the appropriate function. On a Windows target, the Windows Minion Installer will be run in silent mode. On a Linux/Unix target, a deploy script (bootstrap-salt.sh, by default) will be run, which will auto-detect the operating system, and install Salt using its native package manager. These do not need to be handled by the developer in the cloud module.

The salt.utils.cloud.validate_windows_cred() function has been extended to take retries and retry_delay parameters, in case a specific cloud provider has a delay between providing the Windows credentials and the credentials being available for use. In their create() function, or in a sub-function called during the creation process, developers should use the win_deploy_auth_retries and win_deploy_auth_retry_delay parameters from the provider configuration to allow the end user to customize the number of tries and delay between tries for their particular provider.

After the appropriate deploy function completes, a final event is fired which describes the virtual machine that has just been created. This event is tagged salt/cloud/<vm name>/created. The payload contains the names of the VM, profile, and provider.

Finally, a dict (queried from the provider) which describes the new virtual machine is returned to the user. Because this data is not fired on the event bus it can, and should, return any passwords that were returned by the cloud provider. In some cases (for example, Rackspace), this is the only time that the password can be queried by the user; post-creation queries may not contain password information (depending upon the provider).

The libcloudfuncs Functions

A number of other functions are required for all cloud providers. However, with libcloud-based modules, these are all provided for free by the libcloudfuncs library. The following two lines set up the imports:

from salt.cloud.libcloudfuncs import *   # pylint: disable=W0614,W0401
from salt.utils import namespaced_function

And then a series of declarations will make the necessary functions available within the cloud module.

get_size = namespaced_function(get_size, globals())
get_image = namespaced_function(get_image, globals())
avail_locations = namespaced_function(avail_locations, globals())
avail_images = namespaced_function(avail_images, globals())
avail_sizes = namespaced_function(avail_sizes, globals())
script = namespaced_function(script, globals())
destroy = namespaced_function(destroy, globals())
list_nodes = namespaced_function(list_nodes, globals())
list_nodes_full = namespaced_function(list_nodes_full, globals())
list_nodes_select = namespaced_function(list_nodes_select, globals())
show_instance = namespaced_function(show_instance, globals())

If necessary, these functions may be replaced by removing the appropriate declaration line, and then adding the function as normal.

These functions are required for all cloud modules, and are described in detail in the next section.

Non-Libcloud Based Modules

In some cases, using libcloud is not an option. This may be because libcloud has not yet included the necessary driver itself, or it may be that the driver that is included with libcloud does not contain all of the necessary features required by the developer. When this is the case, some or all of the functions in libcloudfuncs may be replaced. If they are all replaced, the libcloud imports should be absent from the Salt Cloud module.

A good example of a non-libcloud provider is the DigitalOcean module:

https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/digital_ocean.py

The create() Function

The create() function must be created as described in the libcloud-based module documentation.

The get_size() Function

This function is only necessary for libcloud-based modules, and does not need to exist otherwise.

The get_image() Function

This function is only necessary for libcloud-based modules, and does not need to exist otherwise.

The avail_locations() Function

This function returns a list of locations available, if the cloud provider uses multiple data centers. It is not necessary if the cloud provider only uses one data center. It is normally called using the --list-locations option.

salt-cloud --list-locations my-cloud-provider
The avail_images() Function

This function returns a list of images available for this cloud provider. There are not currently any known cloud providers that do not provide this functionality, though they may refer to images by a different name (for example, "templates"). It is normally called using the --list-images option.

salt-cloud --list-images my-cloud-provider
The avail_sizes() Function

This function returns a list of sizes available for this cloud provider. Generally, this refers to a combination of RAM, CPU, and/or disk space. This functionality may not be present on some cloud providers. For example, the Parallels module breaks down RAM, CPU, and disk space into separate options, whereas in other providers, these options are baked into the image. It is normally called using the --list-sizes option.

salt-cloud --list-sizes my-cloud-provider
The script() Function

This function builds the deploy script to be used on the remote machine. It is likely to be moved into the salt.utils.cloud library in the near future, as it is very generic and can usually be copied wholesale from another module. An excellent example is in the Azure driver.
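
Because it is so generic, a typical script() function looks much like the following sketch, patterned after existing drivers (check the current driver sources before relying on the exact helper names):

def script(vm_):
    '''
    Return the script deployment object
    '''
    return salt.utils.cloud.os_script(
        config.get_cloud_config_value('script', vm_, __opts__),
        vm_,
        __opts__,
        salt.utils.cloud.salt_config_to_yaml(
            salt.utils.cloud.minion_config(__opts__, vm_)
        )
    )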

The destroy() Function

This function irreversibly destroys a virtual machine on the cloud provider. Before doing so, it should fire an event on the Salt event bus. The tag for this event is salt/cloud/<vm name>/destroying. Once the virtual machine has been destroyed, another event is fired. The tag for that event is salt/cloud/<vm name>/destroyed.

This function is normally called with the -d option:

salt-cloud -d myinstance
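
A hedged sketch of the event bracket around a destroy() implementation follows; the delete_node() call is a hypothetical placeholder for the provider's own API:

def destroy(name, call=None):
    '''
    Destroy a node, firing events before and after the work is done
    '''
    salt.utils.cloud.fire_event(
        'event',
        'destroying instance',
        'salt/cloud/{0}/destroying'.format(name),
        {'name': name},
    )

    delete_node(name)   # hypothetical call to the provider's API

    salt.utils.cloud.fire_event(
        'event',
        'destroyed instance',
        'salt/cloud/{0}/destroyed'.format(name),
        {'name': name},
    )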
The list_nodes() Function

This function returns a list of nodes available on this cloud provider, using the following fields:

  • id (str)
  • image (str)
  • size (str)
  • state (str)
  • private_ips (list)
  • public_ips (list)

No other fields should be returned in this function, and all of these fields should be returned, even if empty. The private_ips and public_ips fields should always be of a list type, even if empty, and the other fields should always be of a str type. This function is normally called with the -Q option:

salt-cloud -Q
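
As an illustration, a single entry in the return from list_nodes() might look like the following (all values are hypothetical):

{
    'web1': {
        'id': 'i-1234abcd',
        'image': 'ami-98aa1cf0',
        'size': 't1.micro',
        'state': 'running',
        'private_ips': ['10.0.0.12'],
        'public_ips': ['203.0.113.15'],
    }
}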
The list_nodes_full() Function

All information available about all nodes should be returned in this function. The fields in the list_nodes() function should also be returned, even if they would not normally be provided by the cloud provider. This is because some functions, both within Salt and in third-party tools, will break if an expected field is not present. This function is normally called with the -F option:

salt-cloud -F
The list_nodes_select() Function

This function returns only the fields specified in the query.selection option in /etc/salt/cloud. Because this function is so generic, all of the heavy lifting has been moved into the salt.utils.cloud library.

A function to call list_nodes_select() still needs to be present. In general, the following code can be used as-is:

def list_nodes_select(call=None):
    '''
    Return a list of the VMs that are on the provider, with select fields
    '''
    return salt.utils.cloud.list_nodes_select(
        list_nodes_full('function'), __opts__['query.selection'], call,
    )

However, depending on the cloud provider, additional variables may be required. For instance, some modules use a conn object, or may need to pass other options into list_nodes_full(). In this case, be sure to update the function appropriately:

def list_nodes_select(conn=None, call=None):
    '''
    Return a list of the VMs that are on the provider, with select fields
    '''
    if not conn:
        conn = get_conn()   # pylint: disable=E0602

    return salt.utils.cloud.list_nodes_select(
        list_nodes_full(conn, 'function'),
        __opts__['query.selection'],
        call,
    )

This function is normally called with the -S option:

salt-cloud -S
The show_instance() Function

This function is used to display all of the information about a single node that is available from the cloud provider. The simplest way to provide this is usually to call list_nodes_full(), and return just the data for the requested node. It is normally called as an action:

salt-cloud -a show_instance myinstance
Actions and Functions

Extra functionality may be added to a cloud provider in the form of an --action or a --function. Actions are performed against a cloud instance/virtual machine, and functions are performed against a cloud provider.

Actions

Actions are calls that are performed against a specific instance or virtual machine. The show_instance action should be available in all cloud modules. Actions are normally called with the -a option:

salt-cloud -a show_instance myinstance

Actions must accept a name as a first argument, may optionally support any number of kwargs as appropriate, and must accept an argument of call, with a default of None.

Before performing any other work, an action should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user. A basic action looks like:

def show_instance(name, call=None):
    '''
    Show the details from EC2 concerning an AMI
    '''
    if call != 'action':
        raise SaltCloudSystemExit(
            'The show_instance action must be called with -a or --action.'
        )

    return _get_node(name)

Please note that generic kwargs, if used, are passed through to actions as kwargs and not **kwargs. An example of this is seen in the Functions section.

Functions

Functions are calls that are performed against a specific cloud provider. An optional function that is often useful is show_image, which describes an image in detail. Functions are normally called with the -f option:

salt-cloud -f show_image my-cloud-provider image='Ubuntu 13.10 64-bit'

A function may accept any number of kwargs as appropriate, and must accept an argument of call with a default of None.

Before performing any other work, a function should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user. A basic function looks like:

def show_image(kwargs, call=None):
    '''
    Show the details from EC2 concerning an AMI
    '''
    if call != 'function':
        raise SaltCloudSystemExit(
            'The show_image action must be called with -f or --function.'
        )

    params = {'ImageId.1': kwargs['image'],
              'Action': 'DescribeImages'}
    result = query(params)
    log.info(result)

    return result

Take note that generic kwargs are passed through to functions as kwargs and not **kwargs.

OS Support for Cloud VMs

Salt Cloud works primarily by executing a script on the virtual machines as soon as they become available. The script that is executed is referenced in the cloud profile as the script. In older versions, this was the os argument. This was changed in 0.8.2.

A number of legacy scripts exist in the deploy directory in the saltcloud source tree. The preferred method is currently to use the salt-bootstrap script. A stable version is included with each release tarball starting with 0.8.4. The most updated version can be found at:

https://github.com/saltstack/salt-bootstrap

If you do not specify a script argument, this script will be used as the default.

If the Salt Bootstrap script does not meet your needs, you may write your own. The script should be written in bash and is a Jinja template. Deploy scripts need to execute a number of functions to do a complete salt setup. These functions include:

  1. Install the salt minion. If this can be done via system packages this method is HIGHLY preferred.
  2. Add the salt minion keys before the minion is started for the first time. The minion keys are available as strings that can be copied into place in the Jinja template under the dict named "vm".
  3. Start the salt-minion daemon and enable it at startup time.
  4. Set up the minion configuration file from the "minion" data available in the Jinja template.

A good, well commented, example of this process is the Fedora deployment script:

https://github.com/saltstack/salt-cloud/blob/master/saltcloud/deploy/Fedora.sh

A number of legacy deploy scripts are included with the release tarball. None of them is as functional or complete as Salt Bootstrap, but they are still included for academic purposes.

Other Generic Deploy Scripts

If you want to be assured of always using the latest Salt Bootstrap script, there are a few generic templates available in the deploy directory of your saltcloud source tree:

curl-bootstrap
curl-bootstrap-git
python-bootstrap
wget-bootstrap
wget-bootstrap-git

These are example scripts which were designed to be customized, adapted, and refit to meet your needs. One important use of them is to pass options to the salt-bootstrap script, such as updating to specific git tags.

Post-Deploy Commands

Once a minion has been deployed, it has the option to run a salt command. Normally, this would be the state.highstate command, which would finish provisioning the VM. Another common option is state.sls, or for just testing, test.ping. This is configured in the main cloud config file:

start_action: state.highstate

This is currently considered to be experimental functionality, and may not work well with all providers. If you experience problems with Salt Cloud hanging after Salt is deployed, consider using Startup States instead:

http://docs.saltstack.com/ref/states/startup.html

Skipping the Deploy Script

For whatever reason, you may want to skip the deploy script altogether. This results in a VM being spun up much faster, with absolutely no configuration. This can be set from the command line:

salt-cloud --no-deploy -p micro_aws my_instance

Or it can be set from the main cloud config file:

deploy: False

Or it can be set from the provider's configuration:

RACKSPACE.user: example_user
RACKSPACE.apikey: 123984bjjas87034
RACKSPACE.deploy: False

Or even on the VM's profile settings:

ubuntu_aws:
  provider: aws
  image: ami-7e2da54e
  size: t1.micro
  deploy: False

The default for deploy is True.

In the profile, you may also set the script option to None:

script: None

This is the slowest option, since it still uploads the None deploy script and executes it.

Updating Salt Bootstrap

Salt Bootstrap can be updated automatically with salt-cloud:

salt-cloud -u
salt-cloud --update-bootstrap

Bear in mind that this updates to the latest (unstable) version, so use with caution.

Keeping /tmp/ Files

When Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt-bootstrap to put in place. After the script has run, they are deleted. To keep these files around (mostly for debugging purposes), the --keep-tmp option can be added:

salt-cloud -p myprofile mymachine --keep-tmp

For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable).

Deploy Script Arguments

Custom deploy scripts are unlikely to need custom arguments to be passed to them, but salt-bootstrap has been extended quite a bit, and this may be necessary. script_args can be specified in either the profile or the map file, to pass arguments to the deploy script:

aws-amazon:
    provider: aws
    image: ami-1624987f
    size: t1.micro
    ssh_username: ec2-user
    script: bootstrap-salt
    script_args: -c /tmp/

This has also been tested to work with pipes, if needed:

script_args: | head

Using Salt Cloud from Salt

Using the Salt Modules for Cloud

In addition to the salt-cloud command, Salt Cloud can be called from Salt, in a variety of different ways. Most users will be interested in either the execution module or the state module, but it is also possible to call Salt Cloud as a runner.

Because the actual work will be performed on a remote minion, the normal Salt Cloud configuration must exist on any target minion that needs to execute a Salt Cloud command. Because Salt Cloud now supports breaking out configuration into individual files, the configuration is easily managed using Salt's own file.managed state function. For example, the following directories allow this configuration to be managed easily:

/etc/salt/cloud.providers.d/
/etc/salt/cloud.profiles.d/
Minion Keys

Keep in mind that when creating minions, Salt Cloud will create public and private minion keys, upload them to the minion, and place the public key on the machine that created the minion. It will not attempt to place any public minion keys on the master, unless the minion which was used to create the instance is also the Salt Master. This is because granting arbitrary minions access to modify keys on the master is a serious security risk, and must be avoided.

Execution Module

The cloud module is available to use from the command line. At the moment, almost every standard Salt Cloud feature is available to use. The following commands are available:

list_images

This command is designed to show images that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). Listing images requires a provider to be configured, and specified:

salt myminion cloud.list_images my-cloud-provider
list_sizes

This command is designed to show sizes that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). This command is not available for all cloud providers; see the provider-specific documentation for details. Listing sizes requires a provider to be configured, and specified:

salt myminion cloud.list_sizes my-cloud-provider
list_locations

This command is designed to show locations that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). This command is not available for all cloud providers; see the provider-specific documentation for details. Listing locations requires a provider to be configured, and specified:

salt myminion cloud.list_locations my-cloud-provider
query

This command is used to query all configured cloud providers, and display all instances associated with those accounts. By default, it will run a standard query, returning the following fields:

id
The name or ID of the instance, as used by the cloud provider.
image
The disk image that was used to create this instance.
private_ips
Any private IP addresses currently assigned to this instance.
public_ips
Any public IP addresses currently assigned to this instance.
size
The size of the instance; can refer to RAM, CPU(s), disk space, etc., depending on the cloud provider.
state
The running state of the instance; for example, running, stopped, pending, etc. This state is dependent upon the provider.

This command may also be used to perform a full query or a select query, as described below. The following usages are available:

salt myminion cloud.query
salt myminion cloud.query list_nodes
salt myminion cloud.query list_nodes_full
full_query

This command behaves like the query command, but lists all information concerning each instance as provided by the cloud provider, in addition to the fields returned by the query command.

salt myminion cloud.full_query
select_query

This command behaves like the query command, but only returns the select fields as defined in the /etc/salt/cloud configuration file. A sample configuration for this section of the file might look like:

query.selection:
  - id
  - key_name

This configuration would only return the id and key_name fields, for those cloud providers that support those two fields. This would be called using the following command:

salt myminion cloud.select_query
profile

This command is used to create an instance using a profile that is configured on the target minion. Please note that the profile must be configured before this command can be used with it.

salt myminion cloud.profile ec2-centos64-x64 my-new-instance

Please note that the execution module does not run in parallel mode. Using multiple minions to create instances can effectively perform parallel instance creation.

create

This command is similar to the profile command, in that it is used to create a new instance. However, it does not require a profile to be pre-configured. Instead, all of the options that are normally configured in a profile are passed directly to Salt Cloud to create the instance:

salt myminion cloud.create my-ec2-config my-new-instance \
    image=ami-1624987f size='t1.micro' ssh_username=ec2-user \
    securitygroup=default delvol_on_destroy=True

Please note that the execution module does not run in parallel mode. Using multiple minions to create instances can effectively perform parallel instance creation.

destroy

This command is used to destroy an instance or instances. This command will search all configured providers and remove any instance(s) which matches the name(s) passed in here. The results of this command are non-reversible and should be used with caution.

salt myminion cloud.destroy myinstance
salt myminion cloud.destroy myinstance1,myinstance2
action

This command implements both the action and the function commands used in the standard salt-cloud command. If one of the standard action commands is used, an instance name must be provided. If one of the standard function commands is used, a provider configuration must be named.

salt myminion cloud.action start instance=myinstance
salt myminion cloud.action show_image provider=my-ec2-config \
    image=ami-1624987f

The actions available are largely dependent upon the module for the specific cloud provider. The following actions are available for all cloud providers:

list_nodes
This is a direct call to the query function as described above, but is only performed against a single cloud provider. A provider configuration must be included.
list_nodes_full
This is a direct call to the full_query function as described above, but is only performed against a single cloud provider. A provider configuration must be included.
list_nodes_select
This is a direct call to the select_query function as described above, but is only performed against a single cloud provider. A provider configuration must be included.
show_instance
This is a thin wrapper around list_nodes, which returns the full information about a single instance. An instance name must be provided.
State Module

A subset of the execution module is available through the cloud state module. Not all functions are currently included, because there is currently insufficient code for them to perform statefully. For example, a command to create an instance may be issued with a series of options, but those options cannot currently be statefully managed. Additional states to manage these options will be released at a later time.

cloud.present

This state will ensure that an instance is present inside a particular cloud provider. Any option that is normally specified in the cloud.create execution module and function may be declared here, but only the actual presence of the instance will be managed statefully.

my-instance-name:
  cloud.present:
    - provider: my-ec2-config
    - image: ami-1624987f
    - size: 't1.micro'
    - ssh_username: ec2-user
    - securitygroup: default
    - delvol_on_destroy: True
cloud.profile

This state will ensure that an instance is present inside a particular cloud provider. This function calls the cloud.profile execution module and function, but as with cloud.present, only the actual presence of the instance will be managed statefully.

my-instance-name:
  cloud.profile:
    - profile: ec2-centos64-x64
cloud.absent

This state will ensure that an instance (identified by name) does not exist in any of the cloud providers configured on the target minion. Please note that this state is non-reversible and may be considered especially destructive when issued as a cloud state.

my-instance-name:
  cloud.absent
Runner Module

The cloud runner module is executed on the master, and performs actions using the configuration and Salt modules on the master itself. This means that any public minion keys will also be properly accepted by the master.

Using the functions in the runner module is no different than using those in the execution module, outside of the behavior described in the above paragraph. The following functions are available inside the runner:

  • list_images
  • list_sizes
  • list_locations
  • query
  • full_query
  • select_query
  • profile
  • destroy
  • action

Outside of the standard usage of salt-run itself, commands are executed as usual:

salt-run cloud.profile ec2-centos64-x86_64 my-instance-name
CloudClient

The execution, state, and runner modules ultimately all use the CloudClient library that ships with Salt. To use the CloudClient library locally (either on the master or a minion), create a client object and issue a command against it:

import salt.cloud
import pprint
client = salt.cloud.CloudClient('/etc/salt/cloud')
nodes = client.query()
pprint.pprint(nodes)
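
Other CloudClient methods mirror the execution module. For example, creating an instance from a profile (assuming a configured profile named ec2-centos64-x64) might look like:

client.profile('ec2-centos64-x64', names=['my-new-instance'])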

Feature Comparison

Feature Matrix

A number of features are available in most cloud providers, but not all are available everywhere. This may be because the feature isn't supported by the cloud provider itself, or it may only be that the feature has not yet been added to Salt Cloud. In a handful of cases, it is because the feature does not make sense for a particular cloud provider (Saltify, for instance).

This matrix shows which features are available in which cloud providers, as far as Salt Cloud is concerned. This is not a comprehensive list of all features available in all cloud providers, and should not be used to make business decisions concerning choosing a cloud provider. In most cases, adding support for a feature to Salt Cloud requires only a little effort.

Legacy Drivers

Both AWS and Rackspace are listed as "Legacy". This is because those drivers have been replaced by other drivers, which are generally the preferred method for working with those providers.

The EC2 driver should be used instead of the AWS driver, when possible. The OpenStack driver should be used instead of the Rackspace driver, unless the user is dealing with instances in "the old cloud" in Rackspace.

Note for Developers

When adding new features to a particular cloud provider, please make sure to add the feature to this table. Additionally, if you notice a feature that is not properly listed here, pull requests to fix it are appreciated.

Standard Features

These are features that are available for almost every provider.

The drivers compared in this matrix are: AWS (Legacy), CloudStack, Digital Ocean, EC2, GoGrid, JoyEnt, Linode, OpenStack, Parallels, Rackspace (Legacy), Saltify, Softlayer, Softlayer Hardware, and Aliyun.

Query: all drivers except Saltify
Full Query: all drivers except Saltify
Selective Query: all drivers except Saltify
List Sizes: all drivers except Saltify
List Images: all drivers except Saltify
List Locations: all drivers except Saltify
create: all drivers
destroy: all drivers except Saltify
Actions

These are features that are performed on a specific instance, and require an instance name to be passed in. For example:

# salt-cloud -a attach_volume ami.example.com
Each action below is listed with the drivers that support it:

attach_volume: EC2
create_attach_volumes: AWS (Legacy), EC2
del_tags: AWS (Legacy), EC2
delvol_on_destroy: EC2
detach_volume: EC2
disable_term_protect: AWS (Legacy), EC2
enable_term_protect: AWS (Legacy), EC2
get_tags: AWS (Legacy), EC2
keepvol_on_destroy: EC2
list_keypairs: Digital Ocean
rename: AWS (Legacy), EC2
set_tags: AWS (Legacy), EC2
show_delvol_on_destroy: EC2
show_instance: Digital Ocean, EC2, Parallels, Softlayer, Softlayer Hardware, Aliyun
show_term_protect: EC2
start: AWS (Legacy), EC2, JoyEnt, Parallels, Aliyun
stop: AWS (Legacy), EC2, JoyEnt, Parallels, Aliyun
take_action: JoyEnt
Functions

These are features that are performed against a specific cloud provider, and require the name of the provider to be passed in. For example:

# salt-cloud -f list_images my_digitalocean
Each function below is listed with the drivers that support it:

block_device_mappings: AWS (Legacy)
create_keypair: EC2
create_volume: EC2
delete_key: JoyEnt
delete_keypair: EC2
delete_volume: EC2
get_image: Digital Ocean, JoyEnt, Parallels, Aliyun
get_ip: CloudStack
get_key: CloudStack
get_keyid: Digital Ocean
get_keypair: CloudStack
get_networkid: CloudStack
get_node: JoyEnt
get_password: CloudStack
get_size: Digital Ocean, JoyEnt, Aliyun
get_spot_config: EC2
get_subnetid: EC2
iam_profile: AWS (Legacy), EC2, Aliyun
import_key: JoyEnt
key_list: JoyEnt
keyname: AWS (Legacy), EC2
list_availability_zones: EC2, Aliyun
list_custom_images: Softlayer
list_keys: JoyEnt
list_nodes: all drivers
list_nodes_full: all drivers
list_nodes_select: all drivers
list_vlans: Softlayer, Softlayer Hardware
rackconnect: OpenStack
reboot: EC2, JoyEnt, Aliyun
reformat_node: JoyEnt
securitygroup: AWS (Legacy), EC2
securitygroupid: EC2, Aliyun
show_image: EC2, Parallels, Aliyun
show_key: JoyEnt
show_keypair: Digital Ocean, EC2
show_volume: EC2, Aliyun

Tutorials

Using Salt Cloud with the Event Reactor

One of the most powerful features of the Salt framework is the Event Reactor. While the Reactor was in development, Salt Cloud was regularly updated to take advantage of it. As a result, various aspects of both the creation and destruction of instances with Salt Cloud fire events to the Salt Master, which can be used by the Event Reactor.

Event Structure

As of this writing, all events in Salt Cloud have a tag, which includes the ID of the instance being managed, and a payload which describes the task that is currently being handled. A Salt Cloud tag looks like:

salt/cloud/<minion_id>/<task>

For instance, the first event fired when creating an instance named web1 would look like:

salt/cloud/web1/creating

Assuming this instance is using the ec2-centos profile, which is in turn using the ec2-config provider, the payload for this tag would look like:

{'name': 'web1',
 'profile': 'ec2-centos',
 'provider': 'ec2-config'}
Available Events

When an instance is created in Salt Cloud, whether by map, profile, or directly through an API, a minimum of five events are normally fired. More may be available, depending upon the cloud provider being used. Some of the common events are described below.

salt/cloud/<minion_id>/creating

This event states simply that the process to create an instance has begun. At this point in time, no actual work has begun. The payload for this event includes:

name, profile, provider

salt/cloud/<minion_id>/requesting

Salt Cloud is about to make a request to the cloud provider to create an instance. At this point, all of the variables required to make the request have been gathered, and the payload of the event will reflect those variables which do not normally pose a security risk. What is returned here is dependent upon the cloud provider. Some common variables are:

name, image, size, location

salt/cloud/<minion_id>/querying

The instance has been successfully requested, but the necessary information to log into the instance (such as IP address) is not yet available. This event marks the beginning of the process to wait for this information.

The payload for this event normally only includes the instance_id.

salt/cloud/<minion_id>/waiting_for_ssh

The information required to log into the instance has been retrieved, but the instance is not necessarily ready to be accessed. Following this event, Salt Cloud will wait for the IP address to respond to a ping, then wait for the specified port (usually 22) to respond to a connection, and on Linux systems, for SSH to become available. Salt Cloud will attempt to issue the date command on the remote system, as a means to check for availability. If no ssh_username has been specified, a list of usernames (starting with root) will be attempted. If one or more usernames were configured for ssh_username, they will be added to the beginning of the list, in order.

The payload for this event normally only includes the ip_address.

salt/cloud/<minion_id>/deploying

The necessary port has been detected as available, and now Salt Cloud can log into the instance, upload any files used for deployment, and run the deploy script. Once the script has completed, Salt Cloud will log back into the instance and remove any remaining files.

A number of variables are used to deploy instances, and the majority of these will be available in the payload. Any keys, passwords or other sensitive data will be scraped from the payload. Most of the variables returned will be related to the profile or provider config, and any default values that could have been changed in the profile or provider, but weren't.

salt/cloud/<minion_id>/created

The deploy sequence has completed, and the instance is now available, Salted, and ready for use. This event is the final task for Salt Cloud, before returning instance information to the user and exiting.

The payload for this event contains little more than the initial creating event. This event is required in all cloud providers.

Configuring the Event Reactor

The Event Reactor is built into the Salt Master process, and as such is configured via the master configuration file. Normally this will be a YAML file located at /etc/salt/master. Additionally, master configuration items can be stored, in YAML format, inside the /etc/salt/master.d/ directory.

These configuration items may be stored in either location; however, they may only be stored in one location. For organizational and security purposes, it may be best to create a single configuration file, which contains only Event Reactor configuration, at /etc/salt/master.d/reactor.

The Event Reactor uses a top-level configuration item called reactor. This block contains a list of tags to be watched for, each of which also includes a list of sls files. For instance:

reactor:
  - 'salt/minion/*/start':
    - '/srv/reactor/custom-reactor.sls'
  - 'salt/cloud/*/created':
    - '/srv/reactor/cloud-alert.sls'
  - 'salt/cloud/*/destroyed':
    - '/srv/reactor/cloud-destroy-alert.sls'

The above configuration configures reactors for three different tags: one which is fired when a minion process has started and is available to receive commands, one which is fired when a cloud instance has been created, and one which is fired when a cloud instance is destroyed.

Note that each tag contains a wildcard (*) in it. For each of these tags, this will normally refer to a minion_id. This is not required of event tags, but is very common.

Reactor SLS Files

Reactor sls files should be placed in the /srv/reactor/ directory for consistency between environments, but this is not currently enforced by Salt.

Reactor sls files follow a similar format to other sls files in Salt. By default they are written in YAML and can be templated using Jinja, but since they are processed through Salt's rendering system, any available renderer (JSON, Mako, Cheetah, etc.) can be used.

As with other sls files, each stanza will start with a declaration ID, followed by the function to run, and then any arguments for that function. For example:

# /srv/reactor/cloud-alert.sls
new_instance_alert:
  cmd.pagerduty.create_event:
    - tgt: alertserver
    - kwarg:
        description: "New instance: {{ data['name'] }}"
        details: "New cloud instance created on {{ data['provider'] }}"
        service_key: 1626dead5ecafe46231e968eb1be29c4
        profile: my-pagerduty-account

When the Event Reactor receives an event notifying it that a new instance has been created, this sls will create a new incident in PagerDuty, using the configured PagerDuty account.

The declaration ID in this example is new_instance_alert. The function called is cmd.pagerduty.create_event. The cmd portion of this function specifies that an execution module and function will be called, in this case, the pagerduty.create_event function.

Because an execution module is specified, a target (tgt) must be specified on which to call the function. In this case, a minion called alertserver has been used. Any arguments passed through to the function are declared in the kwarg block.

Example: Reactor-Based Highstate

When Salt Cloud creates an instance, by default it will install the Salt Minion onto the instance, along with any specified minion configuration, and automatically accept that minion's keys on the master. One of the configuration options that can be specified is startup_states, which is commonly set to highstate. This will tell the minion to immediately apply a highstate, as soon as it is able to do so.

This can present a problem with some system images on some cloud providers. For instance, Salt Cloud can be configured to log in as either the root user, or a user with sudo access. While some providers commonly use images that lock out remote root access and require a user with sudo privileges to log in (notably EC2, with their ec2-user login), most cloud providers fall back to root as the default login on all images, including for operating systems (such as Ubuntu) which normally disallow remote root login.

For users of these operating systems, it is understandable that a highstate would include configuration to block remote root logins again. However, Salt Cloud may not have finished cleaning up its deployment files by the time the minion process has started, and kicked off a highstate run. Users have reported errors from Salt Cloud getting locked out while trying to clean up after itself.

The goal of a startup state may be achieved using the Event Reactor. Because a minion fires an event when it is able to receive commands, this event can effectively be used inside the reactor system instead. The following will point the reactor system to the right sls file:

reactor:
  - 'salt/cloud/*/created':
    - '/srv/reactor/startup_highstate.sls'

And the following sls file will start a highstate run on the target minion:

# /srv/reactor/startup_highstate.sls
reactor_highstate:
  cmd.state.highstate:
    - tgt: {{ data['name'] }}

Because this event will not be fired until Salt Cloud has cleaned up after itself, the highstate run will not step on Salt Cloud's toes. And because every file on the minion is configurable, including /etc/salt/minion, the startup_states can still be configured for future minion restarts, if desired.

netapi modules

Writing netapi modules

netapi modules, put simply, bind a port and start a service. They are purposefully open-ended and can be used to present a variety of external interfaces to Salt, and even present multiple interfaces at once.

Configuration

All netapi configuration is done in the Salt master config and takes a form similar to the following:

rest_cherrypy:
  port: 8000
  debug: True
  ssl_crt: /etc/pki/tls/certs/localhost.crt
  ssl_key: /etc/pki/tls/certs/localhost.key

The __virtual__ function

Like all module types in Salt, netapi modules go through Salt's loader interface to determine if they should be loaded into memory and then executed.

The __virtual__ function in the module makes this determination and should return False or a string that will serve as the name of the module. If the module raises an ImportError or any other errors, it will not be loaded.

The start function

The start() function will be called for each netapi module that is loaded. This function should contain the server loop that actually starts the service. It is started in its own process.
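
Putting the pieces together, a skeleton for a hypothetical netapi module, keyed on a rest_example section in the master config, might look like the following sketch:

import logging

log = logging.getLogger(__name__)

__virtualname__ = 'rest_example'


def __virtual__():
    # Only load this module if its configuration section is present
    # in the master config.
    if __virtualname__ in __opts__:
        return __virtualname__
    return False


def start():
    '''
    Bind the configured port and run the server loop. This function
    blocks; Salt starts it in its own process.
    '''
    port = __opts__[__virtualname__].get('port', 8000)
    # ... start the actual service (WSGI server, socket loop, etc.) here ...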

Inline documentation

As with the rest of Salt, it is a best-practice to include liberal inline documentation in the form of a module docstring and docstrings on any classes, methods, and functions in your netapi module.

Loader “magic” methods

The loader makes the __opts__ data structure available to any function in a netapi module.

Introduction to netapi modules

netapi modules provide API-centric access to Salt, usually via externally-facing services such as REST, WebSockets, XMPP, or XMLRPC.

In general, netapi modules bind to a port and start a service. They are purposefully open-ended. A single module can be configured to run on its own, or multiple modules can run simultaneously.

netapi modules are enabled by adding configuration to your Salt Master config file and then starting the salt-api daemon. Check the docs for each module to see external requirements and configuration settings.

Communication with Salt and Salt satellite projects is done using Salt's own Python API. A list of available client interfaces is below.

salt-api

Prior to Salt's 2014.7.0 release, netapi modules lived in the separate sister project, salt-api. That project has been merged into the main Salt project.

Client interfaces

Salt's client interfaces expose executing functions by crafting a dictionary of values that are mapped to function arguments. This allows calling functions simply by creating a data structure. (And this is exactly how much of Salt's own internals work!)

class salt.netapi.NetapiClient(opts)

Provide a uniform method of accessing the various client interfaces in Salt in the form of low-data data structures. For example:

>>> client = NetapiClient(__opts__)
>>> lowstate = {'client': 'local', 'tgt': '*', 'fun': 'test.ping', 'arg': ''}
>>> client.run(lowstate)
local(*args, **kwargs)

Run execution modules synchronously

See salt.client.LocalClient.cmd() for all available parameters.

Sends a command from the master to the targeted minions. This is the same interface that Salt's own CLI uses. Note the arg and kwarg parameters are sent down to the minion(s) and the given function, fun, is called with those parameters.

Returns: the result from the execution module
local_async(*args, **kwargs)

Run execution modules asynchronously

Wraps salt.client.LocalClient.run_job().

Returns: the job ID
local_batch(*args, **kwargs)

Run execution modules against batches of minions

New in version 0.8.4.

Wraps salt.client.LocalClient.cmd_batch()

Returns: the result from the execution module for each batch of returns
runner(fun, timeout=None, **kwargs)

Run runner modules synchronously

Wraps salt.runner.RunnerClient.cmd_sync().

Note that runner functions must be called using keyword arguments. Positional arguments are not supported.

Returns: the result from the runner module
wheel(fun, **kwargs)

Run wheel modules synchronously

Wraps salt.wheel.WheelClient.master_call().

Note that wheel functions must be called using keyword arguments. Positional arguments are not supported.

Returns: the result from the wheel module
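
For example, calling a runner through the same run() interface shown above (assuming the manage.up runner is available on the master) might look like:

>>> client.run({'client': 'runner', 'fun': 'manage.up'})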

Salt Virt

The Salt Virt cloud controller capability was initially added to Salt in version 0.14.0 as an alpha technology.

The initial Salt Virt system supports core cloud operations:

  • Virtual machine deployment
  • Inspection of deployed VMs
  • Virtual machine migration
  • Network profiling
  • Automatic VM integration with all aspects of Salt
  • Image Pre-seeding

Many features are currently under development to enhance the capabilities of the Salt Virt systems.

Note

It is noteworthy that Salt was originally developed with the intent of using the Salt communication system as the backbone of a cloud controller. This means that the Salt Virt system is not an afterthought, but simply a system that took a back seat to other development. The original attempt to develop the cloud control aspects of Salt was a project called butter. This project never took off, but it was functional and proved the early viability of Salt as a cloud controller.

Salt Virt Tutorial

A tutorial about how to get Salt Virt up and running has been added to the tutorial section:

Cloud Controller Tutorial

The Salt Virt Runner

The point of interaction with the cloud controller is the virt runner. The virt runner comes with routines to execute specific virtual machine operations.

Reference documentation for the virt runner is available with the runner module documentation:

Virt Runner Reference

Based on Live State Data

The Salt Virt system is based on using Salt to query live data about hypervisors and then using the data gathered to make decisions about cloud operations. This means that no external resources are required to run Salt Virt, and that the information gathered about the cloud is live and accurate.

Deploy from Network or Disk

Virtual Machine Disk Profiles

Salt Virt allows for the disks created for deployed virtual machines to be finely configured. The configuration is a simple data structure which is read from the config.option function, meaning that the configuration can be stored in the minion config file, the master config file, or the minion's pillar.

This configuration option is called virt.disk. The default virt.disk data structure looks like this:

virt.disk:
  default:
    - system:
        size: 8192
        format: qcow2
        model: virtio

Note

The format and model do not need to be defined; Salt will default to the optimal format used by the underlying hypervisor. In the case of KVM, these are qcow2 and virtio.

This configuration sets up a disk profile called default. The default profile creates a single system disk on the virtual machine.

Define More Profiles

Many environments will require more complex disk profiles, and may require more than one profile; this can be easily accomplished:

virt.disk:
  default:
    - system:
        size: 8192
  database:
    - system:
        size: 8192
    - data:
        size: 30720
  web:
    - system:
        size: 1024
    - logs:
        size: 5120

This configuration allows one of three profiles to be selected, allowing virtual machines to be created to match the storage needs of the deployed VM.

Virtual Machine Network Profiles

Salt Virt allows for the network devices created for deployed virtual machines to be finely configured. The configuration is a simple data structure which is read from the config.option function, meaning that the configuration can be stored in the minion config file, the master config file, or the minion's pillar.

This configuration option is called virt.nic. The virt.nic option is empty by default, but Salt behaves as though it were set to the following data structure:

virt.nic:
  default:
    eth0:
      bridge: br0
      model: virtio

Note

The model does not need to be defined; Salt will default to the optimal model for the underlying hypervisor. In the case of KVM this model is virtio.

This configuration sets up a network profile called default. The default profile creates a single Ethernet device on the virtual machine that is bridged to the hypervisor's br0 interface. This default setup does not require setting up the virt.nic configuration, and is the reason why a default install only requires setting up the br0 bridge device on the hypervisor.

Define More Profiles

Many environments will require more complex network profiles, and may require more than one profile; this can be easily accomplished:

virt.nic:
  dual:
    eth0:
      bridge: service_br
    eth1:
      bridge: storage_br
  single:
    eth0:
      bridge: service_br
  triple:
    eth0:
      bridge: service_br
    eth1:
      bridge: storage_br
    eth2:
      bridge: dmz_br
  all:
    eth0:
      bridge: service_br
    eth1:
      bridge: storage_br
    eth2:
      bridge: dmz_br
    eth3:
      bridge: database_br
  dmz:
    eth0:
      bridge: service_br
    eth1:
      bridge: dmz_br
  database:
    eth0:
      bridge: service_br
    eth1:
      bridge: database_br

This configuration allows one of six profiles to be selected, allowing virtual machines to be created which attach to different networks depending on the needs of the deployed VM.

Understanding YAML

The default renderer for SLS files is the YAML renderer. YAML is a human-readable data serialization format with many powerful features. However, Salt uses a small subset of YAML that maps onto very commonly used data structures, like lists and dictionaries. It is the job of the YAML renderer to take the YAML data structure and compile it into a Python data structure for use by Salt.

Though YAML syntax may seem daunting and terse at first, there are only three very simple rules to remember when writing YAML for SLS files.

Rule One: Indentation

YAML uses a fixed indentation scheme to represent relationships between data layers. Salt requires that the indentation for each level consists of exactly two spaces. Do not use tabs.

Rule Two: Colons

Python dictionaries are, of course, simply key-value pairs. Users from other languages may recognize this data type as hashes or associative arrays.

Dictionary keys are represented in YAML as strings terminated by a trailing colon. Values can be represented either by a string following the colon, separated by a space:

my_key: my_value

In Python, the above maps to:

{'my_key': 'my_value'}

Alternatively, a value can be associated with a key through indentation.

my_key:
  my_value

Note

The above syntax is valid YAML but is uncommon in SLS files because most often, the value for a key is not singular but instead is a list of values.

In Python, the above maps to:

{'my_key': 'my_value'}

Dictionaries can be nested:

first_level_dict_key:
  second_level_dict_key: value_in_second_level_dict

And in Python:

{
    'first_level_dict_key': {
        'second_level_dict_key': 'value_in_second_level_dict'
    }
}

Rule Three: Dashes

To represent lists of items, a single dash followed by a space is used. Multiple items are part of the same list when they share the same level of indentation.

- list_value_one
- list_value_two
- list_value_three

Lists can be the value of a key-value pair. This is quite common in Salt:

my_dictionary:
  - list_value_one
  - list_value_two
  - list_value_three

In Python, the above maps to:

{'my_dictionary': ['list_value_one', 'list_value_two', 'list_value_three']}

Learning More

One easy way to learn more about how YAML gets rendered into Python data structures is to use an online YAML parser to see the Python output.

One excellent choice for experimenting with YAML parsing is: http://yaml-online-parser.appspot.com/
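
The same experiment can also be run locally with the PyYAML library, which is what Salt's YAML renderer is built on; a minimal sketch:

# Requires the PyYAML package
import yaml

sls_snippet = """
my_dictionary:
  - list_value_one
  - list_value_two
"""

# safe_load renders the YAML text into the equivalent Python data structure
print(yaml.safe_load(sls_snippet))
# {'my_dictionary': ['list_value_one', 'list_value_two']}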

Master Tops System

In 0.10.4 the external_nodes system was upgraded to allow for modular subsystems to be used to generate the top file data for a highstate run on the master.

The old external_nodes option has been removed. The master tops system contains a number of subsystems that are loaded via the Salt loader interfaces like modules, states, returners, runners, etc.

Using the new master_tops option is simple:

master_tops:
  ext_nodes: cobbler-external-nodes

for Cobbler or:

master_tops:
  reclass:
    inventory_base_uri: /etc/reclass
    classes_uri: roles

for Reclass.

It's also possible to create custom master_tops modules. These modules must go in a subdirectory called tops in the extension_modules directory. The extension_modules directory is not defined by default (the default /srv/salt/_modules will NOT work as of this release).

Custom tops modules are written like any other execution module; see the source for the two modules above for examples of fully functional ones. Below is a degenerate example:

/etc/salt/master:

extension_modules: /srv/salt/modules
master_tops:
  customtop: True

/srv/salt/modules/tops/customtop.py:

import logging

# Define the module's virtual name
__virtualname__ = 'customtop'

log = logging.getLogger(__name__)


def __virtual__():
    return __virtualname__


def top(**kwargs):
    # Statically assign the 'test' SLS to every minion in the base environment
    log.debug('Calling top in customtop')
    return {'base': ['test']}

salt minion state.show_top should then display something like:

$ salt minion state.show_top

minion
    ----------
    base:
      - test

Salt SSH

Note

Salt SSH is considered production ready as of version 2014.7.0.

Note

On many systems, the salt-ssh executable will be in its own package, usually named salt-ssh.

In version 0.17.0 of Salt a new transport system was introduced: the ability to use SSH for Salt communication. This addition allows Salt routines to be executed on remote systems entirely through SSH, bypassing the need for a Salt Minion to be running on the remote systems and the need for a Salt Master.

Note

The Salt SSH system does not supersede the standard Salt communication systems; it simply offers an SSH-based alternative that does not require ZeroMQ and a remote agent. Be aware that since all communication with Salt SSH is executed via SSH, it is substantially slower than standard Salt with ZeroMQ.

Salt SSH is very easy to use: simply set up a basic roster file of the systems to connect to and run salt-ssh commands in a similar way as standard salt commands.

Note

Salt SSH is eventually intended to support the same set of commands and functionality as the standard salt command.

At the moment, fileserver operations must be wrapped to ensure that the relevant files are delivered with the salt-ssh commands. The state module is an exception: it compiles the state run on the master and, in the process, finds all the references to salt:// paths and copies those files down in the same tarball as the state run. However, the needed fileserver wrappers are still under development.

Salt SSH Roster

The roster system in Salt allows for remote minions to be easily defined.

Note

See the Roster documentation for more details.

Simply create the roster file; the default location is /etc/salt/roster:

web1: 192.168.42.1

This is a very basic roster file where a Salt ID is being assigned to an IP address. A more elaborate roster can be created:

web1:
  host: 192.168.42.1 # The IP addr or DNS hostname
  user: fred         # Remote executions will be executed as user fred
  passwd: foobarbaz  # The password to use for login, if omitted, keys are used
  sudo: True         # Whether to sudo to root, not enabled by default
web2:
  host: 192.168.42.2

Note

sudo works only if NOPASSWD is set for user in /etc/sudoers: fred ALL=(ALL) NOPASSWD: ALL

Calling Salt SSH

The salt-ssh command can be easily executed in the same way as a salt command:

salt-ssh '*' test.ping

Commands with salt-ssh follow the same syntax as the salt command.

The standard salt functions are available! The output is the same as salt and many of the same flags are available. Please see http://docs.saltstack.com/ref/cli/salt-ssh.html for all of the available options.

Raw Shell Calls

By default salt-ssh runs Salt execution modules on the remote system, but salt-ssh can also execute raw shell commands:

salt-ssh '*' -r 'ifconfig'

States Via Salt SSH

The Salt State system can also be used with salt-ssh. The state system abstracts the same interface to the user in salt-ssh as it does when using standard salt. The intent is that Salt Formulas defined for standard salt will work seamlessly with salt-ssh and vice-versa.

The standard Salt States walkthroughs function by simply replacing salt commands with salt-ssh.
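
For example, an individual SLS file can be applied over SSH just as it would be with standard salt (the apache SLS name is illustrative):

salt-ssh '*' state.sls apache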

Targeting with Salt SSH

Because the targeting approach differs in salt-ssh, only glob and regex targets are supported as of this writing; the remaining target systems still need to be implemented.

Note

By default, grains are settable through salt-ssh; however, these grains will not be persisted across reboots.

See the "thin_dir" setting in Roster documentation for more details.

Configuring Salt SSH

Salt SSH takes its configuration from a master configuration file. Normally, this file is in /etc/salt/master. If one wishes to use a customized configuration file, the -c option to Salt SSH facilitates passing in a directory to look inside for a configuration file named master.

Minion Config

New in version 2015.5.1.

Minion config options can be defined globally using the master configuration option ssh_minion_opts. It can also be defined on a per-minion basis with the minion_opts entry in the roster.
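
For example, minion options can be set globally in the master config (the gpg_keydir value is illustrative):

ssh_minion_opts:
  gpg_keydir: /root/gpg

or per-minion in the roster:

web1:
  host: 192.168.42.1
  minion_opts:
    gpg_keydir: /root/gpg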

Running Salt SSH as non-root user

By default, Salt reads all of its configuration from /etc/salt/. If you are running Salt SSH as a regular user you have to modify some paths or you will get "Permission denied" messages. You have to modify two parameters: pki_dir and cachedir. These should point to a full path writable by the user.

It is recommended not to modify /etc/salt for this purpose. Instead, create a private copy of /etc/salt for the user and run the command with -c /new/config/path.
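
A minimal sketch of such a setup, assuming a user named fred (all paths are illustrative): the file /home/fred/salt/master might contain

pki_dir: /home/fred/salt/pki
cachedir: /home/fred/salt/cache

and salt-ssh would then be invoked as:

salt-ssh -c /home/fred/salt '*' test.ping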

Define CLI Options with Saltfile

If you are commonly passing in CLI options to salt-ssh, you can create a Saltfile to automatically use these options. This is common if you're managing several different salt projects on the same server.

So you can cd into a directory that has a Saltfile with the following YAML contents:

salt-ssh:
  config_dir: path/to/config/dir
  max_procs: 30
  wipe_ssh: True

Instead of having to call salt-ssh --config-dir=path/to/config/dir --max-procs=30 --wipe \* test.ping, you can simply call salt-ssh \* test.ping.

Boolean-style options should be specified in their YAML representation.

Note

The option keys specified must match the destination attributes for the options specified in the parser salt.utils.parsers.SaltSSHOptionParser. For example, in the case of the --wipe command line option, its dest is configured to be wipe_ssh, and thus this is what should be configured in the Saltfile. Using the flag names for this option, such as wipe: True or w: True, will not work.

Salt Rosters

Salt rosters are pluggable systems added in Salt 0.17.0 to facilitate the salt-ssh system. The roster system was created because salt-ssh needs a means to identify which systems need to be targeted for execution.

Note

The Roster System is not needed or used in standard Salt because the master does not need to be initially aware of target systems, since the Salt Minion checks itself into the master.

Since the roster system is pluggable, it can be easily augmented to attach to any existing systems to gather information about what servers are presently available and should be attached to by salt-ssh. By default the roster file is located at /etc/salt/roster.

How Rosters Work

The roster system compiles a data structure internally referred to as targets. The targets data structure is a list of target systems and attributes describing how to connect to them. The only requirement for a roster module in Salt is to return the targets data structure.

Targets Data

The information which can be stored in a roster target is the following:

<Salt ID>:       # The id to reference the target system with
    host:        # The IP address or DNS name of the remote host
    user:        # The user to log in as
    passwd:      # The password to log in with

    # Optional parameters
    port:        # The target system's ssh port number
    sudo:        # Boolean to run command via sudo
    priv:        # File path to ssh private key, defaults to salt-ssh.rsa
    timeout:     # Number of seconds to wait for a response when
                 # establishing an SSH connection
    minion_opts: # Dictionary of minion opts
    thin_dir:    # The target system's storage directory for Salt
                 # components. Defaults to /tmp/salt-<hash>.

thin_dir

Salt needs to upload a standalone environment to the target system, and this defaults to /tmp/salt-<hash>. This directory will be cleaned up as part of normal system operation.

If you need a persistent Salt environment, for instance to set persistent grains, this value will need to be changed.
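
For example, a roster entry pointing thin_dir at a persistent location might look like this (the path is illustrative):

web1:
  host: 192.168.42.1
  thin_dir: /var/tmp/salt-thin/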

Reference

Full list of builtin auth modules

auto An "Always Approved" eauth interface to test against, not intended for
django Provide authentication using Django Web Framework
keystone Provide authentication using OpenStack Keystone
ldap Provide authentication using simple LDAP binds
mysql Provide authentication using MySQL.
pam Authenticate against PAM
pki Authenticate via a PKI certificate.
stormpath Provide authentication using Stormpath.
yubico Provide authentication using YubiKey.

Command Line Reference

Salt can be controlled by a command line client by the root user on the Salt master. The Salt command line client uses the Salt client API to communicate with the Salt master server. The Salt client is straightforward and simple to use.

Using the Salt client, commands can be easily sent to the minions.

Each of these commands accepts an explicit --config option to point to either the master or minion configuration file. If this option is not provided and the default configuration file does not exist then Salt falls back to use the environment variables SALT_MASTER_CONFIG and SALT_MINION_CONFIG.
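
For example, a sketch relying on the environment variable fallback (the path is illustrative):

SALT_MASTER_CONFIG=/home/fred/salt/master salt '*' test.ping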

See also

Configuration

Using the Salt Command

The Salt command needs a few components to send information to the Salt minions: the target minions need to be defined, along with the function to call and any arguments the function requires.

Defining the Target Minions

The first argument passed to salt defines the target minions; the target minions are accessed via their hostname. The default target type is a bash glob:

salt '*foo.com' sys.doc

Salt can also define the target minions with regular expressions:

salt -E '.*' cmd.run 'ls -l | grep foo'

Or to explicitly list hosts, salt can take a list:

salt -L foo.bar.baz,quo.qux cmd.run 'ps aux | grep foo'

More Powerful Targets

The simple target specifications (glob, regex, and list) will cover many use cases, and for some deployments will cover all use cases, but more powerful options exist.

Targeting with Grains

The Grains interface was built into Salt to allow minions to be targeted by system properties, so that, for example, minions running a particular operating system or kernel can be called to execute a function.

Calling via a grain is done by passing the -G option to salt, specifying a grain and a glob expression to match the value of the grain. The syntax for the target is the grain key followed by a glob expression: "os:Arch*".

salt -G 'os:Fedora' test.ping

This will return True from all of the minions running Fedora.

To discover what grains are available and what their values are, execute the grains.items salt function:

salt '*' grains.items

More information on targeting with grains can be found in the targeting documentation.

Targeting with Executions

As of 0.8.8, targeting with executions is still under heavy development, and this documentation is written to reference the future behavior of execution matching.

Execution matching allows for a primary function to be executed first, and then, based on its return data, the main function is executed.

Execution matching allows for matching minions based on any arbitrary running data on the minions.

Compound Targeting

New in version 0.9.5.

Multiple target interfaces can be used in conjunction to determine the command targets. These targets can then be combined using and or or statements. This is well defined with an example:

salt -C 'G@os:Debian and webser* or E@db.*' test.ping

In this example any minion whose id starts with webser and is running Debian, or any minion whose id starts with db, will be matched.

The type of matcher defaults to glob, but can be specified with the corresponding letter followed by the @ symbol. In the above example a grain is used with G@ as well as a regular expression with E@. The webser* target does not need to be prefaced with a target type specifier because it is a glob.

More information on compound targeting can be found in the targeting documentation.

Node Group Targeting

New in version 0.9.5.

For certain cases, it can be convenient to have a predefined group of minions on which to execute commands. This can be accomplished using what are called nodegroups. Nodegroups allow for predefined compound targets to be declared in the master configuration file, as a sort of shorthand for having to type out complicated compound expressions.

nodegroups:
  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
  group2: 'G@os:Debian and foo.domain.com'
  group3: 'G@os:Debian and N@group1'
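
A nodegroup is then targeted with the -N option; for example, using group1 from above:

salt -N group1 test.ping
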
Calling the Function

The function to call on the specified target is placed after the target specification.

New in version 0.9.8.

Functions may also accept arguments, space-delimited:

salt '*' cmd.exec_code python 'import sys; print sys.version'

Optional keyword arguments are also supported:

salt '*' pip.install salt timeout=5 upgrade=True

They are always in the form of kwarg=argument.

Arguments are formatted as YAML:

salt '*' cmd.run 'echo "Hello: $FIRST_NAME"' env='{FIRST_NAME: "Joe"}'

Note: dictionaries must have curly braces around them (like the env keyword argument above). This was changed in 0.15.1: in the above example, the first argument used to be parsed as the dictionary {'echo "Hello': '$FIRST_NAME"'}. This was generally not the expected behavior.

If you want to test what parameters are actually passed to a module, use the test.arg_repr command:

salt '*' test.arg_repr 'echo "Hello: $FIRST_NAME"' env='{FIRST_NAME: "Joe"}'

Finding available minion functions

The Salt functions are self documenting; all of the function documentation can be retrieved from the minions via the sys.doc() function:

salt '*' sys.doc

Compound Command Execution

If a series of commands needs to be sent to a single target specification then the commands can be sent in a single publish. This can make gathering groups of information faster, and lowers the stress on the network for repeated commands.

Compound command execution works by sending a list of functions and arguments instead of sending a single function and argument. The functions are executed on the minion in the order they are defined on the command line, and the data from all of the commands is returned in a dictionary. This means that the set of commands is called in a predictable way, and the returned data can be easily interpreted.

Executing compound commands is done by passing a comma-delimited list of functions, followed by a comma-delimited list of arguments:

salt '*' cmd.run,test.ping,test.echo 'cat /proc/cpuinfo',,foo

The trick to look out for here is that if a function is being passed no arguments, then there needs to be a placeholder for the absent arguments. This is why, in the above example, there are two commas right next to each other: test.ping takes no arguments, so we need to add another comma, otherwise Salt would attempt to pass "foo" to test.ping.

If you need to pass arguments that include commas, then make sure you add spaces around the commas that separate arguments. For example:

salt '*' cmd.run,test.ping,test.echo 'echo "1,2,3"' , , foo

You may change the arguments separator using the --args-separator option:

salt --args-separator=:: '*' some.fun,test.echo params with , comma :: foo

salt-call

salt-call
Synopsis
salt-call [options]
Description

The salt-call command is used to run module functions locally on a minion instead of executing them from the master.

Options
--version

Print the version of Salt that is running.

--versions-report

Show program's dependencies and version number, and then exit

-h, --help

Show the help message and exit

-c CONFIG_DIR, --config-dir=CONFIG_DIR

The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.

--hard-crash

Raise any original exception rather than exiting gracefully. Default: False

-g, --grains

Return the information generated by the Salt grains

-m MODULE_DIRS, --module-dirs=MODULE_DIRS

Specify an additional directory to pull modules from. Multiple directories can be provided by passing -m or --module-dirs multiple times.

-d, --doc, --documentation

Return the documentation for the specified module or for all modules if none are specified

--master=MASTER

Specify the master to use. The minion must be authenticated with the master. If this option is omitted, the master options from the minion config will be used. If multiple masters are set up, the first listed master that responds will be used.

--return RETURNER

Set salt-call to pass the return data to one or many returner interfaces. To use many returner interfaces specify a comma delimited list of returners.

--local

Run salt-call locally, as if there was no master running.

--file-root=FILE_ROOT

Set this directory as the base file root.

--pillar-root=PILLAR_ROOT

Set this directory as the base pillar root.

--retcode-passthrough

Exit with the salt call retcode and not the salt binary retcode

--metadata

Print out the execution metadata as well as the return. This will print out the outputter data, the return code, etc.

--id=ID

Specify the minion id to use. If this option is omitted, the id option from the minion config will be used.

--skip-grains

Do not load grains.

--refresh-grains-cache

Force a refresh of the grains cache

Logging Options

Logging options which override any settings defined on the configuration files.

-l LOG_LEVEL, --log-level=LOG_LEVEL

Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: info.

--log-file=LOG_FILE

Log file path. Default: /var/log/salt/minion.

--log-file-level=LOG_LEVEL_LOGFILE

Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: info.

Output Options
--out

Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters:

grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml

Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data.

If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.

Note

If using --out=json, you will probably want --static as well. Without the static option, you will get a JSON string for each minion. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well.

--out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT

Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.

--out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE

Write the output to the specified file.

--no-color

Disable all colored output

--force-color

Force colored output

Note

When using colored output the color codes are as follows:

green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.

See also

salt(1) salt-master(1) salt-minion(1)

salt

salt
Synopsis

salt '*' [ options ] sys.doc

salt -E '.*' [ options ] sys.doc cmd

salt -G 'os:Arch.*' [ options ] test.ping

salt -C 'G@os:Arch.* and webserv* or G@kernel:FreeBSD' [ options ] test.ping

Description

Salt allows for commands to be executed across a swath of remote systems in parallel. This means that remote systems can be both controlled and queried with ease.

Options
--version

Print the version of Salt that is running.

--versions-report

Show program's dependencies and version number, and then exit

-h, --help

Show the help message and exit

-c CONFIG_DIR, --config-dir=CONFIG_DIR

The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.

-t TIMEOUT, --timeout=TIMEOUT

The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 5

-s, --static

By default, as of version 0.9.8, the salt command returns data to the console as it is received from minions, but previous releases returned data only after all data was received. To only return the data with a hard timeout and after all minions have returned, use the static option.

--async

Instead of waiting for the job to run on minions, only print the job id of the started execution and complete.

--state-output=STATE_OUTPUT

New in version 0.17.

Override the configured state_output value for minion output. One of full, terse, mixed, changes or filter. Default: full.

--subset=SUBSET

Execute the routine on a random subset of the targeted minions. The minions will be verified to have the named function before executing.

-v, --verbose

Turn on verbosity for the salt call; this will cause the salt command to print out extra data, like the job id.

--hide-timeout

Instead of showing the return data for all minions, this option prints only the online minions which could be reached.

-b BATCH, --batch-size=BATCH

Instead of executing on all targeted minions at once, execute on a progressive set of minions. This option takes an argument in the form of an explicit number of minions to execute at once, or a percentage of minions to execute on.
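
For example, to execute on no more than 10% of the targeted minions at a time (a sketch):

salt '*' -b 10% test.ping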

-a EAUTH, --auth=EAUTH

Pass in an external authentication medium to validate against. The credentials will be prompted for. The options are auto, keystone, ldap, pam, and stormpath. Can be used with the -T option.

-T, --make-token

Used in conjunction with the -a option. This creates a token that allows for the authenticated user to send commands without needing to re-authenticate.
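
For example, a sketch that authenticates via PAM and caches a token for subsequent commands:

salt -a pam -T '*' test.ping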

--return=RETURNER

Choose an alternative returner to call on the minion, if an alternative returner is used then the return will not come back to the command line but will be sent to the specified return system. The options are carbon, cassandra, couchbase, couchdb, elasticsearch, etcd, hipchat, local, local_cache, memcache, mongo, mysql, odbc, postgres, redis, sentry, slack, sms, smtp, sqlite3, syslog, and xmpp.

-d, --doc, --documentation

Return the documentation for the module functions available on the minions

--args-separator=ARGS_SEPARATOR

Set the special argument used as a delimiter between command arguments of compound commands. This is useful when one wants to pass commas as arguments to some of the commands in a compound command.

Logging Options

Logging options which override any settings defined on the configuration files.

-l LOG_LEVEL, --log-level=LOG_LEVEL

Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

--log-file=LOG_FILE

Log file path. Default: /var/log/salt/master.

--log-file-level=LOG_LEVEL_LOGFILE

Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

Target Selection
-E, --pcre

The target expression will be interpreted as a PCRE regular expression rather than a shell glob.

-L, --list

The target expression will be interpreted as a comma-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux

-G, --grain

The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<glob expression>'; example: 'os:Arch*'

This was changed in version 0.9.8 to accept glob expressions instead of regular expressions. To use regular expression matching with grains, use the --grain-pcre option.

--grain-pcre

The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<regular expression>'; example: 'os:Arch.*'

-N, --nodegroup

Use a predefined compound target defined in the Salt master configuration file.

-R, --range

Instead of using shell globs to evaluate the target, use a range expression to identify targets. Range expressions look like %cluster.

Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file.

-C, --compound

Utilize many target definitions to make the call very granular. This option takes a group of targets separated by and or or. The default matcher is a glob as usual. If something other than a glob is used, preface it with the letter denoting the type; example: 'webserv* and G@os:Debian or E@db*' Make sure that the compound target is encapsulated in quotes.

-I, --pillar

Instead of using shell globs to evaluate the target, use a pillar value to identify targets. The syntax for the target is the pillar key followed by a glob expression: "role:production*"

-S, --ipcidr

Match based on Subnet (CIDR notation) or IPv4 address.

Output Options
--out

Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters:

grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml

Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data.

If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.

Note

If using --out=json, you will probably want --static as well. Without the static option, you will get a JSON string for each minion. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well.

--out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT

Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.

--out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE

Write the output to the specified file.

--no-color

Disable all colored output

--force-color

Force colored output

Note

When using colored output the color codes are as follows:

green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.

See also

salt(7) salt-master(1) salt-minion(1)

salt-cloud

salt-cloud

Provision virtual machines in the cloud with Salt

Synopsis
salt-cloud -m /etc/salt/cloud.map

salt-cloud -m /etc/salt/cloud.map NAME

salt-cloud -m /etc/salt/cloud.map NAME1 NAME2

salt-cloud -p PROFILE NAME

salt-cloud -p PROFILE NAME1 NAME2 NAME3 NAME4 NAME5 NAME6
Description

Salt Cloud is the system used to provision virtual machines on various public clouds via a cleanly controlled profile and mapping system.

Options
--version

Print the version of Salt that is running.

--versions-report

Show program's dependencies and version number, and then exit

-h, --help

Show the help message and exit

-c CONFIG_DIR, --config-dir=CONFIG_DIR

The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.

Execution Options
-L LOCATION, --location=LOCATION

Specify which region to connect to.

-a ACTION, --action=ACTION

Perform an action that may be specific to this cloud provider. This argument requires one or more instance names to be specified.

-f <FUNC-NAME> <PROVIDER>, --function=<FUNC-NAME> <PROVIDER>

Perform a function that may be specific to this cloud provider and does not apply to an instance. This argument requires a provider to be specified (i.e.: nova).

-p PROFILE, --profile=PROFILE

Select a single profile to build the named cloud VMs from. The profile must be defined in the specified profiles file.

-m MAP, --map=MAP

Specify a map file to use. If used without any other options, this option will ensure that all of the mapped VMs are created. If the named VM already exists then it will be skipped.

-H, --hard

When specifying a map file, the default behavior is to ensure that all of the VMs specified in the map file are created. If the --hard option is set, then any VMs that exist on configured cloud providers that are not specified in the map file will be destroyed. Be advised that this can be a destructive operation and should be used with care.

-d, --destroy

Pass in the name(s) of VMs to destroy, salt-cloud will search the configured cloud providers for the specified names and destroy the VMs. Be advised that this is a destructive operation and should be used with care. Can be used in conjunction with the -m option to specify a map of VMs to be deleted.

-P, --parallel

Normally when building many cloud VMs they are executed serially. The -P option will run each cloud VM build in a separate process, allowing large groups of VMs to be built at once.

Be advised that some cloud providers' systems don't seem to be well suited for this influx of VM creation. When creating large groups of VMs, watch the cloud provider carefully.

-Q, --query

Execute a query and print out information about all cloud VMs. Can be used in conjunction with -m to display only information about the specified map.

-u, --update-bootstrap

Update salt-bootstrap to the latest develop version on GitHub.

-y, --assume-yes

Default yes in answer to all confirmation questions.

-k, --keep-tmp

Do not remove files from /tmp/ after deploy.sh finishes.

--show-deploy-args

Include the options used to deploy the minion in the data returned.

--script-args=SCRIPT_ARGS

Script arguments to be fed to the bootstrap script when deploying the VM.

Query Options
-Q, --query

Execute a query and return some information about the nodes running on configured cloud providers

-F, --full-query

Execute a query and print out all available information about all cloud VMs. Can be used in conjunction with -m to display only information about the specified map.

-S, --select-query

Execute a query and print out selected information about all cloud VMs. Can be used in conjunction with -m to display only information about the specified map.

--list-providers

Display a list of configured providers.

--list-profiles

New in version 2014.7.0.

Display a list of configured profiles. Pass in a cloud provider to view the provider's associated profiles, such as digital_ocean, or pass in all to list all the configured profiles.

Cloud Providers Listings
--list-locations=LIST_LOCATIONS

Display a list of locations available in configured cloud providers. Pass the cloud provider that available locations are desired on, such as "linode", or pass "all" to list locations for all configured cloud providers.

--list-images=LIST_IMAGES

Display a list of images available in configured cloud providers. Pass the cloud provider that available images are desired on, such as "linode", or pass "all" to list images for all configured cloud providers.

--list-sizes=LIST_SIZES

Display a list of sizes available in configured cloud providers. Pass the cloud provider that available sizes are desired on, such as "AWS", or pass "all" to list sizes for all configured cloud providers.

Cloud Credentials
--set-password=<USERNAME> <PROVIDER>

Configure the password for a cloud provider and save it to the keyring. PROVIDER can be specified with or without a driver; for example: "--set-password bob rackspace" or, more specifically, "--set-password bob rackspace:openstack". DEPRECATED!

Output Options
--out

Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters:

grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml

Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data.

If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.

Note

If using --out=json, you will probably want --static as well. Without the static option, you will get a JSON string for each minion. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well.

--out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT

Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.

--out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE

Write the output to the specified file.

--no-color

Disable all colored output

--force-color

Force colored output

Note

When using colored output the color codes are as follows:

green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.

Examples

To create 4 VMs named web1, web2, db1, and db2 from specified profiles:

salt-cloud -p fedora_rackspace web1 web2 db1 db2

To read in a map file and create all VMs specified therein:

salt-cloud -m /path/to/cloud.map

To read in a map file and create all VMs specified therein in parallel:

salt-cloud -m /path/to/cloud.map -P

To delete any VMs specified in the map file:

salt-cloud -m /path/to/cloud.map -d

To delete any VMs NOT specified in the map file:

salt-cloud -m /path/to/cloud.map -H

To display the status of all VMs specified in the map file:

salt-cloud -m /path/to/cloud.map -Q

See also

salt-cloud(7) salt(7) salt-master(1) salt-minion(1)

salt-cp

salt-cp

Copy a file to a set of systems

Synopsis
salt-cp '*' [ options ] SOURCE DEST

salt-cp -E '.*' [ options ] SOURCE DEST

salt-cp -G 'os:Arch.*' [ options ] SOURCE DEST
Description

Salt copy copies a local file out to all of the Salt minions matched by the given target.

Note: salt-cp uses salt's publishing mechanism. This means the privacy of the contents of the file on the wire is completely dependent upon the transport in use. In addition, if the salt-master is running with debug logging it is possible that the contents of the file will be logged to disk.

Options
--version

Print the version of Salt that is running.

--versions-report

Show program's dependencies and version number, and then exit

-h, --help

Show the help message and exit

-c CONFIG_DIR, --config-dir=CONFIG_DIR

The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.

-t TIMEOUT, --timeout=TIMEOUT

The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 5

Logging Options

Logging options which override any settings defined on the configuration files.

-l LOG_LEVEL, --log-level=LOG_LEVEL

Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

--log-file=LOG_FILE

Log file path. Default: /var/log/salt/master.

--log-file-level=LOG_LEVEL_LOGFILE

Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

Target Selection
-E, --pcre

The target expression will be interpreted as a PCRE regular expression rather than a shell glob.

-L, --list

The target expression will be interpreted as a comma-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux

-G, --grain

The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<glob expression>'; example: 'os:Arch*'

This was changed in version 0.9.8 to accept glob expressions instead of regular expressions. To use regular expression matching with grains, use the --grain-pcre option.

--grain-pcre

The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<regular expression>'; example: 'os:Arch.*'

-N, --nodegroup

Use a predefined compound target defined in the Salt master configuration file.

-R, --range

Instead of using shell globs to evaluate the target, use a range expression to identify targets. Range expressions look like %cluster.

Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file.

See also

salt(1) salt-master(1) salt-minion(1)

salt-key

salt-key
Synopsis
salt-key [ options ]
Description

Salt-key executes simple management of Salt server public keys used for authentication.

Options
--version

Print the version of Salt that is running.

--versions-report

Show program's dependencies and version number, and then exit

-h, --help

Show the help message and exit

-c CONFIG_DIR, --config-dir=CONFIG_DIR

The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.

-u USER, --user=USER

Specify user to run salt-key

--hard-crash

Raise any original exception rather than exiting gracefully. Default is False.

-q, --quiet

Suppress output

-y, --yes

Answer 'Yes' to all questions presented; defaults to False

--rotate-aes-key=ROTATE_AES_KEY

Setting this to False prevents the master from refreshing the key session when keys are deleted or rejected; this lowers the security of the key deletion/rejection operation. Default is True.

Logging Options

Logging options which override any settings defined on the configuration files.

--log-file=LOG_FILE

Log file path. Default: /var/log/salt/minion.

--log-file-level=LOG_LEVEL_LOGFILE

Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

Output Options
--out

Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters:

grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml

Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data.

If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.

Note

If using --out=json, you will probably want --static as well. Without the static option, you will get a JSON string for each minion. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well.

--out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT

Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.

--out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE

Write the output to the specified file.

--no-color

Disable all colored output

--force-color

Force colored output

Note

When using colored output the color codes are as follows:

green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.

Actions
-l ARG, --list=ARG

List the public keys. The args pre, un, and unaccepted will list unaccepted/unsigned keys. acc or accepted will list accepted/signed keys. rej or rejected will list rejected keys. Finally, all will list all keys.

-L, --list-all

List all public keys. (Deprecated: use --list all)

-a ACCEPT, --accept=ACCEPT

Accept the specified public key (use --include-all to match rejected keys in addition to pending keys). Globs are supported.

-A, --accept-all

Accepts all pending keys.

-r REJECT, --reject=REJECT

Reject the specified public key (use --include-all to match accepted keys in addition to pending keys). Globs are supported.

-R, --reject-all

Rejects all pending keys.

--include-all

Include non-pending keys when accepting/rejecting.

-p PRINT, --print=PRINT

Print the specified public key.

-P, --print-all

Print all public keys

-d DELETE, --delete=DELETE

Delete the specified key. Globs are supported.

-D, --delete-all

Delete all keys.

-f FINGER, --finger=FINGER

Print the specified key's fingerprint.

-F, --finger-all

Print all keys' fingerprints.

Key Generation Options
--gen-keys=GEN_KEYS

Set a name to generate a keypair for use with salt

--gen-keys-dir=GEN_KEYS_DIR

Set the directory in which to save the generated keypair. Only works with the '--gen-keys' option; default is the current directory.

--keysize=KEYSIZE

Set the keysize for the generated key. Only works with the '--gen-keys' option. The key size must be 2048 or higher; otherwise it will be rounded up to 2048. The default is 2048.

--gen-signature

Create a signature file of the master's public key, named master_pubkey_signature. The signature can be sent to a minion in the master's auth-reply and enables the minion to verify the master's public key cryptographically. This requires a new signing key-pair, which can be auto-created with the --auto-create parameter.

--priv=PRIV

The private-key file to create a signature with

--signature-path=SIGNATURE_PATH

The path where the signature file should be written

--pub=PUB

The public-key file to create a signature for

--auto-create

Auto-create a signing key-pair if it does not yet exist

See also

salt(7) salt-master(1) salt-minion(1)

salt-master

salt-master

The Salt master daemon, used to control the Salt minions

Synopsis
salt-master [ options ]
Description

The master daemon controls the Salt minions

Options
--version

Print the version of Salt that is running.

--versions-report

Show program's dependencies and version number, and then exit

-h, --help

Show the help message and exit

-c CONFIG_DIR, --config-dir=CONFIG_DIR

The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.

-u USER, --user=USER

Specify user to run salt-master

-d, --daemon

Run salt-master as a daemon

--pid-file PIDFILE

Specify the location of the pidfile. Default: /var/run/salt-master.pid

Logging Options

Logging options which override any settings defined on the configuration files.

-l LOG_LEVEL, --log-level=LOG_LEVEL

Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

--log-file=LOG_FILE

Log file path. Default: /var/log/salt/master.

--log-file-level=LOG_LEVEL_LOGFILE

Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

See also

salt(1) salt(7) salt-minion(1)

salt-minion

salt-minion

The Salt minion daemon, receives commands from a remote Salt master.

Synopsis
salt-minion [ options ]
Description

The Salt minion receives commands from the central Salt master and replies with the results of said commands.

Options
--version

Print the version of Salt that is running.

--versions-report

Show program's dependencies and version number, and then exit

-h, --help

Show the help message and exit

-c CONFIG_DIR, --config-dir=CONFIG_DIR

The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.

-u USER, --user=USER

Specify user to run salt-minion

-d, --daemon

Run salt-minion as a daemon

--pid-file PIDFILE

Specify the location of the pidfile. Default: /var/run/salt-minion.pid

Logging Options

Logging options which override any settings defined on the configuration files.

-l LOG_LEVEL, --log-level=LOG_LEVEL

Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

--log-file=LOG_FILE

Log file path. Default: /var/log/salt/minion.

--log-file-level=LOG_LEVEL_LOGFILE

Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

See also

salt(1) salt(7) salt-master(1)

salt-run

salt-run

Execute a Salt runner

Synopsis
salt-run RUNNER
Description

salt-run is the frontend command for executing Salt Runners. Salt runners are simple modules used to execute convenience functions on the master.

Options
--version

Print the version of Salt that is running.

--versions-report

Show program's dependencies and version number, and then exit

-h, --help

Show the help message and exit

-c CONFIG_DIR, --config-dir=CONFIG_DIR

The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.

-t TIMEOUT, --timeout=TIMEOUT

The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 1

--hard-crash

Raise any original exception rather than exiting gracefully. Default is False.

-d, --doc, --documentation

Display documentation for runners, pass a module or a runner to see documentation on only that module/runner.

Logging Options

Logging options which override any settings defined on the configuration files.

-l LOG_LEVEL, --log-level=LOG_LEVEL

Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

--log-file=LOG_FILE

Log file path. Default: /var/log/salt/master.

--log-file-level=LOG_LEVEL_LOGFILE

Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

See also

salt(1) salt-master(1) salt-minion(1)

salt-ssh

salt-ssh
Synopsis
salt-ssh '*' [ options ] sys.doc

salt-ssh -E '.*' [ options ] sys.doc cmd
Description

Salt SSH allows for salt routines to be executed using only SSH for transport

Options
-r, --raw, --raw-shell

Execute a raw shell command.

--priv

Specify the SSH private key file to be used for authentication.

--roster

Define which roster system to use; this defines whether a database backend, scanner, or custom roster system is used. The default is the flat file roster.

--roster-file

Define an alternative location for the default roster file. The default roster file is called roster and is found in the same directory as the master config file.

New in version 2014.1.0.

--refresh, --refresh-cache

Force a refresh of the master side data cache of the target's data. This is needed if a target's grains have been changed and the auto refresh timeframe has not been reached.

--max-procs

Set the number of concurrent minions to communicate with. This value defines how many processes are opened up at a time to manage connections; the more running processes, the faster communication should be. The default is 25.

-i, --ignore-host-keys

Ignore the SSH host keys, which by default are honored and require connections to be approved.

--passwd

Set the default password to attempt to use when authenticating.

--key-deploy

Set this flag to attempt to deploy the authorized SSH key with all minions. This, combined with --passwd, can make initial deployment of keys very fast and easy.
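
For example, a sketch of an initial run that deploys keys while authenticating with a default password (the password is illustrative):

salt-ssh '*' --key-deploy --passwd sekrit test.ping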

--version

Print the version of Salt that is running.

--versions-report

Show program's dependencies and version number, and then exit

-h, --help

Show the help message and exit

-c CONFIG_DIR, --config-dir=CONFIG_DIR

The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.

Target Selection
-E, --pcre

The target expression will be interpreted as a PCRE regular expression rather than a shell glob.

-L, --list

The target expression will be interpreted as a comma-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux

-G, --grain

The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<glob expression>'; example: 'os:Arch*'

This was changed in version 0.9.8 to accept glob expressions instead of regular expressions. To use regular expression matching with grains, use the --grain-pcre option.

--grain-pcre

The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<regular expression>'; example: 'os:Arch.*'

-N, --nodegroup

Use a predefined compound target defined in the Salt master configuration file.

-R, --range

Instead of using shell globs to evaluate the target, use a range expression to identify targets. Range expressions look like %cluster.

Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file.

Logging Options

Logging options which override any settings defined on the configuration files.

-l LOG_LEVEL, --log-level=LOG_LEVEL

Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

--log-file=LOG_FILE

Log file path. Default: /var/log/salt/ssh.

--log-file-level=LOG_LEVEL_LOGFILE

Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

Output Options
--out

Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters:

grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml

Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data.

If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.

Note

If using --out=json, you will probably want --static as well. Without the static option, you will get a JSON string for each minion. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well.

--out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT

Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.

--out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE

Write the output to the specified file.

--no-color

Disable all colored output

--force-color

Force colored output

Note

When using colored output the color codes are as follows:

green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.

See also

salt(7) salt-master(1) salt-minion(1)

salt-syndic

salt-syndic

The Salt syndic daemon, a special minion that passes through commands from a higher master

Synopsis
salt-syndic [ options ]
Description

The Salt syndic daemon, a special minion that passes through commands from a higher master.

Options
--version

Print the version of Salt that is running.

--versions-report

Show program's dependencies and version number, and then exit

-h, --help

Show the help message and exit

-c CONFIG_DIR, --config-dir=CONFIG_DIR

The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.

-u USER, --user=USER

Specify user to run salt-syndic

-d, --daemon

Run salt-syndic as a daemon

--pid-file PIDFILE

Specify the location of the pidfile. Default: /var/run/salt-syndic.pid

Logging Options

Logging options which override any settings defined in the configuration files.

-l LOG_LEVEL, --log-level=LOG_LEVEL

Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

--log-file=LOG_FILE

Log file path. Default: /var/log/salt/master.

--log-file-level=LOG_LEVEL_LOGFILE

Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

See also

salt(1) salt-master(1) salt-minion(1)

salt-api

salt-api

Start interfaces used to remotely connect to the salt master

Synopsis
salt-api
Description

The Salt API system manages network API connectors for the Salt master.

Options
--version

Print the version of Salt that is running.

--versions-report

Show program's dependencies and version number, and then exit

-h, --help

Show the help message and exit

-c CONFIG_DIR, --config-dir=CONFIG_DIR

The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.

-d, --daemon

Run the salt-api as a daemon

--pid-file=PIDFILE

Specify the location of the pidfile. Default: /var/run/salt-api.pid

Logging Options

Logging options which override any settings defined in the configuration files.

-l LOG_LEVEL, --log-level=LOG_LEVEL

Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

--log-file=LOG_FILE

Log file path. Default: /var/log/salt/api.

--log-file-level=LOG_LEVEL_LOGFILE

Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.

See also

salt-api(7) salt(7) salt-master(1)

Client ACL system

The salt client ACL system is a means to allow system users other than root to execute select salt commands on minions from the master.

The client ACL system is configured in the master configuration file via the client_acl configuration option. Under the client_acl option, the users allowed to send commands are listed, each followed by a list of regular expressions specifying which minion functions are made available to that user. This configuration is much like the peer configuration:

# Allow thatch to execute anything and allow fred to use ping and pkg
client_acl:
  thatch:
    - .*
  fred:
    - test.*
    - pkg.*

Permission Issues

Directories required for client_acl must be modified to be readable by the users specified:

chmod 755 /var/cache/salt /var/cache/salt/master /var/cache/salt/master/jobs /var/run/salt /var/run/salt/master

Note

In addition to the changes above you will also need to modify the permissions of /var/log/salt and the existing log file to be writable by the user(s) that will be running the commands. If you do not wish to do this, you must disable logging, or Salt will generate errors because it cannot write to the logs as the system users.

If you are upgrading from earlier versions of Salt, you must also remove any existing user keys and restart the Salt master:

rm /var/cache/salt/.*key
service salt-master restart

Python client API

Salt provides several entry points for interfacing with Python applications. These entry points are often referred to as *Client() APIs. Each client accesses different parts of Salt, either from the master or from a minion. Each client is detailed below.

See also

There are many ways to access Salt programmatically.

Salt can be used from CLI scripts as well as via a REST interface.

See Salt's outputter system to retrieve structured data from Salt as JSON, or as shell-friendly text, or many other formats.

See the state.event runner to utilize Salt's event bus from shell scripts.

Salt's netapi module provides access to Salt externally via a REST interface. Review the netapi module documentation for more information.

Salt's opts dictionary

Some clients require access to Salt's opts dictionary (the dictionary representation of the master or minion config files).

A common pattern for fetching the opts dictionary is to defer to environment variables if they exist or otherwise fetch the config from the default location.
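
A minimal sketch of that pattern, using the standard library's os module; the SALT_MASTER_CONFIG and SALT_MINION_CONFIG environment variable names here are illustrative, not mandated by Salt:

import os
import salt.config

# Prefer a config path from the environment; fall back to the default location.
master_opts = salt.config.client_config(
    os.environ.get('SALT_MASTER_CONFIG', '/etc/salt/master'))
minion_opts = salt.config.minion_config(
    os.environ.get('SALT_MINION_CONFIG', '/etc/salt/minion'))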

salt.config.client_config(path, env_var='SALT_CLIENT_CONFIG', defaults=None)

Load Master configuration data

Usage:

import salt.config
master_opts = salt.config.client_config('/etc/salt/master')

Returns a dictionary of the Salt Master configuration file with necessary options needed to communicate with a locally-running Salt Master daemon. This function searches for client specific configurations and adds them to the data from the master configuration.

This is useful for master-side operations like LocalClient.

salt.config.minion_config(path, env_var='SALT_MINION_CONFIG', defaults=None, cache_minion_id=False)

Reads in the minion configuration file and sets up special options

This is useful for Minion-side operations, such as the Caller class, and manually running the loader interface.

import salt.config
minion_opts = salt.config.minion_config('/etc/salt/minion')

Salt's Loader Interface

Modules in the Salt ecosystem are loaded into memory using a custom loader system. This allows modules to have conditional requirements (OS, OS version, installed libraries, etc.) and allows Salt to inject special variables (__salt__, __opts__, etc.).

Most modules can be manually loaded. This is often useful in third-party Python apps or when writing tests. However, some modules require and expect a full, running Salt system underneath, notably modules that facilitate master-to-minion communication such as the mine, publish, and peer execution modules. The error KeyError: 'master_uri' is a likely indicator of this situation. In those instances, use the Caller class to execute those modules instead.

Each module type has a corresponding loader function.

salt.loader.minion_mods(opts, context=None, utils=None, whitelist=None, include_errors=False, initial_load=False, loaded_base_name=None, notify=False)

Load execution modules

Returns a dictionary of execution modules appropriate for the current system by evaluating the __virtual__() function in each module.

Parameters:
  • opts (dict) -- The Salt options dictionary
  • context (dict) -- A Salt context that should be made present inside generated modules in __context__
  • utils (dict) -- Utility functions which should be made available to Salt modules in __utils__. See utils_dir in salt.config for additional information about configuration.
  • whitelist (list) -- A list of modules which should be whitelisted.
  • include_errors (bool) -- Deprecated flag! Unused.
  • initial_load (bool) -- Deprecated flag! Unused.
  • loaded_base_name (str) -- A string marker for the loaded base name.
  • notify (bool) -- Flag indicating that an event should be fired upon completion of module loading.
import salt.config
import salt.loader

__opts__ = salt.config.minion_config('/etc/salt/minion')
__grains__ = salt.loader.grains(__opts__)
__opts__['grains'] = __grains__
__salt__ = salt.loader.minion_mods(__opts__)
__salt__['test.ping']()
salt.loader.raw_mod(opts, name, functions, mod='modules')

Returns a single module loaded raw and bypassing the __virtual__ function

import salt.config
import salt.loader

__opts__ = salt.config.minion_config('/etc/salt/minion')
testmod = salt.loader.raw_mod(__opts__, 'test', None)
testmod['test.ping']()
salt.loader.states(opts, functions, whitelist=None)

Returns the state modules

Parameters:
  • opts (dict) -- The Salt options dictionary
  • functions (dict) -- A dictionary of minion modules, with module names as keys and funcs as values.
import salt.config
import salt.loader

__opts__ = salt.config.minion_config('/etc/salt/minion')
statemods = salt.loader.states(__opts__, None)
salt.loader.grains(opts, force_refresh=False)

Return the functions for the dynamic grains and the values for the static grains.

import salt.config
import salt.loader

__opts__ = salt.config.minion_config('/etc/salt/minion')
__grains__ = salt.loader.grains(__opts__)
print(__grains__['id'])
salt.loader.grain_funcs(opts)

Returns the grain functions

import salt.config
import salt.loader

__opts__ = salt.config.minion_config('/etc/salt/minion')
grainfuncs = salt.loader.grain_funcs(__opts__)
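
For illustration, individual grain functions can then be called through their loader keys. A sketch; the 'core.os_data' key assumes the stock core grains module is available on this system:

# Continuing from the example above: call one grain function directly.
# It returns a dict of grain names and values.
osgrains = grainfuncs['core.os_data']()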

Salt's Client Interfaces

LocalClient
class salt.client.LocalClient(c_path='/etc/salt/master', mopts=None, skip_perm_errors=False)

The interface used by the salt CLI tool on the Salt Master

LocalClient is used to send a command to Salt minions to execute execution modules and return the results to the Salt Master.

Importing and using LocalClient must be done on the same machine as the Salt Master and it must be done using the same user that the Salt Master is running as. (Unless external_auth is configured and authentication credentials are included in the execution).

import salt.client

local = salt.client.LocalClient()
local.cmd('*', 'test.fib', [10])
cmd(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', jid='', kwarg=None, **kwargs)

Synchronously execute a command on targeted minions

The cmd method will execute and wait for the timeout period for all minions to reply, then it will return all minion data at once.

>>> import salt.client
>>> local = salt.client.LocalClient()
>>> local.cmd('*', 'cmd.run', ['whoami'])
{'jerry': 'root'}

With extra keyword arguments for the command function to be run:

local.cmd('*', 'test.arg', ['arg1', 'arg2'], kwarg={'foo': 'bar'})

Compound commands can be used for multiple executions in a single publish. Function names and function arguments are provided in separate lists but the index values must correlate and an empty list must be used if no arguments are required.

>>> local.cmd('*', [
        'grains.items',
        'sys.doc',
        'cmd.run',
    ],
    [
        [],
        [],
        ['uptime'],
    ])
Parameters:
  • tgt (string or list) -- Which minions to target for the execution. Default is shell glob. Modified by the expr_form option.
  • fun (string or list of strings) --

    The module and function to call on the specified minions of the form module.function. For example test.ping or grains.items.

    Compound commands
    Multiple functions may be called in a single publish by passing a list of commands. This can dramatically lower overhead and speed up the application communicating with Salt.

    This requires that the arg param is a list of lists. The fun list and the arg list must correlate by index meaning a function that does not take arguments must still have a corresponding empty list at the expected index.

  • arg (list or list-of-lists) -- A list of arguments to pass to the remote function. If the function takes no arguments arg may be omitted except when executing a compound command.
  • timeout -- Seconds to wait after the last minion returns but before all minions return.
  • expr_form --

    The type of tgt. Allowed values:

    • glob - Bash glob completion - Default
    • pcre - Perl style regular expression
    • list - Python list of hosts
    • grain - Match based on a grain comparison
    • grain_pcre - Grain comparison with a regex
    • pillar - Pillar data comparison
    • pillar_pcre - Pillar data comparison with a regex
    • nodegroup - Match on nodegroup
    • range - Use a Range server for matching
    • compound - Pass a compound match string
  • ret -- The returner to use. The value passed can be a single returner, or a comma-delimited list of returners to call in order on the minions
  • kwarg -- A dictionary with keyword arguments for the function.
  • kwargs --

    Optional keyword arguments. Authentication credentials may be passed when using external_auth.

    For example: local.cmd('*', 'test.ping', username='saltdev', password='saltdev', eauth='pam'). Or: local.cmd('*', 'test.ping', token='5871821ea51754fdcea8153c1c745433')

Returns:

A dictionary with the result of the execution, keyed by minion ID. A compound command will return a sub-dictionary keyed by function name.
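
For illustration, a compound command like the one above might return data shaped as follows; the minion ID and values are hypothetical:

{'jerry': {'grains.items': {...},
           'sys.doc': {...},
           'cmd.run': '16:51:23 up 7 days, ...'}}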

cmd_async(tgt, fun, arg=(), expr_form='glob', ret='', jid='', kwarg=None, **kwargs)

Asynchronously send a command to connected minions

The function signature is the same as cmd() with the following exceptions.

Returns: A job ID or 0 on failure.
>>> local.cmd_async('*', 'test.sleep', [300])
'20131219215921857715'
cmd_batch(tgt, fun, arg=(), expr_form='glob', ret='', kwarg=None, batch='10%', **kwargs)

Iteratively execute a command on subsets of minions at a time

The function signature is the same as cmd() with the following exceptions.

Parameters: batch -- The batch identifier of systems to execute on
Returns: A generator of minion returns
>>> returns = local.cmd_batch('*', 'state.highstate', batch='10%')
>>> for ret in returns:
...     print(ret)
{'jerry': {...}}
{'dave': {...}}
{'stewart': {...}}
cmd_iter(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', kwarg=None, **kwargs)

Yields the individual minion returns as they come in

The function signature is the same as cmd() with the following exceptions.

Returns: A generator yielding the individual minion returns
>>> ret = local.cmd_iter('*', 'test.ping')
>>> for i in ret:
...     print(i)
{'jerry': {'ret': True}}
{'dave': {'ret': True}}
{'stewart': {'ret': True}}
cmd_iter_no_block(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', kwarg=None, **kwargs)

Yields the individual minion returns as they come in, or None when no returns are available.

The function signature is the same as cmd() with the following exceptions.

Returns: A generator yielding the individual minion returns, or None when no returns are available. This allows for actions to be injected in between minion returns.
>>> ret = local.cmd_iter_no_block('*', 'test.ping')
>>> for i in ret:
...     print(i)
None
{'jerry': {'ret': True}}
{'dave': {'ret': True}}
None
{'stewart': {'ret': True}}
cmd_subset(tgt, fun, arg=(), expr_form='glob', ret='', kwarg=None, sub=3, cli=False, progress=False, **kwargs)

Execute a command on a random subset of the targeted systems

The function signature is the same as cmd() with the following exceptions.

Parameters: sub -- The number of systems to execute on
>>> local.cmd_subset('*', 'test.ping', sub=1)
{'jerry': True}
get_cli_returns(jid, minions, timeout=None, tgt='*', tgt_type='glob', verbose=False, show_jid=False, **kwargs)

Starts a watcher looking at the return data for a specified JID

Returns: all of the information for the JID
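
A minimal sketch of watching a job's returns, pairing run_job (documented below) with get_cli_returns; it assumes the publish succeeds and that the watcher can be iterated like the generators above:

import salt.client

local = salt.client.LocalClient()
# Publish the job; run_job returns the job ID and the expected minions.
pub = local.run_job('*', 'test.sleep', [30])
# Watch returns for that job ID as minions report in.
for ret in local.get_cli_returns(pub['jid'], pub['minions']):
    print(ret)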
get_event_iter_returns(jid, minions, timeout=None)

Gather the return data from the event system, break hard when timeout is reached.

run_job(tgt, fun, arg=(), expr_form='glob', ret='', timeout=None, jid='', kwarg=None, **kwargs)

Asynchronously send a command to connected minions

Prep the job directory and publish a command to any targeted minions.

Returns: A dictionary of (validated) pub_data or an empty dictionary on failure. The pub_data contains the job ID and a list of all minions that are expected to return data.
>>> local.run_job('*', 'test.sleep', [300])
{'jid': '20131219215650131543', 'minions': ['jerry']}
Salt Caller
class salt.client.Caller(c_path='/etc/salt/minion', mopts=None)

Caller is the same interface used by the salt-call command-line tool on the Salt Minion.

Changed in version Beryllium: Added the cmd method for consistency with the other Salt clients. The existing function and sminion.functions interfaces still exist but have been removed from the docs.

Importing and using Caller must be done on the same machine as a Salt Minion and it must be done using the same user that the Salt Minion is running as.

Usage:

import salt.client
caller = salt.client.Caller()
caller.cmd('test.ping')

Note that a running master or minion daemon is not required to use this class. Running salt-call --local simply sets file_client to 'local'. The same can be achieved at the Python level by including that setting in a minion config file.

New in version 2014.7.0: Pass the minion config as the mopts dictionary.

import salt.client
import salt.config
__opts__ = salt.config.minion_config('/etc/salt/minion')
__opts__['file_client'] = 'local'
caller = salt.client.Caller(mopts=__opts__)
cmd(fun, *args, **kwargs)

Call an execution module with the given arguments and keyword arguments

Changed in version Beryllium: Added the cmd method for consistency with the other Salt clients. The existing function and sminion.functions interfaces still exist but have been removed from the docs.

caller.cmd('test.arg', 'Foo', 'Bar', baz='Baz')

caller.cmd('event.send', 'myco/myevent/something',
    data={'foo': 'Foo'}, with_env=['GIT_COMMIT'], with_grains=True)
RunnerClient
class salt.runner.RunnerClient(opts)

The interface used by the salt-run CLI tool on the Salt Master

It executes runner modules which run on the Salt Master.

Importing and using RunnerClient must be done on the same machine as the Salt Master and it must be done using the same user that the Salt Master is running as.

Salt's external_auth can be used to authenticate calls. The eauth user must be authorized to execute runner modules: (@runner). Only the master_call() below supports eauth.

async(fun, low, user='UNKNOWN')

Execute the function in a separate process and return the event tag to use to watch for the return

cmd(fun, arg=None, pub_data=None, kwarg=None)

Execute a function

>>> opts = salt.config.master_config('/etc/salt/master')
>>> runner = salt.runner.RunnerClient(opts)
>>> runner.cmd('jobs.list_jobs', [])
{
    '20131219215650131543': {
        'Arguments': [300],
        'Function': 'test.sleep',
        'StartTime': '2013, Dec 19 21:56:50.131543',
        'Target': '*',
        'Target-type': 'glob',
        'User': 'saltdev'
    },
    '20131219215921857715': {
        'Arguments': [300],
        'Function': 'test.sleep',
        'StartTime': '2013, Dec 19 21:59:21.857715',
        'Target': '*',
        'Target-type': 'glob',
        'User': 'saltdev'
    },
}
cmd_async(low)

Execute a runner function asynchronously; eauth is respected

This function requires that external_auth is configured and the user is authorized to execute runner functions: (@runner).

runner.cmd_async({
    'fun': 'jobs.list_jobs',
    'username': 'saltdev',
    'password': 'saltdev',
    'eauth': 'pam',
})
cmd_sync(low, timeout=None)

Execute a runner function synchronously; eauth is respected

This function requires that external_auth is configured and the user is authorized to execute runner functions: (@runner).

runner.cmd_sync({
    'fun': 'jobs.list_jobs',
    'username': 'saltdev',
    'password': 'saltdev',
    'eauth': 'pam',
})
WheelClient
class salt.wheel.WheelClient(opts=None)

An interface to Salt's wheel modules

Wheel modules interact with various parts of the Salt Master.

Importing and using WheelClient must be done on the same machine as the Salt Master and it must be done using the same user that the Salt Master is running as, unless external_auth is configured and the user is authorized to execute wheel functions: (@wheel).

Usage:

import salt.config
import salt.wheel
opts = salt.config.master_config('/etc/salt/master')
wheel = salt.wheel.WheelClient(opts)
async(fun, low, user='UNKNOWN')

Execute the function in a separate process and return the event tag to use to watch for the return

cmd(fun, arg=None, pub_data=None, kwarg=None)

Execute a function

>>> wheel.cmd('key.finger', ['jerry'])
{'minions': {'jerry': '5d:f6:79:43:5e:d4:42:3f:57:b8:45:a8:7e:a4:6e:ca'}}
cmd_async(low)

Execute a function asynchronously; eauth is respected

This function requires that external_auth is configured and the user is authorized

>>> wheel.cmd_async({
    'fun': 'key.finger',
    'match': 'jerry',
    'eauth': 'auto',
    'username': 'saltdev',
    'password': 'saltdev',
})
{'jid': '20131219224744416681', 'tag': 'salt/wheel/20131219224744416681'}
cmd_sync(low, timeout=None)

Execute a wheel function synchronously; eauth is respected

This function requires that external_auth is configured and the user is authorized to execute wheel functions: (@wheel).

>>> wheel.cmd_sync({
'fun': 'key.finger',
'match': 'jerry',
'eauth': 'auto',
'username': 'saltdev',
'password': 'saltdev',
})
{'minions': {'jerry': '5d:f6:79:43:5e:d4:42:3f:57:b8:45:a8:7e:a4:6e:ca'}}
CloudClient
class salt.cloud.CloudClient(path=None, opts=None, config_dir=None, pillars=None)

The client class to wrap cloud interactions

action(fun=None, cloudmap=None, names=None, provider=None, instance=None, kwargs=None)

Execute a single action via the cloud plugin backend

Examples:

client.action(fun='show_instance', names=['myinstance'])
client.action(fun='show_image', provider='my-ec2-config',
    kwargs={'image': 'ami-10314d79'}
)
create(provider, names, **kwargs)

Create the named VMs, without using a profile

Example:

client.create(names=['myinstance'], provider='my-ec2-config',
    kwargs={'image': 'ami-1624987f', 'size': 't1.micro',
            'ssh_username': 'ec2-user', 'securitygroup': 'default',
            'delvol_on_destroy': True})
destroy(names)

Destroy the named VMs
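
Example (the instance name is illustrative):

client.destroy(names=['myinstance'])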

extra_action(names, provider, action, **kwargs)

Perform actions with block storage devices

Example:

client.extra_action(names=['myblock'], action='volume_create',
    provider='my-nova', kwargs={'voltype': 'SSD', 'size': 1000}
)
client.extra_action(names=['salt-net'], action='network_create',
    provider='my-nova', kwargs={'cidr': '192.168.100.0/24'}
)
full_query(query_type='list_nodes_full')

Query all instance information

list_images(provider=None)

List all available images in configured cloud systems

list_locations(provider=None)

List all available locations in configured cloud systems

list_sizes(provider=None)

List all available sizes in configured cloud systems

low(fun, low)

Pass the cloud function and low data structure to run

map_run(path, **kwargs)

Pass in a location for a map to execute
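
Example (the map path is illustrative):

client.map_run('/etc/salt/cloud.map')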

min_query(query_type='list_nodes_min')

Query select instance information

profile(profile, names, vm_overrides=None, **kwargs)

Pass in a profile to create; names is a list of VM names to allocate.

vm_overrides is a special dict of per-node option overrides.

Example:

>>> client = salt.cloud.CloudClient(path='/etc/salt/cloud')
>>> client.profile('do_512_git', names=['minion01',])
{'minion01': {u'backups_active': 'False',
        u'created_at': '2014-09-04T18:10:15Z',
        u'droplet': {u'event_id': 31000502,
                     u'id': 2530006,
                     u'image_id': 5140006,
                     u'name': u'minion01',
                     u'size_id': 66},
        u'id': '2530006',
        u'image_id': '5140006',
        u'ip_address': '107.XXX.XXX.XXX',
        u'locked': 'True',
        u'name': 'minion01',
        u'private_ip_address': None,
        u'region_id': '4',
        u'size_id': '66',
        u'status': 'new'}}
query(query_type='list_nodes')

Query basic instance information

select_query(query_type='list_nodes_select')

Query select instance information

SSHClient
class salt.client.ssh.client.SSHClient(c_path='/etc/salt/master', mopts=None)

Create a client object for executing routines via the salt-ssh backend

New in version 2015.5.0.

cmd(tgt, fun, arg=(), timeout=None, expr_form='glob', kwarg=None, **kwargs)

Execute a single command via the salt-ssh subsystem and return all routines at once

New in version 2015.5.0.

cmd_iter(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', kwarg=None, **kwargs)

Execute a single command via the salt-ssh subsystem and return a generator

New in version 2015.5.0.
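
A minimal usage sketch, assuming a salt-ssh roster is configured on the master; cmd mirrors LocalClient.cmd and returns all minion data at once:

import salt.client.ssh.client

ssh = salt.client.ssh.client.SSHClient()
# Targets are matched against the salt-ssh roster rather than minion keys.
ssh.cmd('*', 'test.ping')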

Full list of Salt Cloud modules

aliyun AliYun ECS Cloud Module
botocore_aws The AWS Cloud Module
cloudstack CloudStack Cloud Module
digital_ocean DigitalOcean Cloud Module
digital_ocean_v2
ec2 The EC2 Cloud Module
gce Google Compute Engine (GCE) Cloud Module
gogrid GoGrid Cloud Module
joyent Joyent Cloud Module
libcloud_aws The AWS Cloud Module
linode Linode Cloud Module using Apache Libcloud OR linode-python bindings
lxc Install Salt on an LXC Container
msazure Azure Cloud Module
nova OpenStack Nova Cloud Module
opennebula OpenNebula Cloud Module
openstack OpenStack Cloud Module
parallels Parallels Cloud Module
proxmox Proxmox Cloud Module
pyrax Pyrax Cloud Module
rackspace Rackspace Cloud Module
saltify Saltify Module -- installs Salt on a remote machine, virtual or bare metal, using SSH
softlayer SoftLayer Cloud Module
softlayer_hw SoftLayer HW Cloud Module
vmware VMware Cloud Module
vsphere vSphere Cloud Module

Configuration file examples

Example master configuration file

##### Primary configuration settings #####
##########################################
# This configuration file is used to manage the behavior of the Salt Master.
# Values that are commented out but have an empty line after the comment are
# defaults that do not need to be set in the config. If there is no blank line
# after the comment then the value is presented as an example and is not the
# default.

# By default, the master will automatically include all config files
# from master.d/*.conf (master.d is a directory in the same directory
# as the main master config file).
#default_include: master.d/*.conf

# The address of the interface to bind to:
#interface: 0.0.0.0

# Whether the master should listen for IPv6 connections. If this is set to True,
# the interface option must be adjusted, too. (For example: "interface: '::'")
#ipv6: False

# The tcp port used by the publisher:
#publish_port: 4505

# The user under which the salt master will run. Salt will update all
# permissions to allow the specified user to run the master. The exception is
# the job cache, which must be deleted if this user is changed. If the
# modified files cause conflicts, set verify_env to False.
#user: root

# Max open files
#
# Each minion connecting to the master uses AT LEAST one file descriptor, the
# master subscription connection. If enough minions connect you might start
# seeing on the console (and then salt-master crashes):
#   Too many open files (tcp_listener.cpp:335)
#   Aborted (core dumped)
#
# By default this value will be that of `ulimit -Hn`, i.e., the hard limit for
# max open files.
#
# If you wish to set a different value than the default one, uncomment and
# configure this setting. Remember that this value CANNOT be higher than the
# hard limit. Raising the hard limit depends on your OS and/or distribution,
# a good way to find the limit is to search the internet. For example:
#   raise max open files hard limit debian
#
#max_open_files: 100000

# The number of worker threads to start. These threads are used to manage
# return calls made from minions to the master. If the master seems to be
# running slowly, increase the number of threads.
#worker_threads: 5

# The port used by the communication interface. The ret (return) port is the
# interface used for the file server, authentication, job returns, etc.
#ret_port: 4506

# Specify the location of the daemon process ID file:
#pidfile: /var/run/salt-master.pid

# The root directory prepended to these options: pki_dir, cachedir,
# sock_dir, log_file, autosign_file, autoreject_file, extension_modules,
# key_logfile, pidfile:
#root_dir: /

# Directory used to store public key data:
#pki_dir: /etc/salt/pki/master

# Directory to store job and cache data:
# This directory may contain sensitive data and should be protected accordingly.
# 
#cachedir: /var/cache/salt/master

# Directory for custom modules. This directory can contain subdirectories for
# each of Salt's module types such as "runners", "output", "wheel", "modules",
# "states", "returners", etc.
#extension_modules: <no default>

# Directory for custom modules. This directory can contain subdirectories for
# each of Salt's module types such as "runners", "output", "wheel", "modules",
# "states", "returners", etc.
# Like 'extension_modules' but can take an array of paths
#module_dirs: <no default>
#   - /var/cache/salt/minion/extmods

# Verify and set permissions on configuration directories at startup:
#verify_env: True

# Set the number of hours to keep old job information in the job cache:
#keep_jobs: 24

# Set the default timeout for the salt command and api. The default is 5
# seconds.
#timeout: 5

# The loop_interval option controls the seconds for the master's maintenance
# process check cycle. This process updates file server backends, cleans the
# job cache and executes the scheduler.
#loop_interval: 60

# Set the default outputter used by the salt command. The default is "nested".
#output: nested

# Return minions that timeout when running commands like test.ping
#show_timeout: True

# By default, output is colored. To disable colored output, set the color value
# to False.
#color: True

# Do not strip off the colored output from nested results and state outputs
# (true by default).
# strip_colors: False

# Set the directory used to hold unix sockets:
#sock_dir: /var/run/salt/master

# The master can take a while to start up when lspci and/or dmidecode is used
# to populate the grains for the master. Enable if you want to see GPU hardware
# data for your master.
# enable_gpu_grains: False

# The master maintains a job cache. While this is a great addition, it can be
# a burden on the master for larger deployments (over 5000 minions).
# Disabling the job cache will make previously executed jobs unavailable to
# the jobs system and is not generally recommended.
#job_cache: True

# Cache minion grains and pillar data in the cachedir.
#minion_data_cache: True

# Store all returns in the given returner.
# Setting this option requires that any returner-specific configuration also 
# be set. See various returners in salt/returners for details on required
# configuration values. (See also, event_return_queue below.)
#
#event_return: mysql

# On busy systems, enabling event_returns can cause a considerable load on
# the storage system for returners. Events can be queued on the master and
# stored in a batched fashion using a single transaction for multiple events.
# By default, events are not queued.
#event_return_queue: 0

# Only store event returns matching tags in a whitelist
# event_return_whitelist:
#   - salt/master/a_tag
#   - salt/master/another_tag

# Store all event returns _except_ the tags in a blacklist
# event_return_blacklist:
#   - salt/master/not_this_tag
#   - salt/master/or_this_one

# Passing very large events can cause the minion to consume large amounts of
# memory. This value tunes the maximum size of a message allowed onto the
# master event bus. The value is expressed in bytes.
#max_event_size: 1048576

# By default, the master AES key rotates every 24 hours. The next command
# following a key rotation will trigger a key refresh from the minion which may
# result in minions which do not respond to the first command after a key refresh.
#
# To tell the master to ping all minions immediately after an AES key refresh, set
# ping_on_rotate to True. This should mitigate the issue where a minion does not
# appear to initially respond after a key is rotated.
#
# Note that ping_on_rotate may cause high load on the master immediately after
# the key rotation event as minions reconnect. Consider this carefully if this
# salt master is managing a large number of minions.
#
# If disabled, it is recommended to handle this event by listening for the 
# 'aes_key_rotate' event with the 'key' tag and acting appropriately.
# ping_on_rotate: False

# By default, the master deletes its cache of minion data when the key for that
# minion is removed. To preserve the cache after key deletion, set 
# 'preserve_minion_cache' to True.
#
# WARNING: This may have security implications if compromised minions auth with
# a previously deleted minion ID.
#preserve_minion_cache: False

# If max_minions is used in large installations, the master might experience
# high-load situations because of having to check the number of connected
# minions for every authentication. This cache provides the minion-ids of
# all connected minions to all MWorker-processes and greatly improves the
# performance of max_minions.
# con_cache: False

# The master can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main master configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option, then the master will log a warning message.
#
# Include a config file from some other path:
# include: /etc/salt/extra_config
#
# Include config from several files and directories:
# include:
#   - /etc/salt/extra_config


#####        Security settings       #####
##########################################
# Enable "open mode", this mode still maintains encryption, but turns off
# authentication, this is only intended for highly secure environments or for
# the situation where your keys end up in a bad state. If you run in open mode
# you do so at your own risk!
#open_mode: False

# Enable auto_accept. This setting will automatically accept all incoming
# public keys from the minions. Note that this is insecure.
#auto_accept: False

# Time in minutes that an incoming public key with a matching name found in
# pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys
# are removed when the master checks the minion_autosign directory.
# 0 equals no timeout
# autosign_timeout: 120

# If the autosign_file is specified, incoming keys specified in the
# autosign_file will be automatically accepted. This is insecure. Regular
# expressions as well as globbing lines are supported.
#autosign_file: /etc/salt/autosign.conf

# Works like autosign_file, but instead allows you to specify minion IDs for
# which keys will automatically be rejected. Will override both membership in
# the autosign_file and the auto_accept setting.
#autoreject_file: /etc/salt/autoreject.conf

# Enable permissive access to the salt keys. This allows you to run the
# master or minion as root, but have a non-root group be given access to
# your pki_dir. To make the access explicit, root must belong to the group
# you've given access to. This is potentially quite insecure. If an autosign_file
# is specified, enabling permissive_pki_access will allow group access to that
# specific file.
#permissive_pki_access: False

# Allow users on the master access to execute specific commands on minions.
# This setting should be treated with care since it opens up execution
# capabilities to non-root users. By default this capability is completely
# disabled.
#client_acl:
#  larry:
#    - test.ping
#    - network.*
#
# Blacklist any of the following users or modules
#
# This example would blacklist all non-sudo users, including root, from
# running any commands. It would also blacklist any use of the "cmd"
# module. This is completely disabled by default.
#
#client_acl_blacklist:
#  users:
#    - root
#    - '^(?!sudo_).*$'   #  all non sudo users
#  modules:
#    - cmd

# Enforce client_acl & client_acl_blacklist when users have sudo
# access to the salt command. 
#
#sudo_acl: False

# The external auth system uses the Salt auth modules to authenticate and
# validate users to access areas of the Salt system.
#external_auth:
#  pam:
#    fred:
#      - test.*
#
# Time (in seconds) for a newly generated token to live. Default: 12 hours
#token_expire: 43200

# Allow minions to push files to the master. This is disabled by default, for
# security purposes.
#file_recv: False

# Set a hard-limit on the size of the files that can be pushed to the master.
# It will be interpreted as megabytes. Default: 100
#file_recv_max_size: 100

# Signature verification on messages published from the master.
# This causes the master to cryptographically sign all messages published to its event
# bus, and minions then verify that signature before acting on the message.
#
# This is False by default.
#
# Note that to facilitate interoperability with masters and minions that are different
# versions, if sign_pub_messages is True but a message is received by a minion with
# no signature, it will still be accepted, and a warning message will be logged.
# Conversely, if sign_pub_messages is False, but a minion receives a signed
# message it will be accepted, the signature will not be checked, and a warning message
# will be logged. This behavior went away in Salt 2014.1.0 and these two situations
# will cause the minion to throw an exception and drop the message.
# sign_pub_messages: False

#####     Salt-SSH Configuration     #####
##########################################

# Pass in an alternative location for the salt-ssh roster file
#roster_file: /etc/salt/roster

# Pass in minion option overrides that will be inserted into the SHIM for
# salt-ssh calls. The local minion config is not used for salt-ssh. Can be
# overridden on a per-minion basis in the roster (`minion_opts`)
#ssh_minion_opts:
#  gpg_keydir: /root/gpg

#####    Master Module Management    #####
##########################################
# Manage how master side modules are loaded.

# Add any additional locations to look for master runners:
#runner_dirs: []

# Enable Cython for master side modules:
#cython_enable: False


#####      State System settings     #####
##########################################
# The state system uses a "top" file to tell the minions what environment to
# use and what modules to use. The state_top file is defined relative to the
# root of the base environment as defined in "File Server settings" below.
#state_top: top.sls

# The master_tops option replaces the external_nodes option by creating
# a pluggable system for the generation of external top data. The external_nodes
# option is deprecated by the master_tops option.
#
# To gain the capabilities of the classic external_nodes system, use the
# following configuration:
# master_tops:
#   ext_nodes: <Shell command which returns yaml>
#
#master_tops: {}

# The external_nodes option allows Salt to gather data that would normally be
# placed in a top file. The external_nodes option is the executable that will
# return the ENC data. Remember that Salt will look for external nodes AND top
# files and combine the results if both are enabled!
#external_nodes: None

# The renderer to use on the minions to render the state data
#renderer: yaml_jinja

# The Jinja renderer can strip extra carriage returns and whitespace
# See http://jinja.pocoo.org/docs/api/#high-level-api
#
# If this is set to True the first newline after a Jinja block is removed
# (block, not variable tag!). Defaults to False, corresponds to the Jinja
# environment init variable "trim_blocks".
# jinja_trim_blocks: False
#
# If this is set to True leading spaces and tabs are stripped from the start
# of a line to a block. Defaults to False, corresponds to the Jinja
# environment init variable "lstrip_blocks".
# jinja_lstrip_blocks: False

# The failhard option tells the minions to stop immediately after the first
# failure detected in the state execution, defaults to False
#failhard: False

# The state_verbose and state_output settings can be used to change the way
# state system data is printed to the display. By default all data is printed.
# The state_verbose setting can be set to True or False, when set to False
# all data that has a result of True and no changes will be suppressed.
#state_verbose: True

# The state_output setting controls how state results are printed. If set to
# 'full', the full multi-line output is shown for each changed state; if set
# to 'terse', the output is shortened to a single line. If set to 'mixed', the
# output will be terse unless a state failed, in which case that output will
# be full. If set to 'changes', the output will be full unless the state
# didn't change.
#state_output: full

# Automatically aggregate all states that have support for mod_aggregate by
# setting to True. Or pass a list of state module names to automatically
# aggregate just those types.
#
# state_aggregate:
#   - pkg
#
#state_aggregate: False

#####      File Server settings      #####
##########################################
# Salt runs a lightweight file server written in zeromq to deliver files to
# minions. This file server is built into the master daemon and does not
# require a dedicated port.

# The file server works on environments passed to the master. Each environment
# can have multiple root directories, but the subdirectories in the multiple
# file roots must not match, otherwise the downloaded files cannot be reliably
# ensured. A base environment is required to house the top file.
# Example:
# file_roots:
#   base:
#     - /srv/salt/
#   dev:
#     - /srv/salt/dev/services
#     - /srv/salt/dev/states
#   prod:
#     - /srv/salt/prod/services
#     - /srv/salt/prod/states
#
#file_roots:
#  base:
#    - /srv/salt

# The hash_type is the hash to use when discovering the hash of a file on
# the master server. The default is md5, but sha1, sha224, sha256, sha384
# and sha512 are also supported.
#
# Prior to changing this value, the master should be stopped and all Salt 
# caches should be cleared.
#hash_type: md5

# The buffer size in the file server can be adjusted here:
#file_buffer_size: 1048576

# A regular expression (or a list of expressions) that will be matched
# against the file path before syncing the modules and states to the minions.
# This includes files affected by the file.recurse state.
# For example, if you manage your custom modules and states in subversion
# and don't want all the '.svn' folders and content synced to your minions,
# you could set this to '/\.svn($|/)'. By default nothing is ignored.
#file_ignore_regex:
#  - '/\.svn($|/)'
#  - '/\.git($|/)'

# A file glob (or list of file globs) that will be matched against the file
# path before syncing the modules and states to the minions. This is similar
# to file_ignore_regex above, but works on globs instead of regex. By default
# nothing is ignored.
# file_ignore_glob:
#  - '*.pyc'
#  - '*/somefolder/*.bak'
#  - '*.swp'

# File Server Backend
#
# Salt supports a modular fileserver backend system. This system allows
# the salt master to link directly to third party systems to gather and
# manage the files available to minions. Multiple backends can be
# configured and will be searched for the requested file in the order in which
# they are defined here. The default setting only enables the standard backend
# "roots" which uses the "file_roots" option.
#fileserver_backend:
#  - roots
#
# To use multiple backends list them in the order they are searched:
#fileserver_backend:
#  - git
#  - roots
#
# Uncomment the line below if you do not want the file_server to follow
# symlinks when walking the filesystem tree. This is set to True
# by default. Currently this only applies to the default roots
# fileserver_backend.
#fileserver_followsymlinks: False
#
# Uncomment the line below if you do not want symlinks to be
# treated as the files they are pointing to. By default this is set to
# False. By uncommenting the line below, any detected symlink while listing
# files on the Master will not be returned to the Minion.
#fileserver_ignoresymlinks: True
#
# By default, the Salt fileserver recurses fully into all defined environments
# to attempt to find files. To limit this behavior so that the fileserver only
# traverses directories with SLS files and special Salt directories like _modules,
# enable the option below. This might be useful for installations where a file root
# has a very large number of files and performance is impacted. Default is False.
# fileserver_limit_traversal: False
#
# The fileserver can fire events off every time the fileserver is updated.
# These are disabled by default, but can be easily turned on by setting this
# flag to True
#fileserver_events: False

# Git File Server Backend Configuration
#
# Gitfs can be provided by one of two python modules: GitPython or pygit2. If
# using pygit2, both libgit2 and git must also be installed.
#gitfs_provider: gitpython
#
# When using the git fileserver backend at least one git remote needs to be
# defined. The user running the salt master will need read access to the repo.
#
# The repos will be searched in order to find the file requested by a client
# and the first repo to have the file will return it.
# When using the git backend branches and tags are translated into salt
# environments.
# Note:  file:// repos will be treated as a remote, so refs you want used must
# exist in that repo as *local* refs.
#gitfs_remotes:
#  - git://github.com/saltstack/salt-states.git
#  - file:///var/git/saltmaster
#
# The gitfs_ssl_verify option specifies whether to ignore ssl certificate
# errors when contacting the gitfs backend. You might want to set this to
# false if you're using a git backend that uses a self-signed certificate but
# keep in mind that setting this flag to anything other than the default of
# True is a security concern; you may want to try using the ssh transport
# instead.
#gitfs_ssl_verify: True
#
# The gitfs_root option gives the ability to serve files from a subdirectory
# within the repository. The path is defined relative to the root of the
# repository and defaults to the repository root.
#gitfs_root: somefolder/otherfolder
#
#
#####         Pillar settings        #####
##########################################
# Salt Pillars allow for the building of global data that can be made selectively
# available to different minions based on minion grain filtering. The Salt
# Pillar is laid out in the same fashion as the file server, with environments,
# a top file and sls files. However, pillar data does not need to be in the
# highstate format, and is generally just key/value pairs.
#pillar_roots:
#  base:
#    - /srv/pillar
#
#ext_pillar:
#  - hiera: /etc/hiera.yaml
#  - cmd_yaml: cat /etc/salt/yaml

# The ext_pillar_first option allows for external pillar sources to populate
# before file system pillar. This allows for targeting file system pillar from
# ext_pillar.
#ext_pillar_first: False

# The pillar_gitfs_ssl_verify option specifies whether to ignore ssl certificate
# errors when contacting the pillar gitfs backend. You might want to set this to
# false if you're using a git backend that uses a self-signed certificate but
# keep in mind that setting this flag to anything other than the default of True
# is a security concern; you may want to try using the ssh transport instead.
#pillar_gitfs_ssl_verify: True

# The pillar_opts option adds the master configuration file data to a dict in
# the pillar called "master". This is used to set simple configurations in the
# master config file that can then be used on minions.
#pillar_opts: False

# The pillar_safe_render_error option prevents the master from passing pillar
# render errors to the minion. This is enabled by default because the error
# could contain templating data which would give that minion information it
# shouldn't have, like a password! When set to True, the error message will
# only show:
#   Rendering SLS 'my.sls' failed. Please see master log for details.
#pillar_safe_render_error: True

# The pillar_source_merging_strategy option allows you to configure merging strategy
# between different sources. It accepts four values: recurse, aggregate, overwrite,
# or smart. Recurse will merge recursively mapping of data. Aggregate instructs
# aggregation of elements between sources that use the #!yamlex renderer. Overwrite
# will overwrite elements according to the order in which they are processed.
# This is the behavior of the 2014.1 branch and earlier. Smart guesses the best strategy based
# on the "renderer" setting and is the default value.
#pillar_source_merging_strategy: smart


#####          Syndic settings       #####
##########################################
# The Salt syndic is used to pass commands through a master from a higher
# master. Using the syndic is simple. If this is a master that will have
# syndic server(s) below it, then set the "order_masters" setting to True.
#
# If this is a master that will be running a syndic daemon for passthrough, then
# the "syndic_master" setting needs to be set to the location of the master server
# to receive commands from.

# Set the order_masters setting to True if this master will command lower
# masters' syndic interfaces.
#order_masters: False

# If this master will be running a salt syndic daemon, syndic_master tells
# this master where to receive commands from.
#syndic_master: masterofmaster

# This is the 'ret_port' of the MasterOfMaster:
#syndic_master_port: 4506

# PID file of the syndic daemon:
#syndic_pidfile: /var/run/salt-syndic.pid

# LOG file of the syndic daemon:
#syndic_log_file: syndic.log


#####      Peer Publish settings     #####
##########################################
# Salt minions can send commands to other minions, but only if the minion is
# allowed to. By default "Peer Publication" is disabled, and when enabled it
# is enabled for specific minions and specific commands. This allows secure
# compartmentalization of commands based on individual minions.

# The configuration uses regular expressions to match minions and then a list
# of regular expressions to match functions. The following will allow the
# minion authenticated as foo.example.com to execute functions from the test
# and pkg modules.
#peer:
#  foo.example.com:
#    - test.*
#    - pkg.*
#
# This will allow all minions to execute all commands:
#peer:
#  .*:
#    - .*
#
# This is not recommended, since it would allow anyone who gets root on any
# single minion to instantly have root on all of the minions!

# Minions can also be allowed to execute runners from the salt master.
# Since executing a runner from the minion could be considered a security risk,
# it needs to be enabled. This setting functions just like the peer setting
# except that it opens up runners instead of module functions.
#
# All peer runner support is turned off by default and must be enabled before
# using. This will enable all peer runners for all minions:
#peer_run:
#  .*:
#    - .*
#
# To enable just the manage.up runner for the minion foo.example.com:
#peer_run:
#  foo.example.com:
#    - manage.up
#
#
#####         Mine settings     #####
##########################################
# Restrict mine.get access from minions. By default any minion has full access
# to get all mine data from the master cache. In the ACL definition below, only
# PCRE matches are allowed.
# mine_get:
#   .*:
#     - .*
#
# The example below enables minion foo.example.com to get 'network.interfaces' mine
# data only, minions web* to get all network.* and disk.* mine data and all other
# minions won't get any mine data.
# mine_get:
#   foo.example.com:
#     - network.interfaces
#   web.*:
#     - network.*
#     - disk.*


#####         Logging settings       #####
##########################################
# The location of the master log file
# The master log can be sent to a regular file, local path name, or network
# location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
# ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
# format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
#log_file: /var/log/salt/master
#log_file: file:///dev/log
#log_file: udp://loghost:10514

#log_file: /var/log/salt/master
#key_logfile: /var/log/salt/key

# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
#log_level: warning

# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# If using 'log_granular_levels' this must be set to the highest desired level.
#log_level_logfile: warning

# The date and time format used in log messages. Allowed date/time formatting
# can be seen here: http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'

# The format of the console logging messages. Allowed formatting options can
# be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
#
# Console log colors are specified by these additional formatters:
#
# %(colorlevel)s
# %(colorname)s
# %(colorprocess)s
# %(colormsg)s
#
# Since it is desirable to include the surrounding brackets, '[' and ']', in
# the coloring of the messages, these color formatters also include padding.
# Color LogRecord attributes are only available for console logging.
#
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'

# This can be used to control logging levels more specifically. This
# example sets the main salt library at the 'warning' level, but sets
# 'salt.modules' to log at the 'debug' level:
#   log_granular_levels:
#     'salt': 'warning'
#     'salt.modules': 'debug'
#
#log_granular_levels: {}


#####         Node Groups           #####
##########################################
# Node groups allow for logical groupings of minion nodes. A group consists of a group
# name and a compound target.
#nodegroups:
#  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
#  group2: 'G@os:Debian and foo.domain.com'


#####     Range Cluster settings     #####
##########################################
# The range server (and optional port) that serves your cluster information
# https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec
#
#range_server: range:80


#####     Windows Software Repo settings #####
##############################################
# Location of the repo on the master:
#win_repo: '/srv/salt/win/repo'
#
# Location of the master's repo cache file:
#win_repo_mastercachefile: '/srv/salt/win/repo/winrepo.p'
#
# List of git repositories to include with the local repo:
#win_gitrepos:
#  - 'https://github.com/saltstack/salt-winrepo.git'

#####      Returner settings          ######
############################################
# Which returner(s) will be used for the minion's results:
#return: mysql

Example minion configuration file

##### Primary configuration settings #####
########################################## 
# This configuration file is used to manage the behavior of the Salt Minion.
# With the exception of the location of the Salt Master Server, values that are
# commented out but have an empty line after the comment are defaults that need
# not be set in the config. If there is no blank line after the comment, the
# value is presented as an example and is not the default.

# By default the minion will automatically include all config files
# from minion.d/*.conf (minion.d is a directory in the same directory
# as the main minion config file).
#default_include: minion.d/*.conf

# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start.
#master: salt

# If multiple masters are specified in the 'master' setting, the default behavior
# is to always try to connect to them in the order they are listed. If random_master is
# set to True, the order will be randomized instead. This can be helpful in distributing
# the load of many minions executing salt-call requests, for example, from a cron job.
# If only one master is listed, this setting is ignored and a warning will be logged.
#random_master: False

# Set whether the minion should connect to the master via IPv6:
#ipv6: False

# Set the number of seconds to wait before attempting to resolve
# the master hostname if name resolution fails. Defaults to 30 seconds.
# Set to zero if the minion should shutdown and not retry.
# retry_dns: 30

# Set the port used by the master reply and authentication server.
#master_port: 4506

# The user to run salt.
#user: root

# Specify the location of the daemon process ID file.
#pidfile: /var/run/salt-minion.pid

# The root directory prepended to these options: pki_dir, cachedir, log_file,
# sock_dir, pidfile.
#root_dir: /

# The directory to store the pki information in
#pki_dir: /etc/salt/pki/minion

# Explicitly declare the id for this minion to use. If left commented, the id
# will be the hostname as returned by the python call: socket.getfqdn()
# Since salt uses detached ids, it is possible to run multiple minions on the
# same machine but with different ids; this can be useful for salt compute
# clusters.
#id:

# Append a domain to a hostname in the event that it does not exist.  This is
# useful for systems where socket.getfqdn() does not actually result in a
# FQDN (for instance, Solaris).
#append_domain:

# Custom static grains for this minion can be specified here and used in SLS
# files just like all other grains. This example sets 4 custom grains, with
# the 'roles' grain having two values that can be matched against.
#grains:
#  roles:
#    - webserver
#    - memcache
#  deployment: datacenter4
#  cabinet: 13
#  cab_u: 14-15
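#
# Once set, these grains can be matched from the master with the -G flag,
# for example (a usage sketch):
#   salt -G 'roles:webserver' test.ping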
#
# Where cache data goes.
# This data may contain sensitive data and should be protected accordingly.
#cachedir: /var/cache/salt/minion

# Verify and set permissions on configuration directories at startup.
#verify_env: True

# The minion can locally cache the return data from jobs sent to it; this
# can be a good way to keep track of jobs the minion has executed
# (on the minion side). By default this feature is disabled; to enable it, set
# cache_jobs to True.
#cache_jobs: False

# Set the directory used to hold unix sockets.
#sock_dir: /var/run/salt/minion

# Set the default outputter used by the salt-call command. The default is
# "nested".
#output: nested
#
# By default output is colored. To disable colored output, set the color value
# to False.
#color: True

# By default, colored output is not stripped from nested results and state
# outputs. Set strip_colors to True to remove the color escape sequences.
# strip_colors: False

# Backup files that are replaced by file.managed and file.recurse under
# 'cachedir'/file_backups relative to their original location and appended
# with a timestamp. The only valid setting is "minion". Disabled by default.
#
# Alternatively this can be specified for each file in state files:
# /etc/ssh/sshd_config:
#   file.managed:
#     - source: salt://ssh/sshd_config
#     - backup: minion
#
#backup_mode: minion

# When waiting for a master to accept the minion's public key, salt will
# continuously attempt to reconnect until successful. This is the time, in
# seconds, between those reconnection attempts.
#acceptance_wait_time: 10

# If this is nonzero, the time between reconnection attempts will increase by
# acceptance_wait_time seconds per iteration, up to this maximum. If this is
# set to zero, the time between reconnection attempts will stay constant.
#acceptance_wait_time_max: 0

# If the master rejects the minion's public key, retry instead of exiting.
# Rejected keys will be handled the same as waiting on acceptance.
#rejected_retry: False

# When the master key changes, the minion will try to re-auth itself to receive
# the new master key. In larger environments this can cause a SYN flood on the
# master because all minions try to re-auth immediately. To prevent this and
# have a minion wait for a random amount of time, use this optional parameter.
# The wait-time will be a random number of seconds between 0 and the defined value.
#random_reauth_delay: 60

# When waiting for a master to accept the minion's public key, salt will
# continuously attempt to reconnect until successful. This is the timeout value,
# in seconds, for each individual attempt. After this timeout expires, the minion
# will wait for acceptance_wait_time seconds before trying again. Unless your master
# is under unusually heavy load, this should be left at the default.
#auth_timeout: 60

# Number of consecutive SaltReqTimeoutError exceptions that are acceptable when trying to
# authenticate.
#auth_tries: 7

# If authentication fails due to SaltReqTimeoutError during a ping_interval,
# restart the sub-minion process.
#auth_safemode: False

# Ping Master to ensure connection is alive (minutes).
#ping_interval: 0

# To auto-recover minions if the master changes IP address (DDNS):
#    auth_tries: 10
#    auth_safemode: False
#    ping_interval: 90
#
# Minions won't know the master is missing until a ping fails. After the ping
# fails, the minion will attempt to authenticate, which will likely fail and
# cause a restart. When the minion restarts, it will resolve the master's IP
# and attempt to reconnect.

# If you don't have any problems with syn-floods, don't bother with the
# three recon_* settings described below, just leave the defaults!
#
# The ZeroMQ pull-socket that connects to the master's publishing interface tries
# to reconnect immediately, if the socket is disconnected (for example if
# the master processes are restarted). In large setups this will have all
# minions reconnect immediately which might flood the master (the ZeroMQ-default
# is usually a 100ms delay). To prevent this, these three recon_* settings
# can be used.
# recon_default: the interval in milliseconds that the socket should wait before
#                trying to reconnect to the master (1000ms = 1 second)
#
# recon_max: the maximum time a socket should wait. each interval the time to wait
#            is calculated by doubling the previous time. if recon_max is reached,
#            it starts again at recon_default. Short example:
#
#            reconnect 1: the socket will wait 'recon_default' milliseconds
#            reconnect 2: 'recon_default' * 2
#            reconnect 3: ('recon_default' * 2) * 2
#            reconnect 4: value from previous interval * 2
#            reconnect 5: value from previous interval * 2
#            reconnect x: if value >= recon_max, it starts again with recon_default
#
# recon_randomize: generate a random wait time on minion start. The wait time will
#                  be a random value between recon_default and recon_default +
#                  recon_max. Having all minions reconnect with the same recon_default
#                  and recon_max value kind of defeats the purpose of being able to
#                  change these settings. If all minions have the same values and your
#                  setup is quite large (several thousand minions), they will still
#                  flood the master. The desired behavior is to have a
#                  timeframe within which all minions try to reconnect.
#
# Example on how to use these settings. The goal: have all minions reconnect within a
# 60 second timeframe on a disconnect.
# recon_default: 1000
# recon_max: 59000
# recon_randomize: True
#
# Each minion will have a randomized reconnect value between 'recon_default'
# and 'recon_default + recon_max', which in this example means between 1000ms
# and 60000ms (or between 1 and 60 seconds). The generated random value will be
# doubled after each attempt to reconnect. Let's say the generated random
# value is 11 seconds (or 11000ms).
# reconnect 1: wait 11 seconds
# reconnect 2: wait 22 seconds
# reconnect 3: wait 33 seconds
# reconnect 4: wait 44 seconds
# reconnect 5: wait 55 seconds
# reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
# reconnect 7: wait 11 seconds
# reconnect 8: wait 22 seconds
# reconnect 9: wait 33 seconds
# reconnect x: etc.
#
# In a setup with ~6000 hosts these settings would average the reconnects
# to about 100 per second and all hosts would be reconnected within 60 seconds.
# recon_default: 100
# recon_max: 5000
# recon_randomize: False
#
#
# The loop_interval sets how long in seconds the minion will wait between
# evaluating the scheduler and running cleanup tasks. This defaults to a
# sane 60 seconds, but if the minion scheduler needs to be evaluated more
# often, lower this value.
#loop_interval: 60

# The grains_refresh_every setting allows for a minion to periodically check
# its grains to see if they have changed and, if so, to inform the master
# of the new grains. This operation is moderately expensive, therefore
# care should be taken not to set this value too low.
#
# Note: This value is expressed in __minutes__!
#
# A value of 10 minutes is a reasonable default.
#
# If the value is set to zero, this check is disabled.
#grains_refresh_every: 1

# Cache grains on the minion. Default is False.
#grains_cache: False

# Grains cache expiration, in seconds. If the cache file is older than this
# number of seconds then the grains cache will be dumped and fully re-populated
# with fresh data. Defaults to 5 minutes. Will have no effect if 'grains_cache'
# is not enabled.
# grains_cache_expiration: 300

# Windows platforms lack posix IPC and must rely on slower TCP based inter-
# process communications. Set ipc_mode to 'tcp' on such systems.
#ipc_mode: ipc

# Overwrite the default tcp ports used by the minion when in tcp mode
#tcp_pub_port: 4510
#tcp_pull_port: 4511

# Passing very large events can cause the minion to consume large amounts of
# memory. This value tunes the maximum size of a message allowed onto the
# minion event bus. The value is expressed in bytes.
#max_event_size: 1048576

# To detect failed master(s) and fire events on connect/disconnect, set
# master_alive_interval to the number of seconds to poll the masters for
# connection events.
#
#master_alive_interval: 30

# The minion can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main minion configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option then the minion will log a warning message.
#
# Include a config file from some other path:
# include: /etc/salt/extra_config
#
# Include config from several files and directories:
#include:
#  - /etc/salt/extra_config
#  - /etc/roles/webserver
#
#
#
#####   Minion module management     #####
##########################################
# Disable specific modules. This allows the admin to limit the level of
# access the master has to the minion.
#disable_modules: [cmd,test]
#disable_returners: []
#
# Modules can be loaded from arbitrary paths. This enables the easy deployment
# of third party modules. Modules for returners and minions can be loaded.
# Specify a list of extra directories to search for minion modules and
# returners. These paths must be fully qualified!
#module_dirs: []
#returner_dirs: []
#states_dirs: []
#render_dirs: []
#utils_dirs: []
#
# A module provider can be statically overwritten or extended for the minion
# via the providers option, in this case the default module will be
# overwritten by the specified module. In this example the pkg module will
# be provided by the yumpkg5 module instead of the system default.
#providers:
#  pkg: yumpkg5
#
# Enable Cython modules searching and loading. (Default: False)
#cython_enable: False
#
# Specify a max size (in bytes) for modules on import. This feature is currently
# only supported on *nix operating systems and requires psutil.
# modules_max_memory: -1


#####    State Management Settings    #####
###########################################
# The state management system executes all of the state templates on the minion
# to enable more granular control of system state management. The type of
# template and serialization used for state management needs to be configured
# on the minion; the default renderer is yaml_jinja. This is a yaml file
# rendered from a jinja template. The available options are:
# yaml_jinja
# yaml_mako
# yaml_wempy
# json_jinja
# json_mako
# json_wempy
#
#renderer: yaml_jinja
#
# The failhard option tells the minions to stop immediately after the first
# failure detected in the state execution. Defaults to False.
#failhard: False
#
# Reload the modules prior to a highstate run.
#autoload_dynamic_modules: True
#
# clean_dynamic_modules keeps the dynamic modules on the minion in sync with
# the dynamic modules on the master; this means that if a dynamic module is
# not on the master it will be deleted from the minion. By default, this is
# enabled and can be disabled by changing this value to False.
#clean_dynamic_modules: True
#
# Normally, the minion is not isolated to any single environment on the master
# when running states, but the environment can be isolated on the minion side
# by statically setting it. Remember that the recommended way to manage
# environments is to isolate via the top file.
#environment: None
#
# If using the local file directory, then the state top file name needs to be
# defined; by default this is top.sls.
#state_top: top.sls
#
# Run states when the minion daemon starts. To enable, set startup_states to:
# 'highstate' -- Execute state.highstate
# 'sls' -- Read in the sls_list option and execute the named sls files
# 'top' -- Read top_file option and execute based on that file on the Master
#startup_states: ''
#
# List of states to run when the minion starts up if startup_states is 'sls':
#sls_list:
#  - edit.vim
#  - hyper
#
# Top file to execute if startup_states is 'top':
#top_file: ''

# Automatically aggregate all states that have support for mod_aggregate by
# setting to True. Or pass a list of state module names to automatically
# aggregate just those types.
#
# state_aggregate:
#   - pkg
#
#state_aggregate: False

#####     File Directory Settings    #####
##########################################
# The Salt Minion can redirect all file server operations to a local directory.
# This allows for the same state tree that is on the master to be used if
# copied completely onto the minion. This is a literal copy of the settings on
# the master but used to reference a local directory on the minion.

# Set the file client. The client defaults to looking on the master server for
# files, but can be directed to look at the local file directory setting
# defined below by setting it to local.
#file_client: remote

# The file directory works on environments passed to the minion. Each environment
# can have multiple root directories, but the subdirectories in the multiple file
# roots cannot match; otherwise the downloaded files will not be able to be
# reliably ensured. A base environment is required to house the top file.
# Example:
# file_roots:
#   base:
#     - /srv/salt/
#   dev:
#     - /srv/salt/dev/services
#     - /srv/salt/dev/states
#   prod:
#     - /srv/salt/prod/services
#     - /srv/salt/prod/states
#
#file_roots:
#  base:
#    - /srv/salt
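#
# A minimal masterless sketch using the settings above: set file_client to
# 'local' and keep the default file_roots, then states placed under /srv/salt
# can be applied without a master:
#   salt-call --local state.highstate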

# By default, the Salt fileserver recurses fully into all defined environments
# to attempt to find files. To limit this behavior so that the fileserver only
# traverses directories with SLS files and special Salt directories like _modules,
# enable the option below. This might be useful for installations where a file root
# has a very large number of files and performance is negatively impacted. Default
# is False.
#fileserver_limit_traversal: False

# The hash_type is the hash to use when discovering the hash of a file in
# the local fileserver. The default is md5, but sha1, sha224, sha256, sha384
# and sha512 are also supported.
#
# Warning: Prior to changing this value, the minion should be stopped and all
# Salt caches should be cleared.
#hash_type: md5

# The Salt pillar is searched for locally if file_client is set to local. If
# this is the case, and pillar data is defined, then the pillar_roots need to
# also be configured on the minion:
#pillar_roots:
#  base:
#    - /srv/pillar
#
#
######        Security settings       #####
###########################################
# Enable "open mode", this mode still maintains encryption, but turns off
# authentication, this is only intended for highly secure environments or for
# the situation where your keys end up in a bad state. If you run in open mode
# you do so at your own risk!
#open_mode: False

# Enable permissive access to the salt keys.  This allows you to run the
# master or minion as root, but have a non-root group be given access to
# your pki_dir.  To make the access explicit, root must belong to the group
# you've given access to. This is potentially quite insecure.
#permissive_pki_access: False

# The state_verbose and state_output settings can be used to change the way
# state system data is printed to the display. By default all data is printed.
# The state_verbose setting can be set to True or False; when set to False,
# all data that has a result of True and no changes will be suppressed.
#state_verbose: True

# The state_output setting controls the display of state results: if set to
# 'full', the full multi-line output is shown for each changed state; if set
# to 'terse', the output is shortened to a single line.
#state_output: full

# The state_output_diff setting changes whether or not the output from
# successful states is returned. Useful when even the terse output of these
# states is cluttering the logs. Set it to True to ignore them.
#state_output_diff: False

# The state_output_profile setting changes whether profile information
# will be shown for each state run.
#state_output_profile: True

# Fingerprint of the master public key to double-verify that the master is
# valid; the master fingerprint can be found by running "salt-key -f master.pub"
# on the salt master.
#master_finger: ''


######         Thread settings        #####
###########################################
# Disable multiprocessing support. By default, when a minion receives a
# publication, a new process is spawned and the command is executed therein.
#multiprocessing: True


#####         Logging settings       #####
##########################################
# The location of the minion log file
# The minion log can be sent to a regular file, local path name, or network
# location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
# ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
# format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
#log_file: /var/log/salt/minion
#log_file: file:///dev/log
#log_file: udp://loghost:10514
#
#log_file: /var/log/salt/minion
#key_logfile: /var/log/salt/key

# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# Default: 'warning'
#log_level: warning

# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# If using 'log_granular_levels' this must be set to the highest desired level.
# Default: 'warning'
#log_level_logfile:

# The date and time format used in log messages. Allowed date/time formatting
# can be seen here: http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'

# The format of the console logging messages. Allowed formatting options can
# be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
#
# Console log colors are specified by these additional formatters:
#
# %(colorlevel)s
# %(colorname)s
# %(colorprocess)s
# %(colormsg)s
#
# Since it is desirable to include the surrounding brackets, '[' and ']', in
# the coloring of the messages, these color formatters also include padding as
# well.  Color LogRecord attributes are only available for console logging.
#
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'

# This can be used to control logging levels more specifically.  This
# example sets the main salt library at the 'warning' level, but sets
# 'salt.modules' to log at the 'debug' level:
#   log_granular_levels:
#     'salt': 'warning'
#     'salt.modules': 'debug'
#
#log_granular_levels: {}

# To diagnose issues with minions disconnecting or missing returns, ZeroMQ
# supports the use of monitor sockets to log connection events. This
# feature requires ZeroMQ 4.0 or higher.
#
# To enable ZeroMQ monitor sockets, set 'zmq_monitor' to 'True' and log at a
# debug level or higher.
#
# A sample log event is as follows:
#
# [DEBUG   ] ZeroMQ event: {'endpoint': 'tcp://127.0.0.1:4505', 'event': 512,
# 'value': 27, 'description': 'EVENT_DISCONNECTED'}
#
# All events logged will include the string 'ZeroMQ event'. A connection event
# should be logged as the minion starts up and initially connects to the
# master. If not, check for debug log level and that the necessary version of
# ZeroMQ is installed.
#
#zmq_monitor: False

######      Module configuration      #####
###########################################
# Salt allows for modules to be passed arbitrary configuration data. Any data
# passed here in valid yaml format will be passed on to the salt minion modules
# for use. It is STRONGLY recommended that a naming convention be used in which
# the module name is followed by a . and then the value. Also, all top level
# data must be applied via the yaml dict construct; some examples:
#
# You can specify that all modules should run in test mode:
#test: True
#
# A simple value for the test module:
#test.foo: foo
#
# A list for the test module:
#test.bar: [baz,quo]
#
# A dict for the test module:
#test.baz: {spam: sausage, cheese: bread}
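#
# Values set here can be read back through the config execution module, for
# example (a usage sketch):
#   salt-call config.get test.foo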
#
#
######      Update settings          ######
###########################################
# Using the features in Esky, a salt minion can both run as a frozen app and
# be updated on the fly. These options control how the update process
# (saltutil.update()) behaves.
#
# The url for finding and downloading updates. Disabled by default.
#update_url: False
#
# The list of services to restart after a successful update. Empty by default.
#update_restart_services: []


######      Keepalive settings        ######
############################################
# ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by
# the OS. If connections between the minion and the master pass through
# a state tracking device such as a firewall or VPN gateway, there is
# the risk that it could tear down the connection between the master and minion
# without informing either party that their connection has been taken away.
# Enabling TCP Keepalives prevents this from happening.

# Overall state of TCP Keepalives, enable (1 or True), disable (0 or False)
# or leave it to the OS default (-1), which on Linux is typically disabled. Default: True (enabled).
#tcp_keepalive: True

# How long, in seconds, before the first keepalive should be sent. Default 300,
# to send the first keepalive after 5 minutes. The OS default (-1) is typically
# 7200 seconds on Linux; see /proc/sys/net/ipv4/tcp_keepalive_time.
#tcp_keepalive_idle: 300

# How many lost probes are needed to consider the connection lost. Default -1
# to use OS defaults, typically 9 on Linux; see /proc/sys/net/ipv4/tcp_keepalive_probes.
#tcp_keepalive_cnt: -1

# How often, in seconds, to send keepalives after the first one. Default -1 to
# use OS defaults, typically 75 seconds on Linux; see
# /proc/sys/net/ipv4/tcp_keepalive_intvl.
#tcp_keepalive_intvl: -1


######      Windows Software settings ######
############################################
# Location of the repository cache file on the master:
#win_repo_cachefile: 'salt://win/repo/winrepo.p'


######      Returner  settings        ######
############################################
# Which returner(s) will be used for minion's result:
#return: mysql

Configuring Salt

Salt configuration is very simple. The default configuration for the master will work for most installations and the only requirement for setting up a minion is to set the location of the master in the minion configuration file.

The configuration files will be installed to /etc/salt and are named after the respective components, /etc/salt/master, and /etc/salt/minion.

Master Configuration

By default the Salt master listens on ports 4505 and 4506 on all interfaces (0.0.0.0). To bind Salt to a specific IP, redefine the "interface" directive in the master configuration file, typically /etc/salt/master, as follows:

- #interface: 0.0.0.0
+ interface: 10.0.0.1

After updating the configuration file, restart the Salt master. See the master configuration reference for more details about other configurable options.

Minion Configuration

Although there are many Salt Minion configuration options, configuring a Salt Minion is very simple. By default a Salt Minion will try to connect to the DNS name "salt"; if the Minion is able to resolve that name correctly, no configuration is needed.

If the DNS name "salt" does not resolve to point to the correct location of the Master, redefine the "master" directive in the minion configuration file, typically /etc/salt/minion, as follows:

- #master: salt
+ master: 10.0.0.1

After updating the configuration file, restart the Salt minion. See the minion configuration reference for more details about other configurable options.

Running Salt

  1. Start the master in the foreground (to daemonize the process, pass the -d flag):

    salt-master
    
  2. Start the minion in the foreground (to daemonize the process, pass the -d flag):

    salt-minion
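
To run both as daemons instead, pass the -d flag:

    salt-master -d
    salt-minion -d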
    

Having trouble?

The simplest way to troubleshoot Salt is to run the master and minion in the foreground with log level set to debug:

salt-master --log-level=debug
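
The minion can be run in the same way:

salt-minion --log-level=debug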

For information on salt's logging system please see the logging document.

Run as an unprivileged (non-root) user

To run Salt as another user, set the user parameter in the master config file.

Additionally, ownership and permissions need to be set such that the desired user can read from and write to the following directories (and their subdirectories, where applicable):

  • /etc/salt
  • /var/cache/salt
  • /var/log/salt
  • /var/run/salt

More information about running salt as a non-privileged user can be found here.

There is also a full troubleshooting guide available.

Key Management

Salt uses AES encryption for all communication between the Master and the Minion. This ensures that the commands sent to the Minions cannot be tampered with, and that communication between Master and Minion is authenticated through trusted, accepted keys.

Before commands can be sent to a Minion, its key must be accepted on the Master. Run the salt-key command to list the keys known to the Salt Master:

[root@master ~]# salt-key -L
Unaccepted Keys:
alpha
bravo
charlie
delta
Accepted Keys:

This example shows that the Salt Master is aware of four Minions, but none of the keys has been accepted. To accept the keys and allow the Minions to be controlled by the Master, again use the salt-key command:

[root@master ~]# salt-key -A
[root@master ~]# salt-key -L
Unaccepted Keys:
Accepted Keys:
alpha
bravo
charlie
delta

The salt-key command allows for signing keys individually or in bulk. The example above, using -A, bulk-accepts all pending keys. To accept keys individually, use the lowercase of the same option, -a keyname.
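
For example, to accept only the key for the minion named alpha from the listing above:

[root@master ~]# salt-key -a alpha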

See also

salt-key manpage

Sending Commands

Communication between the Master and a Minion may be verified by running the test.ping command:

[root@master ~]# salt alpha test.ping
alpha:
    True

Communication between the Master and all Minions may be tested in a similar way:

[root@master ~]# salt '*' test.ping
alpha:
    True
bravo:
    True
charlie:
    True
delta:
    True

Each of the Minions should send a True response as shown above.

What's Next?

Understanding targeting is important. From there, depending on the way you wish to use Salt, you should also proceed to learn about States and Execution Modules.

Configuring the Salt Master

The Salt system is amazingly simple and easy to configure. The two components of the Salt system each have a respective configuration file: the salt-master is configured via the master configuration file, and the salt-minion is configured via the minion configuration file.

The configuration file for the salt-master is located at /etc/salt/master by default. A notable exception is FreeBSD, where the configuration file is located at /usr/local/etc/salt. The available options are as follows:

Primary Master Configuration

interface

Default: 0.0.0.0 (all interfaces)

The local interface to bind to.

interface: 192.168.0.1
ipv6

Default: False

Whether the master should listen for IPv6 connections. If this is set to True, the interface option must be adjusted too (for example: "interface: '::'")

ipv6: True
publish_port

Default: 4505

The network port to set up the publication interface.

publish_port: 4505
master_id

Default: None

The id to be passed in the publish job to minions. This is used for MultiSyndics to return the job to the requesting master.

Note

This must be the same string as the syndic is configured with.

master_id: MasterOfMaster
user

Default: root

The user to run the Salt processes

user: root
max_open_files

Default: 100000

Each minion connecting to the master uses AT LEAST one file descriptor, the master subscription connection. If enough minions connect, you might start seeing the following on the console (and then salt-master crashes):

Too many open files (tcp_listener.cpp:335)
Aborted (core dumped)
max_open_files: 100000

By default this value will be that of ulimit -Hn, i.e., the hard limit for max open files.

To set a different value than the default one, uncomment and configure this setting. Remember that this value CANNOT be higher than the hard limit. Raising the hard limit depends on the OS and/or distribution; a good way to find the limit is to search the internet for something like this:

raise max open files hard limit debian
worker_threads

Default: 5

The number of threads to start for receiving commands and replies from minions. If minions are stalling on replies because you have many minions, raise the worker_threads value.

Worker threads should not be put below 3 when using the peer system, but can drop down to 1 worker otherwise.

Note

When the master daemon starts, it is expected behavior to see multiple salt-master processes, even if 'worker_threads' is set to '1'. At a minimum, a controlling process will start, along with a Publisher, an EventPublisher, and a number of MWorker processes. The number of MWorker processes is tunable via the 'worker_threads' configuration value, while the others are not.

worker_threads: 5
ret_port

Default: 4506

The port used by the return server; this is the server used by Salt to receive execution returns and command executions.

ret_port: 4506
pidfile

Default: /var/run/salt-master.pid

Specify the location of the master pidfile.

pidfile: /var/run/salt-master.pid
root_dir

Default: /

The system root directory to operate from; change this to make Salt run from an alternative root.

root_dir: /

Note

This directory is prepended to the following options: pki_dir, cachedir, sock_dir, log_file, autosign_file, autoreject_file, pidfile.

pki_dir

Default: /etc/salt/pki

The directory to store the pki authentication keys.

pki_dir: /etc/salt/pki
extension_modules

Directory for custom modules. This directory can contain subdirectories for each of Salt's module types such as "runners", "output", "wheel", "modules", "states", "returners", etc. This path is appended to root_dir.

extension_modules: srv/modules
module_dirs

Default: []

Like extension_modules, but a list of extra directories to search for Salt modules.

module_dirs:
  - /var/cache/salt/minion/extmods
cachedir

Default: /var/cache/salt

The location used to store cache information, particularly the job information for executed salt commands.

This directory may contain sensitive data and should be protected accordingly.

cachedir: /var/cache/salt
verify_env

Default: True

Verify and set permissions on configuration directories at startup.

verify_env: True
keep_jobs

Default: 24

Set the number of hours to keep old job information.

timeout

Default: 5

Set the default timeout for the salt command and api.

loop_interval

Default: 60

The loop_interval option controls the seconds for the master's maintenance process check cycle. This process updates file server backends, cleans the job cache and executes the scheduler.

output

Default: nested

Set the default outputter used by the salt command.

color

Default: True

By default output is colored. To disable colored output, set the color value to False.

color: False
sock_dir

Default: /var/run/salt/master

Set the location to use for creating Unix sockets for master process communication.

sock_dir: /var/run/salt/master
enable_gpu_grains

Default: False

The master can take a while to start up when lspci and/or dmidecode is used to populate the grains for the master. Enable if you want to see GPU hardware data for your master.

job_cache

Default: True

The master maintains a job cache. While this is a great addition, it can be a burden on the master for larger deployments (over 5000 minions). Disabling the job cache will make previously executed jobs unavailable to the jobs system and is not generally recommended. Normally it is wise to make sure the master has access to a faster IO system or that a tmpfs is mounted to the jobs dir.
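
For example, to disable the job cache on a very large deployment:

job_cache: False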

minion_data_cache

Default: True

The minion data cache is a cache of information about the minions stored on the master; this information is primarily the pillar and grains data. The data is cached in the master cachedir under the name of the minion and is used to predetermine which minions are expected to reply to executions.

minion_data_cache: True
ext_job_cache

Default: ''

Used to specify a default returner for all minions. When this option is set, the specified returner needs to be properly configured and the minions will always default to sending returns to this returner. This will also disable the local job cache on the master.

ext_job_cache: redis
event_return

New in version 2015.5.0.

Default: ''

Specify the returner to use to log events. A returner may have installation and configuration requirements. Read the returner's documentation.

Note

Not all returners support event returns. Verify that a returner has an event_return() function before configuring this option with a returner.

event_return: cassandra_cql
master_job_cache

New in version 2014.7.

Default: 'local_cache'

Specify the returner to use for the job cache. The job cache will only be interacted with from the salt master and therefore does not need to be accessible from the minions.

master_job_cache: redis
enforce_mine_cache

Default: False

By default, when the minion_data_cache is disabled, the mine will stop working, since it is based on cached data. Enabling this option explicitly enables caching for the mine system only.

enforce_mine_cache: False
max_minions

Default: 0

The number of minions the master should allow to connect. Use this to accommodate the number of minions per master if you have different types of hardware serving your minions. The default of 0 means unlimited connections. Please note that this can slow down the authentication process a bit in large setups.

max_minions: 100
con_cache

Default: False

If max_minions is used in large installations, the master might experience high-load situations because of having to check the number of connected minions for every authentication. This cache provides the minion-ids of all connected minions to all MWorker-processes and greatly improves the performance of max_minions.

con_cache: True
presence_events

Default: False

Causes the master to periodically look for actively connected minions. Presence events are fired on the event bus on a regular interval with a list of connected minions, as well as events with lists of newly connected or disconnected minions. This is a master-only operation that does not send executions to minions. Note, this does not detect minions that connect to a master via localhost.

presence_events: False

Salt-SSH Configuration

roster_file

Default: '/etc/salt/roster'

Pass in an alternative location for the salt-ssh roster file.

roster_file: /root/roster
ssh_minion_opts

Default: None

Pass in minion option overrides that will be inserted into the SHIM for salt-ssh calls. The local minion config is not used for salt-ssh. Can be overridden on a per-minion basis in the roster (minion_opts).

ssh_minion_opts:
  gpg_keydir: /root/gpg
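
The same override can also be set per-minion in the roster (a sketch; the host entry is hypothetical):

web1:
  host: 192.168.42.1
  minion_opts:
    gpg_keydir: /root/gpg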

Master Security Settings

open_mode

Default: False

Open mode is a dangerous security feature. One problem encountered with pki authentication systems is that keys can become "mixed up" and authentication begins to fail. Open mode turns off authentication and tells the master to accept all authentication. This will clean up the pki keys received from the minions. Open mode should not be turned on for general use. Open mode should only be used for a short period of time to clean up pki keys. To turn on open mode set this value to True.

open_mode: False
auto_accept

Default: False

Enable auto_accept. This setting will automatically accept all incoming public keys from minions.

auto_accept: False
autosign_timeout

New in version 2014.7.0.

Default: 120

Time in minutes that an incoming public key with a matching name found in pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys are removed when the master checks the minion_autosign directory. This method to auto accept minions can be safer than an autosign_file because the keyid record can expire and is limited to being an exact name match. This should still be considered a less than secure option, due to the fact that trust is based on just the requesting minion id.

autosign_file

Default: not defined

If the autosign_file is specified, incoming keys specified in the autosign_file will be automatically accepted. Matches will be searched for first by string comparison, then by globbing, then by full-string regex matching. This should still be considered a less than secure option, due to the fact that trust is based on just the requesting minion id.

autoreject_file

New in version 2014.1.0.

Default: not defined

Works like autosign_file, but instead allows you to specify minion IDs for which keys will automatically be rejected. Will override both membership in the autosign_file and the auto_accept setting.

client_acl

Default: {}

Enable user accounts on the master to execute specific modules. These modules can be expressed as regular expressions.

client_acl:
  fred:
    - test.ping
    - pkg.*
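
With this ACL in place, fred can run the matching modules from the master, for example (a usage sketch):

salt '*' test.ping       # allowed: matches test.ping
salt '*' cmd.run 'id'    # denied: cmd.run does not match fred's ACL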
client_acl_blacklist

Default: {}

Blacklist users or modules

This example would blacklist all non-sudo users, including root, from running any commands. It would also blacklist any use of the "cmd" module.

This is completely disabled by default.

client_acl_blacklist:
  users:
    - root
    - '^(?!sudo_).*$'   #  all non sudo users
  modules:
    - cmd
external_auth

Default: {}

The external auth system uses the Salt auth modules to authenticate and validate users to access areas of the Salt system.

external_auth:
  pam:
    fred:
      - test.*
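
With this configuration, fred can authenticate through PAM from the command line by passing the -a flag, for example:

salt -a pam '*' test.ping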
token_expire

Default: 43200

Time (in seconds) for a newly generated token to live.

Default: 12 hours

token_expire: 43200
file_recv

Default: False

Allow minions to push files to the master. This is disabled by default, for security purposes.

file_recv: False
master_sign_pubkey

Default: False

Sign the master auth-replies with a cryptographic signature of the master's public key. Please see the Multimaster-PKI with Failover Tutorial for how to use these settings.

master_sign_pubkey: True
master_sign_key_name

Default: master_sign

The customizable name of the signing-key-pair without suffix.

master_sign_key_name: <filename_without_suffix>
master_pubkey_signature

Default: master_pubkey_signature

The name of the file in the master's pki directory that holds the pre-calculated signature of the master's public key.

master_pubkey_signature: <filename>
master_use_pubkey_signature

Default: False

Instead of computing the signature for each auth-reply, use a pre-calculated signature. The master_pubkey_signature must also be set for this.

master_use_pubkey_signature: True
rotate_aes_key

Default: True

Rotate the salt-master's AES key when a minion's public key is deleted with salt-key. This is a very important security setting. Disabling it will allow deleted minions to still listen in on the messages published by the salt-master. Do not disable this unless it is absolutely clear what this does.

rotate_aes_key: True

Master Module Management

runner_dirs

Default: []

Set additional directories to search for runner modules.

cython_enable

Default: False

Set to true to enable Cython modules (.pyx files) to be compiled on the fly on the Salt master.

cython_enable: False

Master State System Settings

state_top

Default: top.sls

The state system uses a "top" file to tell the minions what environment to use and what modules to use. The state_top file is defined relative to the root of the base environment.

state_top: top.sls
master_tops

Default: {}

The master_tops option replaces the external_nodes option with a pluggable system for the generation of external top data; the external_nodes option is deprecated in favor of master_tops. To gain the capabilities of the classic external_nodes system, use the following configuration:

master_tops:
  ext_nodes: <Shell command which returns yaml>
external_nodes

Default: None

The external_nodes option allows Salt to gather data that would normally be placed in a top file from an external node controller. The external_nodes option is the executable that will return the ENC data. Remember that Salt will look for external nodes AND top files and combine the results if both are enabled and available!

external_nodes: cobbler-ext-nodes
renderer

Default: yaml_jinja

The renderer to use on the minions to render the state data.

renderer: yaml_jinja
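
For illustration, a minimal yaml_jinja state sketch (the SLS path and package names are examples only); the Jinja is rendered first, and the result is parsed as YAML:

{# /srv/salt/editor.sls #}
{% if grains['os_family'] == 'RedHat' %}
{% set editor_pkg = 'vim-enhanced' %}
{% else %}
{% set editor_pkg = 'vim' %}
{% endif %}

install_editor:
  pkg.installed:
    - name: {{ editor_pkg }}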
failhard

Default: False

Set the global failhard flag, this informs all states to stop running states at the moment a single state fails.

failhard: False
state_verbose

Default: True

Controls the verbosity of state runs. By default, the results of all states are returned, but setting this value to False will cause salt to only display output for states which either failed, or succeeded without making any changes to the minion.

state_verbose: False
state_output

Default: full

The state_output setting controls the display of state results: 'full' prints the full multi-line output for each changed state, while 'terse' shortens the output to a single line. If set to 'mixed', the output will be terse unless a state failed, in which case that output will be full. If set to 'changes', the output will be full unless the state didn't change.

state_output: full
yaml_utf8

Default: False

Enable extra routines for the YAML renderer, used for states containing UTF characters.

yaml_utf8: False
test

Default: False

Set all state calls into test mode, so that they only report what changes would be made rather than actually applying them.

test: False

Master File Server Settings

fileserver_backend

Default: ['roots']

Salt supports a modular fileserver backend system. This system allows the salt master to link directly to third party systems to gather and manage the files available to minions. Multiple backends can be configured, and they will be searched for the requested file in the order in which they are defined here. The default setting only enables the standard backend, roots, which is configured using the file_roots option.

Example:

fileserver_backend:
  - roots
  - git
hash_type

Default: md5

The hash_type is the hash to use when discovering the hash of a file on the master server. The default is md5, but sha1, sha224, sha256, sha384, and sha512 are also supported.

hash_type: md5
file_buffer_size

Default: 1048576

The buffer size in the file server in bytes.

file_buffer_size: 1048576
file_ignore_regex

Default: ''

A regular expression (or a list of expressions) that will be matched against the file path before syncing the modules and states to the minions. This includes files affected by the file.recurse state. For example, if you manage your custom modules and states in subversion and don't want all the '.svn' folders and content synced to your minions, you could set this to '/\.svn($|/)'. By default nothing is ignored.

file_ignore_regex:
  - '/\.svn($|/)'
  - '/\.git($|/)'
file_ignore_glob

Default: ''

A file glob (or list of file globs) that will be matched against the file path before syncing the modules and states to the minions. This is similar to file_ignore_regex above, but works on globs instead of regex. By default nothing is ignored.

file_ignore_glob:
  - '*.pyc'
  - '*/somefolder/*.bak'
  - '*.swp'
roots: Master's Local File Server
file_roots

Default:

base:
  - /srv/salt

Salt runs a lightweight file server written in ZeroMQ to deliver files to minions. This file server is built into the master daemon and does not require a dedicated port.

The file server works on environments passed to the master. Each environment can have multiple root directories, but the subdirectories in the multiple file roots cannot match; otherwise the downloaded files will not be able to be reliably ensured. A base environment is required to house the top file.

Example:

file_roots:
  base:
    - /srv/salt
  dev:
    - /srv/salt/dev/services
    - /srv/salt/dev/states
  prod:
    - /srv/salt/prod/services
    - /srv/salt/prod/states
git: Git Remote File Server Backend
gitfs_remotes

Default: []

When using the git fileserver backend at least one git remote needs to be defined. The user running the salt master will need read access to the repo.

The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. Branches and tags are translated into salt environments.

gitfs_remotes:
  - git://github.com/saltstack/salt-states.git
  - file:///var/git/saltmaster

Note

file:// repos will be treated as a remote and copied into the master's gitfs cache, so only the local refs for those repos will be exposed as fileserver environments.

As of 2014.7.0, it is possible to have per-repo versions of several of the gitfs configuration parameters. For more information, see the GitFS Walkthrough.

gitfs_provider

New in version 2014.7.0.

Specify the provider to be used for gitfs. More information can be found in the GitFS Walkthrough.

Specify one of the following valid values: gitpython, pygit2, dulwich

gitfs_provider: dulwich
gitfs_ssl_verify

Default: True

The gitfs_ssl_verify option specifies whether to ignore SSL certificate errors when contacting the gitfs backend. You might want to set this to False if you're using a git backend that uses a self-signed certificate, but keep in mind that setting this flag to anything other than the default of True is a security concern; you may want to try using the SSH transport instead.

gitfs_ssl_verify: True
gitfs_mountpoint

New in version 2014.7.0.

Default: ''

Specifies a path on the salt fileserver from which gitfs remotes are served. Can be used in conjunction with gitfs_root. Can also be configured on a per-remote basis, see here for more info.

gitfs_mountpoint: salt://foo/bar

Note

The salt:// protocol designation can be left off (in other words, foo/bar and salt://foo/bar are equivalent).

gitfs_root

Default: ''

Serve files from a subdirectory within the repository, instead of the root. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with gitfs_mountpoint.

gitfs_root: somefolder/otherfolder

Changed in version 2014.7.0: Ability to specify gitfs roots on a per-remote basis was added. See here for more info.

gitfs_base

Default: master

Defines which branch/tag should be used as the base environment.

gitfs_base: salt

Changed in version 2014.7.0: Ability to specify the base on a per-remote basis was added. See here for more info.

gitfs_env_whitelist

New in version 2014.7.0.

Default: []

Used to restrict which environments are made available. Can speed up state runs if the repos in gitfs_remotes contain many branches/tags. More information can be found in the GitFS Walkthrough.

gitfs_env_whitelist:
  - base
  - v1.*
  - 'mybranch\d+'
gitfs_env_blacklist

New in version 2014.7.0.

Default: []

Used to restrict which environments are made available. Can speed up state runs if the repos in gitfs_remotes contain many branches/tags. More information can be found in the GitFS Walkthrough.

gitfs_env_blacklist:
  - base
  - v1.*
  - 'mybranch\d+'
GitFS Authentication Options

These parameters only currently apply to the pygit2 gitfs provider. Examples of how to use these can be found in the GitFS Walkthrough.

gitfs_user

New in version 2014.7.0.

Default: ''

Along with gitfs_password, is used to authenticate to HTTPS remotes.

gitfs_user: git
gitfs_password

New in version 2014.7.0.

Default: ''

Along with gitfs_user, is used to authenticate to HTTPS remotes. This parameter is not required if the repository does not use authentication.

gitfs_password: mypassword
gitfs_insecure_auth

New in version 2014.7.0.

Default: False

By default, Salt will not authenticate to an HTTP (non-HTTPS) remote. This parameter enables authentication over HTTP. Enable this at your own risk.

gitfs_insecure_auth: True
gitfs_pubkey

New in version 2014.7.0.

Default: ''

Along with gitfs_privkey (and optionally gitfs_passphrase), is used to authenticate to SSH remotes. This parameter (or its per-remote counterpart) is required for SSH remotes.

gitfs_pubkey: /path/to/key.pub
gitfs_privkey

New in version 2014.7.0.

Default: ''

Along with gitfs_pubkey (and optionally gitfs_passphrase), is used to authenticate to SSH remotes. This parameter (or its per-remote counterpart) is required for SSH remotes.

gitfs_privkey: /path/to/key
gitfs_passphrase

New in version 2014.7.0.

Default: ''

This parameter is optional, required only when the SSH key being used to authenticate is protected by a passphrase.

gitfs_passphrase: mypassphrase
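
Putting the SSH authentication parameters together (a sketch; the remote URL and key paths are placeholders):

gitfs_remotes:
  - ssh://git@github.com/example/states.git
gitfs_pubkey: /root/.ssh/id_rsa.pub
gitfs_privkey: /root/.ssh/id_rsa
gitfs_passphrase: mypassphrase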
hg: Mercurial Remote File Server Backend
hgfs_remotes

New in version 0.17.0.

Default: []

When using the hg fileserver backend at least one mercurial remote needs to be defined. The user running the salt master will need read access to the repo.

The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. Branches and/or bookmarks are translated into salt environments, as defined by the hgfs_branch_method parameter.

hgfs_remotes:
  - https://username@bitbucket.org/username/reponame

Note

As of 2014.7.0, it is possible to have per-repo versions of the hgfs_root, hgfs_mountpoint, hgfs_base, and hgfs_branch_method parameters. For example:

hgfs_remotes:
  - https://username@bitbucket.org/username/repo1:
    - base: saltstates
  - https://username@bitbucket.org/username/repo2:
    - root: salt
    - mountpoint: salt://foo/bar/baz
  - https://username@bitbucket.org/username/repo3:
    - root: salt/states
    - branch_method: mixed
hgfs_branch_method

New in version 0.17.0.

Default: branches

Defines the objects that will be used as fileserver environments.

  • branches - Only branches and tags will be used
  • bookmarks - Only bookmarks and tags will be used
  • mixed - Branches, bookmarks, and tags will be used
hgfs_branch_method: mixed

Note

Starting in version 2014.1.0, the value of the hgfs_base parameter defines which branch is used as the base environment, allowing for a base environment to be used with an hgfs_branch_method of bookmarks.

Prior to this release, the default branch was used as the base environment.

hgfs_mountpoint

New in version 2014.7.0.

Default: ''

Specifies a path on the salt fileserver from which hgfs remotes are served. Can be used in conjunction with hgfs_root. Can also be configured on a per-remote basis, see here for more info.

hgfs_mountpoint: salt://foo/bar

Note

The salt:// protocol designation can be left off (in other words, foo/bar and salt://foo/bar are equivalent).

hgfs_root

New in version 0.17.0.

Default: ''

Serve files from a subdirectory within the repository, instead of the root. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with hgfs_mountpoint.

hgfs_root: somefolder/otherfolder

Changed in version 2014.7.0: Ability to specify hgfs roots on a per-remote basis was added. See here for more info.

hgfs_base

New in version 2014.1.0.

Default: default

Defines which branch should be used as the base environment. Change this if hgfs_branch_method is set to bookmarks to specify which bookmark should be used as the base environment.

hgfs_base: salt
hgfs_env_whitelist

New in version 2014.7.0.

Default: []

Used to restrict which environments are made available. Can speed up state runs if your hgfs remotes contain many branches/bookmarks/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire environment name.

If used, only branches/bookmarks/tags which match one of the specified expressions will be exposed as fileserver environments.

If used in conjunction with hgfs_env_blacklist, then the subset of branches/bookmarks/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments.

hgfs_env_whitelist:
  - base
  - v1.*
  - 'mybranch\d+'
hgfs_env_blacklist

New in version 2014.7.0.

Default: []

Used to restrict which environments are made available. Can speed up state runs if your hgfs remotes contain many branches/bookmarks/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire environment name.

If used, branches/bookmarks/tags which match one of the specified expressions will not be exposed as fileserver environments.

If used in conjunction with hgfs_env_whitelist, then the subset of branches/bookmarks/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments.

hgfs_env_blacklist:
  - base
  - v1.*
  - 'mybranch\d+'
svn: Subversion Remote File Server Backend
svnfs_remotes

New in version 0.17.0.

Default: []

When using the svn fileserver backend at least one subversion remote needs to be defined. The user running the salt master will need read access to the repo.

The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. The trunk, branches, and tags become environments, with the trunk being the base environment.

svnfs_remotes:
  - svn://foo.com/svn/myproject

Note

As of 2014.7.0, it is possible to have per-repo versions of the svnfs_root, svnfs_mountpoint, svnfs_trunk, svnfs_branches, and svnfs_tags configuration parameters.

For example:

svnfs_remotes:
  - svn://foo.com/svn/project1
  - svn://foo.com/svn/project2:
    - root: salt
    - mountpoint: salt://foo/bar/baz
  - svn://foo.com/svn/project3:
    - root: salt/states
    - branches: branch
    - tags: tag
svnfs_mountpoint

New in version 2014.7.0.

Default: ''

Specifies a path on the salt fileserver from which svnfs remotes are served. Can be used in conjunction with svnfs_root. Can also be configured on a per-remote basis, see here for more info.

svnfs_mountpoint: salt://foo/bar

Note

The salt:// protocol designation can be left off (in other words, foo/bar and salt://foo/bar are equivalent).

svnfs_root

New in version 0.17.0.

Default: ''

Serve files from a subdirectory within the repository, instead of the root. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with svnfs_mountpoint.

svnfs_root: somefolder/otherfolder

Changed in version 2014.7.0: Ability to specify svnfs roots on a per-remote basis was added. See here for more info.

svnfs_trunk

New in version 2014.7.0.

Default: trunk

Path relative to the root of the repository where the trunk is located. Can also be configured on a per-remote basis, see here for more info.

svnfs_trunk: trunk
svnfs_branches

New in version 2014.7.0.

Default: branches

Path relative to the root of the repository where the branches are located. Can also be configured on a per-remote basis, see here for more info.

svnfs_branches: branches
svnfs_tags

New in version 2014.7.0.

Default: tags

Path relative to the root of the repository where the tags are located. Can also be configured on a per-remote basis, see here for more info.

svnfs_tags: tags
svnfs_env_whitelist

New in version 2014.7.0.

Default: []

Used to restrict which environments are made available. Can speed up state runs if your svnfs remotes contain many branches/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire environment name.

If used, only branches/tags which match one of the specified expressions will be exposed as fileserver environments.

If used in conjunction with svnfs_env_blacklist, then the subset of branches/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments.

svnfs_env_whitelist:
  - base
  - v1.*
  - 'mybranch\d+'
svnfs_env_blacklist

New in version 2014.7.0.

Default: []

Used to restrict which environments are made available. Can speed up state runs if your svnfs remotes contain many branches/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire environment name.

If used, branches/tags which match one of the specified expressions will not be exposed as fileserver environments.

If used in conjunction with svnfs_env_whitelist, then the subset of branches/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments.

svnfs_env_blacklist:
  - base
  - v1.*
  - 'mybranch\d+'
minion: MinionFS Remote File Server Backend
minionfs_env

New in version 2014.7.0.

Default: base

Environment from which MinionFS files are made available.

minionfs_env: minionfs
minionfs_mountpoint

New in version 2014.7.0.

Default: ''

Specifies a path on the salt fileserver from which minionfs files are served.

minionfs_mountpoint: salt://foo/bar

Note

The salt:// protocol designation can be left off (in other words, foo/bar and salt://foo/bar are equivalent).

minionfs_whitelist

New in version 2014.7.0.

Default: []

Used to restrict which minions' pushed files are exposed via minionfs. If using a regular expression, the expression must match the entire minion ID.

If used, only the pushed files from minions which match one of the specified expressions will be exposed.

If used in conjunction with minionfs_blacklist, then the subset of hosts which match the whitelist but do not match the blacklist will be exposed.

minionfs_whitelist:
  - server01
  - dev*
  - 'mail\d+\.mydomain\.tld'
minionfs_blacklist

New in version 2014.7.0.

Default: []

Used to restrict which minions' pushed files are exposed via minionfs. If using a regular expression, the expression must match the entire minion ID.

If used, the pushed files from minions which match one of the specified expressions will not be exposed.

If used in conjunction with minionfs_whitelist, then the subset of hosts which match the whitelist but do not match the blacklist will be exposed.

minionfs_blacklist:
  - server01
  - dev*
  - 'mail\d+\.mydomain\.tld'

Pillar Configuration

pillar_roots

Default:

base:
  - /srv/pillar

Set the environments and directories used to hold pillar sls data. This configuration is the same as file_roots:

pillar_roots:
  base:
    - /srv/pillar
  dev:
    - /srv/pillar/dev
  prod:
    - /srv/pillar/prod
ext_pillar

The ext_pillar option allows for any number of external pillar interfaces to be called when populating pillar data. The configuration is based on ext_pillar functions. The available ext_pillar functions can be found here:

https://github.com/saltstack/salt/blob/develop/salt/pillar

By default, the ext_pillar interface is not configured to run.

Default: None

ext_pillar:
  - hiera: /etc/hiera.yaml
  - cmd_yaml: cat /etc/salt/yaml
  - reclass:
      inventory_base_uri: /etc/reclass

There are additional details at Pillars
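
An ext_pillar interface is a Python module exposing an ext_pillar() function that receives the minion ID and the pillar data compiled so far, and returns a dictionary to merge into the minion's pillar. A minimal sketch follows; the role logic here is purely illustrative:

# a hypothetical external pillar module
def ext_pillar(minion_id, pillar, *args, **kwargs):
    '''
    Return extra pillar data for the given minion; the returned
    dictionary is merged into the minion's pillar.
    '''
    if minion_id.startswith('web'):
        return {'role': 'webserver'}  # illustrative only
    return {}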

ext_pillar_first

New in version 2015.5.0.

The ext_pillar_first option allows for external pillar sources to populate before file system pillar. This allows for targeting file system pillar from ext_pillar.

Default: False

ext_pillar_first: False
pillar_source_merging_strategy

New in version 2014.7.0.

Default: smart

The pillar_source_merging_strategy option allows you to configure merging strategy between different sources. It accepts 4 values:

  • recurse:

    it recursively merges mappings of data; a Python sketch of this strategy follows this list. For example, these two sources:

    foo: 42
    bar:
        element1: True
    
    bar:
        element2: True
    baz: quux
    

    will be merged as:

    foo: 42
    bar:
        element1: True
        element2: True
    baz: quux
    
  • aggregate:

    instructs aggregation of elements between sources that use the #!yamlex renderer.

    For example, these two documents:

    #!yamlex
    foo: 42
    bar: !aggregate {
      element1: True
    }
    baz: !aggregate quux
    
    #!yamlex
    bar: !aggregate {
      element2: True
    }
    baz: !aggregate quux2
    

    will be merged as:

    foo: 42
    bar:
      element1: True
      element2: True
    baz:
      - quux
      - quux2
    
  • overwrite:

    Will use the behaviour of the 2014.1 branch and earlier.

    Overwrites elements according to the order in which they are processed.

    First pillar processed:

    A:
      first_key: blah
      second_key: blah
    

    Second pillar processed:

    A:
      third_key: blah
      fourth_key: blah
    

    will be merged as:

    A:
      third_key: blah
      fourth_key: blah
    
  • smart (default):

    Guesses the best strategy based on the "renderer" setting.
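
As an illustration, here is a minimal Python sketch of what the recurse strategy does with plain dictionaries; this is a simplification, not Salt's actual implementation:

def recurse_merge(dest, src):
    # Recursively merge src into dest: nested dicts are merged
    # key by key, any other value in src overwrites dest.
    for key, value in src.items():
        if isinstance(value, dict) and isinstance(dest.get(key), dict):
            recurse_merge(dest[key], value)
        else:
            dest[key] = value
    return dest

first = {'foo': 42, 'bar': {'element1': True}}
second = {'bar': {'element2': True}, 'baz': 'quux'}
print(recurse_merge(first, second))
# {'foo': 42, 'bar': {'element1': True, 'element2': True}, 'baz': 'quux'}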

Syndic Server Settings

A Salt syndic is a Salt master used to pass commands from a higher Salt master to minions below the syndic. Using the syndic is simple. If this is a master that will have syndic server(s) below it, set the "order_masters" setting to True.

If this is a master that will be running a syndic daemon for passthrough, the "syndic_master" setting needs to be set to the location of the master server.

Do not forget that the syndic daemon shares its ID and PKI_DIR with the local minion.

order_masters

Default: False

Extra data needs to be sent with publications if the master is controlling a lower level master via a syndic minion. If this is the case, the order_masters value must be set to True.

order_masters: False
syndic_master

Default: None

If this master will be running a salt-syndic to connect to a higher level master, specify the higher level master with this configuration value.

syndic_master: masterofmasters
syndic_master_port

Default: 4506

If this master will be running a salt-syndic to connect to a higher level master, specify the higher level master port with this configuration value.

syndic_master_port: 4506
syndic_pidfile

Default: salt-syndic.pid

If this master will be running a salt-syndic to connect to a higher level master, specify the pidfile of the syndic daemon.

syndic_pidfile: syndic.pid
syndic_log_file

Default: syndic.log

If this master will be running a salt-syndic to connect to a higher level master, specify the log_file of the syndic daemon.

syndic_log_file: salt-syndic.log

Peer Publish Settings

Salt minions can send commands to other minions, but only if the minion is allowed to. By default "Peer Publication" is disabled, and when enabled it is enabled for specific minions and specific commands. This allows secure compartmentalization of commands based on individual minions.

peer

Default: {}

The configuration uses regular expressions to match minions and then a list of regular expressions to match functions. The following will allow the minion authenticated as foo.example.com to execute functions from the test and pkg modules.

peer:
  foo.example.com:
      - test.*
      - pkg.*

This will allow all minions to execute all commands:

peer:
  .*:
      - .*

This is not recommended, since it would allow anyone who gets root on any single minion to instantly have root on all of the minions!

By adding an additional layer you can limit the target hosts in addition to the accessible commands:

peer:
  foo.example.com:
    'db*':
      - test.*
      - pkg.*
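
Conceptually, the matching works like the following simplified Python sketch (using re.match for both layers; this is an illustration, not the master's actual ACL code):

import re

peer = {
    'foo.example.com': ['test.*', 'pkg.*'],
}

def peer_allowed(minion_id, fun):
    # A peer publication is allowed if some minion expression matches
    # the publishing minion and one of its function expressions
    # matches the requested function.
    for id_expr, fun_exprs in peer.items():
        if re.match(id_expr, minion_id):
            if any(re.match(expr, fun) for expr in fun_exprs):
                return True
    return False

print(peer_allowed('foo.example.com', 'pkg.install'))  # True
print(peer_allowed('foo.example.com', 'cmd.run'))      # False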
peer_run

Default: {}

The peer_run option is used to open up runners on the master to access from the minions. The peer_run configuration matches the format of the peer configuration.

The following example would allow foo.example.com to execute the manage.up runner:

peer_run:
  foo.example.com:
      - manage.up

Master Logging Settings

log_file

Default: /var/log/salt/master

The master log can be sent to a regular file, local path name, or network location. See also log_file.

Examples:

log_file: /var/log/salt/master
log_file: file:///dev/log
log_file: udp://loghost:10514
log_level

Default: warning

The level of messages to send to the console. See also log_level.

log_level: warning
log_level_logfile

Default: warning

The level of messages to send to the log file. See also log_level_logfile. When it is not set explicitly it will inherit the level set by log_level option.

log_level_logfile: warning
log_datefmt

Default: %H:%M:%S

The date and time format used in console log messages. See also log_datefmt.

log_datefmt: '%H:%M:%S'
log_datefmt_logfile

Default: %Y-%m-%d %H:%M:%S

The date and time format used in log file messages. See also log_datefmt_logfile.

log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
log_fmt_console

Default: [%(levelname)-8s] %(message)s

The format of the console logging messages. See also log_fmt_console.

log_fmt_console: '[%(levelname)-8s] %(message)s'
log_fmt_logfile

Default: %(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s

The format of the log file logging messages. See also log_fmt_logfile.

log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'
log_granular_levels

Default: {}

This can be used to control logging levels more specifically. See also log_granular_levels.

Node Groups

Default: {}

Node groups allow for logical groupings of minion nodes. A group consists of a group name and a compound target.

nodegroups:
  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
  group2: 'G@os:Debian and foo.domain.com'
  group3: 'G@os:Debian and N@group1'

More information on using nodegroups can be found here.

Range Cluster Settings

range_server

Default: ''

The range server (and optional port) that serves your cluster information: https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec

range_server: range:80

Include Configuration

default_include

Default: master.d/*.conf

The master can include configuration from other files. By default the master will automatically include all config files from master.d/*.conf, where master.d is relative to the directory of the master configuration file.

include

Default: not defined

The master can include configuration from other files. To enable this, pass a list of paths to this option. The paths can be either relative or absolute; if relative, they are considered to be relative to the directory the main minion configuration file lives in. Paths can make use of shell-style globbing. If no files are matched by a path passed to this option then the master will log a warning message.

# Include files from a master.d directory in the same
# directory as the master config file
include: master.d/*

# Include a single extra file into the configuration
include: /etc/roles/webserver

# Include several files and the master.d directory
include:
  - extra_config
  - master.d/*
  - /etc/roles/webserver

Windows Software Repo Settings

win_repo

Default: /srv/salt/win/repo

Location of the repo on the master

win_repo: '/srv/salt/win/repo'
win_repo_mastercachefile

Default: /srv/salt/win/repo/winrepo.p

Location of the master's repo cache file.

win_repo_mastercachefile: '/srv/salt/win/repo/winrepo.p'
win_gitrepos

Default: ''

List of git repositories to include with the local repo.

win_gitrepos:
  - 'https://github.com/saltstack/salt-winrepo.git'

Configuring the Salt Minion

The Salt system is amazingly simple and easy to configure. The two components of the Salt system each have a respective configuration file. The salt-master is configured via the master configuration file, and the salt-minion is configured via the minion configuration file.

The Salt Minion configuration is very simple. Typically, the only value that needs to be set is the master value so the minion knows where to locate its master.

By default, the salt-minion configuration will be in /etc/salt/minion. A notable exception is FreeBSD, where the configuration will be in /usr/local/etc/salt/minion.

Minion Primary Configuration

master

Default: salt

The hostname or IPv4 address of the master.

master: salt

The option can also be set to a list of masters, enabling multi-master mode.

master:
  - address1
  - address2

Changed in version 2014.7.0: The master can be dynamically configured. The master value can be set to a module function which will be executed; the return value is assumed to be the IP address or hostname of the desired master. If a function is specified, then the master_type option must be set to func, to tell the minion that the value is a function to be run and not a fully-qualified domain name.

master: module.function
master_type: func
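
For example, a hypothetical custom execution module (the module name get_master and function name lookup are illustrative only) could look like this:

# get_master.py -- a hypothetical custom execution module
def lookup():
    '''
    Return the hostname or IP address of the master this minion
    should connect to.
    '''
    # Any logic can go here; a static value is returned for illustration.
    return 'master1.example.com'

With master: get_master.lookup and master_type: func, the minion would execute this function and connect to the returned address. Note that such a module would need to be available to the minion locally, since it runs before the minion has connected to a master.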

In addition, instead of using multi-master mode, the minion can be configured to use the list of master addresses as a failover list, trying the first address, then the second, etc. until the minion successfully connects. To enable this behavior, set master_type to failover:

master:
  - address1
  - address2
master_type: failover
master_type

New in version 2014.7.0.

Default: str

The type of the master variable. Can be either func or failover.

If the master needs to be dynamically assigned by executing a function instead of reading in the static master value, set this to func. This can be used to manage the minion's master setting from an execution module. Simply change the algorithm in the module to return a new master IP/FQDN and restart the minion, and it will connect to the new master.

master_type: func

If this option is set to failover, master must be a list of master addresses. The minion will then try each master in the order specified in the list until it successfully connects.

master_type: failover
master_shuffle

New in version 2014.7.0.

Default: False

If master is a list of addresses, shuffle them before trying to connect to distribute the minions over all available masters. This uses Python's random.shuffle method.

master_shuffle: True
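
Conceptually, the shuffle amounts to:

import random

masters = ['address1', 'address2', 'address3']
random.shuffle(masters)  # shuffles the list in place
print(masters)           # e.g. ['address2', 'address3', 'address1']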
retry_dns

Default: 30

Set the number of seconds to wait before attempting to resolve the master hostname if name resolution fails. Defaults to 30 seconds. Set to zero if the minion should shut down and not retry.

retry_dns: 30
master_port

Default: 4506

The port of the master ret server; this needs to coincide with the ret_port option on the Salt master.

master_port: 4506
user

Default: root

The user to run the Salt processes

user: root
sudo_user

Default: None

The user to run salt remote execution commands as via sudo. If this option is enabled, sudo will be used to change the active user executing the remote command. If enabled, the user will need to be allowed access via the sudoers file for the user that the salt minion is configured to run as. The most common option would be to use the root user. If this option is set, the user option should also be set to a non-root user. If migrating from a root minion to a non-root minion, the minion cache should be cleared and the ownership of the minion pki directory changed to the new user.

sudo_user: root
pidfile

Default: /var/run/salt-minion.pid

The location of the daemon's process ID file

pidfile: /var/run/salt-minion.pid
root_dir

Default: /

This directory is prepended to the following options: pki_dir, cachedir, log_file, sock_dir, and pidfile.

root_dir: /
pki_dir

Default: /etc/salt/pki

The directory used to store the minion's public and private keys.

pki_dir: /etc/salt/pki
id

Default: the system's hostname

See also

Salt Walkthrough

The Setting up a Salt Minion section contains detailed information on how the hostname is determined.

Explicitly declare the id for this minion to use. Since Salt uses detached ids it is possible to run multiple minions on the same machine but with different ids.

id: foo.bar.com
append_domain

Default: None

Append a domain to a hostname in the event that it does not exist. This is useful for systems where socket.getfqdn() does not actually result in a FQDN (for instance, Solaris).

append_domain: foo.org
cachedir

Default: /var/cache/salt

The location for minion cache data.

This directory may contain sensitive data and should be protected accordingly.

cachedir: /var/cache/salt
verify_env

Default: True

Verify and set permissions on configuration directories at startup.

verify_env: True

Note

When marked as True, the verify_env option requires WRITE access to the configuration directory (/etc/salt/). In certain situations, such as mounting /etc/salt/ as read-only for templating, this will create a stack trace when state.highstate is called.

cache_jobs

Default: False

The minion can locally cache the return data from jobs sent to it. This can be a good way to keep track of the minion side of the jobs the minion has executed. By default this feature is disabled; to enable it, set cache_jobs to True.

cache_jobs: False
sock_dir

Default: /var/run/salt/minion

The directory where Unix sockets will be kept.

sock_dir: /var/run/salt/minion
backup_mode

Default: []

Backup files replaced by file.managed and file.recurse under cachedir.

backup_mode: minion
acceptance_wait_time

Default: 10

The number of seconds to wait until attempting to re-authenticate with the master.

acceptance_wait_time: 10
random_reauth_delay

When the master key changes, the minion will try to re-auth itself to receive the new master key. In larger environments this can cause a SYN flood on the master because all minions try to re-auth immediately. To prevent this and have a minion wait for a random amount of time, use this optional parameter. The wait time will be a random number of seconds between 0 and the defined value.

random_reauth_delay: 60
acceptance_wait_time_max

Default: None

The maximum number of seconds to wait until attempting to re-authenticate with the master. If set, the wait will increase by acceptance_wait_time seconds each iteration.

acceptance_wait_time_max: None
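
As an illustration, the resulting wait schedule can be sketched like this (the function name and attempt count are illustrative only):

def auth_wait_schedule(acceptance_wait_time=10, acceptance_wait_time_max=60, attempts=6):
    '''Yield the successive waits (in seconds) between re-auth attempts.'''
    wait = acceptance_wait_time
    for _ in range(attempts):
        yield wait
        # grow by acceptance_wait_time each iteration, up to the maximum
        wait = min(wait + acceptance_wait_time, acceptance_wait_time_max)

print(list(auth_wait_schedule()))  # [10, 20, 30, 40, 50, 60]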
recon_default

Default: 1000

The interval in milliseconds that the socket should wait before trying to reconnect to the master (1000ms = 1 second).

recon_default: 1000
recon_max

Default: 10000

The maximum time a socket should wait. Each interval the time to wait is calculated by doubling the previous time. If recon_max is reached, it starts again at the recon_default.

Short example:
  • reconnect 1: the socket will wait 'recon_default' milliseconds
  • reconnect 2: 'recon_default' * 2
  • reconnect 3: ('recon_default' * 2) * 2
  • reconnect 4: value from previous interval * 2
  • reconnect 5: value from previous interval * 2
  • reconnect x: if value >= recon_max, it starts again with recon_default
recon_max: 10000
recon_randomize

Default: True

Generate a random wait time on minion start. The wait time will be a random value between recon_default and recon_default + recon_max. Having all minions reconnect with the same recon_default and recon_max values defeats the purpose of being able to change these settings: if all minions have the same values and the setup is quite large (several thousand minions), they will still flood the master. The desired behavior is to have a time frame within which all minions try to reconnect.

recon_randomize: True
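
Taken together, the reconnect timing described by recon_default, recon_max, and recon_randomize can be sketched as follows (an illustration of the documented behavior, not Salt's actual code):

import random

def reconnect_waits(recon_default=1000, recon_max=10000, randomize=True, attempts=8):
    '''Yield successive reconnect waits in milliseconds.'''
    wait = recon_default
    if randomize:
        # each minion starts from a random point in the window
        wait = random.randint(recon_default, recon_default + recon_max)
    for _ in range(attempts):
        yield wait
        wait *= 2                # double the wait each interval...
        if wait >= recon_max:
            wait = recon_default # ...and start over once recon_max is reached

print(list(reconnect_waits(randomize=False)))
# [1000, 2000, 4000, 8000, 1000, 2000, 4000, 8000]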
dns_check

Default: True

When healing, a dns_check is run. This is to make sure that the originally resolved dns has not changed. If this is something that does not happen in your environment, set this value to False.

dns_check: True
cache_sreqs

Default: True

The connection to the master ret_port is kept open. When set to False, the minion creates a new connection for every return to the master.

cache_sreqs: True
ipc_mode

Default: ipc

Windows platforms lack POSIX IPC and must rely on slower TCP-based inter-process communications. Set ipc_mode to tcp on such systems.

ipc_mode: ipc
tcp_pub_port

Default: 4510

Publish port used when ipc_mode is set to tcp.

tcp_pub_port: 4510
tcp_pull_port

Default: 4511

Pull port used when ipc_mode is set to tcp.

tcp_pull_port: 4511

Minion Module Management

disable_modules

Default: [] (all modules are enabled by default)

There may be cases in which an administrator wants to keep a minion from executing a certain module. The sys module is built into the minion and cannot be disabled.

This setting can also tune the minion: since all modules are loaded into RAM, disabling modules will lower the minion's RAM footprint.

disable_modules:
  - test
  - solr
disable_returners

Default: [] (all returners are enabled by default)

If certain returners should be disabled, this is the place:

disable_returners:
  - mongo_return
module_dirs

Default: []

A list of extra directories to search for Salt modules

module_dirs:
  - /var/lib/salt/modules
returner_dirs

Default: []

A list of extra directories to search for Salt returners

returner_dirs:
  - /var/lib/salt/returners
states_dirs

Default: []

A list of extra directories to search for Salt states

states_dirs:
  - /var/lib/salt/states
grains_dirs

Default: []

A list of extra directories to search for Salt grains

grains_dirs:
  - /var/lib/salt/grains
render_dirs

Default: []

A list of extra directories to search for Salt renderers

render_dirs:
  - /var/lib/salt/renderers
cython_enable

Default: False

Set this value to True to enable auto-loading and compiling of .pyx modules. This setting requires that gcc and cython are installed on the minion.

cython_enable: False
providers

Default: (empty)

A module provider can be statically overwritten or extended for the minion via the providers option. This can be done on an individual basis in an SLS file, or globally here in the minion config, like below.

providers:
  service: systemd

State Management Settings

renderer

Default: yaml_jinja

The default renderer used for local state executions

renderer: yaml_jinja
state_verbose

Default: False

state_verbose allows for the data returned from the minion to be more verbose. Normally only states that fail or states that have changes are returned, but setting state_verbose to True will return all states that were checked

state_verbose: True
state_output

Default: full

The state_output setting controls how the output is displayed: if set to 'full', the full multi-line output is shown for each changed state; if set to 'terse', the output is shortened to a single line.

state_output: full
autoload_dynamic_modules

Default: True

autoload_dynamic_modules turns on automatic loading of modules found in the environments on the master. This is turned on by default; to turn off auto-loading of modules when states run, set this value to False.

autoload_dynamic_modules: True

clean_dynamic_modules

Default: True

clean_dynamic_modules keeps the dynamic modules on the minion in sync with the dynamic modules on the master. This means that if a dynamic module is not on the master, it will be deleted from the minion. By default this is enabled; it can be disabled by changing this value to False.

clean_dynamic_modules: True
environment

Default: None

Normally the minion is not isolated to any single environment on the master when running states, but the environment can be isolated on the minion side by statically setting it. Remember that the recommended way to manage environments is to isolate via the top file.

environment: None

File Directory Settings

file_client

Default: remote

The client defaults to looking on the master server for files, but can be directed to look on the minion by setting this parameter to local.

file_client: remote
use_master_when_local

Default: False

When using a local file_client, this parameter is used to allow the client to connect to a master for remote execution.

use_master_when_local: False
file_roots

Default:

base:
  - /srv/salt

When using a local file_client, this parameter is used to setup the fileserver's environments. This parameter operates identically to the master config parameter of the same name.

file_roots:
  base:
    - /srv/salt
  dev:
    - /srv/salt/dev/services
    - /srv/salt/dev/states
  prod:
    - /srv/salt/prod/services
    - /srv/salt/prod/states
hash_type

Default: md5

The hash_type is the hash to use when discovering the hash of a file on the local fileserver. The default is md5, but sha1, sha224, sha256, sha384, and sha512 are also supported.

hash_type: md5
pillar_roots

Default:

base:
  - /srv/pillar

When using a local file_client, this parameter is used to setup the pillar environments.

pillar_roots:
  base:
    - /srv/pillar
  dev:
    - /srv/pillar/dev
  prod:
    - /srv/pillar/prod

Security Settings

open_mode

Default: False

Open mode can be used to clean out the PKI key received from the Salt master: turn on open mode, restart the minion, then turn off open mode and restart the minion again to clean the keys.

open_mode: False
verify_master_pubkey_sign

Default: False

Enables verification of the master-public-signature returned by the master in auth-replies. Please see the Multimaster-PKI with Failover Tutorial for how to configure this properly.

New in version 2014.7.0.

verify_master_pubkey_sign: True

If this is set to True, master_sign_pubkey must be also set to True in the master configuration file.

master_sign_key_name

Default: master_sign

The filename without the .pub suffix of the public key that should be used for verifying the signature from the master. The file must be located in the minion's pki directory.

New in version 2014.7.0.

master_sign_key_name: <filename_without_suffix>
always_verify_signature

Default: False

If verify_master_pubkey_sign is enabled, the signature is only verified if the master's public key changes. If the signature should always be verified, set this to True.

New in version 2014.7.0.

always_verify_signature: True

Thread Settings

multiprocessing

Default: True

By default, when a minion receives a publication a new process is spawned and the command is executed therein. Set this value to False to disable multiprocessing support.

multiprocessing: True

Minion Logging Settings

log_file

Default: /var/log/salt/minion

The minion log can be sent to a regular file, local path name, or network location. See also log_file.

Examples:

log_file: /var/log/salt/minion
log_file: file:///dev/log
log_file: udp://loghost:10514
log_level

Default: warning

The level of messages to send to the console. See also log_level.

log_level: warning
log_level_logfile

Default: warning

The level of messages to send to the log file. See also log_level_logfile. When it is not set explicitly it will inherit the level set by log_level option.

log_level_logfile: warning
log_datefmt

Default: %H:%M:%S

The date and time format used in console log messages. See also log_datefmt.

log_datefmt: '%H:%M:%S'
log_datefmt_logfile

Default: %Y-%m-%d %H:%M:%S

The date and time format used in log file messages. See also log_datefmt_logfile.

log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
log_fmt_console

Default: [%(levelname)-8s] %(message)s

The format of the console logging messages. See also log_fmt_console.

log_fmt_console: '[%(levelname)-8s] %(message)s'
log_fmt_logfile

Default: %(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s

The format of the log file logging messages. See also log_fmt_logfile.

log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'
log_granular_levels

Default: {}

This can be used to control logging levels more specifically. See also log_granular_levels.

failhard

Default: False

Set the global failhard flag. This informs all states to stop running at the moment a single state fails.

failhard: False

Include Configuration

default_include

Default: minion.d/*.conf

The minion can include configuration from other files. By default the minion will automatically include all config files from minion.d/*.conf, where minion.d is relative to the directory of the minion configuration file.

include

Default: not defined

The minion can include configuration from other files. To enable this, pass a list of paths to this option. The paths can be either relative or absolute; if relative, they are considered to be relative to the directory the main minion configuration file lives in. Paths can make use of shell-style globbing. If no files are matched by a path passed to this option then the minion will log a warning message.

# Include files from a minion.d directory in the same
# directory as the minion config file
include: minion.d/*.conf

# Include a single extra file into the configuration
include: /etc/roles/webserver

# Include several files and the minion.d directory
include:
  - extra_config
  - minion.d/*
  - /etc/roles/webserver

Frozen Build Update Settings

These options control how salt.modules.saltutil.update() works with esky frozen apps. For more information look at https://github.com/cloudmatrix/esky/.

update_url

Default: False (Update feature is disabled)

The url to use when looking for application updates. Esky depends on directory listings to search for new versions. A webserver running on your Master is a good starting point for most setups.

update_url: 'http://salt.example.com/minion-updates'
update_restart_services

Default: [] (service restarting on update is disabled)

A list of services to restart when the minion software is updated. This would typically just be a list containing the minion's service name, but you may have other services that need to go with it.

update_restart_services: ['salt-minion']

Running the Salt Master/Minion as an Unprivileged User

While the default setup runs the master and minion as the root user, some may consider it an extra measure of security to run the master as a non-root user. Keep in mind that doing so does not change the master's capability to access minions as the user they are running as. Because of this, many feel that running the master as a non-root user does not grant any real security advantage, which is why the master has remained root by default.

Note

Some of Salt's operations cannot execute correctly when the master is not running as root, specifically the pam external auth system, as this system needs root access to check authentication.

As of Salt 0.9.10 it is possible to run Salt as a non-root user. This can be done by setting the user parameter in the master configuration file and restarting the salt-master service.

The minion has its own user parameter as well, but running the minion as an unprivileged user will keep it from making changes to things like users, installed packages, etc. unless access controls (sudo, etc.) are set up on the minion to permit the non-root user to make the needed changes.

In order to allow Salt to successfully run as a non-root user, ownership and permissions need to be set such that the desired user can read from and write to the following directories (and their subdirectories, where applicable):

  • /etc/salt
  • /var/cache/salt
  • /var/log/salt
  • /var/run/salt

Ownership can be easily changed with chown, like so:

# chown -R user /etc/salt /var/cache/salt /var/log/salt /var/run/salt

Warning

Running either the master or minion with the root_dir parameter specified will affect these paths, as will setting options like pki_dir, cachedir, log_file, and other options that normally live in the above directories.

Logging

The Salt project tries to make logging work for you and to help us solve any issues you might find along the way.

If you want more information on the nitty-gritty of Salt's logging system, please head over to the logging development document; if all you're after is Salt's logging configuration, please continue reading.

Available Configuration Settings

log_file

The log records can be sent to a regular file, local path name, or network location. Remote logging works best when configured to use rsyslogd(8) (e.g.: file:///dev/log), with rsyslogd(8) configured for network logging. The format for remote addresses is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>.

Default: Dependent on the binary being executed; for example, for salt-master, /var/log/salt/master.

Examples:

log_file: /var/log/salt/master
log_file: /var/log/salt/minion
log_file: file:///dev/log
log_file: udp://loghost:10514
log_level

Default: warning

The level of log record messages to send to the console. One of all, garbage, trace, debug, info, warning, error, critical, quiet.

log_level: warning
log_level_logfile

Default: warning

The level of messages to send to the log file. One of all, garbage, trace, debug, info, warning, error, critical, quiet.

log_level_logfile: warning
log_datefmt

Default: %H:%M:%S

The date and time format used in console log messages. Allowed date/time formatting can be seen on time.strftime.

log_datefmt: '%H:%M:%S'
log_datefmt_logfile

Default: %Y-%m-%d %H:%M:%S

The date and time format used in log file messages. Allowed date/time formatting can be seen on time.strftime.

log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
log_fmt_console

Default: [%(levelname)-8s] %(message)s

The format of the console logging messages. All standard python logging LogRecord attributes can be used. Salt also provides these custom LogRecord attributes to colorize console log output:

'%(colorlevel)s'   # log level name colorized by level
'%(colorname)s'    # colorized module name
'%(colorprocess)s' # colorized process number
'%(colormsg)s'     # log message colorized by level

Note

The %(colorlevel)s, %(colorname)s, and %(colorprocess)s LogRecord attributes also include padding and enclosing brackets, [ and ], to match the default values of their corresponding non-colorized LogRecord attributes.

log_fmt_console: '[%(levelname)-8s] %(message)s'
log_fmt_logfile

Default: %(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s

The format of the log file logging messages. All standard python logging LogRecord attributes can be used. Salt also provides these custom LogRecord attributes that include padding and enclosing brackets [ and ]:

'%(bracketlevel)s'   # equivalent to [%(levelname)-8s]
'%(bracketname)s'    # equivalent to [%(name)-17s]
'%(bracketprocess)s' # equivalent to [%(process)5s]
log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'
log_granular_levels

Default: {}

This can be used to control logging levels more specifically. The example sets the main salt library at the 'warning' level, but sets salt.modules to log at the debug level:

log_granular_levels:
  'salt': 'warning'
  'salt.modules': 'debug'
External Logging Handlers

Besides the internal logging handlers used by Salt, there are some external handlers which can be used; see the external logging handlers document.

External Logging Handlers

logstash_mod Logstash Logging Handler
sentry_mod Sentry Logging Handler

Salt File Server

Salt comes with a simple file server suitable for distributing files to the Salt minions. The file server is a stateless ZeroMQ server that is built into the Salt master.

The main intent of the Salt file server is to present files for use in the Salt state system. With this said, the Salt file server can be used for any general file transfer from the master to the minions.

File Server Backends

In Salt 0.12.0, the modular fileserver was introduced. This feature added the ability for the Salt Master to integrate different file server backends. File server backends allow the Salt file server to act as a transparent bridge to external resources. A good example of this is the git backend, which allows Salt to serve files sourced from one or more git repositories, but there are several others as well. A full list of Salt's fileserver backends appears below.

Enabling a Fileserver Backend

Fileserver backends can be enabled with the fileserver_backend option.

fileserver_backend:
  - git

See the documentation for each backend to find the correct value to add to fileserver_backend in order to enable them.

Using Multiple Backends

If fileserver_backend is not defined in the Master config file, Salt will use the roots backend, but the fileserver_backend option supports multiple backends. When more than one backend is in use, the files from the enabled backends are merged into a single virtual filesystem. When a file is requested, the backends will be searched in order for that file, and the first backend to match will be the one which returns the file.

fileserver_backend:
  - roots
  - git

With this configuration, the environments and files defined in the file_roots parameter will be searched first, and if the file is not found then the git repositories defined in gitfs_remotes will be searched.

Environments

Just as the order of the values in fileserver_backend matters, so too does the order in which different sources are defined within a fileserver environment. For example, given the below file_roots configuration, if both /srv/salt/dev/foo.txt and /srv/salt/prod/foo.txt exist on the Master, then salt://foo.txt would point to /srv/salt/dev/foo.txt in the dev environment, but it would point to /srv/salt/prod/foo.txt in the base environment.

file_roots:
  base:
    - /srv/salt/prod
  qa:
    - /srv/salt/qa
    - /srv/salt/prod
  dev:
    - /srv/salt/dev
    - /srv/salt/qa
    - /srv/salt/prod

Similarly, when using the git backend, if both repositories defined below have a hotfix23 branch/tag, and both of them also contain the file bar.txt in the root of the repository at that branch/tag, then salt://bar.txt in the hotfix23 environment would be served from the first repository.

gitfs_remotes:
  - https://mydomain.tld/repos/first.git
  - https://mydomain.tld/repos/second.git

Note

Environments map differently based on the fileserver backend. For instance, the mappings are explicitly defined in the roots backend, while in the VCS backends (git, hg, svn) the environments are created from branches/tags/bookmarks/etc. For the minion backend, the files are all in a single environment, which is specified by the minionfs_env option.

See the documentation for each backend for a more detailed explanation of how environments are mapped.

Dynamic Module Distribution

New in version 0.9.5.

Salt Python modules can be distributed automatically via the Salt file server. Under the root of any environment defined via the file_roots option on the master server, directories corresponding to the type of module can be used.

The directories are prepended with an underscore:

  1. _modules
  2. _grains
  3. _renderers
  4. _returners
  5. _states

The contents of these directories need to be synced over to the minions after Python modules have been created in them. There are a number of ways to sync the modules.

Sync Via States

The minion configuration contains an option autoload_dynamic_modules which defaults to True. This option makes the state system refresh all dynamic modules when states are run. To disable this behavior set autoload_dynamic_modules to False in the minion config.

When dynamic modules are autoloaded via states, modules only pertinent to the environments matched in the master's top file are downloaded.

This is important to remember: although modules can be manually synced from any specific environment, only the modules from the matched environments will be loaded automatically when a state run is executed.

Sync Via the saltutil Module

The saltutil module has a number of functions that can be used to sync all or specific dynamic modules. The saltutil module function saltutil.sync_all will sync all module types over to a minion. For more information see: salt.modules.saltutil
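
For instance, all module types can be synced to every minion from the master using the Python client API (equivalent to running saltutil.sync_all from the salt CLI):

import salt.client

client = salt.client.LocalClient()
# equivalent to: salt '*' saltutil.sync_all
ret = client.cmd('*', 'saltutil.sync_all')
print(ret)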

File Server Configuration

The Salt file server is a high performance file server built on ZeroMQ. It manages large files quickly and with little overhead, and has been optimized to handle small files in an extremely efficient manner.

The Salt file server is an environment aware file server. This means that files can be allocated within many root directories and accessed by specifying both the file path and the environment to search. The individual environments can span across multiple directory roots to create overlays and to allow for files to be organized in many flexible ways.

Environments

The Salt file server defaults to the mandatory base environment. This environment MUST be defined and is used to download files when no environment is specified.

Environments allow for files and sls data to be logically separated, but environments are not isolated from each other. This allows for logical isolation of environments by the engineer using Salt, but also allows for information to be used in multiple environments.

Directory Overlay

The environment setting is a list of directories to publish files from. These directories are searched in order to find the specified file and the first file found is returned.

This means that directory data is prioritized based on the order in which they are listed. In the case of this file_roots configuration:

file_roots:
  base:
    - /srv/salt/base
    - /srv/salt/failover

If a file's URI is salt://httpd/httpd.conf, it will first search for the file at /srv/salt/base/httpd/httpd.conf. If the file is found there it will be returned. If the file is not found there, then /srv/salt/failover/httpd/httpd.conf will be used for the source.

This allows for directories to be overlaid and prioritized based on the order they are defined in the configuration.
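
The lookup can be pictured with this simplified Python sketch (not Salt's actual fileserver code):

import os

file_roots = {
    'base': ['/srv/salt/base', '/srv/salt/failover'],
}

def find_file(path, saltenv='base'):
    # Return the first match for a salt:// path, honoring root order.
    for root in file_roots.get(saltenv, []):
        full = os.path.join(root, path)
        if os.path.isfile(full):
            return full
    return None

print(find_file('httpd/httpd.conf'))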

It is also possible to have file_roots which supports multiple environments:

file_roots:
  base:
    - /srv/salt/base
  dev:
    - /srv/salt/dev
    - /srv/salt/base
  prod:
    - /srv/salt/prod
    - /srv/salt/base

This example ensures that each environment will check the associated environment directory for files first. If a file is not found in the appropriate directory, the system will default to using the base directory.

Local File Server

New in version 0.9.8.

The file server can be rerouted to run from the minion. This is primarily to enable running Salt states without a Salt master. To use the local file server interface, copy the file server data to the minion and set the file_roots option on the minion to point to the directories copied from the master. Once the minion file_roots option has been set, change the file_client option to local to make sure that the local file server interface is used.

The cp Module

The cp module is the home of minion side file server operations. The cp module is used by the Salt state system, salt-cp, and can be used to distribute files presented by the Salt file server.

Environments

Since the file server is made to work with the Salt state system, it supports environments. The environments are defined in the master config file and when referencing an environment the file specified will be based on the root directory of the environment.

get_file

The cp.get_file function can be used on the minion to download a file from the master, the syntax looks like this:

# salt '*' cp.get_file salt://vimrc /etc/vimrc

This will instruct all Salt minions to download the vimrc file and copy it to /etc/vimrc

Template rendering can be enabled on both the source and destination file names like so:

# salt '*' cp.get_file "salt://{{grains.os}}/vimrc" /etc/vimrc template=jinja

This example would instruct all Salt minions to download the vimrc from a directory with the same name as their OS grain and copy it to /etc/vimrc

For larger files, the cp.get_file module also supports gzip compression. Because gzip is CPU-intensive, this should only be used in scenarios where the compression ratio is very high (e.g. pretty-printed JSON or YAML files).

To use compression, use the gzip named argument. Valid values are integers from 1 to 9, where 1 is the lightest compression and 9 the heaviest. In other words, 1 uses the least CPU on the master (and minion), while 9 uses the most.

# salt '*' cp.get_file salt://vimrc /etc/vimrc gzip=5

Finally, note that by default cp.get_file does not create new destination directories if they do not exist. To change this, use the makedirs argument:

# salt '*' cp.get_file salt://vimrc /etc/vim/vimrc makedirs=True

In this example, /etc/vim/ would be created if it didn't already exist.

get_dir

The cp.get_dir function can be used on the minion to download an entire directory from the master. The syntax is very similar to get_file:

# salt '*' cp.get_dir salt://etc/apache2 /etc

cp.get_dir supports template rendering and gzip compression arguments just like get_file:

# salt '*' cp.get_dir salt://etc/{{pillar.webserver}} /etc gzip=5 template=jinja

File Server Client API

A client API is available which allows for modules and applications to be written which make use of the Salt file server.

The file server uses the same authentication and encryption used by the rest of the Salt system for network communication.

FileClient Class

The FileClient class is used to set up the communication from the minion to the master. When creating a FileClient object the minion configuration needs to be passed in. When using the FileClient from within a minion module the built in __opts__ data can be passed:

import salt.minion

def get_file(path, dest, env='base'):
    '''
    Used to get a single file from the Salt master

    CLI Example:
    salt '*' cp.get_file salt://vimrc /etc/vimrc
    '''
    # Create the FileClient object
    client = salt.minion.FileClient(__opts__)
    # Call get_file
    return client.get_file(path, dest, False, env)

When using the FileClient class outside of a minion module, where the __opts__ data is not available, the configuration needs to be generated:

import salt.minion
import salt.config

def get_file(path, dest, env='base'):
    '''
    Used to get a single file from the Salt master
    '''
    # Get the configuration data
    opts = salt.config.minion_config('/etc/salt/minion')
    # Create the FileClient object
    client = salt.minion.FileClient(opts)
    # Call get_file
    return client.get_file(path, dest, False, env)

Full list of builtin fileserver modules

azurefs The backend for serving files from the Azure blob storage service.
gitfs Git Fileserver Backend
hgfs Mercurial Fileserver Backend
minionfs Fileserver backend which serves files pushed to the Master
roots The default file server backend
s3fs Amazon S3 Fileserver Backend
svnfs Subversion Fileserver Backend

Salt code and internals

Reference documentation on Salt's internal code.

Contents

salt.aggregation
salt.utils.aggregation

This library makes it possible to introspect a dataset and aggregate nodes when instructed to do so.

Note

The following examples will be expressed in YAML for convenience's sake:

  • !aggr-scalar will refer to the Scalar python function
  • !aggr-map will refer to the Map python object
  • !aggr-seq will refer to the Sequence python object
How to instruct merging

This yaml document has duplicate keys:

foo: !aggr-scalar first
foo: !aggr-scalar second
bar: !aggr-map {first: foo}
bar: !aggr-map {second: bar}
baz: !aggr-scalar 42

but the tagged values instruct Salt that the overlapping values can be merged together:

foo: !aggr-seq [first, second]
bar: !aggr-map {first: foo, second: bar}
baz: !aggr-seq [42]
The default merge strategy is to keep values untouched

For example, this yaml document still has duplicate keys, but does not instruct aggregation:

foo: first
foo: second
bar: {first: foo}
bar: {second: bar}
baz: 42

So the values found last prevail:

foo: second
bar: {second: bar}
baz: 42
Limitations

Aggregation is permitted between tagged objects that share the same type. If not, the default merge strategy prevails.

For example, these pairs of documents:

foo: {first: value}
foo: !aggr-map {second: value}

bar: !aggr-map {first: value}
bar: 42

baz: !aggr-seq [42]
baz: [fail]

qux: 42
qux: !aggr-scalar fail

are interpreted like this:

foo: !aggr-map{second: value}

bar: 42

baz: [fail]

qux: !aggr-seq [fail]
Introspection

TODO: write this part

salt.utils.aggregation.aggregate(obj_a, obj_b, level=False, map_class=<class 'salt.utils.aggregation.Map'>, sequence_class=<class 'salt.utils.aggregation.Sequence'>)

Merge obj_b into obj_a.

>>> aggregate('first', 'second', True) == ['first', 'second']
True
class salt.utils.aggregation.Aggregate

Aggregation base.

class salt.utils.aggregation.Map(*args, **kwds)

Map aggregation.

salt.utils.aggregation.Scalar(obj)

Shortcut for Sequence creation

>>> Scalar('foo') == Sequence(['foo'])
True
class salt.utils.aggregation.Sequence

Sequence aggregation.
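
A usage sketch inferred from the signatures and doctests above (the exact output shape is not guaranteed):

from salt.utils.aggregation import aggregate, Map, Scalar

# Two scalars aggregate into a sequence (the doctest above):
print(aggregate('first', 'second', True))  # ['first', 'second']

# Tagged maps merge key by key, with shared scalar keys aggregated,
# per the YAML examples earlier in this section:
first = Map(foo=Scalar('one'))
second = Map(foo=Scalar('two'), bar=Scalar('x'))
print(aggregate(first, second, True))
# expected along the lines of: {'foo': ['one', 'two'], 'bar': ['x']}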

Exceptions

Salt-specific exceptions should be thrown as often as possible so that the various interfaces to Salt (CLI, API, etc.) can handle errors and display error messages appropriately.

salt.exceptions This module is a central location for all salt exceptions
Salt opts dictionary

It is very common in the Salt codebase to see opts referred to in a number of contexts.

For example, it can be seen as __opts__ in certain cases, or simply as opts as an argument to a function in others.

Simply put, this data structure is a dictionary of Salt's runtime configuration information that's passed around in order for functions to know how Salt is configured.

When writing Python code to use specific parts of Salt, it may become necessary to initialize a copy of opts from scratch in order to have it available for a given function.

To do so, use the utility functions available in salt.config.

As an example, here is how one might generate and print an options dictionary for a minion instance:

import salt.config
opts = salt.config.minion_config('/etc/salt/minion')
print(opts)

To generate and display opts for a master, the process is similar:

import salt.config
opts = salt.config.master_config('/etc/salt/master')
print(opts)
salt.exceptions

This module is a central location for all salt exceptions

exception salt.exceptions.AuthenticationError(message='')

If sha256 signature fails during decryption

exception salt.exceptions.AuthorizationError(message='')

Thrown when runner or wheel execution fails due to permissions

exception salt.exceptions.CommandExecutionError(message='')

Used when a module runs a command which returns an error and wants to show the user the output gracefully instead of dying

exception salt.exceptions.CommandNotFoundError(message='')

Used in modules or grains when a required binary is not available

exception salt.exceptions.EauthAuthenticationError(message='')

Thrown when eauth authentication fails

exception salt.exceptions.FileserverConfigError(message='')

Used when invalid fileserver settings are detected

exception salt.exceptions.LoaderError(message='')

Problems loading the right renderer

exception salt.exceptions.MasterExit

Raised when the master exits

exception salt.exceptions.MinionError(message='')

Minion problems reading uris such as salt:// or http://

exception salt.exceptions.PkgParseError(message='')

Used when one of the pkg modules cannot correctly parse the output from the CLI tool (pacman, yum, apt, aptitude, etc.)

exception salt.exceptions.PublishError(message='')

Problems encountered when trying to publish a command

exception salt.exceptions.SaltClientError(message='')

Problem reading the master root key

exception salt.exceptions.SaltClientTimeout(msg, jid=None, *args, **kwargs)

Thrown when a job sent through one of the Client interfaces times out

Takes the jid as a parameter

exception salt.exceptions.SaltCloudConfigError(message='')

Raised when a configuration setting is not found and should exist.

exception salt.exceptions.SaltCloudException(message='')

Generic Salt Cloud Exception

exception salt.exceptions.SaltCloudExecutionFailure(message='')

Raised when too many failures have occurred while querying/waiting for data.

exception salt.exceptions.SaltCloudExecutionTimeout(message='')

Raised when too much time has passed while querying/waiting for data.

exception salt.exceptions.SaltCloudNotFound(message='')

Raised when some cloud provider function cannot find what's being searched.

exception salt.exceptions.SaltCloudPasswordError(message='')

Raised when virtual terminal password input fails

exception salt.exceptions.SaltCloudSystemExit(message, exit_code=1)

This exception is raised when the execution should be stopped.

exception salt.exceptions.SaltDaemonNotRunning(message='')

Thrown when a master/minion/syndic needs to be running to perform the requested operation (e.g., eauth) but is not.

exception salt.exceptions.SaltException(message='')

Base exception class; all Salt-specific exceptions should subclass this

pack()

Pack this exception into a serializable dictionary that is safe for transport via msgpack
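
A brief usage sketch (the exact keys of the packed dictionary are not guaranteed here):

from salt.exceptions import CommandExecutionError

try:
    raise CommandExecutionError('unable to run command')
except CommandExecutionError as exc:
    packed = exc.pack()  # a plain dict, safe to serialize with msgpack
    print(packed)        # e.g. {'message': 'unable to run command', ...}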

exception salt.exceptions.SaltInvocationError(message='')

Used when the wrong number of arguments are sent to modules or invalid arguments are specified on the command line

exception salt.exceptions.SaltMasterError(message='')

Problem reading the master root key

exception salt.exceptions.SaltNoMinionsFound(message='')

An attempt to retrieve a list of minions failed

exception salt.exceptions.SaltRenderError(message, line_num=None, buf='', marker=' <======================', trace=None)

Used when a renderer needs to raise an explicit error. If a line number and buffer string are passed, get_context will be invoked to get the location of the error.

exception salt.exceptions.SaltReqTimeoutError(message='')

Thrown when a salt master request call fails to return within the timeout

exception salt.exceptions.SaltRunnerError(message='')

Problem in runner

exception salt.exceptions.SaltSyndicMasterError(message='')

Problem while proxying a request in the syndication master

exception salt.exceptions.SaltSystemExit(code=0, msg=None)

This exception is raised when an unsolvable problem is found. There's nothing else to do, salt should just exit.

exception salt.exceptions.SaltWheelError(message='')

Problem in wheel

exception salt.exceptions.TimedProcTimeoutError(message='')

Thrown when a timed subprocess does not terminate within the timeout, or if the specified timeout is not an int or a float

exception salt.exceptions.TokenAuthenticationError(message='')

Thrown when token authentication fails

Full list of builtin execution modules

aliases Manage the information in the aliases file
alternatives Support for Alternatives system
apache Support for Apache
aptpkg Support for APT (Advanced Packaging Tool)
archive A module to wrap (non-Windows) archive calls
artifactory Module for fetching artifacts from Artifactory
at Wrapper module for at(1)
augeas_cfg Manages configuration files via augeas
aws_sqs Support for the Amazon Simple Queue Service.
blockdev Module for managing block devices
bluez Support for Bluetooth (using BlueZ in Linux).
boto_asg Connection module for Amazon Autoscale Groups
boto_cfn Connection module for Amazon Cloud Formation
boto_cloudwatch Connection module for Amazon CloudWatch
boto_dynamodb Connection module for Amazon DynamoDB
boto_ec2 Connection module for Amazon EC2
boto_elasticache Connection module for Amazon Elasticache
boto_elb Connection module for Amazon ELB
boto_iam Connection module for Amazon IAM
boto_kms Connection module for Amazon KMS
boto_rds Connection module for Amazon RDS
boto_route53 Connection module for Amazon Route53
boto_secgroup Connection module for Amazon Security Groups
boto_sns Connection module for Amazon SNS
boto_sqs Connection module for Amazon SQS
boto_vpc Connection module for Amazon VPC
bower Manage and query Bower packages
brew Homebrew for Mac OS X
bridge Module for gathering and managing bridging information
bsd_shadow Manage the password database on BSD systems
btrfs Module for managing BTRFS file systems.
cabal Manage and query Cabal packages
cassandra Cassandra NoSQL Database Module
cassandra_cql Cassandra Database Module
chef Execute chef in server or solo mode
chocolatey A dead simple module wrapping calls to the Chocolatey package manager
cloud Salt-specific interface for calling Salt Cloud directly
cmdmod A module for shelling out.
composer Use composer to install PHP dependencies for a directory
config Return config information
container_resource Common resources for LXC and systemd-nspawn containers
cp Minion side functions for salt-cp
cpan Manage Perl modules using CPAN
cron Work with cron
cyg Manage cygwin packages.
daemontools daemontools service module. This module will create daemontools type
darwin_pkgutil Installer support for OS X.
darwin_sysctl Module for viewing and modifying sysctl parameters
data Manage a local persistent data structure that can hold any arbitrary data
ddns Support for RFC 2136 dynamic DNS updates.
deb_apache Support for Apache
deb_postgres Module to provide Postgres compatibility to salt for debian family specific tools.
debconfmod Support for Debconf
debian_ip The networking module for Debian based distros
debian_service Service support for Debian systems (uses update-rc.d and /sbin/service)
defaults
devmap Device-Mapper module
dig Compendium of generic DNS utilities
disk Module for gathering disk information
djangomod Manage Django sites
dnsmasq Module for managing dnsmasq
dnsutil Compendium of generic DNS utilities
dockerio Management of Docker Containers
dockerng Management of Docker Containers
dpkg Support for DEB packages
drac Manage Dell DRAC
drbd DRBD administration module
ebuild Support for Portage
eix Support for Eix
elasticsearch Connection module for Elasticsearch
environ Support for getting and setting the environment variables of the current salt process.
eselect Support for eselect, Gentoo's configuration and management tool.
etcd_mod Execution module to work with etcd
event Use the Salt Event System to fire events from the master to the minion and vice-versa.
extfs Module for managing ext2/3/4 file systems
file Manage information about regular files, directories, and special files on the minion, set/read user, group, mode, and data
firewalld Support for firewalld.
freebsd_sysctl Module for viewing and modifying sysctl parameters
freebsdjail The jail module for FreeBSD
freebsdkmod Module to manage FreeBSD kernel modules
freebsdpkg Remote package support using pkg_add(1)
freebsdports Install software from the FreeBSD ports(7) system
freebsdservice The service module for FreeBSD
gem Manage ruby gems.
genesis Module for managing container and VM images
gentoo_service Top level package command wrapper, used to translate the os detected by grains to the correct service manager
gentoolkitmod Support for Gentoolkit
git Support for the Git SCM
glance Module for handling openstack glance calls.
glusterfs Manage a glusterfs pool
gnomedesktop GNOME implementations
gpg Manage GPG keychains: add keys, create keys, retrieve keys from keyservers
grains Return/control aspects of the grains data
groupadd Manage groups on Linux, OpenBSD and NetBSD
grub_legacy Support for GRUB Legacy
guestfs Interact with virtual machine images via libguestfs
hadoop Support for hadoop
haproxyconn Support for haproxy
hashutil A collection of hashing and encoding functions
hg Support for the Mercurial SCM
hipchat Module for sending messages to hipchat.
hosts Manage the information in the hosts file
htpasswd Support for htpasswd command
http Module for making various web calls.
ilo Manage HP ILO
img Virtual machine image management tools
incron Work with incron
influx InfluxDB - A distributed time series database
ini_manage Edit ini files
introspect Functions to perform introspection on a minion, and return data in a format usable by Salt States
ipmi Support IPMI commands over LAN.
ipset Support for ipset
iptables Support for iptables
jboss7 Module for managing JBoss AS 7 through the CLI interface.
jboss7_cli Module for low-level interaction with JbossAS7 through CLI.
junos Module for interfacing to Junos devices
kerberos Manage Kerberos KDC
key Functions to view the minion's public key information
keyboard Module for managing keyboards on supported POSIX-like systems using systemd, such as Red Hat, Debian, and Gentoo
keystone Module for handling openstack keystone calls.
kmod Module to manage Linux kernel modules
launchctl Module for the management of MacOS systems that use launchd/launchctl
layman Support for Layman
ldapmod Salt interface to LDAP commands
linux_acl Support for Linux File Access Control Lists
linux_lvm Support for Linux LVM2
linux_sysctl Module for viewing and modifying sysctl parameters
localemod Module for managing locales on POSIX-like systems.
locate Module for using the locate utilities
logadm Module for managing Solaris logadm based log rotations.
logrotate Module for managing logrotate.
lvs Support for LVS (Linux Virtual Server)
lxc Control Linux Containers via Salt
mac_group Manage groups on Mac OS 10.7+
mac_user Manage users on Mac OS 10.7+
macports Support for MacPorts under Mac OSX.
makeconf Support for modifying make.conf under Gentoo
match The match module allows for match routines to be run and determine target specs
mdadm Salt module to manage RAID arrays with mdadm
memcached Module for Management of Memcached Keys
mine The function cache system allows for data to be stored on the master so it can be easily read by other minions
mod_random (new in version 2014.7.0)
modjk Control Modjk via the Apache Tomcat "Status" worker
mongodb Module to provide MongoDB functionality to Salt
monit Monit service module.
moosefs Module for gathering and managing information about MooseFS
mount Salt module to manage unix mounts and the fstab file
mssql Module to provide MS SQL Server compatibility to salt.
munin Run munin plugins/checks from salt and format the output as data.
mysql Module to provide MySQL compatibility to salt.
nacl (requires libnacl)
nagios Run nagios plugins/checks from salt and get the return as data.
nagios_rpc Check Host & Service status from Nagios via JSON RPC.
netbsd_sysctl Module for viewing and modifying sysctl parameters
netbsdservice The service module for NetBSD
netscaler
network Module for gathering and managing network information
neutron Module for handling OpenStack Neutron calls
nfs3 Module for managing NFS version 3.
nftables Support for nftables
nginx Support for nginx
nova Module for handling OpenStack Nova calls
npm Manage and query NPM packages.
nspawn Manage nspawn containers
omapi This module interacts with an ISC DHCP Server via OMAPI.
openbsd_sysctl Module for viewing and modifying OpenBSD sysctl parameters
openbsdpkg Package support for OpenBSD
openbsdrcctl The rcctl service module for OpenBSD
openbsdservice The service module for OpenBSD
openstack_config Modify, retrieve, or delete values from OpenStack configuration files.
oracle Oracle DataBase connection module
osquery Support for OSQuery - https://osquery.io
osxdesktop Mac OS X implementations of various commands in the "desktop" interface
pacman A module to wrap pacman calls, since Arch is the best
pagerduty Module for Firing Events via PagerDuty
pam Support for pam
parted Module for managing partitions on POSIX-like systems.
pecl Manage PHP pecl extensions.
pillar Extract the pillar data for this minion
pip Install Python packages with pip to either the system or a virtualenv
pkg_resource Resources needed by pkg providers
pkgin Package support for pkgin based systems, inspired from freebsdpkg module
pkgng Support for pkgng, the new package manager for FreeBSD
pkgutil Pkgutil support for Solaris
portage_config Configure portage(5)
postfix Support for Postfix
postgres Module to provide Postgres compatibility to salt.
poudriere Support for poudriere
powerpath powerpath support.
ps A salt interface to psutil, a system and process library
publish Publish a command from a minion to a target
puppet Execute puppet routines
pushover_notify Module for sending messages to Pushover (https://www.pushover.net)
pw_group Manage groups on FreeBSD
pw_user Manage users with the useradd command
pyenv Manage python installations with pyenv.
qemu_img Qemu-img Command Wrapper
qemu_nbd Qemu Command Wrapper
quota Module for managing quotas on POSIX-like systems.
rabbitmq Module to provide RabbitMQ compatibility to Salt.
raet_publish Publish a command from a minion to a target
random_org Module for retrieving random information from Random.org
rbenv Manage ruby installations with rbenv.
rdp Manage RDP Service on Windows servers
redismod Module to provide redis functionality to Salt
reg Manage the registry on Windows
rest_package Service support for the REST example
rest_sample Module for interfacing to the REST example
rest_service Service support for the REST example
ret Module to integrate with the returner system and retrieve data sent to a salt returner
rh_ip The networking module for RHEL/Fedora based distros
rh_service Service support for RHEL-based systems, including support for both upstart and sysvinit
riak Riak Salt Module
rpm Support for rpm
rsync Wrapper for rsync
runit runit service module
rvm Manage ruby installations and gemsets with RVM, the Ruby Version Manager.
s3 Connection module for Amazon S3
saltcloudmod Control a salt cloud system
saltutil The Saltutil module is used to manage the state of the salt minion itself.
schedule Module for managing the Salt schedule on a minion
scsi SCSI administration module
sdb Module for Manipulating Data via the Salt DB API
seed Virtual machine image management tools
selinux Execute calls on selinux
sensors Read lm-sensors
serverdensity_device Wrapper around Server Density API
service The default service module; if not otherwise specified salt will fall back to this module
shadow Manage the shadow file
slack_notify Module for sending messages to Slack
smartos_imgadm Module for running imgadm command on SmartOS
smartos_vmadm Module for managing VMs on SmartOS
smbios Interface to SMBIOS/DMI
smf Service support for Solaris 10 and 11, should work with other systems that use SMF also.
smtp Module for Sending Messages via SMTP
softwareupdate Support for the softwareupdate command on MacOS.
solaris_group Manage groups on Solaris
solaris_shadow Manage the password database on Solaris systems
solaris_user Manage users with the useradd command
solarisips IPS pkg support for Solaris
solarispkg Package support for Solaris
solr Apache Solr Salt Module
splunk_search Module for interop with the Splunk API
sqlite3 Support for SQLite3
ssh Manage client ssh components
state Control the state system on the minion
status Module for returning various status data about a minion.
sudo Allow for the calling of execution modules via sudo
supervisord Provide the service module for system supervisord or supervisord in a virtualenv
svn Subversion SCM
swift Module for handling OpenStack Swift calls
sysbench The 'sysbench' module is used to analyze the performance of the minions, right from the master! It measures various system parameters such as CPU, Memory, File I/O, Threads and Mutex.
syslog_ng Module for getting information about syslog-ng
sysmod The sys module provides information about the available functions on the minion
sysrc sysrc module for FreeBSD
system Support for reboot, shutdown, etc
systemd Provide the service module for systemd
test Module for running arbitrary tests
test_virtual Module for running arbitrary tests with a __virtual__ function
timezone Module for managing timezone on POSIX-like systems.
tls A salt module for SSL/TLS.
tomcat Support for Tomcat
tuned (maintainer: Syed Ali <alicsyed@gmail.com>)
twilio_notify Module for notifications via Twilio
upstart Module for the management of upstart systems.
uptime Wrapper around uptime API
useradd Manage users with the useradd command
uwsgi uWSGI stats server http://uwsgi-docs.readthedocs.org/en/latest/StatsServer.html
varnish Support for Varnish
vbox_guest VirtualBox Guest Additions installer
virt Work with virtual machines managed by libvirt
virtualenv_mod Create virtualenv environments
win_autoruns Module for listing programs that automatically run on startup
win_dacl Manage DACLs on Windows
win_disk Module for gathering disk information on Windows
win_dns_client Module for configuring DNS Client on Windows systems
win_file Manage information about files on the minion, set/read user, group, and mode
win_firewall Module for configuring Windows Firewall
win_groupadd Manage groups on Windows
win_ip The networking module for Windows based systems
win_network Module for gathering and managing network information
win_ntp Management of NTP servers on Windows
win_path Manage the Windows System PATH
win_pkg A module to manage software on Windows
win_repo Module to manage Windows software repo on a Standalone Minion
win_servermanager Manage Windows features via the ServerManager powershell module
win_service Windows Service module.
win_shadow Manage the shadow file
win_status Module for returning various status data about a minion.
win_system Support for reboot, shutdown, etc
win_timezone Module for managing timezone on Windows systems.
win_update Module for running windows updates.
win_useradd Manage Windows users with the net user command
x509 Manage X509 certificates
xapi This module (mostly) uses the XenAPI to manage Xen virtual machines.
xfs Module for managing XFS file systems.
xmpp Module for Sending Messages via XMPP (a.k.a. Jabber)
yumpkg Support for YUM
zcbuildout Management of zc.buildout
zfs Salt interface to ZFS commands
zk_concurrency Concurrency controls in zookeeper
znc znc - An advanced IRC bouncer
zpool Module for running ZFS zpool command
zypper Package support for openSUSE via the zypper package manager
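
Each module above is invoked as module.function, either from the salt command line or from Python. A minimal sketch using the Caller client, assuming a configured salt-minion on the local machine:

import salt.client

# Caller runs execution modules directly on the local minion
caller = salt.client.Caller()
print(caller.cmd('test.ping'))          # -> True
print(caller.cmd('grains.item', 'os'))  # e.g. {'os': 'Ubuntu'}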

Full list of netapi modules

rest_cherrypy

A REST API for Salt

New in version 2014.7.0.

depends:
  • CherryPy Python module. Versions 3.2.{2,3,4} are strongly recommended due to a known SSL error introduced in version 3.2.5. The issue was reportedly resolved with CherryPy milestone 3.3, but the patch was committed for version 3.6.1.

  • salt-api package
optdepends:
  • ws4py Python module for websockets support.
client_libraries:
  • Java: https://github.com/SUSE/saltstack-netapi-client-java
  • Python: https://github.com/saltstack/pepper
configuration:

All authentication is done through Salt's external auth system which requires additional configuration not described here.

Example production-ready configuration; add to the Salt master config file and restart the salt-master and salt-api daemons:

rest_cherrypy:
  port: 8000
  ssl_crt: /etc/pki/tls/certs/localhost.crt
  ssl_key: /etc/pki/tls/certs/localhost.key

Using only a secure HTTPS connection is strongly recommended since Salt authentication credentials will be sent over the wire.

A self-signed certificate can be generated using the create_self_signed_cert() execution function; running this function requires pyOpenSSL. The salt-call script is available in the salt-minion package.

salt-call --local tls.create_self_signed_cert

All available configuration options are detailed below. These settings configure the CherryPy HTTP server and do not apply when using an external server such as Apache or Nginx.

port

Required

The port for the webserver to listen on.

host : 0.0.0.0

The socket interface for the HTTP server to listen on.

debug : False

Starts the web server in development mode. It will reload itself when the underlying code is changed and will output more debugging info.

ssl_crt

The path to a SSL certificate. (See below)

ssl_key

The path to the private key for your SSL certificate. (See below)

disable_ssl

A flag to disable SSL. Warning: your Salt authentication credentials will be sent in the clear!

webhook_disable_auth : False

The Webhook URL requires authentication by default but external services cannot always be configured to send authentication. See the Webhook documentation for suggestions on securing this interface.

webhook_url : /hook

Configure the URL endpoint for the Webhook entry point.

thread_pool : 100

The number of worker threads to start up in the pool.

socket_queue_size : 30

Specify the maximum number of HTTP connections to queue.

expire_responses : True

Whether to check for and kill HTTP responses that have exceeded the default timeout.

max_request_body_size : 1048576

Maximum size for the HTTP request body.

collect_stats : False

Collect and report statistics about the CherryPy server

Reports are available via the Stats URL.

static

A filesystem path to static HTML/JavaScript/CSS/image assets.

static_path : /static

The URL prefix to use when serving static assets out of the directory specified in the static setting.

app

A filesystem path to an HTML file that will be served as a static file. This is useful for bootstrapping a single-page JavaScript app.

app_path : /app

The URL prefix to use for serving the HTML file specified in the app setting. This should be a simple name containing no slashes.

Any path information after the specified path is ignored; this is useful for apps that utilize the HTML5 history API.

root_prefix : /

A URL path to the main entry point for the application. This is useful for serving multiple applications from the same URL.

Authentication

Authentication is performed by passing a session token with each request. Tokens are generated via the Login URL.

The token may be sent in one of two ways:

  • Include a custom header named X-Auth-Token.

    For example, using curl:

    curl -sSk https://localhost:8000/login \
        -H 'Accept: application/x-yaml' \
        -d username=saltdev \
        -d password=saltdev \
        -d eauth=auto
    

    Copy the token value from the output and include it in subsequent requests:

    curl -sSk https://localhost:8000 \
        -H 'Accept: application/x-yaml' \
        -H 'X-Auth-Token: 697adbdc8fe971d09ae4c2a3add7248859c87079' \
        -d client=local \
        -d tgt='*' \
        -d fun=test.ping
    
  • Sent via a cookie. This option is a convenience for HTTP clients that automatically handle cookie support (such as browsers).

    For example, using curl:

    # Write the cookie file:
    curl -sSk https://localhost:8000/login \
        -c ~/cookies.txt \
        -H 'Accept: application/x-yaml' \
        -d username=saltdev \
        -d password=saltdev \
        -d eauth=auto
    
    # Read the cookie file:
    curl -sSk https://localhost:8000 \
        -b ~/cookies.txt \
        -H 'Accept: application/x-yaml' \
        -d client=local \
        -d tgt='*' \
        -d fun=test.ping
    

See also

You can bypass the session handling via the Run URL.

Usage

Commands are sent to a running Salt master via this module by sending HTTP requests to the URLs detailed below.

Content negotiation

This REST interface is flexible in what data formats it will accept as well as what formats it will return (e.g., JSON, YAML, x-www-form-urlencoded).

  • Specify the format of data in the request body by including the Content-Type header.
  • Specify the desired data format for the response body with the Accept header.

Data sent in POST and PUT requests must be in the format of a list of lowstate dictionaries. This allows multiple commands to be executed in a single HTTP request. The order of commands in the request corresponds to the return for each command in the response.

Lowstate, broadly, is a dictionary of values that are mapped to a function call. This pattern is used pervasively throughout Salt. The functions called from netapi modules are described in Client Interfaces.

The following example (in JSON format) causes Salt to execute two commands, a command sent to minions as well as a runner function on the master:

[{
    "client": "local",
    "tgt": "*",
    "fun": "test.fib",
    "arg": ["10"]
},
{
    "client": "runner",
    "fun": "jobs.lookup_jid",
    "jid": "20130603122505459265"
}]
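
The same pair of commands can be sent from Python. A minimal sketch using the third-party requests library (an assumption, not a Salt dependency), reusing the session token shown in the Authentication section above:

import requests

lowstate = [
    {'client': 'local', 'tgt': '*', 'fun': 'test.fib', 'arg': ['10']},
    {'client': 'runner', 'fun': 'jobs.lookup_jid',
     'jid': '20130603122505459265'},
]
resp = requests.post(
    'https://localhost:8000',
    json=lowstate,  # serialized as JSON with Content-Type: application/json
    headers={'X-Auth-Token': '697adbdc8fe971d09ae4c2a3add7248859c87079',
             'Accept': 'application/json'},
    verify=False,  # only for self-signed test certificates
)
print(resp.json())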

x-www-form-urlencoded

Sending JSON or YAML in the request body is simple and the most flexible; however, sending data in urlencoded format is also supported, with the caveats below. It is the default format for HTML forms, many JavaScript libraries, and the curl command.

For example, the equivalent to running salt '*' test.ping is sending fun=test.ping&arg&client=local&tgt=* in the HTTP request body.

Caveats:

  • Only a single command may be sent per HTTP request.

  • Repeating the arg parameter multiple times will cause those parameters to be combined into a single list.

    Note, some popular frameworks and languages (notably jQuery, PHP, and Ruby on Rails) will automatically append empty brackets onto repeated parameters. E.g., arg=one, arg=two will be sent as arg[]=one, arg[]=two. This is not supported; send JSON or YAML instead.

Deployment

The rest_cherrypy netapi module is a standard Python WSGI app. It can be deployed one of two ways.

salt-api using the CherryPy server

The default configuration is to run this module using salt-api to start the Python-based CherryPy server. This server is lightweight, multi-threaded, encrypted with SSL, and should be considered production-ready.

Using a WSGI-compliant web server

This module may be deployed on any WSGI-compliant server such as Apache with mod_wsgi or Nginx with FastCGI, to name just two (there are many).

Note, external WSGI servers handle URLs, paths, and SSL certs directly. The rest_cherrypy configuration options are ignored and the salt-api daemon does not need to be running at all. Remember Salt authentication credentials are sent in the clear unless SSL is being enforced!

An example Apache virtual host configuration:

<VirtualHost *:80>
    ServerName example.com
    ServerAlias *.example.com

    ServerAdmin webmaster@example.com

    LogLevel warn
    ErrorLog /var/www/example.com/logs/error.log
    CustomLog /var/www/example.com/logs/access.log combined

    DocumentRoot /var/www/example.com/htdocs

    WSGIScriptAlias / /path/to/salt/netapi/rest_cherrypy/wsgi.py
</VirtualHost>
REST URI Reference
/
class salt.netapi.rest_cherrypy.app.LowDataAdapter

The primary entry point to Salt's REST API

GET()

An explanation of the API with links of where to go next

GET /
Request Headers:
 
  • Accept -- the desired response format.
Status Codes:
  • 200 -- success
  • 401 -- authentication required
  • 406 -- requested Content-Type not available

Example request:

curl -i localhost:8000
GET / HTTP/1.1
Host: localhost:8000
Accept: application/json

Example response:

HTTP/1.1 200 OK
Content-Type: application/json
POST(**kwargs)

Send one or more lowstate commands in the request body and execute them; see the Usage and Content negotiation sections above for the lowstate format.

/login
class salt.netapi.rest_cherrypy.app.Login(*args, **kwargs)

Log in to receive a session token

Authentication information.

GET()

Present the login interface

GET /login

An explanation of how to log in.

Status Codes:
  • 200 -- success
  • 401 -- authentication required
  • 406 -- requested Content-Type not available

Example request:

curl -i localhost:8000/login
GET /login HTTP/1.1
Host: localhost:8000
Accept: text/html

Example response:

HTTP/1.1 200 OK
Content-Type: text/html
POST(**kwargs)

Authenticate against Salt's eauth system

POST /login
Request Headers:
 
  • X-Auth-Token -- a session token from Login.
  • Accept -- the desired response format.
  • Content-Type -- the format of the request body.
Form Parameters:
 
  • eauth -- the eauth backend configured for the user
  • username -- username
  • password -- password
Status Codes:
  • 200 -- success
  • 401 -- authentication required
  • 406 -- requested Content-Type not available

Example request:

curl -si localhost:8000/login \
        -H "Accept: application/json" \
        -d username='saltuser' \
        -d password='saltpass' \
        -d eauth='pam'
POST / HTTP/1.1
Host: localhost:8000
Content-Length: 42
Content-Type: application/x-www-form-urlencoded
Accept: application/json

username=saltuser&password=saltpass&eauth=pam

Example response:

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 206
X-Auth-Token: 6d1b722e
Set-Cookie: session_id=6d1b722e; expires=Sat, 17 Nov 2012 03:23:52 GMT; Path=/

{"return": {
    "token": "6d1b722e",
    "start": 1363805943.776223,
    "expire": 1363849143.776224,
    "user": "saltuser",
    "eauth": "pam",
    "perms": [
        "grains.*",
        "status.*",
        "sys.*",
        "test.*"
    ]
}}
/logout
class salt.netapi.rest_cherrypy.app.Logout

Class to remove or invalidate sessions

POST()

Destroy the currently active session and expire the session cookie

/minions
class salt.netapi.rest_cherrypy.app.Minions

Convenience URLs for working with minions

GET(mid=None)

A convenience URL for getting lists of minions or getting minion details

GET /minions/(mid)
Request Headers:
 
  • X-Auth-Token -- a session token from Login.
  • Accept -- the desired response format.
Status Codes:
  • 200 -- success
  • 401 -- authentication required
  • 406 -- requested Content-Type not available

Example request:

curl -i localhost:8000/minions/ms-3
GET /minions/ms-3 HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml

Example response:

HTTP/1.1 200 OK
Content-Length: 129005
Content-Type: application/x-yaml

return:
- ms-3:
    grains.items:
        ...
POST(**kwargs)

Start an execution command and immediately return the job id

POST /minions
Request Headers:
 
  • X-Auth-Token -- a session token from Login.
  • Accept -- the desired response format.
  • Content-Type -- the format of the request body.
Response Headers:
 
  • Content-Type -- the format of the response body; depends on the Accept request header.
Status Codes:
  • 200 -- success
  • 401 -- authentication required
  • 406 -- requested Content-Type not available

lowstate data describing Salt commands must be sent in the request body. The client option will be set to local_async().

Example request:

curl -sSi localhost:8000/minions \
    -H "Accept: application/x-yaml" \
    -d tgt='*' \
    -d fun='status.diskusage'
POST /minions HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml
Content-Length: 26
Content-Type: application/x-www-form-urlencoded

tgt=*&fun=status.diskusage

Example response:

HTTP/1.1 202 Accepted
Content-Length: 86
Content-Type: application/x-yaml

return:
- jid: '20130603122505459265'
  minions: [ms-4, ms-3, ms-2, ms-1, ms-0]
_links:
  jobs:
    - href: /jobs/20130603122505459265
/jobs
class salt.netapi.rest_cherrypy.app.Jobs
GET(jid=None)

A convenience URL for getting lists of previously run jobs or getting the return from a single job

GET /jobs/(jid)

List jobs or show a single job from the job cache.

Status Codes:
  • 200 -- success
  • 401 -- authentication required
  • 406 -- requested Content-Type not available

Example request:

curl -i localhost:8000/jobs
GET /jobs HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml

Example response:

HTTP/1.1 200 OK
Content-Length: 165
Content-Type: application/x-yaml

return:
- '20121130104633606931':
    Arguments:
    - '3'
    Function: test.fib
    Start Time: 2012, Nov 30 10:46:33.606931
    Target: jerry
    Target-type: glob

Example request:

curl -i localhost:8000/jobs/20121130104633606931
GET /jobs/20121130104633606931 HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml

Example response:

HTTP/1.1 200 OK
Content-Length: 73
Content-Type: application/x-yaml

info:
- Arguments:
    - '3'
    Function: test.fib
    Minions:
    - jerry
    Start Time: 2012, Nov 30 10:46:33.606931
    Target: '*'
    Target-type: glob
    User: saltdev
    jid: '20121130104633606931'
return:
- jerry:
    - - 0
    - 1
    - 1
    - 2
    - 6.9141387939453125e-06
/run
class salt.netapi.rest_cherrypy.app.Run

Class to run commands without normal session handling

POST(**kwargs)

Run commands bypassing the normal session handling

POST /run

This entry point is primarily for "one-off" commands. Each request must pass full Salt authentication credentials. Otherwise this URL is identical to the root URL (/).

lowstate data describing Salt commands must be sent in the request body.

Status Codes:
  • 200 -- success
  • 401 -- authentication required
  • 406 -- requested Content-Type not available

Example request:

curl -sS localhost:8000/run \
    -H 'Accept: application/x-yaml' \
    -d client='local' \
    -d tgt='*' \
    -d fun='test.ping' \
    -d username='saltdev' \
    -d password='saltdev' \
    -d eauth='pam'
POST /run HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml
Content-Length: 75
Content-Type: application/x-www-form-urlencoded

client=local&tgt=*&fun=test.ping&username=saltdev&password=saltdev&eauth=pam

Example response:

HTTP/1.1 200 OK
Content-Length: 73
Content-Type: application/x-yaml

return:
- ms-0: true
    ms-1: true
    ms-2: true
    ms-3: true
    ms-4: true

The /run endpoint can also be used to issue commands using the salt-ssh subsystem.

When using salt-ssh, eauth credentials should not be supplied; authentication is instead handled by the SSH layer itself. The salt-ssh client does not require a salt master to be running; only a roster file must be present in the salt configuration directory.

All SSH client requests are synchronous.

Example SSH client request:

curl -sS localhost:8000/run \
    -H 'Accept: application/x-yaml' \
    -d client='ssh' \
    -d tgt='*' \
    -d fun='test.ping'
POST /run HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml
Content-Length: 30
Content-Type: application/x-www-form-urlencoded

client=ssh&tgt=*&fun=test.ping

Example SSH response:

return:
- silver:
  fun: test.ping
  fun_args: []
  id: silver
  jid: '20141203103525666185'
  retcode: 0
  return: true
  success: true
/events
class salt.netapi.rest_cherrypy.app.Events

Expose the Salt event bus

The event bus on the Salt master exposes a large variety of things, notably when executions are started on the master and also when minions ultimately return their results. This URL provides a real-time window into a running Salt infrastructure.

See also

events

GET(token=None, salt_token=None)

An HTTP stream of the Salt master event bus

This stream is formatted per the Server Sent Events (SSE) spec. Each event is formatted as JSON.

GET /events
Status Codes:
  • 200 -- success
  • 401 -- authentication required
  • 406 -- requested Content-Type not available
Query Parameters:
 
  • token -- optional parameter containing the token ordinarily supplied via the X-Auth-Token header in order to allow cross-domain requests in browsers that do not include CORS support in the EventSource API. E.g., curl -NsS localhost:8000/events?token=308650d
  • salt_token -- optional parameter containing a raw Salt eauth token (not to be confused with the token returned from the /login URL). E.g., curl -NsS localhost:8000/events?salt_token=30742765

Example request:

curl -NsS localhost:8000/events
GET /events HTTP/1.1
Host: localhost:8000

Example response:

Note, the tag field is not part of the spec. SSE-compliant clients should ignore unknown fields. This addition allows non-compliant clients to watch for certain tags without having to deserialize the JSON object each time.

HTTP/1.1 200 OK
Connection: keep-alive
Cache-Control: no-cache
Content-Type: text/event-stream;charset=utf-8

retry: 400

tag: salt/job/20130802115730568475/new
data: {'tag': 'salt/job/20130802115730568475/new', 'data': {'minions': ['ms-4', 'ms-3', 'ms-2', 'ms-1', 'ms-0']}}

tag: salt/job/20130802115730568475/ret/jerry
data: {'tag': 'salt/job/20130802115730568475/ret/jerry', 'data': {'jid': '20130802115730568475', 'return': True, 'retcode': 0, 'success': True, 'cmd': '_return', 'fun': 'test.ping', 'id': 'ms-1'}}

The event stream can be easily consumed via JavaScript:

var source = new EventSource('/events');
source.onopen = function() { console.debug('opening') };
source.onerror = function(e) { console.debug('error!', e) };
source.onmessage = function(e) {
    var data = JSON.parse(e.data);
    console.debug('Tag: ', data.tag);
    console.debug('Data: ', data.data);
};

Or using CORS:

var source = new EventSource('/events?token=ecd589e4e01912cf3c4035afad73426dbb8dba75', {withCredentials: true});

It is also possible to consume the stream via the shell.

Records are separated by blank lines; the data: and tag: prefixes will need to be removed manually before attempting to unserialize the JSON.

curl's -N flag turns off input buffering which is required to process the stream incrementally.

Here is a basic example of printing each event as it comes in:

curl -NsS localhost:8000/events |\
        while IFS= read -r line ; do
            echo "$line"
        done

Here is an example of using awk to filter events based on tag:

curl -NsS localhost:8000/events |\
        awk '
            BEGIN { RS=""; FS="\\n" }
            $1 ~ /^tag: salt\/job\/[0-9]+\/new$/ { print $0 }
        '
tag: salt/job/20140112010149808995/new
data: {"tag": "salt/job/20140112010149808995/new", "data": {"tgt_type": "glob", "jid": "20140112010149808995", "tgt": "jerry", "_stamp": "2014-01-12_01:01:49.809617", "user": "shouse", "arg": [], "fun": "test.ping", "minions": ["jerry"]}}
tag: 20140112010149808995
data: {"tag": "20140112010149808995", "data": {"fun_args": [], "jid": "20140112010149808995", "return": true, "retcode": 0, "success": true, "cmd": "_return", "_stamp": "2014-01-12_01:01:49.819316", "fun": "test.ping", "id": "jerry"}}
/hook
class salt.netapi.rest_cherrypy.app.Webhook

A generic web hook entry point that fires an event on Salt's event bus

External services can POST data to this URL to trigger an event in Salt. For example, Amazon SNS, Jenkins-CI or Travis-CI, or GitHub web hooks.

Note

Be mindful of security

Salt's Reactor can run any code. A Reactor SLS that responds to a hook event is responsible for validating that the event came from a trusted source and contains valid data.

This is a generic interface and securing it is up to you!

This URL requires authentication however not all external services can be configured to authenticate. For this reason authentication can be selectively disabled for this URL. Follow best practices -- always use SSL, pass a secret key, configure the firewall to only allow traffic from a known source, etc.

The event data is taken from the request body. The Content-Type header is respected for the payload.

The event tag is prefixed with salt/netapi/hook and the URL path is appended to the end. For example, a POST request sent to /hook/mycompany/myapp/mydata will produce a Salt event with the tag salt/netapi/hook/mycompany/myapp/mydata.

The following is an example .travis.yml file to send notifications to Salt of successful test runs:

language: python
script: python -m unittest tests
after_success:
    - |
        curl -sSk https://saltapi-url.example.com:8000/hook/travis/build/success \
            -d branch="${TRAVIS_BRANCH}" \
            -d commit="${TRAVIS_COMMIT}"

See also

events, reactor

POST(*args, **kwargs)

Fire an event in Salt with a custom event tag and data

POST /hook
Status Codes:
  • 200 -- success
  • 401 -- authentication required
  • 406 -- requested Content-Type not available
  • 413 -- request body is too large

Example request:

curl -sS localhost:8000/hook -d foo='Foo!' -d bar='Bar!'
POST /hook HTTP/1.1
Host: localhost:8000
Content-Length: 17
Content-Type: application/x-www-form-urlencoded

foo=Foo!&bar=Bar!

Example response:

HTTP/1.1 200 OK
Content-Length: 17
Content-Type: application/json

{"success": true}

As a practical example, an internal continuous-integration build server could send an HTTP POST request to the URL https://localhost:8000/hook/mycompany/build/success which contains the result of a build and the SHA of the version that was built as JSON. That would then produce the following event in Salt that could be used to kick off a deployment via Salt's Reactor:

Event fired at Fri Feb 14 17:40:11 2014
*************************
Tag: salt/netapi/hook/mycompany/build/success
Data:
{'_stamp': '2014-02-14_17:40:11.440996',
    'headers': {
        'X-My-Secret-Key': 'F0fAgoQjIT@W',
        'Content-Length': '37',
        'Content-Type': 'application/json',
        'Host': 'localhost:8000',
        'Remote-Addr': '127.0.0.1'},
    'post': {'revision': 'aa22a3c4b2e7', 'result': True}}

Salt's Reactor could listen for the event:

reactor:
  - 'salt/netapi/hook/mycompany/build/*':
    - /srv/reactor/react_ci_builds.sls

And finally deploy the new build:

{% set secret_key = data.get('headers', {}).get('X-My-Secret-Key') %}
{% set build = data.get('post', {}) %}

{% if secret_key == 'F0fAgoQjIT@W' and build.result == True %}
deploy_my_app:
  cmd.state.sls:
    - tgt: 'application*'
    - arg:
      - myapp.deploy
    - kwarg:
        pillar:
          revision: {{ build.revision }}
{% endif %}
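
For completeness, the sending side of this exchange could be as small as the following sketch, again using the third-party requests library (an assumption) with the hook URL, secret header, and payload shown above:

import requests

requests.post(
    'https://localhost:8000/hook/mycompany/build/success',
    json={'revision': 'aa22a3c4b2e7', 'result': True},
    headers={'X-My-Secret-Key': 'F0fAgoQjIT@W'},
    verify=False,  # only for self-signed test certificates
)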
/keys
class salt.netapi.rest_cherrypy.app.Keys

Convenience URLs for working with minion keys

New in version 2014.7.0.

These URLs wrap the functionality provided by the key wheel module functions.

GET(mid=None)

Show the list of minion keys or detail on a specific key

New in version 2014.7.0.

GET /keys/(mid)

List all keys or show a specific key

Status Codes:
  • 200 -- success
  • 401 -- authentication required
  • 406 -- requested Content-Type not available

Example request:

curl -i localhost:8000/keys
GET /keys HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml

Example response:

HTTP/1.1 200 OK
Content-Length: 165
Content-Type: application/x-yaml

return:
  local:
  - master.pem
  - master.pub
  minions:
  - jerry
  minions_pre: []
  minions_rejected: []

Example request:

curl -i localhost:8000/keys/jerry
GET /keys/jerry HTTP/1.1
Host: localhost:8000
Accept: application/x-yaml

Example response:

HTTP/1.1 200 OK
Content-Length: 73
Content-Type: application/x-yaml

return:
  minions:
    jerry: 51:93:b3:d0:9f:3a:6d:e5:28:67:c2:4b:27:d6:cd:2b
POST(mid, keysize=None, force=None, **kwargs)

Easily generate keys for a minion and auto-accept the new key

New in version 2014.7.0.

Example partial kickstart script to bootstrap a new minion:

%post
mkdir -p /etc/salt/pki/minion
curl -sSk https://localhost:8000/keys \
        -d mid=jerry \
        -d username=kickstart \
        -d password=kickstart \
        -d eauth=pam \
    | tar -C /etc/salt/pki/minion -xf -

mkdir -p /etc/salt/minion.d
printf 'master: 10.0.0.5\nid: jerry' > /etc/salt/minion.d/id.conf
%end
POST /keys

Generate a public and private key and return both as a tarball

Authentication credentials must be passed in the request.

Status Codes:
  • 200 -- success
  • 401 -- authentication required
  • 406 -- requested Content-Type not available

Example request:

curl -sSk https://localhost:8000/keys \
        -d mid=jerry \
        -d username=kickstart \
        -d password=kickstart \
        -d eauth=pam \
        -o jerry-salt-keys.tar
POST /keys HTTP/1.1
Host: localhost:8000

Example response:

HTTP/1.1 200 OK
Content-Length: 10240
Content-Disposition: attachment; filename="saltkeys-jerry.tar"
Content-Type: application/x-tar

jerry.pub0000644000000000000000000000070300000000000010730 0ustar  00000000000000
/ws
class salt.netapi.rest_cherrypy.app.WebsocketEndpoint

Open a WebSocket connection to Salt's event bus

The event bus on the Salt master exposes a large variety of things, notably when executions are started on the master and also when minions ultimately return their results. This URL provides a real-time window into a running Salt infrastructure. Uses websocket as the transport mechanism.

See also

events

GET(token=None, **kwargs)

Return a websocket connection of Salt's event stream

GET /ws/(token)
Query Parameters:
 
  • format_events --

    The event stream will undergo server-side formatting if the format_events URL parameter is included in the request. This can be useful to avoid formatting on the client-side:

    curl -NsS <...snip...> localhost:8000/ws?format_events
    
Request Headers:
 
  • X-Auth-Token -- an authentication token from Login.
Status Codes:
  • 101 -- switching to the websockets protocol
  • 401 -- authentication required
  • 406 -- requested Content-Type not available

Example request:

curl -NsSk \
    -H 'X-Auth-Token: ffedf49d' \
    -H 'Host: localhost:8000' \
    -H 'Connection: Upgrade' \
    -H 'Upgrade: websocket' \
    -H 'Origin: https://localhost:8000' \
    -H 'Sec-WebSocket-Version: 13' \
    -H 'Sec-WebSocket-Key: '"$(echo -n $RANDOM | base64)" \
    localhost:8000/ws
GET /ws HTTP/1.1
Connection: Upgrade
Upgrade: websocket
Host: localhost:8000
Origin: https://localhost:8000
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: s65VsgHigh7v/Jcf4nXHnA==
X-Auth-Token: ffedf49d

Example response:

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: mWZjBV9FCglzn1rIKJAxrTFlnJE=
Sec-WebSocket-Version: 13

An authentication token may optionally be passed as part of the URL for browsers that cannot be configured to send the authentication header or cookie:

curl -NsS <...snip...> localhost:8000/ws/ffedf49d

The event stream can be easily consumed via JavaScript:

// Note, you must be authenticated!
var source = new WebSocket('ws://localhost:8000/ws/d0ce6c1a');
source.onerror = function(e) { console.debug('error!', e); };
source.onmessage = function(e) { console.debug(e.data); };

source.send('websocket client ready');

source.close();

Or via Python, using the Python module websocket-client for example.

# Note, you must be authenticated!

from websocket import create_connection

ws = create_connection('ws://localhost:8000/ws/d0ce6c1a')
ws.send('websocket client ready')

# Look at https://pypi.python.org/pypi/websocket-client/ for more
# examples.
listening_to_events = True  # set to False elsewhere to stop listening
while listening_to_events:
    print ws.recv()

ws.close()

The examples above show how to establish a websocket connection to Salt and activate real-time updates from Salt's event stream by signaling websocket client ready.

/stats
class salt.netapi.rest_cherrypy.app.Stats

Expose statistics on the running CherryPy server

GET()

Return a dump of statistics collected from the CherryPy server

GET /stats
Request Headers:
 
  • X-Auth-Token -- a session token from Login.
  • Accept -- the desired response format.
Response Headers:
 
  • Content-Type -- the format of the response body; depends on the Accept request header.
Status Codes:
  • 200 -- success
  • 401 -- authentication required
  • 406 -- requested Content-Type not available

rest_tornado

A non-blocking REST API for Salt
depends:
  • tornado Python module
configuration:

All authentication is done through Salt's external auth system which requires additional configuration not described here.

In order to run rest_tornado with the salt-master, add the following to the Salt master config file.

rest_tornado:
    # can be any port
    port: 8000
    # address to bind to (defaults to 0.0.0.0)
    address: 0.0.0.0
    # socket backlog
    backlog: 128
    ssl_crt: /etc/pki/api/certs/server.crt
    # no need to specify ssl_key if cert and key
    # are in one single file
    ssl_key: /etc/pki/api/certs/server.key
    debug: False
    disable_ssl: False
    webhook_disable_auth: False
Authentication

Authentication is performed by passing a session token with each request. Tokens are generated via the SaltAuthHandler URL.

The token may be sent in one of two ways:

  • Include a custom header named X-Auth-Token.
  • Sent via a cookie. This option is a convenience for HTTP clients that automatically handle cookie support (such as browsers).

See also

You can bypass the session handling via the RunSaltAPIHandler URL.

Usage

Commands are sent to a running Salt master via this module by sending HTTP requests to the URLs detailed below.

Content negotiation

This REST interface is flexible in what data formats it will accept as well as what formats it will return (e.g., JSON, YAML, x-www-form-urlencoded).

  • Specify the format of data in the request body by including the Content-Type header.
  • Specify the desired data format for the response body with the Accept header.

Data sent in POST and PUT requests must be in the format of a list of lowstate dictionaries. This allows multiple commands to be executed in a single HTTP request.

lowstate

A dictionary containing various keys that instruct Salt which command to run, where that command lives, any parameters for that command, any authentication credentials, what returner to use, etc.

Salt uses the lowstate data format internally in many places to pass command data between functions. Salt also uses lowstate for the LocalClient() Python API interface.

The following example (in JSON format) causes Salt to execute two commands:

[{
    "client": "local",
    "tgt": "*",
    "fun": "test.fib",
    "arg": ["10"]
},
{
    "client": "runner",
    "fun": "jobs.lookup_jid",
    "jid": "20130603122505459265"
}]

Multiple commands in a Salt API request are executed serially, and there is no guarantee that all commands will run: if test.fib (from the example above) raised an exception, the API would still execute "jobs.lookup_jid".

Responses to these lowstates are returned as an in-order list of dicts containing the return data; a YAML response could look like:

- ms-1: true
  ms-2: true
- ms-1: foo
  ms-2: bar

In the event of an exception while executing a command, the return for that lowstate will be a string. For example, if no minions matched the first lowstate we would get a return like:

- No minions matched the target. No command was sent, no jid was assigned.
- ms-1: true
  ms-2: true

x-www-form-urlencoded

Sending JSON or YAML in the request body is simple and the most flexible; however, sending data in urlencoded format is also supported, with the caveats below. It is the default format for HTML forms, many JavaScript libraries, and the curl command.

For example, the equivalent to running salt '*' test.ping is sending fun=test.ping&arg&client=local&tgt=* in the HTTP request body.

Caveats:

  • Only a single command may be sent per HTTP request.

  • Repeating the arg parameter multiple times will cause those parameters to be combined into a single list.

    Note, some popular frameworks and languages (notably jQuery, PHP, and Ruby on Rails) will automatically append empty brackets onto repeated parameters. E.g., arg=one, arg=two will be sent as arg[]=one, arg[]=two. This is not supported; send JSON or YAML instead.

A Websockets add-on to saltnado
depends:
  • tornado Python module

In order to enable saltnado_websockets you must add websockets: True to your saltnado config block.

rest_tornado:
    # can be any port
    port: 8000
    ssl_crt: /etc/pki/api/certs/server.crt
    # no need to specify ssl_key if cert and key
    # are in one single file
    ssl_key: /etc/pki/api/certs/server.key
    debug: False
    disable_ssl: False
    websockets: True
All Events

Exposes all "real-time" events from Salt's event bus on a websocket connection. It should be noted that "Real-time" here means these events are made available to the server as soon as any salt related action (changes to minions, new jobs etc) happens. Clients are however assumed to be able to tolerate any network transport related latencies. Functionality provided by this endpoint is similar to the /events end point.

The event bus on the Salt master exposes a large variety of things, notably when executions are started on the master and also when minions ultimately return their results. This URL provides a real-time window into a running Salt infrastructure. Uses websocket as the transport mechanism.

Exposes a GET method that returns websocket connections. All requests must include an auth token. A way to obtain authentication tokens is shown below.

% curl -si localhost:8000/login \
    -H "Accept: application/json" \
    -d username='salt' \
    -d password='salt' \
    -d eauth='pam'

Which results in the response

{
    "return": [{
        "perms": [".*", "@runner", "@wheel"],
        "start": 1400556492.277421,
        "token": "d0ce6c1a37e99dcc0374392f272fe19c0090cca7",
        "expire": 1400599692.277422,
        "user": "salt",
        "eauth": "pam"
    }]
}

In this example the token returned is d0ce6c1a37e99dcc0374392f272fe19c0090cca7 and can be included in subsequent websocket requests (as part of the URL).

The event stream can be easily consumed via JavaScript:

// Note, you must be authenticated!

// Get the Websocket connection to Salt
var source = new WebSocket('wss://localhost:8000/all_events/d0ce6c1a37e99dcc0374392f272fe19c0090cca7');

// Get Salt's "real time" event stream.
source.onopen = function() { source.send('websocket client ready'); };

// Other handlers
source.onerror = function(e) { console.debug('error!', e); };

// e.data represents Salt's "real time" event data as serialized JSON.
source.onmessage = function(e) { console.debug(e.data); };

// Terminates websocket connection and Salt's "real time" event stream on the server.
source.close();

Or via Python, using, for example, the websocket-client Python module or the tornado client.

# Note, you must be authenticated!

from websocket import create_connection

# Get the Websocket connection to Salt
ws = create_connection('wss://localhost:8000/all_events/d0ce6c1a37e99dcc0374392f272fe19c0090cca7')

# Get Salt's "real time" event stream.
ws.send('websocket client ready')


# Simple listener to print results of Salt's "real time" event stream.
# Look at https://pypi.python.org/pypi/websocket-client/ for more examples.
listening_to_events = True  # set to False elsewhere to stop listening
while listening_to_events:
    print ws.recv()       #  Salt's "real time" event data as serialized JSON.

# Terminates websocket connection and Salt's "real time" event stream on the server.
ws.close()

# Please refer to https://github.com/liris/websocket-client/issues/81 when using a self signed cert

The examples above show how to establish a websocket connection to Salt and activate real-time updates from Salt's event stream by signaling websocket client ready.

Formatted Events

Exposes formatted "real-time" events from Salt's event bus on a websocket connection. "Real-time" here means these events are made available to the server as soon as any salt-related action (changes to minions, new jobs, etc.) happens. Clients are, however, assumed to be able to tolerate any network-transport-related latencies. Functionality provided by this endpoint is similar to the /events endpoint.

The event bus on the Salt master exposes a large variety of things, notably when executions are started on the master and also when minions ultimately return their results. This URL provides a real-time window into a running Salt infrastructure. Uses websocket as the transport mechanism.

Formatted events parses the raw "real time" event stream and maintains a current view of the following:

  • minions
  • jobs

A change to the minions (such as addition or removal of keys, or connection drops) or to jobs is processed and clients are updated. Since we use salt's presence events to track minions, please enable presence_events and set a small value for loop_interval in the salt master config file.

Exposes a GET method that returns websocket connections. All requests must include an auth token. A way to obtain authentication tokens is shown below.

% curl -si localhost:8000/login \
    -H "Accept: application/json" \
    -d username='salt' \
    -d password='salt' \
    -d eauth='pam'

Which results in the response

{
    "return": [{
        "perms": [".*", "@runner", "@wheel"],
        "start": 1400556492.277421,
        "token": "d0ce6c1a37e99dcc0374392f272fe19c0090cca7",
        "expire": 1400599692.277422,
        "user": "salt",
        "eauth": "pam"
    }]
}

In this example the token returned is d0ce6c1a37e99dcc0374392f272fe19c0090cca7 and can be included in subsequent websocket requests (as part of the URL).

The event stream can be easily consumed via JavaScript:

// Note, you must be authenticated!

// Get the Websocket connection to Salt
var source = new WebSocket('wss://localhost:8000/formatted_events/d0ce6c1a37e99dcc0374392f272fe19c0090cca7');

// Get Salt's "real time" event stream.
source.onopen = function() { source.send('websocket client ready'); };

// Other handlers
source.onerror = function(e) { console.debug('error!', e); };

// e.data represents Salt's "real time" event data as serialized JSON.
source.onmessage = function(e) { console.debug(e.data); };

// Terminates websocket connection and Salt's "real time" event stream on the server.
source.close();

Or via Python, using, for example, the websocket-client Python module or the tornado client.

# Note, you must be authenticated!

from websocket import create_connection

# Get the Websocket connection to Salt
ws = create_connection('wss://localhost:8000/formatted_events/d0ce6c1a37e99dcc0374392f272fe19c0090cca7')

# Get Salt's "real time" event stream.
ws.send('websocket client ready')


# Simple listener to print results of Salt's "real time" event stream.
# Look at https://pypi.python.org/pypi/websocket-client/ for more examples.
listening_to_events = True  # set to False elsewhere to stop listening
while listening_to_events:
    print ws.recv()       #  Salt's "real time" event data as serialized JSON.

# Terminates websocket connection and Salt's "real time" event stream on the server.
ws.close()

# Please refer to https://github.com/liris/websocket-client/issues/81 when using a self signed cert

The examples above show how to establish a websocket connection to Salt and activate real-time updates from Salt's event stream by signaling websocket client ready.

Example responses

Minion information is a dictionary keyed by each connected minion's id (mid); grains information for each minion is also included.

Minion information is sent in response to the following minion events:

  • connection drops
    • requires running manage.present periodically every loop_interval seconds
  • minion addition

  • minion removal

# Not all grains are shown
data: {
    "minions": {
        "minion1": {
            "id": "minion1",
            "grains": {
                "kernel": "Darwin",
                "domain": "local",
                "zmqversion": "4.0.3",
                "kernelrelease": "13.2.0"
            }
        }
    }
}

Job information is also tracked and delivered.

Job information is also a dictionary in which each job's information is keyed by salt's jid.

data: {
    "jobs": {
        "20140609153646699137": {
            "tgt_type": "glob",
            "jid": "20140609153646699137",
            "tgt": "*",
            "start_time": "2014-06-09T15:36:46.700315",
            "state": "complete",
            "fun": "test.ping",
            "minions": {
                "minion1": {
                    "return": true,
                    "retcode": 0,
                    "success": true
                }
            }
        }
    }
}
Setup
REST URI Reference
/
salt.netapi.rest_tornado.saltnado.SaltAPIHandler

/login
salt.netapi.rest_tornado.saltnado.SaltAuthHandler

/minions
salt.netapi.rest_tornado.saltnado.MinionSaltAPIHandler

/jobs
salt.netapi.rest_tornado.saltnado.JobsSaltAPIHandler

/run
salt.netapi.rest_tornado.saltnado.RunSaltAPIHandler

/events
salt.netapi.rest_tornado.saltnado.EventsSaltAPIHandler

/hook
salt.netapi.rest_tornado.saltnado.WebhookSaltAPIHandler

rest_wsgi

A minimalist REST API for Salt

This rest_wsgi module provides a no-frills REST interface for sending commands to the Salt master. There are no dependencies.

Extra care must be taken when deploying this module into production. Please read this documentation in its entirety.

All authentication is done through Salt's external auth system.

Usage
  • All requests must be sent to the root URL (/).
  • All requests must be sent as a POST request with JSON content in the request body.
  • All responses are in JSON.

See also

rest_cherrypy

The rest_cherrypy module is more full-featured, production-ready, and has builtin security features.

Deployment

The rest_wsgi netapi module is a standard Python WSGI app. It can be deployed one of two ways.

Using a WSGI-compliant web server

This module may be run via any WSGI-compliant production server such as Apache with mod_wsgi or Nginx with FastCGI.

It is strongly recommended that this app be used with a server that supports HTTPS encryption since raw Salt authentication credentials must be sent with every request. Any apps that access Salt through this interface will need to manually manage authentication credentials (either username and password or a Salt token). Tread carefully.

salt-api using a development-only server

If run directly via the salt-api daemon it uses the wsgiref.simple_server() that ships in the Python standard library. This is a single-threaded server that is intended for testing and development. This server does not use encryption; please note that raw Salt authentication credentials must be sent with every HTTP request.

Running this module via salt-api is not recommended!

In order to start this module via the salt-api daemon the following must be put into the Salt master config:

rest_wsgi:
    port: 8001
Usage examples
POST /

Example request for a basic test.ping:

% curl -sS -i \
        -H 'Content-Type: application/json' \
        -d '[{"eauth":"pam","username":"saltdev","password":"saltdev","client":"local","tgt":"*","fun":"test.ping"}]' localhost:8001

Example response:

HTTP/1.0 200 OK
Content-Length: 89
Content-Type: application/json

{"return": [{"ms--4": true, "ms--3": true, "ms--2": true, "ms--1": true, "ms--0": true}]}

Example request for an asynchronous test.ping:

% curl -sS -i \
        -H 'Content-Type: application/json' \
        -d '[{"eauth":"pam","username":"saltdev","password":"saltdev","client":"local_async","tgt":"*","fun":"test.ping"}]' localhost:8001

Example response:

HTTP/1.0 200 OK
Content-Length: 103
Content-Type: application/json

{"return": [{"jid": "20130412192112593739", "minions": ["ms--4", "ms--3", "ms--2", "ms--1", "ms--0"]}]}

Example request for looking up a job ID:

% curl -sS -i \
        -H 'Content-Type: application/json' \
        -d '[{"eauth":"pam","username":"saltdev","password":"saltdev","client":"runner","fun":"jobs.lookup_jid","jid":"20130412192112593739"}]' localhost:8001

Example response:

HTTP/1.0 200 OK
Content-Length: 89
Content-Type: application/json

{"return": [{"ms--4": true, "ms--3": true, "ms--2": true, "ms--1": true, "ms--0": true}]}
Form parameters:
  • lowstate: A list of lowstate data appropriate for the client interface you are calling.

Status codes:
  • 200: success
  • 401: authentication required

Full list of builtin output modules

Follow one of the links below for further information and examples

compact Display compact output data structure
highstate Outputter for displaying results of state runs
json_out Display return data in JSON format
key Display salt-key output
nested Recursively display nested data
newline_values_only Display values only, separated by newlines
no_out Display no output
no_return Display output for minions that did not return
overstatestage Display clean output of an overstate stage
pprint_out Python pretty-print (pprint)
progress Display return data as a progress bar
raw Display raw output data structure
txt Simple text outputter
virt_query virt.query outputter
yaml_out Display return data in YAML format
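
Any of these outputters can be selected on the command line with the --out option. For example, to render the results of a command as JSON:

salt '*' test.ping --out=json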

Peer Communication

Salt 0.9.0 introduced the capability for Salt minions to publish commands. The intent of this feature is not for Salt minions to act as independent brokers with one another, but to allow Salt minions to pass commands to each other.

In Salt 0.10.0 the ability to execute runners from the master was added. This allows for the master to return collective data from runners back to the minions via the peer interface.

The peer interface is configured through two options in the master configuration file. The peer configuration allows minions to send commands to other minions via the master, and the peer_run configuration allows minions to execute runners on the master.

Because this gives minions access to the master publisher, it presents a security risk, so the capability is turned off by default. Minions can be allowed access to the master publisher on a per-minion basis using regular expressions. Minions with specific IDs can be allowed access to certain Salt modules and functions.

Peer Configuration

The configuration is done under the peer setting in the Salt master configuration file. Here are a number of configuration possibilities.

The simplest approach is to enable all communication for all minions; this is only recommended for very secure environments.

peer:
  .*:
    - .*

This configuration will allow minions with IDs ending in example.com access to the test, ps, and pkg module functions.

peer:
  .*example.com:
    - test.*
    - ps.*
    - pkg.*

The configuration logic is simple: a regular expression is passed for matching minion IDs, and then a list of expressions matching minion functions is associated with the matched minions. For instance, this configuration will also allow minions ending with foo.org access to the publisher.

peer:
  .*example.com:
    - test.*
    - ps.*
    - pkg.*
  .*foo.org:
    - test.*
    - ps.*
    - pkg.*

Peer Runner Communication

Configuration to allow minions to execute runners from the master is done via the peer_run option on the master. The peer_run configuration follows the same logic as the peer option. The only difference is that access is granted to runner modules.

To open up access to all minions to all runners:

peer_run:
  .*:
    - .*

This configuration will allow minions with IDs ending in example.com access to the manage and jobs runner functions.

peer_run:
  .*example.com:
    - manage.*
    - jobs.*

Using Peer Communication

The publish module was created to manage peer communication. The publish module comes with a number of functions to execute peer communication in different ways. Currently there are three functions in the publish module. These examples will show how to test the peer system via the salt-call command.

To execute test.ping on all minions:

# salt-call publish.publish \* test.ping

To execute the manage.up runner:

# salt-call publish.runner manage.up

To match minions using other matchers, use expr_form:

# salt-call publish.publish 'webserv* and not G@os:Ubuntu' test.ping expr_form='compound'

Pillars

Salt includes a number of built-in external pillars, listed at Full list of builtin pillar modules.

You may also wish to look at the standard pillar documentation, at Pillar Configuration.

The source for the built-in Salt pillars can be found here: https://github.com/saltstack/salt/blob/develop/salt/pillar

Full list of builtin pillar modules

cmd_json Execute a command and read the output as JSON.
cmd_yaml Execute a command and read the output as YAML.
cmd_yamlex Execute a command and read the output as YAMLEX.
cobbler A module to pull data from Cobbler via its API into the Pillar dictionary
django_orm Generate Pillar data from Django models through the Django ORM
ec2_pillar Retrieve EC2 instance data for minions.
etcd_pillar Use etcd data as a Pillar source
file_tree Recursively iterate over directories and add all files as Pillar data.
foreman A module to pull data from Foreman via its API into the Pillar dictionary
git_pillar Clone a remote git repository and use the filesystem as a Pillar source
hg_pillar Use remote Mercurial repository as a Pillar source.
hiera Use hiera data as a Pillar source
libvirt Load up the libvirt keys into Pillar for a given minion if said keys have been generated using the libvirt key runner
mongo Read Pillar data from a mongodb collection
mysql Retrieve Pillar data by doing a MySQL query
pepa Pepa
pillar_ldap Use LDAP data as a Pillar source
puppet Execute an unmodified puppet_node_classifier and read the output as YAML.
reclass_adapter Use the "reclass" database as a Pillar source
redismod Read pillar data from a Redis backend
s3 Copy pillar data from a bucket in Amazon S3
svn_pillar Clone a remote SVN repository and use the filesystem as a Pillar source
varstack_pillar Use Varstack data as a Pillar source
virtkey Accept a key from a hypervisor if the virt runner has already submitted an authorization request

Renderers

The Salt state system operates by gathering information from common data types such as lists, dictionaries, and strings that would be familiar to any developer.

SLS files are translated from whatever data templating format they are written in back into Python data types to be consumed by Salt.

By default SLS files are rendered as Jinja templates and then parsed as YAML documents. But since the only thing the state system cares about is raw data, the SLS files can be any structured format that can be dreamed up.

Currently there is support for Jinja + YAML, Mako + YAML, Wempy + YAML, Jinja + json, Mako + json and Wempy + json.

Renderers can be written to support any template type. This means that the Salt states could be managed by XML files, HTML files, Puppet files, or any format that can be translated into the Pythonic data structure used by the state system.

Multiple Renderers

A default renderer is selected in the master configuration file by providing a value to the renderer key.

When evaluating an SLS, more than one renderer can be used.

When rendering SLS files, Salt checks for the presence of a Salt-specific shebang line.

The shebang line directly calls the name of the renderer as it is specified within Salt. One of the most common reasons to use multiple renderers is to use the Python or py renderer.

Below, the first line is a shebang that references the py renderer.

#!py

def run():
    '''
    Install the python-mako package
    '''
    return {'include': ['python'],
            'python-mako': {'pkg': ['installed']}}

Composing Renderers

A renderer can be composed from other renderers by connecting them in a series of pipes (|).

In fact, the default Jinja + YAML renderer is implemented by connecting a YAML renderer to a Jinja renderer. Such renderer configuration is specified as: jinja | yaml.

Other renderer combinations are possible:

yaml
i.e., just YAML, no templating.
mako | yaml
pass the input to the mako renderer, whose output is then fed into the yaml renderer.
jinja | mako | yaml
This one allows you to use both jinja and mako templating syntax in the input and then parse the final rendered output as YAML.

The following is a contrived example SLS file using the jinja | mako | yaml renderer:

#!jinja|mako|yaml

An_Example:
  cmd.run:
    - name: |
        echo "Using Salt ${grains['saltversion']}" \
             "from path {{grains['saltpath']}}."
    - cwd: /

<%doc> ${...} is Mako's notation, and so is this comment. </%doc>
{#     Similarly, {{...}} is Jinja's notation, and so is this comment. #}

For backward compatibility, jinja | yaml can also be written as yaml_jinja, and similarly, the yaml_mako, yaml_wempy, json_jinja, json_mako, and json_wempy renderers are all supported.

Keep in mind that not all renderers can be used alone or with any other renderers. For example, the template renderers shouldn't be used alone as their outputs are just strings, which still need to be parsed by another renderer to turn them into highstate data structures.

For example, it doesn't make sense to specify yaml | jinja because the output of the YAML renderer is a highstate data structure (a dict in Python), which cannot be used as the input to a template renderer. Therefore, when combining renderers, you should know what each renderer accepts as input and what it returns as output.

Writing Renderers

A custom renderer must be a Python module placed in the renderers directory, and the module must implement the render function.

The render function will be passed the path of the SLS file as an argument.

The purpose of the render function is to parse the passed file and return the Python data structure derived from it.

Custom renderers must be placed in a _renderers directory within the file_roots specified by the master config file.

Custom renderers are distributed when any of the following are run:

state.highstate

saltutil.sync_renderers

saltutil.sync_all

Any custom renderers which have been synced to a minion, that are named the same as one of Salt's default set of renderers, will take the place of the default renderer with the same name.
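
For example, custom renderers can be pushed to all minions without running a full highstate:

salt '*' saltutil.sync_renderers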

Examples

The best place to find examples of renderers is in the Salt source code.

Documentation for renderers included with Salt can be found here:

https://github.com/saltstack/salt/blob/develop/salt/renderers

Here is a simple YAML renderer example:

import yaml

def render(yaml_data, env='', sls='', **kws):
    # Accept either a file-like object or a raw string
    if not isinstance(yaml_data, basestring):
        yaml_data = yaml_data.read()
    # Parse the YAML document into Python data structures
    data = yaml.load(yaml_data)
    # The state system expects a dictionary, so never return None
    return data if data else {}

Full List of Renderers

Full list of builtin renderer modules
cheetah Cheetah Renderer for Salt
genshi Genshi Renderer for Salt
gpg Renderer that will decrypt GPG ciphers
hjson Hjson Renderer for Salt
jinja Jinja loading utils to enable a more powerful backend for jinja templates
json JSON Renderer for Salt
mako Mako Renderer for Salt
msgpack
py Pure python state renderer
pydsl A Python-based DSL
pyobjects Python renderer that includes a Pythonic Object based interface
stateconf A flexible renderer that takes a templating engine and a data format
wempy
yaml YAML Renderer for Salt
yamlex

Returners

By default the return values of the commands sent to the Salt minions are returned to the Salt master; however, anything at all can be done with the results data.

By using a Salt returner, results data can be redirected to external data-stores for analysis and archival.

Returners pull their configuration values from the Salt minions. Returners are only configured once, which is generally at load time.

The returner interface allows the return data to be sent to any system that can receive data. This means that return data can be sent to a Redis server, a MongoDB server, a MySQL server, or any system.
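
For example, the redis returner reads its connection settings from the minion configuration. A minimal sketch, assuming the key names used in the redis returner's documentation (adjust the host and port for your environment):

redis.db: '0'
redis.host: 'salt'
redis.port: 6379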

Using Returners

All Salt commands will return the command data back to the master. Specifying returners will ensure that the data is also sent to the specified returner interfaces.

Specifying what returners to use is done when the command is invoked:

salt '*' test.ping --return redis_return

This command will ensure that the redis_return returner is used.

It is also possible to specify multiple returners:

salt '*' test.ping --return mongo_return,redis_return,cassandra_return

In this scenario all three returners will be called and the data from the test.ping command will be sent out to the three named returners.

Writing a Returner

A returner is a Python module containing at minimum a returner function. Other optional functions can be included to add support for Master Job Cache, External Job Cache, and Event Returners.

returner
The returner function must accept a single argument. The argument contains return data from the called minion function. If the minion function test.ping is called, the value of the argument will be a dictionary. Run the following command from a Salt master to get a sample of the dictionary:
salt-call --local --metadata test.ping --out=pprint
Below is an example of a returner that sends the return data to a Redis server:

import redis
import json

def returner(ret):
    '''
    Return information to a redis server
    '''
    # Get a redis connection
    serv = redis.Redis(
                host='redis-serv.example.com',
                port=6379,
                db='0')
    serv.sadd("%(id)s:jobs" % ret, ret['jid'])
    serv.set("%(jid)s:%(id)s" % ret, json.dumps(ret['return']))
    serv.sadd('jobs', ret['jid'])
    serv.sadd(ret['jid'], ret['id'])

The above returner serializes the return data as JSON and stores it in Redis, keyed on the minion ID and job ID (jid).

Master Job Cache Support

Salt's Master Job Cache allows returners to be used as a pluggable replacement for the Default Job Cache. In order to do so, a returner must implement the following functions:

Note

The code samples contained in this section were taken from the cassandra_cql returner.

prep_jid

Ensures that job ids (jid) don't collide, unless passed_jid is provided.

nocache is an optional boolean that indicates whether return data should be cached. passed_jid is a caller-provided jid which should be returned unconditionally.

def prep_jid(nocache, passed_jid=None):  # pylint: disable=unused-argument
    '''
    Do any work necessary to prepare a JID, including sending a custom id
    '''
    return passed_jid if passed_jid is not None else salt.utils.jid.gen_jid()
save_load
Save job information. The jid is generated by prep_jid and should be considered a unique identifier for the job. The jid, for example, could be used as the primary/unique key in a database. The load is what is returned to a Salt master by a minion. The following code example stores the load as a JSON string in the salt.jids table.
def save_load(jid, load):
    '''
    Save the load to the specified jid id
    '''
    query = '''INSERT INTO salt.jids (
                 jid, load
               ) VALUES (
                 '{0}', '{1}'
               );'''.format(jid, json.dumps(load))

    # cassandra_cql.cql_query may raise a CommandExecutionError
    try:
        __salt__['cassandra_cql.cql_query'](query)
    except CommandExecutionError:
        log.critical('Could not save load in jids table.')
        raise
    except Exception as e:
        log.critical('''Unexpected error while inserting into
         jids: {0}'''.format(str(e)))
        raise
get_load
Must accept a job id (jid) and return the job load stored by save_load, or an empty dictionary when not found.
def get_load(jid):
    '''
    Return the load data that marks a specified jid
    '''
    query = '''SELECT load FROM salt.jids WHERE jid = '{0}';'''.format(jid)

    ret = {}

    # cassandra_cql.cql_query may raise a CommandExecutionError
    try:
        data = __salt__['cassandra_cql.cql_query'](query)
        if data:
            load = data[0].get('load')
            if load:
                ret = json.loads(load)
    except CommandExecutionError:
        log.critical('Could not get load from jids table.')
        raise
    except Exception as e:
        log.critical('''Unexpected error while getting load from
         jids: {0}'''.format(str(e)))
        raise

    return ret
External Job Cache Support

Salt's External Job Cache extends the Master Job Cache. External Job Cache support requires the following functions in addition to what is required for Master Job Cache support:

get_jid
Return a dictionary containing the information (load) returned by each minion when the specified job id was executed.

Sample:

{
    "local": {
        "master_minion": {
            "fun_args": [],
            "jid": "20150330121011408195",
            "return": true,
            "retcode": 0,
            "success": true,
            "cmd": "_return",
            "_stamp": "2015-03-30T12:10:12.708663",
            "fun": "test.ping",
            "id": "master_minion"
        }
    }
}
get_fun
Return a dictionary of minions that called a given Salt function as their last function call.

Sample:

{
    "local": {
        "minion1": "test.ping",
        "minion3": "test.ping",
        "minion2": "test.ping"
    }
}
get_jids
Return a list of all job ids.

Sample:

{
    "local": [
        "20150330121011408195",
        "20150330195922139916"
    ]
}
get_minions
Return a list of minions.

Sample:

{
     "local": [
         "minion3",
         "minion2",
         "minion1",
         "master_minion"
     ]
}

Please refer to one or more of the existing returners (e.g. mysql, cassandra_cql) if you need further clarification.

Event Support

An event_return function must be added to the returner module to allow events to be logged from a master via the returner. A list of events is passed to the function by the master.

The following example was taken from the MySQL returner. In this example, each event is inserted into the salt_events table keyed on the event tag. The tag contains the jid and therefore is guaranteed to be unique.

def event_return(events):
    '''
    Return event to mysql server

    Requires that configuration be enabled via 'event_return'
    option in master config.
    '''
    with _get_serv(events, commit=True) as cur:
        for event in events:
            tag = event.get('tag', '')
            data = event.get('data', '')
            sql = '''INSERT INTO `salt_events` (`tag`, `data`, `master_id` )
                     VALUES (%s, %s, %s)'''
            cur.execute(sql, (tag, json.dumps(data), __opts__['id']))
Custom Returners

Place custom returners in a _returners directory within the file_roots specified by the master config file.

Custom returners are distributed when any of the following are called:

state.highstate

saltutil.sync_returners

saltutil.sync_all

Any custom returners which have been synced to a minion that are named the same as one of Salt's default set of returners will take the place of the default returner with the same name.

Naming the Returner

Note that a returner's default name is its filename (i.e. foo.py becomes returner foo), but that its name can be overridden by using a __virtual__ function. A good example of this can be found in the redis returner, which is named redis_return.py but is loaded as simply redis:

try:
    import redis
    HAS_REDIS = True
except ImportError:
    HAS_REDIS = False

__virtualname__ = 'redis'

def __virtual__():
    if not HAS_REDIS:
        return False
    return __virtualname__
Testing the Returner

The returner, prep_jid, save_load, get_load, and event_return functions can be tested by configuring the Master Job Cache and Event Returners in the master config file and submitting a job to test.ping each minion from the master.

Once you have successfully exercised the Master Job Cache functions, test the External Job Cache functions using the ret execution module.

salt-call ret.get_jids cassandra_cql --output=json
salt-call ret.get_fun cassandra_cql test.ping --output=json
salt-call ret.get_minions cassandra_cql --output=json
salt-call ret.get_jid cassandra_cql 20150330121011408195 --output=json

Event Returners

For maximum visibility into the history of events across a Salt infrastructure, all events seen by a salt master may be logged to a returner.

To enable event logging, set the event_return configuration option in the master config to the returner that should be designated as the handler for event returns.
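
For example, to designate the mysql returner as the event handler (assuming its connection settings are already present in the master config):

event_return: mysql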

Note

Not all returners support event returns. Verify that a returner has an event_return() function before using it.

Note

On larger installations, many hundreds of events may be generated on a busy master every second. Be certain to closely monitor the storage of a given returner, as Salt can easily overwhelm an underpowered server with thousands of returns.

Full List of Returners

Full list of builtin returner modules
carbon_return Take data from salt and "return" it into a carbon receiver
cassandra_cql_return Return data to a cassandra server
cassandra_return Return data to a Cassandra ColumnFamily
couchbase_return Simple returner for Couchbase.
couchdb_return Simple returner for CouchDB.
django_return A returner that will inform a Django system that returns are available, using Django's signal system.
elasticsearch_return Return data to an elasticsearch server for indexing.
etcd_return Return data to an etcd server or cluster
hipchat_return Return salt data via hipchat.
kafka_return Return data to a Kafka topic
local The local returner is used to test the returner interface; it just prints the return data to the console to verify that it is being passed properly.
local_cache Return data to local job cache
memcache_return Return data to a memcache server
mongo_future_return Return data to a mongodb server
mongo_return Return data to a mongodb server
multi_returner Read/Write multiple returners
mysql Return data to a mysql server
nagios_return Return salt data to Nagios
odbc Return data to an ODBC compliant server.
postgres Return data to a postgresql server
postgres_local_cache Use a postgresql server for the master job cache.
pushover_returner Return salt data via pushover (http://www.pushover.net)
redis_return Return data to a redis server
sentry_return Salt returner that reports execution results back to sentry.
slack_returner Return salt data via slack
sms_return Return data by SMS.
smtp_return Return salt data via email
sqlite3_return Insert minion return data into a sqlite3 database
syslog_return Return data to the host operating system's syslog facility
xmpp_return Return salt data via xmpp

Full list of builtin roster modules

ansible Read in an Ansible inventory file or script
cache Use the minion cache on the master to derive IP addresses based on minion ID.
cloud Use the cloud cache on the master to derive IPv4 addresses based on minion ID.
clustershell This roster resolves hostnames in a pdsh/clustershell style.
flat Read in the roster from a flat file using the renderer system
scan Scan a netmask or ipaddr for open ssh ports

Salt Runners

Salt runners are convenience applications executed with the salt-run command.

Salt runners work similarly to Salt execution modules; however, they execute on the Salt master itself instead of on remote Salt minions.

A Salt runner can be a simple client call or a complex application.

Full list of runner modules

cache Return cached data from minions
cloud The Salt Cloud Runner
doc A runner module to collect and display the inline documentation from the various module types
drac Manage Dell DRAC from the Master
error Error generator to enable integration testing of salt runner error handling
f5 Runner to provide F5 Load Balancer functionality
fileserver Directly manage the Salt fileserver plugins
git_pillar Directly manage the salt git_pillar plugin
http Module for making various web calls.
jobs A convenience system to manage jobs, both active and already run
launchd Manage launchd plist files
lxc Control Linux Containers via Salt
manage General management functions for salt, tools like seeing what hosts are up
mine A runner to access data from the salt mine
nacl This runner helps create encrypted passwords that can be included in pillars.
network Network tools to run from the Master
pagerduty Runner Module for Firing Events via PagerDuty
pillar Functions to interact with the pillar compiler on the master
pkg Package helper functions using salt.modules.pkg
queue General management and processing of queues.
sdb Runner for setting and querying data via the sdb API on the master
search Runner frontend to search system
state Execute overstate functions
survey A general map/reduce style salt runner for aggregating results returned by several different minions.
test This runner is used only for test purposes and serves no production purpose
thin The thin runner is used to manage the salt thin systems.
virt Control virtual machines via Salt
winrepo Runner to manage Windows software repo

Writing Salt Runners

A Salt runner is written in a similar manner to a Salt execution module. Both are Python modules which contain functions and each public function is a runner which may be executed via the salt-run command.

For example, if a Python module named test.py is created in the runners directory and contains a function called foo, the test runner could be invoked with the following command:

# salt-run test.foo
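
A minimal sketch of such a runners/test.py module (the function body here is purely illustrative):

def foo():
    '''
    A trivial runner function that simply returns a value
    '''
    return 'bar'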

Runners have several options for controlling output.

Any print statement in a runner is automatically also fired onto the master event bus. For example:

def a_runner(outputter=None, display_progress=False):
    print('Hello world')
    ...

The above would result in an event fired as follows:

Event fired at Tue Jan 13 15:26:45 2015
*************************
Tag: salt/run/20150113152644070246/print
Data:
{'_stamp': '2015-01-13T15:26:45.078707',
 'data': 'Hello world',
 'outputter': 'pprint'}

A runner may also send a progress event, which is displayed to the user during runner execution and is also passed across the event bus if the display_progress argument to a runner is set to True.

A custom runner may send its own progress event by using the __jid_event__.fire_event() method as shown here:

if display_progress:
    __jid_event__.fire_event({'message': 'A progress message'}, 'progress')

The above would produce output on the console reading A progress message, as well as an event on the event bus similar to:

Event fired at Tue Jan 13 15:21:20 2015
*************************
Tag: salt/run/20150113152118341421/progress
Data:
{'_stamp': '2015-01-13T15:21:20.390053',
 'message': "A progress message"}

A runner could use the same approach to send an event with a customized tag onto the event bus by replacing the second argument (progress) with whatever tag is desired. However, this will not be shown on the command-line and will only be fired onto the event bus.

Synchronous vs. Asynchronous

A runner may be fired asynchronously, which will immediately return control. In this case, no output will be displayed to the user if salt-run is used from the command line. If used programmatically, no results will be returned. If results are desired, they must be gathered either by firing events on the bus from the runner and then watching for them, or by some other means.
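
For example, assuming a release where salt-run supports the --async flag, a runner can be fired asynchronously from the command line:

salt-run manage.up --async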

Note

When running a runner in asynchronous mode, the --progress flag will not deliver output to the salt-run CLI. However, progress events will still be fired on the bus.

In synchronous mode, which is the default, control will not be returned until the runner has finished executing.

To add custom runners, put them in a directory and add it to runner_dirs in the master configuration file.
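
For example (the directory path here is an arbitrary choice):

runner_dirs:
  - /srv/salt/runners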

Examples

Examples of runners can be found in the Salt distribution:

https://github.com/saltstack/salt/blob/develop/salt/runners

A simple runner that returns a well-formatted list of the minions that are responding to Salt calls could look like this:

# Import salt modules
import salt.client

def up():
    '''
    Print a list of all of the minions that are up
    '''
    client = salt.client.LocalClient(__opts__['conf_file'])
    minions = client.cmd('*', 'test.ping', timeout=1)
    for minion in sorted(minions):
        print(minion)

State Enforcement

Salt offers an optional interface to manage the configuration or "state" of the Salt minions. This interface is a fully capable mechanism used to enforce the state of systems from a central manager.

Mod Aggregate State Runtime Modifications

New in version 2014.7.0.

The mod_aggregate system was added in the 2014.7.0 release of Salt and allows for runtime modification of the executing state data. Simply put, it allows for the data used by Salt's state system to be changed on the fly at runtime, kind of like a configuration management JIT compiler or a runtime import system. All in all, it makes Salt much more dynamic.

How it Works

The best example is the pkg state. One of the major requests in Salt has long been adding the ability to install all packages defined at the same time. The mod_aggregate system makes this a reality. While executing Salt's state system, when a pkg state is reached the mod_aggregate function in the state module is called. For pkg this function scans all of the other states that are slated to run, picks up the references to name and pkgs, and adds them to pkgs in the first state. The result is a single call to yum, apt-get, pacman, etc. as part of the first package install.

How to Use it

Note

Since this option changes the basic behavior of the state runtime, after it is enabled states should be executed using test=True to ensure that the desired behavior is preserved.

In config files

The first way to enable aggregation is with a configuration option in either the master or minion configuration files. Salt will invoke mod_aggregate the first time it encounters a state module that has aggregate support.

If this option is set in the master config it will apply to all state runs on all minions; if set in the minion config it will only apply to that minion.

Enable for all states:

state_aggregate: True

Enable for only specific state modules:

state_aggregate:
  - pkg
In states

The second way to enable aggregation is with the state-level aggregate keyword. In this configuration, Salt will invoke the mod_aggregate function the first time it encounters this keyword. Any additional occurrences of the keyword will be ignored as the aggregation has already taken place.

The following example will trigger mod_aggregate when the lamp_stack state is processed, resulting in a single call to the underlying package manager.

lamp_stack:
  pkg.installed:
    - pkgs:
      - php
      - mysql-client
    - aggregate: True

memcached:
  pkg.installed:
    - name: memcached
Adding mod_aggregate to a State Module

Adding a mod_aggregate routine to an existing state module only requires adding an additional function to the state module called mod_aggregate.

The mod_aggregate function just needs to accept three parameters and return the low data to use. Since mod_aggregate works at the state runtime level, it does need to manipulate low data.

The three parameters are low, chunks, and running. The low option is the low data for the state execution which is about to be called. The chunks argument is the list of all of the low data dictionaries which are being executed by the runtime, and the running dictionary is the return data from all of the state executions which have already been executed.

This example, simplified from the pkg state, shows how to create mod_aggregate functions:

def mod_aggregate(low, chunks, running):
    '''
    The mod_aggregate function which looks up all packages in the available
    low chunks and merges them into a single pkgs ref in the present low data
    '''
    pkgs = []
    # What functions should we aggregate?
    agg_enabled = [
            'installed',
            'latest',
            'removed',
            'purged',
            ]
    # The `low` data is just a dict with the state, function (fun) and
    # arguments passed in from the sls
    if low.get('fun') not in agg_enabled:
        return low
    # Now look into what other things are set to execute
    for chunk in chunks:
        # The state runtime uses "tags" to track completed jobs; the
        # tag format, with its _|- delimiters, may look familiar
        tag = salt.utils.gen_state_tag(chunk)
        if tag in running:
            # Already ran the pkg state, skip aggregation
            continue
        if chunk.get('state') == 'pkg':
            if '__agg__' in chunk:
                continue
            # Check for the same function
            if chunk.get('fun') != low.get('fun'):
                continue
            # Pull out the pkg names!
            if 'pkgs' in chunk:
                pkgs.extend(chunk['pkgs'])
                chunk['__agg__'] = True
            elif 'name' in chunk:
                pkgs.append(chunk['name'])
                chunk['__agg__'] = True
    if pkgs:
        if 'pkgs' in low:
            low['pkgs'].extend(pkgs)
        else:
            low['pkgs'] = pkgs
    # The low has been modified and needs to be returned to the state
    # runtime for execution
    return low

Altering States

Note

This documentation has been moved here.

File State Backups

In 0.10.2 a new feature was added for backing up files that are replaced by the file.managed and file.recurse states. The new feature is called the backup mode. Setting the backup mode is easy, and it can be done in a number of places.

The backup_mode can be set in the minion config file:

backup_mode: minion

Or it can be set for each file:

/etc/ssh/sshd_config:
  file.managed:
    - source: salt://ssh/sshd_config
    - backup: minion
Backed-up Files

The files will be saved in the minion cachedir under the directory named file_backup. The files are stored at their original path relative to the root filesystem, with a timestamp appended to the filename. This should make them easy to browse.

Interacting with Backups

Starting with version 0.17.0, it will be possible to list, restore, and delete previously-created backups.

Listing

The backups for a given file can be listed using file.list_backups:

# salt foo.bar.com file.list_backups /tmp/foo.txt
foo.bar.com:
    ----------
    0:
        ----------
        Backup Time:
            Sat Jul 27 2013 17:48:41.738027
        Location:
            /var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:41_738027_2013
        Size:
            13
    1:
        ----------
        Backup Time:
            Sat Jul 27 2013 17:48:28.369804
        Location:
            /var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:28_369804_2013
        Size:
            35
Restoring

Restoring is easy using file.restore_backup; just pass the path and the numeric id found with file.list_backups:

# salt foo.bar.com file.restore_backup /tmp/foo.txt 1
foo.bar.com:
    ----------
    comment:
        Successfully restored /var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:28_369804_2013 to /tmp/foo.txt
    result:
        True

The existing file will be backed up, just in case, as can be seen if file.list_backups is run again:

# salt foo.bar.com file.list_backups /tmp/foo.txt
foo.bar.com:
    ----------
    0:
        ----------
        Backup Time:
            Sat Jul 27 2013 18:00:19.822550
        Location:
            /var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_18:00:19_822550_2013
        Size:
            53
    1:
        ----------
        Backup Time:
            Sat Jul 27 2013 17:48:41.738027
        Location:
            /var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:41_738027_2013
        Size:
            13
    2:
        ----------
        Backup Time:
            Sat Jul 27 2013 17:48:28.369804
        Location:
            /var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:28_369804_2013
        Size:
            35

Note

Since no state is being run, restoring a file will not trigger any watches for the file. So, if you are restoring a config file for a service, it will likely still be necessary to run a service.restart.

Deleting

Deleting backups can be done using file.delete_backup:

# salt foo.bar.com file.delete_backup /tmp/foo.txt 0
foo.bar.com:
    ----------
    comment:
        Successfully removed /var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_18:00:19_822550_2013
    result:
        True

Understanding State Compiler Ordering

Note

This tutorial is an intermediate level tutorial. Some basic understanding of the state system and writing Salt Formulas is assumed.

Salt's state system is built to deliver all of the power of configuration management systems without sacrificing simplicity. This tutorial is made to help users understand in detail just how the order is defined for state executions in Salt.

This tutorial is written to represent the behavior of Salt as of version 0.17.0.

Compiler Basics

To understand ordering in depth, some very basic knowledge about the state compiler is very helpful. No need to worry though; this is very high level!

High Data and Low Data

When defining Salt Formulas in YAML, the data that is being represented is referred to by the compiler as High Data. When the data is initially loaded into the compiler it is a single large Python dictionary. This dictionary can be viewed raw by running:

salt '*' state.show_highstate

This "High Data" structure is then compiled down to "Low Data". The Low Data is what is matched up to create individual executions in Salt's configuration management system. The low data is an ordered list of single state calls to execute. Once the low data is compiled the evaluation order can be seen.

The low data can be viewed by running:

salt '*' state.show_lowstate

Note

The state execution module contains MANY functions for evaluating the state system and is well worth a read! These routines can be very useful when debugging states or to help deepen one's understanding of Salt's state system.

As an example, a state written thusly:

apache:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - watch:
      - file: apache_conf
      - pkg: apache

apache_conf:
  file.managed:
    - name: /etc/httpd/conf.d/httpd.conf
    - source: salt://apache/httpd.conf

Will have High Data which looks like this when represented in JSON:

{
    "apache": {
        "pkg": [
            {
                "name": "httpd"
            },
            "installed",
            {
                "order": 10000
            }
        ],
        "service": [
            {
                "name": "httpd"
            },
            {
                "watch": [
                    {
                        "file": "apache_conf"
                    },
                    {
                        "pkg": "apache"
                    }
                ]
            },
            "running",
            {
                "order": 10001
            }
        ],
        "__sls__": "blah",
        "__env__": "base"
    },
    "apache_conf": {
        "file": [
            {
                "name": "/etc/httpd/conf.d/httpd.conf"
            },
            {
                "source": "salt://apache/httpd.conf"
            },
            "managed",
            {
                "order": 10002
            }
        ],
        "__sls__": "blah",
        "__env__": "base"
    }
}

The subsequent Low Data will look like this:

[
    {
        "name": "httpd",
        "state": "pkg",
        "__id__": "apache",
        "fun": "installed",
        "__env__": "base",
        "__sls__": "blah",
        "order": 10000
    },
    {
        "name": "httpd",
        "watch": [
            {
                "file": "apache_conf"
            },
            {
                "pkg": "apache"
            }
        ],
        "state": "service",
        "__id__": "apache",
        "fun": "running",
        "__env__": "base",
        "__sls__": "blah",
        "order": 10001
    },
    {
        "name": "/etc/httpd/conf.d/httpd.conf",
        "source": "salt://apache/httpd.conf",
        "state": "file",
        "__id__": "apache_conf",
        "fun": "managed",
        "__env__": "base",
        "__sls__": "blah",
        "order": 10002
    }
]

This tutorial discusses the Low Data evaluation and the state runtime.

Ordering Layers

Salt defines two ordering interfaces which are evaluated in the state runtime; these orders are defined in a number of passes.

Definition Order

Note

The Definition Order system can be disabled by turning the option state_auto_order to False in the master configuration file.

The top level of ordering is the Definition Order. The Definition Order is the order in which states are defined in Salt formulas. This is very straightforward in basic states which do not contain include statements or a top file, as the states are simply ordered from the top of the file. The include system, however, brings in some simple rules for how the Definition Order is defined.

Looking back at the "Low Data" and "High Data" shown above, the order key has been transparently added to the data to enable the Definition Order.

The Include Statement

Basically, if there is an include statement in a formula, then the formulas which are included will be run BEFORE the contents of the formula which is including them. Also, the include statement is a list, so they will be loaded in the order in which they are included.

In the following case:

foo.sls

include:
  - bar
  - baz

bar.sls

include:
  - quo

baz.sls

include:
  - qux

In the above case if state.sls foo were called then the formulas will be loaded in the following order:

  1. quo
  2. bar
  3. qux
  4. baz
  5. foo
The order Flag

The Definition Order happens transparently in the background, but the ordering can be explicitly overridden using the order flag in states:

apache:
  pkg.installed:
    - name: httpd
    - order: 1

This order flag will override the Definition Order; this makes it very simple to create states that are always executed first, last, or in specific stages. A great example is defining a number of package repositories that need to be set up before anything else, or final checks that need to be run at the end of a state run by using order: last or order: -1.

When the order flag is explicitly set the Definition Order system will omit setting an order for that state and directly use the order flag defined.
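
As a sketch of the order: last idiom mentioned above (the state ID and command are illustrative, not taken from official states):

final_check:
  cmd.run:
    - name: echo "state run complete"
    - order: last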

Lexicographical Fall-back

Salt states were written to ALWAYS execute in the same order. Before the introduction of Definition Order in version 0.17.0, everything was ordered lexicographically according to the name of the state, then the function, then the ID.

This is the way Salt has always ensured that states run in the same order regardless of where they are deployed; the addition of the Definition Order method merely makes this finite ordering easier to follow.

The lexicographical ordering is still applied, but it only has an effect when two order statements collide. This means that if multiple states are assigned the same order number, they will fall back to lexicographical ordering to ensure that every execution still happens in a finite order.

Note

If running with state_auto_order: False the order key is not set automatically, since the Lexicographical order can be derived from other keys.

Requisite Ordering

Salt states are fully declarative, in that they are written to declare the state in which a system should be. This means that components can require that other components have been set up successfully. Unlike the other ordering systems, the Requisite system in Salt is evaluated at runtime.

The requisite system is also built to ensure that the ordering of execution never changes, but is always the same for a given set of states. This is accomplished by using a runtime that processes states in a completely predictable order instead of using an event loop based system like other declarative configuration management systems.

Runtime Requisite Evaluation

The requisite system is evaluated as the components are found, and the requisites are always evaluated in the same order. An example follows this explanation, since the raw description of the linear dependency evaluation sequence may be a little dizzying at first.

The "Low Data" is an ordered list or dictionaries, the state runtime evaluates each dictionary in the order in which they are arranged in the list. When evaluating a single dictionary it is checked for requisites, requisites are evaluated in order, require then watch then prereq.

Note

If using requisite in statements like require_in and watch_in these will be compiled down to require and watch statements before runtime evaluation.

Each requisite contains an ordered list of requisite references; these are looked up in the list of dictionaries and then executed. Once all requisites have been evaluated and executed, the requiring state can safely be run (or not run, if its requisites have not been met).

This means that the requisites are always evaluated in the same order, again preserving one of the core design principles of Salt's state system: that execution is always finite.

Simple Runtime Evaluation Example

Given the above "Low Data" the states will be evaluated in the following order:

  1. The pkg.installed is executed ensuring that the apache package is installed, it contains no requisites and is therefore the first defined state to execute.
  2. The service.running state is evaluated but NOT executed, a watch requisite is found, therefore they are read in order, the runtime first checks for the file, sees that it has not been executed and calls for the file state to be evaluated.
  3. The file state is evaluated AND executed, since it, like the pkg state does not contain any requisites.
  4. The evaluation of the service state continues, it next checks the pkg requisite and sees that it is met, with all requisites met the service state is now executed.
Best Practice

The best practice in Salt is to choose a method and stick with it. Official states are written using requisites for all associations, since requisites create clean, traceable dependency trails and make for the most portable formulas. To accomplish something similar to how classical imperative systems function, all requisites can be omitted and the failhard option set to True in the master configuration; this will stop all state runs at the first instance of a failure.

In the end, using requisites creates tight, fine-grained states; omitting requisites produces full-sequence runs which, while slightly easier to write, give much less control over the executions.

Extending External SLS Data

Sometimes a state defined in one SLS file will need to be modified from a separate SLS file. A good example of this is when an argument needs to be overwritten or when a service needs to watch an additional state.

The Extend Declaration

The standard way to extend is via the extend declaration. The extend declaration is a top level declaration like include and encapsulates ID declaration data included from other SLS files. A standard extend looks like this:

include:
  - http
  - ssh

extend:
  apache:
    file:
      - name: /etc/httpd/conf/httpd.conf
      - source: salt://http/httpd2.conf
  ssh-server:
    service:
      - watch:
        - file: /etc/ssh/banner

/etc/ssh/banner:
  file.managed:
    - source: salt://ssh/banner

A few critical things happen here. First, the SLS files that are going to be extended are included; then the extend declaration is defined. Under the extend declaration two IDs are extended: the apache ID's file state is overwritten with a new name and source, and the ssh-server ID is extended to watch the banner file in addition to anything it is already watching.

Extend is a Top Level Declaration

This means that extend can only be called once in an SLS file; if it is used twice, only one of the extend blocks will be read. So this is WRONG:

include:
  - http
  - ssh

extend:
  apache:
    file:
      - name: /etc/httpd/conf/httpd.conf
      - source: salt://http/httpd2.conf
# Second extend will overwrite the first!! Only make one
extend:
  ssh-server:
    service:
      - watch:
        - file: /etc/ssh/banner
The Requisite "in" Statement

Since one of the most common things to do when extending another SLS is to add states for a service to watch, or anything for a watcher to watch, the requisite in statement was added in 0.9.8 to make extending the watch and require lists easier. The ssh-server extend statement above could be more cleanly defined like so:

include:
  - ssh

/etc/ssh/banner:
  file.managed:
    - source: salt://ssh/banner
    - watch_in:
      - service: ssh-server
Rules to Extend By

There are a few rules to remember when extending states:

  1. Always include the SLS being extended with an include declaration
  2. Requisites (watch and require) are appended to; everything else is overwritten
  3. extend is a top level declaration; like an ID declaration, it cannot be declared twice in a single SLS
  4. Many IDs can be extended under the extend declaration

Failhard Global Option

Normally, when a state fails Salt continues to execute the remainder of the defined states and will only refuse to execute states that require the failed state.

But situations may exist where you would want all state execution to stop if a single state execution fails. The capability to do this is called failing hard.

State Level Failhard

A single state can have failhard set; this means that if this individual state fails, all state execution will immediately stop. This is a great thing to do if there is a state that sets up a critical config file, and setting a require for each state that reads the config would be cumbersome. A good example of this would be setting up a package manager early on:

/etc/yum.repos.d/company.repo:
  file.managed:
    - source: salt://company/yumrepo.conf
    - user: root
    - group: root
    - mode: 644
    - order: 1
    - failhard: True

In this situation, the yum repo is going to be configured before other states, and if it fails to lay down the config file, then no other states will be executed.

Global Failhard

It may be desired to have failhard applied to every state that is executed. If this is the case, failhard can be set in the master configuration file; this will result in failing hard when any minion gathering states from the master has a state fail.

This is NOT the default behavior, normally Salt will only fail states that require a failed state.

Using the global failhard is generally not recommended, since it can result in states not being executed or even checked. It can also be confusing to see states failhard if an admin is not actively aware that the failhard has been set.

To use the global failhard set failhard: True in the master configuration file.
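
That is, the master configuration file would contain:

failhard: True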

Global State Arguments

Note

This documentation has been moved here.

Highstate data structure definitions

The Salt State Tree

A state tree is a collection of SLS files that live under the directory specified in file_roots. A state tree can be organized into SLS modules.

Top file

The main state file that instructs minions what environment and modules to use during state execution.

Configurable via state_top.
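
A minimal sketch of a top file, applying the edit.vim module (referenced in the include example below) to all minions in the base environment:

base:
  '*':
    - edit.vim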

Include declaration

Defines a list of Module reference strings to include in this SLS.

Occurs only in the top level of the highstate structure.

Example:

include:
  - edit.vim
  - http.server
Module reference

The name of a SLS module defined by a separate SLS file and residing on the Salt Master. A module named edit.vim is a reference to the SLS file salt://edit/vim.sls.

ID declaration

Defines an individual highstate component. Always references a value of a dictionary containing keys referencing State declaration and Requisite declaration. Can be overridden by a Name declaration or a Names declaration.

Occurs on the top level or under the Extend declaration.

Must be unique across the entire state tree. If the same ID declaration is used twice, only the first one matched will be used. All subsequent ID declarations with the same name will be ignored.

Note

Naming gotchas

In Salt versions earlier than 0.9.7, ID declarations containing dots would result in unpredictable highstate output.

Extend declaration

Extends a Name declaration from an included SLS module. The keys of the extend declaration always refer to existing ID declarations which have been defined in included SLS modules.

Occurs only in the top level and defines a dictionary.

States cannot be extended more than once in a single state run.

Extend declarations are useful for adding-to or overriding parts of a State declaration that is defined in another SLS file. In the following contrived example, the shown mywebsite.sls file is including and extending the apache.sls module in order to add a watch declaration that will restart Apache whenever the Apache configuration file, mywebsite, changes.

include:
  - apache

extend:
  apache:
    service:
      - watch:
        - file: mywebsite

mywebsite:
  file.managed:
    - name: /var/www/mysite

See also

watch_in and require_in

Sometimes it is more convenient to use the watch_in or require_in syntax instead of extending another SLS file.

State Requisites

State declaration

A list which contains one string defining the Function declaration and any number of Function arg declaration dictionaries.

Can, optionally, contain a number of additional components like the name override components — name and names. Can also contain requisite declarations.

Occurs under an ID declaration.

Requisite declaration

A list containing requisite references.

Used to build the action dependency tree. While Salt states are made to execute in a deterministic order, this order is managed by requiring and watching other Salt states.

Occurs as a list component under a State declaration or as a key under an ID declaration.

Requisite reference

A single key dictionary. The key is the name of the referenced State declaration and the value is the ID of the referenced ID declaration.

Occurs as a single index in a Requisite declaration list.
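
For example, in the following sketch the require list is a Requisite declaration, and - pkg: httpd is a single Requisite reference: the key names the referenced state (pkg) and the value names the referenced ID declaration (httpd):

httpd:
  pkg.installed: []
  service.running:
    - require:
      - pkg: httpd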

Function declaration

The name of the function to call within the state. A state declaration can contain only a single function declaration.

For example, the following state declaration calls the installed function in the pkg state module:

httpd:
  pkg.installed: []

The function can be declared inline with the state as a shortcut. The actual data structure is compiled to this form:

httpd:
  pkg:
    - installed

Where the function is a string in the body of the state declaration. Technically, when the function is declared in dot notation the compiler converts it to a string in the state declaration list. Note that using the first example more than once in an ID declaration is invalid YAML.

INVALID:

httpd:
  pkg.installed
  service.running

When passing a function without arguments alongside another state declaration within a single ID declaration, the long or "standard" format needs to be used, since otherwise it does not represent a valid data structure.

VALID:

httpd:
  pkg.installed: []
  service.running: []

Occurs as the only index in the State declaration list.

Function arg declaration

A single key dictionary referencing a Python type which is to be passed to the named Function declaration as a parameter. The type must be the data type expected by the function.

Occurs under a Function declaration.

For example in the following state declaration user, group, and mode are passed as arguments to the managed function in the file state module:

/etc/http/conf/http.conf:
  file.managed:
    - user: root
    - group: root
    - mode: 644
Name declaration

Overrides the name argument of a State declaration. If name is not specified the ID declaration satisfies the name argument.

The name is always a single key dictionary referencing a string.

Overriding name is useful for a variety of scenarios.

For example, avoiding clashing ID declarations. The following two state declarations cannot both have /etc/motd as the ID declaration:

motd_perms:
  file.managed:
    - name: /etc/motd
    - mode: 644

motd_quote:
  file.append:
    - name: /etc/motd
    - text: "Of all smells, bread; of all tastes, salt."

Another common reason to override name is if the ID declaration is long and needs to be referenced in multiple places. In the example below it is much easier to specify mywebsite than to specify /etc/apache2/sites-available/mywebsite.com multiple times:

mywebsite:
  file.managed:
    - name: /etc/apache2/sites-available/mywebsite.com
    - source: salt://mywebsite.com

a2ensite mywebsite.com:
  cmd.wait:
    - unless: test -L /etc/apache2/sites-enabled/mywebsite.com
    - watch:
      - file: mywebsite

apache2:
  service.running:
    - watch:
      - file: mywebsite

Names declaration

Expands the contents of the containing State declaration into multiple state declarations, each with its own name.

For example, given the following state declaration:

python-pkgs:
  pkg.installed:
    - names:
      - python-django
      - python-crypto
      - python-yaml

Once converted into the lowstate data structure the above state declaration will be expanded into the following three state declarations:

python-django:
  pkg.installed

python-crypto:
  pkg.installed

python-yaml:
  pkg.installed

Other values can be overridden during the expansion by providing an additional dictionary level.

New in version 2014.7.0.

ius:
  pkgrepo.managed:
    - humanname: IUS Community Packages for Enterprise Linux 6 - $basearch
    - gpgcheck: 1
    - baseurl: http://mirror.rackspace.com/ius/stable/CentOS/6/$basearch
    - gpgkey: http://dl.iuscommunity.org/pub/ius/IUS-COMMUNITY-GPG-KEY
    - names:
        - ius
        - ius-devel:
            - baseurl: http://mirror.rackspace.com/ius/development/CentOS/6/$basearch

Large example

Here is the layout in YAML using the names of the high data structure components.

<Include Declaration>:
  - <Module Reference>
  - <Module Reference>

<Extend Declaration>:
  <ID Declaration>:
    [<overrides>]


# standard declaration

<ID Declaration>:
  <State Module>:
    - <Function>
    - <Function Arg>
    - <Function Arg>
    - <Function Arg>
    - <Name>: <name>
    - <Requisite Declaration>:
      - <Requisite Reference>
      - <Requisite Reference>


# inline function and names

<ID Declaration>:
  <State Module>.<Function>:
    - <Function Arg>
    - <Function Arg>
    - <Function Arg>
    - <Names>:
      - <name>
      - <name>
      - <name>
    - <Requisite Declaration>:
      - <Requisite Reference>
      - <Requisite Reference>


# multiple states for single id

<ID Declaration>:
  <State Module>:
    - <Function>
    - <Function Arg>
    - <Name>: <name>
    - <Requisite Declaration>:
      - <Requisite Reference>
  <State Module>:
    - <Function>
    - <Function Arg>
    - <Names>:
      - <name>
      - <name>
    - <Requisite Declaration>:
      - <Requisite Reference>

Include and Exclude

Salt sls files can include other sls files and exclude sls files that have been otherwise included. This allows for an sls file to easily extend or manipulate other sls files.

Include

When other sls files are included, everything defined in the included sls file will be added to the state run. When including, define a list of sls formulas to include:

include:
  - http
  - libvirt

The include statement will include sls formulas from the same environment that the including sls formula is in. However, the environment can be explicitly defined in the configuration to override the running environment. Therefore, if an sls formula needs to be included from an external environment named "dev", the following syntax is used:

include:
  - dev: http

NOTE: include does not simply inject the states where you place it in the sls file. If you need to guarantee order of execution, consider using requisites.

Do not use dots in SLS file names

The initial implementation of top.sls and the Include declaration followed the Python import model, where a slash is represented as a period. This means that an SLS file with a period in its name (besides the suffix period) cannot be referenced. For example, webserver_1.0.sls is not referenceable, because webserver_1.0 would refer to the directory/file webserver_1/0.sls

Relative Include

In Salt 0.16.0, the capability to include sls formulas relative to the running sls formula was added. Simply precede the formula name with a .:

include:
  - .virt
  - .virt.hyper

Exclude

The exclude statement, added in Salt 0.10.3, allows an sls to hard exclude another sls file or a specific id. The component is excluded after the high data has been compiled, so nothing should be able to override an exclude.

Since the exclude can remove an id or an sls, the type of component to exclude needs to be defined. An exclude statement that verifies that the running highstate does not contain the http sls and the /etc/vimrc id would look like this:

exclude:
  - sls: http
  - id: /etc/vimrc

State System Layers

The Salt state system is comprised of multiple layers. While using Salt does not require an understanding of the state layers, a deeper understanding of how Salt compiles and manages states can be very beneficial.

Function Call

The lowest layer of functionality in the state system is the direct state function call. State executions are executions of single state functions at the core. These individual functions are defined in state modules and can be called directly via the state.single command.

salt '*' state.single pkg.installed name='vim'

Low Chunk

The low chunk is the bottom of the Salt state compiler. This is a data representation of a single function call. The low chunk is sent to the state caller and used to execute a single state function.

A single low chunk can be executed manually via the state.low command.

salt '*' state.low '{name: vim, state: pkg, fun: installed}'

The passed data reflects what the state execution system gets after compiling the data down from sls formulas.

Low State

The Low State layer is the list of low chunks "evaluated" in order. To see what the low state looks like for a highstate, run:

salt '*' state.show_lowstate

This will display the raw lowstate in the order which each low chunk will be evaluated. The order of evaluation is not necessarily the order of execution, since requisites are evaluated at runtime. Requisite execution and evaluation is finite; this means that the order of execution can be ascertained with 100% certainty based on the order of the low state.

High Data

High data is the data structure represented in YAML via SLS files. The High data structure is created by merging the data components rendered inside sls files (or other render systems). The High data can be easily viewed by executing the state.show_highstate or state.show_sls functions. Since this data is a somewhat complex data structure, it may be easier to read using the json, yaml, or pprint outputters:

salt '*' state.show_highstate --out yaml
salt '*' state.show_sls edit.vim --out pprint

SLS

Above "High Data", the logical layers are no longer technically required to be executed, or to be executed in a hierarchy. This means that how the High data is generated is optional and very flexible. The SLS layer allows for many mechanisms to be used to render sls data from files or to use the fileserver backend to generate sls and file data from external systems.

The SLS layer can be called directly to execute individual sls formulas.

Note

SLS formulas have historically been called "SLS files". This is because a single SLS was once constituted entirely in a single file. The term "SLS formula" better expresses how a compartmentalized SLS can be expressed in a much more dynamic way, by combining pillar and other sources, and how the SLS can be dynamically generated.

To call a single SLS formula named edit.vim, execute state.sls:

salt '*' state.sls edit.vim

HighState

Calling SLS directly logically assigns what states should be executed from the context of the calling minion. The Highstate layer is used to allow for full contextual assignment of what is executed where to be tied to groups of, or individual, minions entirely from the master. This means that the environment of a minion, and all associated execution data pertinent to said minion, can be assigned from the master without needing to execute or configure anything on the target minion. This also means that the minion can independently retrieve information about its complete configuration from the master.

To execute the High State call state.highstate:

salt '*' state.highstate

OverState

The overstate layer expresses the highest functional layer of Salt's automated logic systems. The Overstate allows for stateful and functional orchestration of routines from the master. The overstate defines in data execution stages which minions should execute states, or functions, and in what order using requisite logic.

The Orchestrate Runner

Note

This documentation has been moved here.

Ordering States

The way in which configuration management systems are executed is a hotly debated topic in the configuration management world. Two major philosophies exist on the subject, to either execute in an imperative fashion where things are executed in the order in which they are defined, or in a declarative fashion where dependencies need to be mapped between objects.

Imperative ordering is finite and generally considered easier to write, while declarative ordering is much more powerful and flexible, but generally considered more difficult to create.

Salt has been created to get the best of both worlds. States are evaluated in a finite order, which guarantees that states are always executed in the same order, and the states runtime is declarative, making Salt fully aware of dependencies via the requisite system.

State Auto Ordering

Salt always executes states in a finite manner, meaning that they will always execute in the same order regardless of the system that is executing them. In Salt 0.17.0, the state_auto_order option was added. This option causes states to be evaluated in the order in which they are defined in sls files.

The evaluation order makes it easy to know what order the states will be executed in, but it is important to note that the requisite system will override the ordering defined in the files, and the order option described below will also override the order in which states are defined in sls files.

If the classic ordering is preferred (lexicographic), then set state_auto_order to False in the master configuration file.

Requisite Statements

Note

This document represents behavior exhibited by Salt requisites as of version 0.9.7 of Salt.

Often when setting up states any single action will require or depend on another action. Salt allows for the building of relationships between states with requisite statements. A requisite statement ensures that the named state is evaluated before the state requiring it. There are three types of requisite statements in Salt: require, watch, and prereq.

These requisite statements are applied to a specific state declaration:

httpd:
  pkg.installed: []
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: salt://httpd/httpd.conf
    - require:
      - pkg: httpd

In this example, the require requisite is used to declare that the file /etc/httpd/conf/httpd.conf should only be set up if the pkg state executes successfully.

The requisite system works by finding the states that are required and executing them before the state that requires them. Then the required states can be evaluated to see if they have executed correctly.

Require statements can refer to any state defined in Salt. The basic examples are pkg, service, and file, but any used state can be referenced.

In addition to state declarations such as pkg, file, etc., sls type requisites are also recognized, and essentially allow 'chaining' of states. This provides a mechanism to ensure the proper sequence for complex state formulas, especially when the discrete states are split or grouped into separate sls files:

include:
  - network

httpd:
  pkg.installed: []
  service.running:
    - require:
      - pkg: httpd
      - sls: network

In this example, the httpd service running state will not be applied (i.e., the httpd service will not be started) unless both the httpd package is installed AND the network state is satisfied.

Note

Requisite matching

Requisites match on both the ID Declaration and the name parameter. Therefore, if using the pkgs or sources argument to install a list of packages in a pkg state, it's important to note that it is impossible to match an individual package in the list, since all packages are installed as a single state.

Multiple Requisites

The requisite statement is passed as a list, allowing for the easy addition of more requisites. Both requisite types can also be separately declared:

httpd:
  pkg.installed: []
  service.running:
    - enable: True
    - watch:
      - file: /etc/httpd/conf/httpd.conf
    - require:
      - pkg: httpd
      - user: httpd
      - group: httpd
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: salt://httpd/httpd.conf
    - require:
      - pkg: httpd
  user.present: []
  group.present: []

In this example, the httpd service is only going to be started if the package, user, group, and file are executed successfully.

Requisite Documentation

For detailed information on each of the individual requisites, please look here.

The Order Option

Before using the order option, remember that the majority of state ordering should be done with a Requisite declaration, and that a requisite declaration will override an order option; therefore, a state with an order option should not require, or be required by, other states.

The order option is used by adding an order number to a state declaration with the option order:

vim:
  pkg.installed:
    - order: 1

Setting the order option to 1 ensures that the vim package will be installed in tandem with any other state declaration set to order 1.

Any state declared without an order option will be executed after all states with order options are executed.

But this construct can only handle ordering states from the beginning. Certain circumstances will present a situation where it is desirable to send a state to the end of the line. To do this, set the order to last:

vim:
  pkg.installed:
    - order: last

OverState System

Note

This documentation has been moved here.

State Providers

New in version 0.9.8.

Salt predetermines what modules should be mapped to what uses based on the properties of a system. These determinations are generally made for modules that provide things like package and service management.

Sometimes in states, it may be necessary to use an alternative module to provide the needed functionality. For instance, an older Arch Linux system may not be running systemd, so instead of using the systemd service module, you can revert to the default service module:

httpd:
  service.running:
    - enable: True
    - provider: service

In this instance, the basic service module (which manages sysvinit-based services) will replace the systemd module which is used by default on Arch Linux.

However, if it is necessary to make this override for most or every service, it is better to just override the provider in the minion config file, as described in the section below.

Setting a Provider in the Minion Config File

Sometimes, when running Salt on custom Linux spins, or on distributions that are derived from other distributions, Salt does not successfully detect providers. The providers which are most likely to be affected by this are:

  • pkg
  • service
  • user
  • group

When something like this happens, rather than specifying the provider manually in each state, it is easier to use the providers parameter in the minion config file to set the provider.
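
For example, a minimal sketch of the providers parameter in the minion config file (the chosen providers here are illustrative):

providers:
  service: systemd
  pkg: yumpkg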

If you end up needing to override a provider because it was not detected, please let us know! File an issue on the issue tracker, and provide the output from the grains.items function, taking care to sanitize any sensitive information.

Below are tables that should help with deciding which provider to use if one needs to be overridden.

Provider: pkg

Execution Module   Used for
apt                Debian/Ubuntu-based distros which use apt-get(8) for package management
brew               Mac OS software management using Homebrew
ebuild             Gentoo-based systems (utilizes the portage python module as well as emerge(1))
freebsdpkg         FreeBSD-based OSes using pkg_add(1)
openbsdpkg         OpenBSD-based OSes using pkg_add(1)
pacman             Arch Linux-based distros using pacman(8)
pkgin              NetBSD-based OSes using pkgin(1)
pkgng              FreeBSD-based OSes using pkg(8)
pkgutil            Solaris-based OSes using OpenCSW's pkgutil(1)
solarispkg         Solaris-based OSes using pkgadd(1M)
solarisips         Solaris-based OSes using IPS pkg(1)
win_pkg            Windows
yumpkg             RedHat-based distros and derivatives (wraps yum(8))
zypper             SUSE-based distros using zypper(8)

Provider: service

Execution Module   Used for
debian_service     Debian (non-systemd)
freebsdservice     FreeBSD-based OSes using service(8)
gentoo_service     Gentoo Linux using sysvinit and rc-update(8)
launchctl          Mac OS hosts using launchctl(1)
netbsdservice      NetBSD-based OSes
openbsdservice     OpenBSD-based OSes
rh_service         RedHat-based distros and derivatives using service(8) and chkconfig(8). Supports both pure sysvinit and mixed sysvinit/upstart systems.
service            Fallback which simply wraps sysvinit scripts
smf                Solaris-based OSes which use SMF
systemd            Linux distros which use systemd
upstart            Ubuntu-based distros using upstart
win_service        Windows

Provider: user

Execution Module   Used for
useradd            Linux, NetBSD, and OpenBSD systems using useradd(8), userdel(8), and usermod(8)
pw_user            FreeBSD-based OSes using pw(8)
solaris_user       Solaris-based OSes using useradd(1M), userdel(1M), and usermod(1M)
win_useradd        Windows

Provider: group

Execution Module   Used for
groupadd           Linux, NetBSD, and OpenBSD systems using groupadd(8), groupdel(8), and groupmod(8)
pw_group           FreeBSD-based OSes using pw(8)
solaris_group      Solaris-based OSes using groupadd(1M), groupdel(1M), and groupmod(1M)
win_groupadd       Windows

Arbitrary Module Redirects

The provider statement can also be used for more powerful means. Instead of overwriting or extending the module used for the named service, an arbitrary module can be used to provide certain functionality.

emacs:
  pkg.installed:
    - provider:
      - cmd: customcmd

In this example, the state is being instructed to use a custom module to invoke commands.

Arbitrary module redirects can be used to dramatically change the behavior of a given state.

Requisites and Other Global State Arguments

Fire Event Notifications

New in version Beryllium.

The fire_event option in a state will cause the minion to send an event to the Salt Master upon completion of that individual state.

The following example will cause the minion to send an event to the Salt Master with a tag of salt/state_result/20150505121517276431/dasalt/nano and the result of the state will be the data field of the event. Notice that the name of the state gets added to the tag.

nano_stuff:
  pkg.installed:
    - name: nano
    - fire_event: True

In the following example instead of setting fire_event to True, fire_event is set to an arbitrary string, which will cause the event to be sent with this tag: salt/state_result/20150505121725642845/dasalt/custom/tag/nano/finished

nano_stuff:
  pkg.installed:
    - name: nano
    - fire_event: custom/tag/nano/finished

Requisites

The Salt requisite system is used to create relationships between states. The core idea is that, when one state is somehow dependent on another, that inter-dependency can be easily defined.

Requisites come in two types: Direct requisites (such as require), and requisite_ins (such as require_in). The relationships are directional: a direct requisite requires something from another state. However, a requisite_in inserts a requisite into the targeted state pointing to the targeting state. The following example demonstrates a direct requisite:

vim:
  pkg.installed: []

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
    - require:
      - pkg: vim

In the example above, the file /etc/vimrc depends on the vim package.

Requisite_in statements are the opposite. Instead of saying "I depend on something", requisite_ins say "Someone depends on me":

vim:
  pkg.installed:
    - require_in:
      - file: /etc/vimrc

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc

So here, with a requisite_in, the same thing is accomplished as in the first example, but the other way around. The vim package is saying "/etc/vimrc depends on me". This will result in a require being inserted into the /etc/vimrc state which targets the vim state.

In the end, a single dependency map is created and everything is executed in a finite and predictable order.

Note

Requisite matching

Requisites match on both the ID Declaration and the name parameter. This means that, in the example above, the require_in requisite would also have been matched if the /etc/vimrc state was written as follows:

vimrc:
  file.managed:
    - name: /etc/vimrc
    - source: salt://edit/vimrc

Direct Requisite and Requisite_in types

There are several direct requisite statements that can be used in Salt:

  • require
  • watch
  • prereq
  • use
  • onchanges
  • onfail

Each direct requisite also has a corresponding requisite_in:

  • require_in
  • watch_in
  • prereq_in
  • use_in
  • onchanges_in
  • onfail_in

All of the requisites define specific relationships and always work with the dependency logic defined above.

require

The use of require demands that the dependent state executes before the depending state. The state containing the require requisite is defined as the depending state. The state specified in the require statement is defined as the dependent state. If the dependent state's execution succeeds, the depending state will then execute. If the dependent state's execution fails, the depending state will not execute. In the first example above, the file /etc/vimrc will only execute after the vim package is installed successfully.

Require an entire sls file

As of Salt 0.16.0, it is possible to require an entire sls file. Do this first by including the sls file and then setting a state to require the included sls file:

include:
  - foo

bar:
  pkg.installed:
    - require:
      - sls: foo

watch

watch statements are used to add additional behavior when there are changes in other states.

Note

If a state should only execute when another state has changes, and otherwise do nothing, the new onchanges requisite should be used instead of watch. watch is designed to add additional behavior when there are changes, but otherwise execute normally.

The state containing the watch requisite is defined as the watching state. The state specified in the watch statement is defined as the watched state. When the watched state executes, it will return a dictionary containing a key named "changes". Here are two examples of state return dictionaries, shown in json for clarity:

"local": {
    "file_|-/tmp/foo_|-/tmp/foo_|-directory": {
        "comment": "Directory /tmp/foo updated",
        "__run_num__": 0,
        "changes": {
            "user": "bar"
        },
        "name": "/tmp/foo",
        "result": true
    }
}

"local": {
    "pkgrepo_|-salt-minion_|-salt-minion_|-managed": {
        "comment": "Package repo 'salt-minion' already configured",
        "__run_num__": 0,
        "changes": {},
        "name": "salt-minion",
        "result": true
    }
}

If the "result" of the watched state is True, the watching state will execute normally. This part of watch mirrors the functionality of the require requisite. If the "result" of the watched state is False, the watching state will never run, nor will the watching state's mod_watch function execute.

However, if the "result" of the watched state is True, and the "changes" key contains a populated dictionary (changes occurred in the watched state), then the watch requisite can add additional behavior. This additional behavior is defined by the mod_watch function within the watching state module. If the mod_watch function exists in the watching state module, it will be called in addition to the normal watching state. The return data from the mod_watch function is what will be returned to the master in this case; the return data from the main watching function is discarded.

If the "changes" key contains an empty dictionary, the watch requisite acts exactly like the require requisite (the watching state will execute if "result" is True, and fail if "result" is False in the watched state).

Note

Not all state modules contain mod_watch. If mod_watch is absent from the watching state module, the watch requisite behaves exactly like a require requisite.

A good example of using watch is with a service.running state. When a service watches a state, then the service is reloaded/restarted when the watched state changes, in addition to Salt ensuring that the service is running.

ntpd:
  service.running:
    - watch:
      - file: /etc/ntp.conf
  file.managed:
    - name: /etc/ntp.conf
    - source: salt://ntp/files/ntp.conf

prereq

New in version 0.16.0.

prereq allows for actions to be taken based on the expected results of a state that has not yet been executed. The state containing the prereq requisite is defined as the pre-requiring state. The state specified in the prereq statement is defined as the pre-required state.

When a prereq requisite is evaluated, the pre-required state reports if it expects to have any changes. It does this by running the pre-required single state as a test-run by enabling test=True. This test-run will return a dictionary containing a key named "changes". (See the watch section above for examples of "changes" dictionaries.)

If the "changes" key contains a populated dictionary, it means that the pre-required state expects changes to occur when the state is actually executed, as opposed to the test-run. The pre-requiring state will now actually run. If the pre-requiring state executes successfully, the pre-required state will then execute. If the pre-requiring state fails, the pre-required state will not execute.

If the "changes" key contains an empty dictionary, this means that changes are not expected by the pre-required state. Neither the pre-required state nor the pre-requiring state will run.

The best way to define how prereq operates is displayed in the following practical example: When a service should be shut down because underlying code is going to change, the service should be off-line while the update occurs. In this example, graceful-down is the pre-requiring state and site-code is the pre-required state.

graceful-down:
  cmd.run:
    - name: service apache graceful
    - prereq:
      - file: site-code

site-code:
  file.recurse:
    - name: /opt/site_code
    - source: salt://site/code

In this case the apache server will only be shutdown if the site-code state expects to deploy fresh code via the file.recurse call. The site-code deployment will only be executed if the graceful-down run completes successfully.

onfail

New in version 2014.7.0.

The onfail requisite allows for reactions to happen strictly as a response to the failure of another state. This can be used in a number of ways, such as executing a second attempt to set up a service or begin to execute a separate thread of states because of a failure.

The onfail requisite is applied in the same way as require and watch:

primary_mount:
  mount.mounted:
    - name: /mnt/share
    - device: 10.0.0.45:/share
    - fstype: nfs

backup_mount:
  mount.mounted:
    - name: /mnt/share
    - device: 192.168.40.34:/share
    - fstype: nfs
    - onfail:
      - mount: primary_mount

onchanges

New in version 2014.7.0.

The onchanges requisite makes a state only apply if the required states generate changes, and if the watched state's "result" is True. This can be a useful way to execute a post hook after changing aspects of a system.
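
As a minimal sketch (the state IDs, paths, and script here are illustrative), the following command runs only when the managed file actually changes:

myservice-config:
  file.managed:
    - name: /etc/myservice/myservice.conf
    - source: salt://myservice/myservice.conf

regenerate-cache:
  cmd.run:
    - name: /usr/local/bin/regen_cache.sh
    - onchanges:
      - file: myservice-config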

use

The use requisite is used to inherit the arguments passed in another id declaration. This is useful when many files need to have the same defaults.

/etc/foo.conf:
  file.managed:
    - source: salt://foo.conf
    - template: jinja
    - mkdirs: True
    - user: apache
    - group: apache
    - mode: 755

/etc/bar.conf:
  file.managed:
    - source: salt://bar.conf
    - use:
      - file: /etc/foo.conf

The use statement was developed primarily for the networking states but can be used on any states in Salt. This makes sense for the networking state because it can define a long list of options that need to be applied to multiple network interfaces.

The use statement does not inherit the requisite arguments of the targeted state. This also means that a chain of use requisites will not inherit inherited options.

The _in versions of requisites

All of the requisites also have corresponding requisite_in versions, which do the reverse of their normal counterparts. The examples below all use require_in, but note that all of the _in requisites work the same way: they result in a normal requisite in the targeted state, which targets the state that defines the requisite_in. Thus, a require_in causes the target state to require the targeting state. Similarly, a watch_in causes the target state to watch the targeting state. This pattern continues for the rest of the requisites.

If a state declaration needs to be required by another state declaration then require_in can accommodate it. Therefore, these two sls files would be the same in the end:

Using require

httpd:
  pkg.installed: []
  service.running:
    - require:
      - pkg: httpd

Using require_in

httpd:
  pkg.installed:
    - require_in:
      - service: httpd
  service.running: []

The require_in statement is particularly useful when assigning a require in a separate sls file. For instance it may be common for httpd to require components used to set up PHP or mod_python, but the HTTP state does not need to be aware of the additional components that require it when it is set up:

http.sls

httpd:
  pkg.installed: []
  service.running:
    - require:
      - pkg: httpd

php.sls

include:
  - http

php:
  pkg.installed:
    - require_in:
      - service: httpd

mod_python.sls

include:
  - http

mod_python:
  pkg.installed:
    - require_in:
      - service: httpd

Now the httpd server will only start if php or mod_python are first verified to be installed, thus allowing a requisite to be defined "after the fact".

Altering States

The state altering system is used to make sure that states are evaluated exactly as the user expects. It can be used to double check that a state performed exactly how it was expected to, or to make 100% sure that a state only runs under certain conditions. The use of the unless or onlyif options helps make states even more stateful. The check_cmd option helps ensure that the result of a state is evaluated correctly.

Unless

New in version 2014.7.0.

The unless requisite specifies that a state should only run when any of the specified commands return False. The unless requisite operates as NOR and is useful in giving more granular control over when a state should execute.

NOTE: Under the hood unless calls cmd.retcode with python_shell=True. This means the commands referenced by unless will be parsed by a shell, so beware of side-effects as this shell will be run with the same privileges as the salt-minion.

vim:
  pkg.installed:
    - unless:
      - rpm -q vim-enhanced
      - ls /usr/bin/vim

In the example above, the state will only run if either the vim-enhanced package is not installed (returns False) or if /usr/bin/vim does not exist (returns False). The state will run if both commands return False.

However, the state will not run if both commands return True.

Unless checks are resolved for each name to which they are associated.

For example:

deploy_app:
  cmd.run:
    - names:
      - first_deploy_cmd
      - second_deploy_cmd
    - unless: some_check

In the above case, some_check will be run prior to _each_ name -- once for first_deploy_cmd and a second time for second_deploy_cmd.

Onlyif

New in version 2014.7.0.

onlyif is the opposite of unless. If all of the commands in onlyif return True, then the state is run. If any of the specified commands return False, the state will not run.

NOTE: Under the hood onlyif calls cmd.retcode with python_shell=True. This means the commands referenced by onlyif will be parsed by a shell, so beware of side-effects as this shell will be run with the same privileges as the salt-minion.

stop-volume:
  module.run:
    - name: glusterfs.stop_volume
    - m_name: work
    - onlyif:
      - gluster volume status work
    - order: 1

remove-volume:
  module.run:
    - name: glusterfs.delete
    - m_name: work
    - onlyif:
      - gluster volume info work
    - watch:
      - cmd: stop-volume

The above example ensures that the stop_volume and delete modules only run if the gluster commands return a 0 ret value.

Listen/Listen_in

New in version 2014.7.0.

listen and its counterpart listen_in trigger mod_wait functions for states when those states succeed and result in changes, similar to watch and its counterpart watch_in. Unlike watch and watch_in, listen and listen_in will not modify the order of states and can be used to ensure your states are executed in the order they are defined. All listen/listen_in actions will occur at the end of a state run, after all states have completed.

restart-apache2:
  service.running:
    - name: apache2
    - listen:
      - file: /etc/apache2/apache2.conf

configure-apache2:
  file.managed:
    - name: /etc/apache2/apache2.conf
    - source: salt://apache2/apache2.conf

This example will cause apache2 to be restarted when the apache2.conf file is changed, but the apache2 restart will happen at the end of the state run.

restart-apache2:
  service.running:
    - name: apache2

configure-apache2:
  file.managed:
    - name: /etc/apache2/apache2.conf
    - source: salt://apache2/apache2.conf
    - listen_in:
      - service: apache2

This example does the same as the above example, but puts the state argument on the file resource, rather than the service resource.

check_cmd

New in version 2014.7.0.

Check Command is used for determining that a state did or did not run as expected.

NOTE: Under the hood check_cmd calls cmd.retcode with python_shell=True. This means the commands referenced by check_cmd will be parsed by a shell, so beware of side-effects as this shell will be run with the same privileges as the salt-minion.

comment-repo:
  file.replace:
    - name: /etc/yum.repos.d/fedora.repo
    - pattern: ^enabled=0
    - repl: enabled=1
    - check_cmd:
      - grep 'enabled=0' /etc/yum.repos.d/fedora.repo && return 1 || return 0

This will attempt to do a replace on all enabled=0 in the .repo file, replacing them with enabled=1. The check_cmd is just a shell command. It greps for enabled=0 in the file; if it finds any, grep returns 0, which triggers the && portion of the command and returns a 1, causing check_cmd to mark the state as failed. If grep returns 1, meaning it didn't find any enabled=0, the || portion of the command runs, returning a 0 and declaring that the state succeeded.

Overriding Checks

There are two commands used for the above checks.

mod_run_check is used to check for onlyif and unless. If the goal is to override the global check for these two options, include a mod_run_check function in the relevant state module file under salt/states/.

mod_run_check_cmd is used to check for the check_cmd options. To override this one, include a mod_run_check_cmd in the states file for the state.

Startup States

Sometimes it may be desired that the salt minion execute a state run when it is started. This alleviates the need for the master to initiate a state run on a new minion and can make provisioning much easier.

As of Salt 0.10.3 the minion config reads options that allow for states to be executed at startup. The options are startup_states, sls_list, and top_file.

The startup_states option can be passed one of a number of arguments to define how to execute states. The available options are:

highstate
    Execute state.highstate

sls
    Read in the sls_list option and execute the named sls files

top
    Read in the top_file option and execute states based on that top file on the Salt Master

Examples:

Execute state.highstate when starting the minion:

startup_states: highstate

Execute the sls files edit.vim and hyper:

startup_states: sls

sls_list:
  - edit.vim
  - hyper
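
Execute states based on a specific top file (a sketch; the top file name here is illustrative):

startup_states: top

top_file: top.sls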

State Testing

Executing a Salt state run can potentially change many aspects of a system and it may be desirable to first see what a state run is going to change before applying the run.

Salt has a test interface to report on exactly what will be changed. This interface can be invoked on any of the major state run functions:

salt '*' state.highstate test=True
salt '*' state.sls test=True
salt '*' state.single test=True

A test run is invoked by adding the test=True option to the state run functions. The return information will show the states that would be applied in yellow, and the result is reported as None.

Default Test

If the value test is set to True in the minion configuration file then states will default to being executed in test mode. If this value is set then states can still be run by calling test=False:

salt '*' state.highstate test=False
salt '*' state.sls test=False
salt '*' state.single test=False

The Top File

The top file (top.sls) is used to map which SLS modules get loaded onto which minions via the state system. The top file creates a few general abstractions: first, it maps which nodes should pull from which environments; next, it defines which states the matched systems should draw from.

Environments

Environments allow conceptually organizing state tree directories. Environments can be made to be self-contained or state trees can be made to bleed through environments.

Note

Environments in Salt are very flexible. This section defines how the top file can be used to define what states from what environments are to be used for specific minions.

If the intent is to bind minions to specific environments, then the environment option can be set in the minion configuration file.

The environments in the top file correspond with the environments defined in the file_roots variable. In a simple, single-environment setup you only have the base environment, and therefore only one state tree. Here is a simple example of file_roots in the master configuration:

file_roots:
  base:
    - /srv/salt

This means that the top file will only have one environment to pull from, here is a simple, single environment top file:

base:
  '*':
    - core
    - edit

This also means that /srv/salt has a state tree. But if you want to use multiple environments, or partition the file server to serve more than just the state tree, then the file_roots option can be expanded:

file_roots:
  base:
    - /srv/salt/base
  dev:
    - /srv/salt/dev
  qa:
    - /srv/salt/qa
  prod:
    - /srv/salt/prod

Then our top file could reference the environments:

dev:
  'webserver*dev*':
    - webserver
  'db*dev*':
    - db
qa:
  'webserver*qa*':
    - webserver
  'db*qa*':
    - db
prod:
  'webserver*prod*':
    - webserver
  'db*prod*':
    - db

In this setup we have state trees in three of the four environments, and no state tree in the base environment. Notice that the targets for the minions specify environment data. In Salt the master determines who is in what environment, and many environments can be crossed together. For instance, a separate global state tree could be added to the base environment if it suits your deployment:

base:
  '*':
    - global
dev:
  'webserver*dev*':
    - webserver
  'db*dev*':
    - db
qa:
  'webserver*qa*':
    - webserver
  'db*qa*':
    - db
prod:
  'webserver*prod*':
    - webserver
  'db*prod*':
    - db

In this setup all systems will pull the global SLS from the base environment, as well as pull from their respective environments. If you assign only one SLS to a system, as in this example, a shorthand is also available:

base:
  '*': global
dev:
  'webserver*dev*': webserver
  'db*dev*':        db
qa:
  'webserver*qa*': webserver
  'db*qa*':        db
prod:
  'webserver*prod*': webserver
  'db*prod*':        db

Note

The top files from all defined environments will be compiled into a single top file for all states. Top files are environment agnostic.

Remember that, since everything is a file in Salt, the environments are primarily file server environments. This means that environments that have nothing to do with states can be defined and used to distribute other files.

A clean and recommended setup for multiple environments would look like this:

# Master file_roots configuration:
file_roots:
  base:
    - /srv/salt/base
  dev:
    - /srv/salt/dev
  qa:
    - /srv/salt/qa
  prod:
    - /srv/salt/prod

Then only place state trees in the dev, qa, and prod environments, leaving the base environment open for generic file transfers. Then the top.sls file would look something like this:

dev:
  'webserver*dev*':
    - webserver
  'db*dev*':
    - db
qa:
  'webserver*qa*':
    - webserver
  'db*qa*':
    - db
prod:
  'webserver*prod*':
    - webserver
  'db*prod*':
    - db

Other Ways of Targeting Minions

In addition to globs, minions can be specified in top files in a few other ways. Some common ones are compound matches and node groups.

Here is a slightly more complex top file example, showing the different types of matches you can perform:

base:
    '*':
        - ldap-client
        - networking
        - salt.minion

    'salt-master*':
        - salt.master

    '^(memcache|web).(qa|prod).loc$':
        - match: pcre
        - nagios.mon.web
        - apache.server

    'os:Ubuntu':
        - match: grain
        - repos.ubuntu

    'os:(RedHat|CentOS)':
        - match: grain_pcre
        - repos.epel

    'foo,bar,baz':
        - match: list
        - database

    'somekey:abc':
        - match: pillar
        - xyz

    'nag1* or G@role:monitoring':
        - match: compound
        - nagios.server

In this example top.sls, all minions get the ldap-client, networking, and salt.minion states. Any minion with an id matching the salt-master* glob will get the salt.master state. Any minion with ids matching the regular expression ^(memcache|web).(qa|prod).loc$ will get the nagios.mon.web and apache.server states. All Ubuntu minions will receive the repos.ubuntu state, while all RHEL and CentOS minions will receive the repos.epel state. The minions foo, bar, and baz will receive the database state. Any minion with a pillar named somekey, having a value of abc will receive the xyz state. Finally, minions with ids matching the nag1* glob or with a grain named role equal to monitoring will receive the nagios.server state.

How Top Files Are Compiled

Warning

There is currently a known issue with the topfile compilation. The below may not be completely valid until https://github.com/saltstack/salt/issues/12483#issuecomment-64181598 is closed.

As mentioned earlier, the top files in the different environments are compiled into a single set of data. The way in which this is done follows a few rules, which are important to understand when arranging top files in different environments. The examples below all assume that the file_roots are set as in the above multi-environment example.

  1. The base environment's top file is processed first. Any environment which is defined in the base top.sls as well as another environment's top file, will use the instance of the environment configured in base and ignore all other instances. In other words, the base top file is authoritative when defining environments. Therefore, in the example below, the dev section in /srv/salt/dev/top.sls would be completely ignored.

/srv/salt/base/top.sls:

base:
  '*':
    - common
dev:
  'webserver*dev*':
    - webserver
  'db*dev*':
    - db

/srv/salt/dev/top.sls:

dev:
  '10.10.100.0/24':
    - match: ipcidr
    - deployments.dev.site1
  '10.10.101.0/24':
    - match: ipcidr
    - deployments.dev.site2

Note

The rules below assume that the environments being discussed were not defined in the base top file.

  2. If, for some reason, the base environment is not configured in the base environment's top file, then the other environments will be checked in alphabetical order. The first top file found to contain a section for the base environment wins, and the other top files' base sections are ignored. So, provided there is no base section in the base top file, with the below two top files the dev environment would win out, and the common.centos SLS would not be applied to CentOS hosts.

/srv/salt/dev/top.sls:

base:
  'os:Ubuntu':
    - common.ubuntu
dev:
  'webserver*dev*':
    - webserver
  'db*dev*':
    - db

/srv/salt/qa/top.sls:

base:
  'os:Ubuntu':
    - common.ubuntu
  'os:CentOS':
    - common.centos
qa:
  'webserver*qa*':
    - webserver
  'db*qa*':
    - db

  3. For environments other than base, the top file in a given environment will be checked for a section matching the environment's name. If one is found, then it is used. Otherwise, the remaining (non-base) environments will be checked in alphabetical order. In the below example, the qa section in /srv/salt/dev/top.sls will be ignored, but if /srv/salt/qa/top.sls were cleared or removed, then the states configured for the qa environment in /srv/salt/dev/top.sls would be applied.

/srv/salt/dev/top.sls:

dev:
  'webserver*dev*':
    - webserver
  'db*dev*':
    - db
qa:
  '10.10.200.0/24':
    - match: ipcidr
    - deployments.qa.site1
  '10.10.201.0/24':
    - match: ipcidr
    - deployments.qa.site2

/srv/salt/qa/top.sls:

qa:
  'webserver*qa*':
    - webserver
  'db*qa*':
    - db

Note

When in doubt, the simplest way to configure your states is with a single top.sls in the base environment.

SLS Template Variable Reference

The template engines available to sls files and file templates come loaded with a number of context variables. These variables contain information and functions to assist in the generation of templates. See each variable below for its availability -- not all variables are available in all templating contexts.

Salt

The salt variable is available to abstract the salt library functions. This variable is a python dictionary containing all of the functions available to the running salt minion. It is available in all salt templates.

{% for file in salt['cmd.run']('ls -1 /opt/to_remove').splitlines() %}
/opt/to_remove/{{ file }}:
  file.absent
{% endfor %}

Opts

The opts variable abstracts the contents of the minion's configuration file directly to the template. The opts variable is a dictionary. It is available in all templates.

{{ opts['cachedir'] }}

The config.get function also searches for values in the opts dictionary.
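
For example, the cachedir lookup above could also be written as:

{{ salt['config.get']('cachedir') }}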

Pillar

The pillar dictionary can be referenced directly, and is available in all templates:

{{ pillar['key'] }}

Using the pillar.get function via the salt variable is generally recommended since a default can be safely set in the event that the value is not available in pillar and dictionaries can be traversed directly:

{{ salt['pillar.get']('key', 'failover_value') }}
{{ salt['pillar.get']('stuff:more:deeper') }}

Grains

The grains dictionary makes the minion's grains directly available, and is available in all templates:

{{ grains['os'] }}

The grains.get function can be used to traverse deeper grains and set defaults:

{{ salt['grains.get']('os') }}
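
Deeper grains can be traversed with the same colon syntax used by pillar.get, and a default can be supplied (the grain path here is illustrative):

{{ salt['grains.get']('ip_interfaces:eth0', []) }}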

env

The env variable is available only in sls files when gathering the sls from an environment.

{{ env }}

sls

The sls variable contains the sls reference value, and is only available in the actual SLS file (not in any files referenced in that SLS). The sls reference value is the value used to include the sls in top files or via the include option.

{{ sls }}

State Modules

State Modules are the components that map to actual enforcement and management of Salt states.

States are Easy to Write!

State modules should be easy to write and straightforward. The information passed to the SLS data structures will map directly to the state modules.

Mapping the information from the SLS data is simple; this example should illustrate:

/etc/salt/master: # maps to "name"
  file.managed: # maps to <filename>.<function> - e.g. "managed" in https://github.com/saltstack/salt/tree/develop/salt/states/file.py
    - user: root # one of many options passed to the manage function
    - group: root
    - mode: 644
    - source: salt://salt/master

Therefore this SLS data can be directly linked to a module, function, and arguments passed to that function.

This does place a burden on the author, in that function names, state names, and function arguments should be very human-readable inside state modules, since they directly define the user interface.

Keyword Arguments

Salt passes a number of keyword arguments to states when rendering them, including the environment, a unique identifier for the state, and more. Additionally, keep in mind that the requisites for a state are part of the keyword arguments. Therefore, if you need to iterate through the keyword arguments in a state, these must be considered and handled appropriately. One such example is in the pkgrepo.managed state, which needs to be able to handle arbitrary keyword arguments and pass them to module execution functions. An example of how these keyword arguments can be handled can be found here.
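
As a minimal sketch (not the pkgrepo implementation itself; mymod.apply_config is a hypothetical execution module function), a state function accepting arbitrary keyword arguments might strip Salt's internal arguments, whose names are wrapped in double underscores, before forwarding the rest:

def managed(name, **kwargs):
    '''
    Sketch of a state function that forwards arbitrary keyword arguments.
    '''
    # Drop Salt-internal keyword arguments (their names start with '__')
    # so only user-supplied arguments reach the execution module.
    clean = {key: val for key, val in kwargs.items()
             if not key.startswith('__')}
    # ``mymod.apply_config`` is hypothetical; substitute a real execution
    # module function here.
    return __salt__['mymod.apply_config'](name, **clean)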

Using Custom State Modules

Place your custom state modules inside a _states directory within the file_roots specified by the master config file. These custom state modules can then be distributed in a number of ways. Custom state modules are distributed when state.highstate is run, or by executing the saltutil.sync_states or saltutil.sync_all functions.

Any custom states which have been synced to a minion, that are named the same as one of Salt's default set of states, will take the place of the default state with the same name. Note that a state's default name is its filename (i.e. foo.py becomes state foo), but that its name can be overridden by using a __virtual__ function.

Cross Calling Modules

As with Execution Modules, State Modules can also make use of the __salt__ and __grains__ data.

It is important to note that the real work of state management should not be done in the state module unless it is needed. A good example is the pkg state module. This module does not do any package management work, it just calls the pkg execution module. This makes the pkg state module completely generic, which is why there is only one pkg state module and many backend pkg execution modules.

On the other hand some modules will require that the logic be placed in the state module, a good example of this is the file module. But in the vast majority of cases this is not the best approach, and writing specific execution modules to do the backend work will be the optimal solution.

Return Data

A State Module must return a dict containing the following keys/values:

  • name: The same value passed to the state as "name".
  • changes: A dict describing the changes made. Each thing changed should be a key, with its value being another dict with keys called "old" and "new" containing the old/new values. For example, the pkg state's changes dict has one key for each package changed, with the "old" and "new" keys in its sub-dict containing the old and new versions of the package.
  • result: A boolean value. True if the action was successful, otherwise False.
  • comment: A string containing a summary of the result.
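
For illustration, a return dict following this structure might look like the following (the values are hypothetical):

{'name': 'vim',
 'changes': {'vim': {'old': '', 'new': '7.4.160-1'}},
 'result': True,
 'comment': 'Package vim installed'}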

Test State

All states should check for and support test being passed in the options. This will return data about what changes would occur if the state were actually run. An example of such a check could look like this:

# Return comment of changes if test.
if __opts__['test']:
    ret['result'] = None
    ret['comment'] = 'State Foo will execute with param {0}'.format(bar)
    return ret

Make sure to test and return before performing any real actions on the minion.

Watcher Function

If the state being written should support the watch requisite then a watcher function needs to be declared. The watcher function is called whenever the watch requisite is invoked and should be generic to the behavior of the state itself.

The watcher function should accept all of the options that the normal state functions accept (as they will be passed into the watcher function).

A watcher function is typically used to execute state-specific reactive behavior; for instance, the watcher for the service module restarts the named service, making it easy for the service to react to changes in the environment.

The watcher function also needs to return the same data that a normal state function returns.
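
As noted in the watch requisite section above, the watcher function is named mod_watch. A minimal sketch for a hypothetical service-like state, assuming the standard __salt__ and __opts__ dunders are available, might look like this:

def mod_watch(name, **kwargs):
    '''
    Sketch of a watcher: restart the named service when a watched
    state reports changes.
    '''
    ret = {'name': name, 'changes': {}, 'result': True, 'comment': ''}

    if __opts__['test']:
        ret['result'] = None
        ret['comment'] = 'Service {0} would have been restarted'.format(name)
        return ret

    if __salt__['service.restart'](name):
        ret['changes'] = {name: 'restarted'}
        ret['comment'] = 'Restarted service {0}'.format(name)
    else:
        ret['result'] = False
        ret['comment'] = 'Failed to restart service {0}'.format(name)

    return ret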

Mod_init Interface

Some states need to execute something only once to ensure that an environment has been set up, or certain conditions global to the state behavior can be predefined. This is the realm of the mod_init interface.

A state module can have a function called mod_init which executes when the first state of this type is called. This interface was created primarily to improve the pkg state. When packages are installed the package metadata needs to be refreshed, but refreshing the package metadata every time a package is installed is wasteful. The mod_init function for the pkg state sets a flag down so that the first, and only the first, package installation attempt will refresh the package database (the package database can of course be manually called to refresh via the refresh option in the pkg state).

The mod_init function must accept the Low State Data for the given executing state as an argument. The low state data is a dict and can be seen by executing the state.show_lowstate function. Then the mod_init function must return a bool. If the return value is True, then the mod_init function will not be executed again, meaning that the needed behavior has been set up. Otherwise, if the mod_init function returns False, then the function will be called the next time.

A good example of the mod_init function is found in the pkg state module:

def mod_init(low):
    '''
    Refresh the package database here so that it only needs to happen once
    '''
    if low['fun'] == 'installed' or low['fun'] == 'latest':
        rtag = __gen_rtag()
        if not os.path.exists(rtag):
            open(rtag, 'w+').write('')
        return True
    else:
        return False

The mod_init function in the pkg state accepts the low state data as low and then checks to see whether the function being called is going to install packages; if the function is not going to install packages, then there is no need to refresh the package database. Therefore, if the package database is prepared to refresh, return True so mod_init will not be called the next time a pkg state is evaluated; otherwise, return False and mod_init will be called the next time a pkg state is evaluated.

Full State Module Example

The following is a simplistic example of a full state module and function. Remember to call out to execution modules to perform all the real work. The state module should only perform "before" and "after" checks.

  1. Make a custom state module by putting the code into a file at the following path: /srv/salt/_states/my_custom_state.py.

  2. Distribute the custom state module to the minions:

    salt '*' saltutil.sync_states
    
  3. Write a new state to use the custom state by making a new state file, for instance /srv/salt/my_custom_state.sls.

  4. Add the following SLS configuration to the file created in Step 3:

    human_friendly_state_id:        # An arbitrary state ID declaration.
      my_custom_state:              # The custom state module name.
        - enforce_custom_thing      # The function in the custom state module.
        - name: a_value             # Maps to the ``name`` parameter in the custom function.
        - foo: Foo                  # Specify the required ``foo`` parameter.
        - bar: False                # Override the default value for the ``bar`` parameter.
    
Example state module
import salt.exceptions

def enforce_custom_thing(name, foo, bar=True):
    '''
    Enforce the state of a custom thing

    This state module does a custom thing. It calls out to the execution module
    ``my_custom_module`` in order to check the current system and perform any
    needed changes.

    name
        The thing to do something to
    foo
        A required argument
    bar : True
        An argument with a default value
    '''
    ret = {'name': name, 'changes': {}, 'result': False, 'comment': ''}

    # Start with basic error-checking. Do all the passed parameters make sense
    # and agree with each-other?
    if bar == True and foo.startswith('Foo'):
        raise salt.exceptions.SaltInvocationError(
            'Argument "foo" cannot start with "Foo" if argument "bar" is True.')

    # Check the current state of the system. Does anything need to change?
    current_state = __salt__['my_custom_module.current_state'](name)

    if current_state == foo:
        ret['result'] = True
        ret['comment'] = 'System already in the correct state'
        return ret

    # The state of the system does need to be changed. Check if we're running
    # in ``test=true`` mode.
    if __opts__['test']:
        ret['comment'] = 'The state of "{0}" will be changed.'.format(name)
        ret['changes'] = {
            'old': current_state,
            'new': 'Description, diff, whatever of the new state',
        }

        # Return ``None`` when running with ``test=true``.
        ret['result'] = None

        return ret

    # Finally, make the actual change and return the result.
    new_state = __salt__['my_custom_module.change_state'](name, foo)

    ret['comment'] = 'The state of "{0}" was changed!'.format(name)

    ret['changes'] = {
        'old': current_state,
        'new': new_state,
    }

    ret['result'] = True

    return ret
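Once the custom module has been synced (Step 2) and the SLS from Step 4 is in place, the state can be applied like any other; for example, assuming the file was saved as my_custom_state.sls:

salt '*' state.sls my_custom_state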

State Management

State management, also frequently called Software Configuration Management (SCM), is software that puts a system into a predetermined state and keeps it there. It installs software packages, starts or restarts services, puts configuration files in place, and watches them for changes.

Having a state management system in place allows one to easily and reliably configure and manage a few servers or a few thousand servers. It allows configurations to be kept under version control.

Salt States are an extension of the Salt execution modules discussed in the previous remote execution tutorial. Instead of calling one-off executions, the state of a system can be easily defined and then enforced.

Understanding the Salt State System Components

The Salt state system is comprised of a number of components. As a user, an understanding of the SLS and renderer systems is needed. As a developer, an understanding of Salt states and how to write them is needed as well.

Note

States are compiled and executed only on minions that have been targeted. To execute functions directly on masters, see runners.

Salt SLS System

The primary system used by the Salt state system is the SLS system. SLS stands for SaLt State.

The Salt States are files which contain the information about how to configure Salt minions. The states are laid out in a directory tree and can be written in many different formats.

The contents of the files and the way they are laid out are intended to be as simple as possible while allowing for maximum flexibility. The files are laid out in states and contain information about how the minion needs to be configured.

SLS File Layout

SLS files are laid out in the Salt file server.

A simple layout can look like this:

top.sls
ssh.sls
sshd_config
users/init.sls
users/admin.sls
salt/master.sls
web/init.sls

The top.sls file is a key component: it is used to determine which SLS files should be applied to which minions.

The rest of the files with the .sls extension in the above example are state files.

Files without a .sls extension are seen by the Salt master as files that can be downloaded to a Salt minion.

States are translated into dot notation. For example, the ssh.sls file is seen as the ssh state and the users/admin.sls file is seen as the users.admin state.

Files named init.sls are translated to be the state name of the parent directory, so the web/init.sls file translates to the web state.

In Salt, everything is a file; there is no "magic translation" of files and file types. This means that a state file can be distributed to minions just like a plain text or binary file.

SLS Files

The Salt state files are simple sets of data. Since SLS files are just data they can be represented in a number of different ways.

The default format is YAML generated from a Jinja template. This allows the state files to have all the language constructs of Python and the simplicity of YAML.

State files can then be complicated Jinja templates that translate down to YAML, or just plain and simple YAML files.

The state files are simply common data structures, such as dictionaries and lists, expressed in a format such as YAML.

Here is an example of a Salt State:

vim:
  pkg.installed: []

salt:
  pkg.latest:
    - name: salt
  service.running:
    - names:
      - salt-master
      - salt-minion
    - require:
      - pkg: salt
    - watch:
      - file: /etc/salt/minion

/etc/salt/minion:
  file.managed:
    - source: salt://salt/minion
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: salt

This short stanza will ensure that vim is installed, Salt is installed and up to date, the salt-master and salt-minion daemons are running, and the Salt minion configuration file is in place. It will also ensure everything is deployed in the right order and that the Salt services are restarted when the watched file is updated.

The Top File

The top file controls the mapping between minions and the states which should be applied to them.

The top file specifies which minions should have which SLS files applied and which environments they should draw those SLS files from.

The top file works by specifying environments on the top-level.

Each environment contains globs to match minions. Finally, each glob contains a list of Salt states to apply to matching minions:

base:
  '*':
    - salt
    - users
    - users.admin
  'saltmaster.*':
    - match: pcre
    - salt.master

The above example uses the base environment, which is built into the default Salt setup.

The base environment has two globs. First, the '*' glob contains a list of SLS files to apply to all minions.

The second glob contains a regular expression that will match all minions with an ID matching saltmaster.* and specifies that for those minions, the salt.master state should be applied.

Reloading Modules

Some Salt states require that specific packages be installed in order for the module to load. As an example, the pip state module requires the pip package for proper name and version parsing.

In most common cases, Salt is clever enough to transparently reload the modules. For example, if you install a package, Salt reloads modules because some other module or state might require the package that was just installed.

In some edge cases, Salt might need to be told to reload the modules. Consider the following state file, which we'll call pep8.sls:

python-pip:
  cmd.run:
    - name: |
        easy_install --script-dir=/usr/bin -U pip
    - cwd: /

pep8:
  pip.installed:
    - require:
      - cmd: python-pip

The above example installs pip using easy_install from setuptools and installs pep8 using pip, which, as mentioned earlier, requires pip to be installed system-wide. Let's execute this state:

salt-call state.sls pep8

The execution output would be something like:

----------
    State: - pip
    Name:      pep8
    Function:  installed
        Result:    False
        Comment:   State pip.installed found in sls pep8 is unavailable

        Changes:

Summary
------------
Succeeded: 1
Failed:    1
------------
Total:     2

If we executed the state again the output would be:

----------
    State: - pip
    Name:      pep8
    Function:  installed
        Result:    True
        Comment:   Package was successfully installed
        Changes:   pep8==1.4.6: Installed

Summary
------------
Succeeded: 2
Failed:    0
------------
Total:     2

Since we installed pip using cmd, Salt has no way to know that a system-wide package was installed.

On the second execution, since the required pip package was installed, the state executed correctly.

Note

Salt does not reload modules on every state run because doing so would greatly slow down state execution.

So how do we solve this edge-case? reload_modules!

reload_modules is a boolean option recognized by Salt on all available states; it forces Salt to reload its modules once a given state finishes executing.

The modified state file would now be:

python-pip:
  cmd.run:
    - name: |
        easy_install --script-dir=/usr/bin -U pip
    - cwd: /
    - reload_modules: true

pep8:
  pip.installed:
    - require:
      - cmd: python-pip

Let's run it, once:

salt-call state.sls pep8

The output is:

----------
    State: - pip
    Name:      pep8
    Function:  installed
        Result:    True
        Comment:   Package was successfully installed
        Changes:   pep8==1.4.6: Installed

Summary
------------
Succeeded: 2
Failed:    0
------------
Total:     2

Full list of builtin state modules

alias Configuration of email aliases
alternatives Configuration of the alternatives system
apache Apache state
apache_module Manage Apache Modules
apt Package management operations specific to APT- and DEB-based systems
archive Extract an archive
artifactory This state downloads artifacts from artifactory.
at Configuration of disposable, regularly scheduled tasks for at.
augeas Configuration management using Augeas
aws_sqs Manage SQS Queues
blockdev Management of Block Devices
boto_asg Manage Autoscale Groups
boto_cfn Connection module for Amazon Cloud Formation
boto_cloudwatch_alarm Manage Cloudwatch alarms
boto_dynamodb Manage DynamoDB Tables
boto_ec2 Manage EC2
boto_elasticache Manage Elasticache
boto_elb Manage ELBs
boto_iam Manage IAM roles.
boto_iam_role Manage IAM roles
boto_kms Manage KMS keys, key policies and grants.
boto_lc Manage Launch Configurations
boto_rds Manage RDSs
boto_route53 Manage Route53 records
boto_secgroup Manage Security Groups
boto_sns Manage SNS Topics
boto_sqs Manage SQS Queues
boto_vpc Manage VPCs
bower Installation of Bower Packages
cabal Installation of Cabal Packages
chef Execute Chef client runs
cloud Using states instead of maps to deploy clouds
cmd Execution of arbitrary commands
composer Installation of Composer Packages
cron Management of cron, the Unix command scheduler
cyg Installation of Cygwin packages.
ddns Dynamic DNS updates
debconfmod Management of debconf selections
disk Disk monitoring state
dockerio Manage Docker containers
dockerng Management of Docker containers
drac Management of Dell DRAC
environ Support for getting and setting the environment variables of the current salt process.
eselect Management of Gentoo configuration using eselect
event Send events through Salt's event system during state runs
file Operations on regular files, special files, directories, and symlinks
gem Installation of Ruby modules packaged as gems
git Interaction with Git repositories
glusterfs Manage glusterfs pool.
gnomedesktop Configuration of the GNOME desktop
grafana Manage Grafana Dashboards
grains Manage grains on the minion
group Management of user groups
hg Interaction with Mercurial repositories
hipchat Send a message to Hipchat
host Management of addresses and names in hosts file
htpasswd Support for htpasswd module
http HTTP monitoring states
incron Management of incron, the inotify cron
influxdb_database Management of InfluxDB databases
influxdb_user Management of InfluxDB users
ini_manage Manage ini files
ipmi Manage IPMI devices over LAN
ipset Management of ipsets
iptables Management of iptables
jboss7 Manage JBoss 7 Application Server via CLI interface
keyboard Management of keyboard layouts
keystone Management of Keystone users
kmod Loading and unloading of kernel modules
layman Management of Gentoo Overlays using layman
libvirt Manage libvirt certificates
linux_acl Linux File Access Control Lists
locale Management of languages/locales
lvm Management of Linux logical volumes
lvs_server Management of LVS (Linux Virtual Server) Real Server
lvs_service Management of LVS (Linux Virtual Server) Service
lxc Manage Linux Containers
makeconf Management of Gentoo make.conf
mdadm Managing software RAID with mdadm
memcached States for Management of Memcached Keys
modjk State to control Apache modjk
modjk_worker Manage modjk workers
module Execution of Salt modules from within states
mongodb_database Management of Mongodb databases
mongodb_user Management of Mongodb users
monit Monit state
mount Mounting of filesystems
mysql_database Management of MySQL databases (schemas)
mysql_grants Management of MySQL grants (user permissions)
mysql_query Execution of MySQL queries
mysql_user Management of MySQL users
network Configuration of network interfaces
nftables Management of nftables
npm Installation of NPM Packages
ntp Management of NTP servers
openstack_config Manage OpenStack configuration file settings.
pagerduty Create an Event in PagerDuty
pagerduty_escalation_policy Manage PagerDuty escalation policies.
pagerduty_schedule Manage PagerDuty schedules.
pagerduty_service Manage PagerDuty services
pagerduty_user Manage PagerDuty users.
pecl Installation of PHP Extensions Using pecl
pip_state Installation of Python Packages Using pip
pkg Installation of packages using OS package managers such as yum or apt-get
pkgng Manage package remote repo using FreeBSD pkgng
pkgrepo Management of APT/YUM package repos
portage_config Management of Portage package configuration on Gentoo
ports Manage software from FreeBSD ports
postgres_database Management of PostgreSQL databases
postgres_extension Management of PostgreSQL extensions (e.g.: postgis)
postgres_group Management of PostgreSQL groups (roles)
postgres_schema Management of PostgreSQL schemas
postgres_user Management of PostgreSQL users (roles)
powerpath Powerpath configuration support
process Process Management
pushover Send a message to PushOver
pyenv Managing python installations with pyenv
pyrax_queues Manage Rackspace Queues
quota Management of POSIX Quotas
rabbitmq_cluster Manage RabbitMQ Clusters
rabbitmq_plugin Manage RabbitMQ Plugins
rabbitmq_policy Manage RabbitMQ Policies
rabbitmq_user Manage RabbitMQ Users
rabbitmq_vhost Manage RabbitMQ Virtual Hosts
rbenv Managing Ruby installations with rbenv
rdp Manage RDP Service on Windows servers
redismod Management of Redis server
reg Manage the registry on Windows
rvm Managing Ruby installations and gemsets with Ruby Version Manager (RVM)
saltmod Control the Salt command interface
schedule Management of the Salt scheduler
selinux Management of SELinux rules
serverdensity_device Monitor Server with Server Density
service Starting or restarting of services and daemons
slack Send a message to Slack
smtp Sending Messages via SMTP
splunk_search Splunk Search State Module
ssh_auth Control of entries in SSH authorized_key files
ssh_known_hosts Control of SSH known_hosts entries
stateconf Stateconf System
status Minion status monitoring
supervisord Interaction with the Supervisor daemon
svn Manage SVN repositories
sysctl Configuration of the Linux kernel using sysctl
syslog_ng State module for syslog_ng
sysrc
test Test States
timezone Management of timezones
tomcat This state uses the manager webapp to manage Apache tomcat webapps
tuned
uptime Monitor Web Server with Uptime
user Management of user accounts
vbox_guest VirtualBox Guest Additions installer state
virtualenv_mod Setup of Python virtualenv sandboxes
win_dacl Windows Object Access Control Lists
win_dns_client Module for configuring DNS Client on Windows systems
win_firewall State for configuring Windows Firewall
win_network Configuration of network interfaces on Windows hosts
win_path Manage the Windows System PATH
win_servermanager Manage Windows features via the ServerManager powershell module
win_system Management of Windows system information
win_update Management of the windows update agent
winrepo Manage Windows Package Repository
x509 Manage X509 Certificates
xmpp Sending Messages over XMPP
zcbuildout Management of zc.buildout
zk_concurrency Control concurrency of steps within state execution using zookeeper

Execution Modules

Salt execution modules are the functions called by the salt command.

Note

Salt execution modules are different from state modules and cannot be called directly within state files. You must use the module state module to call execution modules within state runs.

Salt ships with many modules that cover a wide variety of tasks.

Modules Are Easy to Write!

Writing Salt execution modules is straightforward.

A Salt execution module is a Python or Cython module placed in a directory called _modules/ within the file_roots as specified by the master config file. By default this is /srv/salt/_modules on Linux systems.

Modules placed in _modules/ will be synced to the minions when any of the following Salt functions are called:

  - state.highstate
  - saltutil.sync_modules
  - saltutil.sync_all

Note that a module's default name is its filename (i.e. foo.py becomes module foo), but that its name can be overridden by using a __virtual__ function.

If a Salt module has errors and cannot be imported, the Salt minion will continue to load without issue and the module with errors will simply be omitted.

If adding a Cython module the file must be named <modulename>.pyx so that the loader knows that the module needs to be imported as a Cython module. The compilation of the Cython module is automatic and happens when the minion starts, so only the *.pyx file is required.

Cross-Calling Modules

All of the Salt execution modules are available to each other and modules can call functions available in other execution modules.

The variable __salt__ is packed into the modules after they are loaded into the Salt minion.

The __salt__ variable is a Python dictionary containing all of the Salt functions. Dictionary keys are strings representing the names of the modules and the values are the functions themselves.

Salt modules can be cross-called by accessing the value in the __salt__ dict:

def foo(bar):
    return __salt__['cmd.run'](bar)

This code will call the run function in the cmd module and pass the argument bar to it.

Preloaded Execution Module Data

When interacting with execution modules, it is often useful to read information about the minion dynamically or to load configuration parameters for a module.

Salt allows for different types of data to be loaded into the modules by the minion.

Grains Data

The values detected by the Salt Grains on the minion are available in a dict named __grains__ and can be accessed from within callable objects in the Python modules.

To see the contents of the grains dictionary for a given system in your deployment, run the grains.items function:

salt 'hostname' grains.items --output=pprint

Any value in the grains dictionary can be accessed like any other Python dictionary. For example, the grain representing the minion ID is stored in the id key, and from an execution module its value is available as __grains__['id'].
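As a minimal sketch, an execution module function can use these values directly (the id and os grains are present on standard minions):

def describe():
    '''
    Return a short description of this minion built from grains
    '''
    return '{0} is running {1}'.format(__grains__['id'], __grains__['os'])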

Module Configuration

Since parameters for configuring a module may be desired, Salt allows for configuration information from the minion configuration file to be passed to execution modules.

Since the minion configuration file is a YAML document, arbitrary configuration data can be passed in the minion config and read by the modules. It is strongly recommended that the names of values passed in the configuration file match the module name: a value intended for the test execution module should be named test.<value>.

The test execution module contains example usage of module configuration, and the default minion configuration file shows the information and format used to pass data to the modules. See salt.modules.test and conf/minion.
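As a minimal sketch, a value such as test.baz set in the minion configuration file can be read through the __opts__ dictionary that is packed into each execution module (test.baz is a hypothetical key used purely for illustration):

# Assuming the minion config (/etc/salt/minion) contains:
#     test.baz: hello

def baz():
    '''
    Return the hypothetical test.baz value from the minion configuration
    '''
    return __opts__.get('test.baz', 'no value set')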

Printout Configuration

Since execution module functions can return different data, and the way the data is printed can greatly change the presentation, Salt has a printout configuration.

When writing a module the __outputter__ dictionary can be declared in the module. The __outputter__ dictionary contains a mapping of function name to Salt Outputter.

__outputter__ = {
    'run': 'txt',
}

This will ensure that the txt outputter is used for the run function.

Virtual Modules

Sometimes an execution module should be presented in a generic way. A good example of this can be found in the package manager modules. The package manager changes from one operating system to another, but the Salt execution module that interfaces with the package manager can be presented in a generic way.

The Salt modules for package managers all contain a __virtual__ function which is called to define what systems the module should be loaded on.

The __virtual__ function returns either a string or False. If False is returned, the module is not loaded; if a string is returned, the module is loaded under that name.
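A minimal sketch of such a function, for a hypothetical module that should only load on Debian-based systems (note that grains are available at this point, even though __salt__ is not, as explained below):

def __virtual__():
    '''
    Only load this module on Debian-based systems, under the name pkg
    '''
    if __grains__.get('os_family') == 'Debian':
        return 'pkg'
    return False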

Note

Optionally, modules may also return a list of reasons that a module could not be loaded. For example, if a dependency for 'my_mod' was not met, a __virtual__ function could do as follows:

return False, ['My Module must be installed before this module can be used.']

This means that the package manager modules can be presented as the pkg module regardless of what the actual module is named.

Since __virtual__ is called before the module is loaded, __salt__ will be unavailable as it will not have been packed into the module at this point in time.

The package manager modules are among the best examples of using the __virtual__ function.

Note

Modules which return a string from __virtual__ that is already used by a module that ships with Salt will _override_ the stock module.

Documentation

Salt execution modules are documented. The sys.doc() function will return the documentation for all available modules:

salt '*' sys.doc

The sys.doc function simply prints out the docstrings found in the modules; when writing Salt execution modules, please follow the formatting conventions for docstrings as they appear in the other modules.

Adding Documentation to Salt Modules

It is strongly suggested that all Salt modules have documentation added.

To add documentation, add a Python docstring to the function:

def spam(eggs):
    '''
    A function to make some spam with eggs!

    CLI Example::

        salt '*' test.spam eggs
    '''
    return eggs

Now when the sys.doc call is executed the docstring will be cleanly returned to the calling terminal.

Documentation added to execution modules in docstrings will automatically be added to the online web-based documentation.

Add Execution Module Metadata

When writing a Python docstring for an execution module, add information about the module using the following field lists:

:maintainer:    Thomas Hatch <thatch@saltstack.com>, Seth House <shouse@saltstack.com>
:maturity:      new
:depends:       python-mysqldb
:platform:      all

The maintainer field is a comma-delimited list of developers who help maintain this module.

The maturity field indicates the level of quality and testing for this module. Standard labels will be determined.

The depends field is a comma-delimited list of modules that this module depends on.

The platform field is a comma-delimited list of platforms that this module is known to run on.

Private Functions

In Salt, Python callable objects contained within an execution module are made available to the Salt minion for use. The only exception to this rule is a callable object with a name starting with an underscore _.

Objects Loaded Into the Salt Minion:

def foo(bar):
    return bar

class baz:
    def __init__(self, quo):
        pass

Objects NOT Loaded into the Salt Minion:

def _foobar(baz):  # Preceded with an _
    return baz

cheese = {}  # Not a callable Python object

Note

Some callable names also end with an underscore _ to avoid clashes with Python keywords. When calling such functions from execution or state modules, omit the trailing underscore.
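For example, a hypothetical execution module function named import_ (since import is a Python keyword) would be called as mymod.import from the command line:

def import_(source):
    '''
    Called as ``mymod.import`` -- the trailing underscore is dropped
    '''
    return source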

Useful Decorators for Modules

Depends Decorator

When writing execution modules, there are many cases where most of the module will work on all hosts but some functions have an external dependency, such as a service that needs to be installed or a binary that needs to be present on the system.

Instead of trying to wrap much of the code in large try/except blocks, a decorator can be used.

If the dependencies passed to the decorator don't exist, the Salt minion will remove those functions from the module on that host.

If a fallback_function is defined, it will replace the function instead of removing it:

import logging

from salt.utils.decorators import depends

log = logging.getLogger(__name__)

try:
    import dependency_that_sometimes_exists
except ImportError as e:
    log.trace('Failed to import dependency_that_sometimes_exists: {0}'.format(e))

@depends('dependency_that_sometimes_exists')
def foo():
    '''
    Function with a dependency on the "dependency_that_sometimes_exists" module,
    if the "dependency_that_sometimes_exists" is missing this function will not exist
    '''
    return True

def _fallback():
    '''
    Fallback function for the depends decorator to replace a function with
    '''
    return '"dependency_that_sometimes_exists" needs to be installed for this function to exist'

@depends('dependency_that_sometimes_exists', fallback_function=_fallback)
def foo():
    '''
    Function with a dependency on the "dependency_that_sometimes_exists" module.
    If the "dependency_that_sometimes_exists" is missing this function will be
    replaced with "_fallback"
    '''
    return True

In addition to global dependencies, the depends decorator also supports raw booleans.

from salt.utils.decorators import depends

HAS_DEP = False
try:
    import dependency_that_sometimes_exists
    HAS_DEP = True
except ImportError:
    pass

@depends(HAS_DEP)
def foo():
    return True

Master Tops

Salt includes a number of built-in subsystems to generate top file data; they are listed at Full list of builtin master tops modules.

The source for the built-in Salt master tops can be found here: https://github.com/saltstack/salt/blob/develop/salt/tops

Full list of builtin master tops modules

cobbler Cobbler Tops
ext_nodes External Nodes Classifier
mongo Read tops data from a mongodb collection
reclass_adapter Read tops data from a reclass database

Full list of builtin wheel modules

config Manage the master configuration file
error Error generator to enable integration testing of salt wheel error handling
file_roots Read in files from the file_root and save files to the file root
key Wheel system wrapper for key system
minions Wheel system wrapper for connected minions
pillar_roots The pillar_roots wheel module is used to manage files under the pillar roots directories on the master server.

Full list of builtin beacon modules

btmp Beacon to fire events at failed login of users
diskusage Beacon to monitor disk usage.
inotify Watch files and translate the changes into salt events
journald A simple beacon to watch journald for specific entries
load Beacon to emit system load averages
network_info Beacon to monitor statistics from ethernet adapters
service Send events covering service status
sh Watch the shell commands being executed actively.
twilio_txt_msg Beacon to emit Twilio text messages
wtmp Beacon to fire events at login of users as registered in the wtmp file

Full list of builtin engine modules

logstash An engine that reads messages from the salt event bus and pushes them onto a logstash endpoint.
sqs_events An engine that continuously reads messages from SQS and fires them as events.
test A simple test engine, not intended for real use but as an example

Full list of builtin sdb modules

couchdb CouchDB sdb Module
etcd_db etcd Database Module
keyring_db Keyring Database Module
memcached Memcached sdb Module
sqlite3 SQLite sdb Module

Salt Best Practices

Salt's extreme flexibility leads to many questions concerning the structure of configuration files.

This document exists to clarify these points through examples and code.

General rules

  1. Modularity and clarity should be emphasized whenever possible.
  2. Create clear relations between pillars and states.
  3. Use variables when it makes sense but don't overuse them.
  4. Store sensitive data in pillar.
  5. Don't use grains for matching in your pillar top file for any sensitive pillars.

Structuring States and Formulas

When structuring Salt States and Formulas it is important to begin with the directory structure. A proper directory structure clearly defines the functionality of each state to the user via visual inspection of the state's name.

Reviewing the MySQL Salt Formula, the benefits to the end user are clear from a sample of the available states:

/srv/salt/mysql/files/
/srv/salt/mysql/client.sls
/srv/salt/mysql/map.jinja
/srv/salt/mysql/python.sls
/srv/salt/mysql/server.sls

This directory structure would lead to these states being referenced in a top file in the following way:

base:
  'web*':
    - mysql.client
    - mysql.python
  'db*':
    - mysql.server

This clear definition ensures that the user is properly informed of what each state will do.

Another example comes from the vim-formula:

/srv/salt/vim/files/
/srv/salt/vim/absent.sls
/srv/salt/vim/init.sls
/srv/salt/vim/map.jinja
/srv/salt/vim/nerdtree.sls
/srv/salt/vim/pyflakes.sls
/srv/salt/vim/salt.sls

Once again viewing how this would look in a top file:

/srv/salt/top.sls:

base:
  'web*':
    - vim
    - vim.nerdtree
    - vim.pyflakes
    - vim.salt
  'db*':
    - vim.absent

The usage of a clear top-level directory as well as properly named states reduces the overall complexity and lets a user understand at a glance both what will be included and where it is located.

In addition, Formulas should be used as often as possible.

Note

Formula repositories on the saltstack-formulas GitHub organization should not be pointed to directly from systems that automatically fetch new updates, such as GitFS or similar tooling. Instead, formula repositories should be forked on GitHub or cloned locally, where unintended, automatic changes will not take place.

Structuring Pillar Files

Pillars are used to store secure and insecure data pertaining to minions. When designing the structure of the /srv/pillar directory, the pillars contained within should once again be focused on clear and concise data which users can easily review, modify, and understand.

The /srv/pillar/ directory is primarily controlled by top.sls. It should be noted that the pillar top.sls is not used as a location to declare variables and their values. The top.sls is used as a way to include other pillar files and organize the way they are matched based on environments or grains.

An example top.sls may be as simple as the following:

/srv/pillar/top.sls:

base:
  '*':
    - packages

Or much more complicated, using a variety of matchers:

/srv/pillar/top.sls:

base:
  '*':
    - apache
dev:
  'os:Debian':
    - match: grain
    - vim
test:
  '* and not G@os: Debian':
    - match: compound
    - emacs

These examples show how the top file provides users with a great deal of power, but when used incorrectly it can lead to confusing configurations. This is why it is important to understand that the top file for pillar is not used for variable definitions.

Each SLS file within the /srv/pillar/ directory should correspond to the states which it matches.

This would mean that the apache pillar file should contain data relevant to Apache. Structuring files in this way once again ensures modularity, and creates a consistent understanding throughout our Salt environment. Users can expect that pillar variables found in an Apache state will live inside of an Apache pillar:

/srv/pillar/apache.sls:

apache:
  lookup:
    name: httpd
    config:
      tmpl: /etc/httpd/httpd.conf

While this pillar file is simple, it shows how a pillar file explicitly relates to the state it is associated with.

Variable Flexibility

Salt allows users to define variables in SLS files. When creating a state, variables should provide users with as much flexibility as possible. This means that variables should be clearly defined and easy to manipulate, and that sane defaults should exist in the event a variable is not properly defined. Looking at several examples shows how these different items can lead to extensive flexibility.

Although it is possible to set variables locally, this is generally not preferred:

/srv/salt/apache/conf.sls:

{% set name = 'httpd' %}
{% set tmpl = 'salt://apache/files/httpd.conf' %}

include:
  - apache

apache_conf:
  file.managed:
    - name: {{ name }}
    - source: {{ tmpl }}
    - template: jinja
    - user: root
    - watch_in:
      - service: apache

This information can easily be transitioned to the pillar, where data can be overwritten, modified, and applied to multiple states, or locations within a single state:

/srv/pillar/apache.sls:

apache:
  lookup:
    name: httpd
    config:
      tmpl: salt://apache/files/httpd.conf

/srv/salt/apache/conf.sls:

{% from "apache/map.jinja" import apache with context %}

include:
  - apache

apache_conf:
  file.managed:
    - name: {{ salt['pillar.get']('apache:lookup:name') }}
    - source: {{ salt['pillar.get']('apache:lookup:config:tmpl') }}
    - template: jinja
    - user: root
    - watch_in:
      - service: apache

This flexibility provides users with a centralized location to modify variables, which is extremely important as an environment grows.
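The pillar.get function also accepts a default as a second argument, so a state can fall back to a sane value when a pillar key is not defined (httpd below is purely an illustrative fallback):

apache:
  pkg.installed:
    - name: {{ salt['pillar.get']('apache:lookup:name', 'httpd') }}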

Modularity Within States

Ensuring that states are modular is one of the key concepts to understand within Salt. When creating a state a user must consider how many times the state could be re-used, and what it relies on to operate. Below are several examples which will iteratively explain how a user can go from a state which is not very modular to one that is:

/srv/salt/apache/init.sls:

httpd:
  pkg.installed: []
  service.running:
    - enable: True

/etc/httpd/httpd.conf:
  file.managed:
    - source: salt://apache/files/httpd.conf
    - template: jinja
    - watch_in:
      - service: httpd

The example above is probably the worst-case scenario when writing a state. There is a clear lack of focus: both the pkg/service and the managed file are named directly as state IDs. This would lead to changing multiple requires within this state, as well as in other states that may depend upon it.

Imagine if a require was used for the httpd package in another state, and then it suddenly becomes a custom package. Changes would now need to be made in multiple locations, increasing complexity and leading to a more error-prone configuration.

There is also the issue of having the configuration file located in the init file, as a user would be unable to simply install the service and use the default conf file.

Our second revision begins to address the referencing by using - name, as opposed to direct ID references:

/srv/salt/apache/init.sls:

apache:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - enable: True

apache_conf:
  file.managed:
    - name: /etc/httpd/httpd.conf
    - source: salt://apache/files/httpd.conf
    - template: jinja
    - watch_in:
      - service: apache

The above init file is better than our original, yet it has several issues which lead to a lack of modularity. The first of these problems is the usage of static values for items such as the name of the service, the name of the managed file, and the source of the managed file. When these items are hard coded they become difficult to modify and the opportunity to make mistakes arises. It also leads to multiple edits that need to occur when changing these items (imagine if there were dozens of these occurrences throughout the state!). There is also still the concern of the configuration file data living in the same state as the service and package.

In the next example, steps will be taken to begin addressing these issues, starting with the addition of a map.jinja file (as noted in the Formula documentation) and the modification of static values:

/srv/salt/apache/map.jinja:

{% set apache = salt['grains.filter_by']({
    'Debian': {
        'server': 'apache2',
        'service': 'apache2',
        'conf': '/etc/apache2/apache.conf',
    },
    'RedHat': {
        'server': 'httpd',
        'service': 'httpd',
        'conf': '/etc/httpd/httpd.conf',
    },
}, merge=salt['pillar.get']('apache:lookup')) %}

/srv/pillar/apache.sls:

apache:
  lookup:
    config:
      tmpl: salt://apache/files/httpd.conf

/srv/salt/apache/init.sls:

{% from "apache/map.jinja" import apache with context %}

apache:
  pkg.installed:
    - name: {{ apache.server }}
  service.running:
    - name: {{ apache.service }}
    - enable: True

apache_conf:
  file.managed:
    - name: {{ apache.conf }}
    - source: {{ salt['pillar.get']('apache:lookup:config:tmpl') }}
    - template: jinja
    - user: root
    - watch_in:
      - service: apache

The changes to this state now allow us to easily identify the location of the variables, as well as ensuring they are flexible and easy to modify. While this takes another step in the right direction, it is not yet complete. Suppose the user did not want to use the provided conf file, or even their own configuration file, but the default apache conf. With the current state setup this is not possible. To attain this level of modularity this state will need to be broken into two states.

/srv/salt/apache/map.jinja:

{% set apache = salt['grains.filter_by']({
    'Debian': {
        'server': 'apache2',
        'service': 'apache2',
        'conf': '/etc/apache2/apache.conf',
    },
    'RedHat': {
        'server': 'httpd',
        'service': 'httpd',
        'conf': '/etc/httpd/httpd.conf',
    },
}, merge=salt['pillar.get']('apache:lookup')) %}

/srv/pillar/apache.sls:

apache:
  lookup:
    config:
      tmpl: salt://apache/files/httpd.conf

/srv/salt/apache/init.sls:

{% from "apache/map.jinja" import apache with context %}

apache:
  pkg.installed:
    - name: {{ apache.server }}
  service.running:
    - name: {{ apache.service }}
    - enable: True

/srv/salt/apache/conf.sls:

{% from "apache/map.jinja" import apache with context %}

include:
  - apache

apache_conf:
  file.managed:
    - name: {{ apache.conf }}
    - source: {{ salt['pillar.get']('apache:lookup:config:tmpl') }}
    - template: jinja
    - user: root
    - watch_in:
      - service: apache

This new structure allows users to choose whether they only wish to install the default Apache, or whether they wish to override the default package, service, configuration file location, or the configuration file itself. In addition, the data has been broken out into multiple files, allowing users to identify where they need to change the associated data.
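A top file can then assign just the package and service, or the configuration file as well (a sketch reusing the web* match from the earlier examples):

/srv/salt/top.sls:

base:
  'web*':
    - apache
    - apache.conf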

Storing Secure Data

Secure data refers to any information that you would not wish to share with anyone accessing a server. This could include data such as passwords, keys, or other information.

As all data within a state is accessible by EVERY server that is connected, it is important to store secure data within pillar. This will ensure that only those servers which require this secure data have access to it. In this example, a user can go from an insecure configuration to one which is only accessible by the appropriate hosts:

/srv/salt/mysql/testerdb.sls:

testdb:
  mysql_database.present:
    - name: testerdb

/srv/salt/mysql/user.sls:

include:
  - mysql.testerdb

testdb_user:
  mysql_user.present:
    - name: frank
    - password: "test3rdb"
    - host: localhost
    - require:
      - sls: mysql.testerdb

Many users would review this state and see that the password is there in plain text, which is quite problematic. It results in several issues which may not be immediately visible.

The first of these issues is clear to most users -- the password is visible in this state. This means that every minion will have a copy of it, and therefore the password itself, which is a major security concern, as minions may not be locked down as tightly as the master server.

The other issue that can be encountered is access by users on the master. If everyone has access to the states (or their repository), then they are able to review this password. Keeping your password data accessible by only a few users is critical for both security and peace of mind.

There is also the issue of portability. When a state is configured this way it results in multiple changes needing to be made. This was discussed in the sections above but it is a critical idea to drive home. If states are not portable it may result in more work later!

Fixing this issue is relatively simple; the content just needs to be moved to the associated pillar:

/srv/pillar/mysql.sls:

mysql:
  lookup:
    name: testerdb
    password: test3rdb
    user: frank
    host: localhost

/srv/salt/mysql/testerdb.sls:

testdb:
  mysql_database.present:
    - name: {{ salt['pillar.get']('mysql:lookup:name') }}

/srv/salt/mysql/user.sls:

include:
  - mysql.testerdb

testdb_user:
  mysql_user.present:
    - name: {{ salt['pillar.get']('mysql:lookup:user') }}
    - password: {{ salt['pillar.get']('mysql:lookup:password') }}
    - host: {{ salt['pillar.get']('mysql:lookup:host') }}
    - require:
      - sls: mysql.testerdb

Now that the database details have been moved to the associated pillar file, only machines which are targeted via pillar will have access to these details. Access to users who should not be able to review these details can also be prevented while ensuring that they are still able to write states which take advantage of this information.
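As a sketch, the pillar top file can restrict this data to the database minions alone (reusing the db* match from the earlier examples):

/srv/pillar/top.sls:

base:
  'db*':
    - mysql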

Troubleshooting

The intent of the troubleshooting section is to introduce solutions to a number of common issues encountered by users and the tools that are available to aid in developing States and Salt code.

Troubleshooting the Salt Master

If your Salt master is having issues such as minions not returning data, slow execution times, or a variety of other issues, the following links contain details on troubleshooting the most common issues encountered:

Troubleshooting the Salt Master

Running in the Foreground

A great deal of information is available via the debug logging system. If you are having issues with minions connecting or not starting, run the master in the foreground:

# salt-master -l debug

Anyone wanting to run Salt daemons via a process supervisor such as monit, runit, or supervisord, should omit the -d argument to the daemons and run them in the foreground.

What Ports does the Master Need Open?

For the master, TCP ports 4505 and 4506 need to be open. If you've put both your Salt master and minion in debug mode and don't see an acknowledgment that your minion has connected, it could very well be a firewall interfering with the connection. See our firewall configuration page for help opening the firewall on various platforms.

If you've opened the correct TCP ports and still aren't seeing connections, check that no additional access control system such as SELinux or AppArmor is blocking Salt.

Too many open files

The salt-master needs at least 2 sockets per host that connects to it: one for the publisher and one for the response port. Thus, large installations may, upon scaling up the number of minions accessing a given master, encounter:

12:45:29,289 [salt.master    ][INFO    ] Starting Salt worker process 38
Too many open files
sock != -1 (tcp_listener.cpp:335)

The solution to this would be to check the number of files allowed to be opened by the user running salt-master (root by default):

[root@salt-master ~]# ulimit -n
1024

If this value is not equal to at least twice the number of minions, then it will need to be raised. For example, in an environment with 1800 minions, the nofile limit should be set to no less than 3600. This can be done by creating the file /etc/security/limits.d/99-salt.conf, with the following contents:

root        hard    nofile        4096
root        soft    nofile        4096

Replace root with the user under which the master runs, if different.

If your master does not have an /etc/security/limits.d directory, the lines can simply be appended to /etc/security/limits.conf.

As with any change to resource limits, it is best to stay logged into your current shell and open another shell to run ulimit -n again and verify that the changes were applied correctly. Additionally, if your master is running upstart, it may be necessary to specify the nofile limit in /etc/default/salt-master if upstart isn't respecting your resource limits:

limit nofile 4096 4096

Note

The above is simply an example of how to set these values, and you may wish to increase them even further if your Salt master is doing more than just running Salt.

Salt Master Stops Responding

There are known bugs with ZeroMQ versions less than 2.1.11 which can cause the Salt master to not respond properly. If you're running a ZeroMQ version greater than or equal to 2.1.9, you can work around the bug by setting the sysctls net.core.rmem_max and net.core.wmem_max to 16777216. Next, set the third field in net.ipv4.tcp_rmem and net.ipv4.tcp_wmem to at least 16777216.

You can do it manually with something like:

# echo 16777216 > /proc/sys/net/core/rmem_max
# echo 16777216 > /proc/sys/net/core/wmem_max
# echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_rmem
# echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_wmem

Or with the following Salt state:

net.core.rmem_max:
  sysctl:
    - present
    - value: 16777216

net.core.wmem_max:
  sysctl:
    - present
    - value: 16777216

net.ipv4.tcp_rmem:
  sysctl:
    - present
    - value: 4096 87380 16777216

net.ipv4.tcp_wmem:
  sysctl:
    - present
    - value: 4096 87380 16777216

Live Python Debug Output

If the master seems to be unresponsive, a SIGUSR1 can be passed to the salt-master threads to display what piece of code is executing. This debug information can be invaluable in tracking down bugs.

To pass a SIGUSR1 to the master, first make sure the master is running in the foreground. Stop the service if it is running as a daemon, and start it in the foreground like so:

# salt-master -l debug

Then pass the signal to the master when it seems to be unresponsive:

# killall -SIGUSR1 salt-master

When filing an issue or sending questions to the mailing list for a problem with an unresponsive daemon, be sure to include this information if possible.

Live Salt-Master Profiling

When faced with performance problems one can turn on master process profiling by sending it SIGUSR2.

# killall -SIGUSR2 salt-master

This will activate the yappi profiler inside the salt-master code. After some time, send SIGUSR2 again to stop profiling and save the results to a file. If running in the foreground, salt-master will report the filename for the results, which are usually located under /tmp on Unix-based OSes and C:\temp on Windows.

Results can then be analyzed with kcachegrind or a similar tool.

Commands Time Out or Do Not Return Output

Depending on your OS (this is most common on Ubuntu due to apt-get), you may sometimes find that your highstate or other long-running commands do not return output.

Note

A number of timing issues were resolved in the 2014.1 release of Salt. Upgrading to at least this version is strongly recommended if timeouts persist.

By default the timeout is set to 5 seconds. The timeout value can easily be increased by modifying the timeout line within your /etc/salt/master configuration file.

Passing the -c Option to Salt Returns a Permissions Error

Using the -c option with the Salt command modifies the configuration directory. When the configuration file is read it will still base data off of the root_dir setting. This can result in unintended behavior if you are expecting files such as /etc/salt/pki to be pulled from the location specified with -c. Modify the root_dir setting to address this behavior.

Salt Master Doesn't Return Anything While Running jobs

When a command being run via Salt takes a very long time to return (package installations, certain scripts, etc.), the master may drop you back to the shell. In most situations the job is still running, but Salt has exceeded the set timeout before returning. Querying the job queue will provide the data of the job, but is inconvenient. This can be resolved by either manually using the -t option to set a longer timeout when running commands (by default it is 5 seconds), or by modifying the master configuration file /etc/salt/master, setting the timeout value to change the default timeout for all commands, and then restarting the salt-master service.
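For example, a longer timeout can be passed for a single run (the values here are purely illustrative):

salt -t 300 '*' pkg.install apache

Or the default for all commands can be raised by adding the following to /etc/salt/master and restarting the salt-master service:

timeout: 60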

Salt Master Auth Flooding

In large installations, care must be taken not to overwhelm the master with authentication requests. Several options can be set which mitigate the chances of an authentication flood causing an interruption in service.

Note

recon_default:
    The average number of seconds to wait between reconnection attempts.
recon_max:
    The maximum number of seconds to wait between reconnection attempts.
recon_randomize:
    A flag to indicate whether the recon_default value should be randomized.
acceptance_wait_time:
    The number of seconds to wait for a reply to each authentication request.
random_reauth_delay:
    The range of seconds across which the minions should attempt to randomize authentication attempts.
auth_timeout:
    The total time to wait for the authentication process to complete, regardless of the number of attempts.
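As a sketch, these options go in the minion configuration file; the values below are purely illustrative and should be tuned for the size of the deployment:

# /etc/salt/minion
recon_default: 1000
recon_max: 59000
recon_randomize: True
random_reauth_delay: 60
auth_timeout: 60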

Running state locally

To debug states, you can use salt-call to run them locally:

salt-call -l trace --local state.highstate

The top.sls file is used to map which SLS modules get loaded onto which minions via the state system.

It is located in the directory defined by the file_roots variable in the Salt master configuration file, found in CONFIG_DIR/master, normally /etc/salt/master.

The default configuration for the file_roots is:

file_roots:
  base:
    - /srv/salt

So the top file defaults to the location /srv/salt/top.sls.

Troubleshooting the Salt Minion

In the event that your Salt minion is having issues, a variety of solutions and suggestions are available. Please refer to the following links for more information:

Troubleshooting the Salt Minion

Running in the Foreground

A great deal of information is available via the debug logging system. If you are having issues with minions connecting or not starting, run the minion in the foreground:

# salt-minion -l debug

Anyone wanting to run Salt daemons via a process supervisor such as monit, runit, or supervisord, should omit the -d argument to the daemons and run them in the foreground.

What Ports does the Minion Need Open?

No ports need to be opened on the minion, as it makes outbound connections to the master. If you've put both your Salt master and minion in debug mode and don't see an acknowledgment that your minion has connected, it could very well be a firewall interfering with the connection. See our firewall configuration page for help opening the firewall on various platforms.

If you have netcat installed, you can check port connectivity from the minion with the nc command:

$ nc -v -z salt.master.ip.addr 4505
Connection to salt.master.ip.addr 4505 port [tcp/unknown] succeeded!
$ nc -v -z salt.master.ip.addr 4506
Connection to salt.master.ip.addr 4506 port [tcp/unknown] succeeded!

The Nmap utility can also be used to check if these ports are open:

# nmap -sS -q -p 4505-4506 salt.master.ip.addr

Starting Nmap 6.40 ( http://nmap.org ) at 2013-12-29 19:44 CST
Nmap scan report for salt.master.ip.addr (10.0.0.10)
Host is up (0.0026s latency).
PORT     STATE  SERVICE
4505/tcp open   unknown
4506/tcp open   unknown
MAC Address: 00:11:22:AA:BB:CC (Intel)

Nmap done: 1 IP address (1 host up) scanned in 1.64 seconds

If you've opened the correct TCP ports and still aren't seeing connections, check that no additional access control system such as SELinux or AppArmor is blocking Salt.

Using salt-call

The salt-call command was originally developed for aiding in the development of new Salt modules. Since then, many applications have been developed for running any Salt module locally on a minion. These range from the original intent of salt-call, development assistance, to gathering more verbose output from calls like state.highstate.

When initially creating your state tree, it is generally recommended to invoke state.highstate from the minion with salt-call. This displays far more information about the highstate execution than calling it remotely. For even more verbosity, increase the loglevel with the same argument as salt-minion:

# salt-call -l debug state.highstate

The main difference between using salt and using salt-call is that salt-call is run from the minion, and it only runs the selected function on that minion. By contrast, salt is run from the master, and requires you to specify the minions on which to run the command using salt's targeting system.

Live Python Debug Output

If the minion seems to be unresponsive, a SIGUSR1 can be passed to the process to display what piece of code is executing. This debug information can be invaluable in tracking down bugs.

To pass a SIGUSR1 to the minion, first make sure the minion is running in the foreground. Stop the service if it is running as a daemon, and start it in the foreground like so:

# salt-minion -l debug

Then pass the signal to the minion when it seems to be unresponsive:

# killall -SIGUSR1 salt-minion

When filing an issue or sending questions to the mailing list for a problem with an unresponsive daemon, be sure to include this information if possible.

Multiprocessing in Execution Modules

As outlined in GitHub issue #6300, Salt cannot use Python's multiprocessing pipes and queues from execution modules. Multiprocessing from execution modules is perfectly viable; it is just necessary to use Salt's event system to communicate back with the process.

The reason for this difficulty is that Python attempts to pickle all objects in memory when communicating between processes, and it cannot pickle function objects. Since the Salt loader system creates and manages function objects, this causes the pickle operation to fail.
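A minimal sketch of this pattern, using salt.utils.event to report back from the spawned process (the event tag and function names here are purely illustrative):

import multiprocessing

import salt.utils.event


def _report_done(opts):
    '''
    Runs in the child process: fire an event on the minion event bus
    instead of communicating over a multiprocessing pipe or queue
    '''
    event = salt.utils.event.MinionEvent(opts)
    event.fire_event({'result': 'done'}, 'myco/task/complete')


def long_task():
    '''
    Spawn the work in a separate process and return immediately
    '''
    proc = multiprocessing.Process(target=_report_done, args=(__opts__,))
    proc.start()
    return True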

Salt Minion Doesn't Return Anything While Running Jobs Locally

When a command being run via Salt takes a very long time to return (package installations, certain scripts, etc.), the minion may drop you back to the shell. In most situations the job is still running, but Salt has exceeded the set timeout before returning. Querying the job queue will provide the data of the job, but is inconvenient. This can be resolved by either manually using the -t option to set a longer timeout when running commands (by default it is 5 seconds), or by modifying the minion configuration file /etc/salt/minion, setting the timeout value to change the default timeout for all commands, and then restarting the salt-minion service.

Note

Modifying the minion timeout value is not required when running commands from a Salt Master. It is only required when running commands locally on the minion.

Running in the Foreground

A great deal of information is available via the debug logging system. If you are having issues with minions connecting or not starting, run the minion and/or master in the foreground:

salt-master -l debug
salt-minion -l debug

Anyone wanting to run Salt daemons via a process supervisor such as monit, runit, or supervisord, should omit the -d argument to the daemons and run them in the foreground.

What Ports do the Master and Minion Need Open?

No ports need to be opened up on each minion. For the master, TCP ports 4505 and 4506 need to be open. If you've put both your Salt master and minion in debug mode and don't see an acknowledgment that your minion has connected, it could very well be a firewall.

You can check port connectivity from the minion with the nc command:

nc -v -z salt.master.ip 4505
nc -v -z salt.master.ip 4506

There is also a firewall configuration document that might help.

If you've enabled the right TCP ports on your operating system or Linux distribution's firewall and still aren't seeing connections, check that no additional access control system such as SELinux or AppArmor is blocking Salt.

Too many open files

The salt-master needs at least 2 sockets per host that connects to it: one for the Publisher and one for the response (ReqServer) port. Thus, large installations may, upon scaling up the number of minions accessing a given master, encounter:

12:45:29,289 [salt.master    ][INFO    ] Starting Salt worker process 38
Too many open files
sock != -1 (tcp_listener.cpp:335)

The solution to this would be to check the number of files allowed to be opened by the user running salt-master (root by default):

[root@salt-master ~]# ulimit -n
1024

And modify that value to be at least equal to the number of minions x 2. This setting can be changed in limits.conf as the nofile value(s), and activated upon a new login of the specified user.

So, an environment with 1800 minions would need 1800 x 2 = 3600 as a minimum.
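
As a sketch, the corresponding /etc/security/limits.conf entries for such a master, assuming salt-master runs as root and rounding up for headroom, could be:

root        hard    nofile      4096
root        soft    nofile      4096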

Salt Master Stops Responding

There are known bugs with ZeroMQ versions less than 2.1.11 which can cause the Salt master to not respond properly. If you're running a ZeroMQ version greater than or equal to 2.1.9, you can work around the bug by setting the sysctls net.core.rmem_max and net.core.wmem_max to 16777216. Next, set the third field in net.ipv4.tcp_rmem and net.ipv4.tcp_wmem to at least 16777216.

You can do it manually with something like:

# echo 16777216 > /proc/sys/net/core/rmem_max
# echo 16777216 > /proc/sys/net/core/wmem_max
# echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_rmem
# echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_wmem

Or with the following Salt state:

net.core.rmem_max:
  sysctl:
    - present
    - value: 16777216

net.core.wmem_max:
  sysctl:
    - present
    - value: 16777216

net.ipv4.tcp_rmem:
  sysctl:
    - present
    - value: 4096 87380 16777216

net.ipv4.tcp_wmem:
  sysctl:
    - present
    - value: 4096 87380 16777216

Salt and SELinux

Currently there are no SELinux policies for Salt. For the most part Salt runs without issue when SELinux is running in Enforcing mode. This is because when the minion executes as a daemon the type context is changed to initrc_t. The problem with SELinux arises when using salt-call or running the minion in the foreground, since the type context stays unconfined_t.

This problem is generally manifest in the rpm install scripts when using the pkg module. Until a full SELinux Policy is available for Salt the solution to this issue is to set the execution context of salt-call and salt-minion to rpm_exec_t:

# CentOS 5 and RHEL 5:
chcon -t rpm_exec_t /usr/bin/salt-minion
chcon -t rpm_exec_t /usr/bin/salt-call

# CentOS 6 and RHEL 6:
chcon system_u:object_r:rpm_exec_t:s0 /usr/bin/salt-minion
chcon system_u:object_r:rpm_exec_t:s0 /usr/bin/salt-call

This works well, because the rpm_exec_t context has very broad control over other types.

Red Hat Enterprise Linux 5

Salt requires Python 2.6 or 2.7. Red Hat Enterprise Linux 5 and its variants come with Python 2.4 installed by default. When installing on RHEL 5 from the EPEL repository this is handled for you. But, if you run Salt from git, be advised that its dependencies need to be installed from EPEL and that Salt needs to be run with the python26 executable.

Common YAML Gotchas

An extensive list of YAML idiosyncrasies has been compiled:

YAML Idiosyncrasies

One of Salt's strengths, the use of existing serialization systems for representing SLS data, can also backfire. YAML is a general purpose system and there are a number of things that would seem to make sense in an sls file that cause YAML issues. It is wise to be aware of these issues. While reports of running into them are generally rare, they can still crop up at unexpected times.

Spaces vs Tabs

YAML uses spaces, period. Do not use tabs in your SLS files! If strange errors are coming up in rendering SLS files, make sure to check that no tabs have crept in! In Vim, after enabling search highlighting with :set hlsearch, you can check with the following key sequence in normal mode (you can hit ESC twice to be sure): /, Ctrl-v, Tab, then hit Enter. Also, you can convert tabs to 2 spaces with these commands in Vim: :set tabstop=2 expandtab and then :retab.

Indentation

The suggested syntax for YAML files is to use 2 spaces for indentation, but YAML will follow whatever indentation system that the individual file uses. Indentation of two spaces works very well for SLS files given the fact that the data is uniform and not deeply nested.

Nested Dictionaries

When dicts are nested within other data structures (particularly lists), the indentation logic sometimes changes. Examples of where this might happen include context and default options from the file.managed state:

/etc/http/conf/http.conf:
  file:
    - managed
    - source: salt://apache/http.conf
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - context:
        custom_var: "override"
    - defaults:
        custom_var: "default value"
        other_var: 123

Notice that while the indentation is two spaces per level, for the values under the context and defaults options there is a four-space indent. If only two spaces are used to indent, then those keys will be considered part of the same dictionary that contains the context key, and so the data will not be loaded correctly. If using a double indent is not desirable, then a deeply-nested dict can be declared with curly braces:

/etc/http/conf/http.conf:
  file:
    - managed
    - source: salt://apache/http.conf
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - context: {
      custom_var: "override" }
    - defaults: {
      custom_var: "default value",
      other_var: 123 }

Here is a more concrete example of how YAML actually handles these indentations, using the Python interpreter on the command line:

>>> import yaml
>>> yaml.safe_load('''mystate:
...   file.managed:
...     - context:
...         some: var''')
{'mystate': {'file.managed': [{'context': {'some': 'var'}}]}}
>>> yaml.safe_load('''mystate:
...   file.managed:
...     - context:
...       some: var''')
{'mystate': {'file.managed': [{'some': 'var', 'context': None}]}}

Note that in the second example, some is added as another key in the same dictionary, whereas in the first example, it's the start of a new dictionary. That's the distinction. context is a common example because it is a keyword arg for many functions, and should contain a dictionary.

True/False, Yes/No, On/Off

PyYAML will load these values as boolean True or False. Un-capitalized versions will also be loaded as booleans (true, false, yes, no, on, and off). This can be especially problematic when constructing Pillar data. Make sure that your Pillars which need to use the string versions of these values are enclosed in quotes.
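
The behavior is easy to confirm in a Python interpreter:

>>> import yaml
>>> yaml.safe_load('button: on')
{'button': True}
>>> yaml.safe_load("button: 'on'")
{'button': 'on'}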

Integers are Parsed as Integers

NOTE: This has been fixed in salt 0.10.0; as of that release, passing an integer that is preceded by a 0 will be correctly parsed.

When passing integers into an SLS file, they are passed as integers. This means that if a state accepts a string value and an integer is passed, an integer will be sent. The solution here is to send the integer as a string.

This is best explained when setting the mode for a file:

/etc/vimrc:
  file:
    - managed
    - source: salt://edit/vimrc
    - user: root
    - group: root
    - mode: 644

Salt manages this well, since the mode is passed as 644, but if the mode is zero-padded as 0644, then it is read by YAML as an integer and evaluated as an octal value: 0644 becomes 420. Therefore, if the file mode is preceded by a 0 then it needs to be passed as a string:

/etc/vimrc:
  file:
    - managed
    - source: salt://edit/vimrc
    - user: root
    - group: root
    - mode: '0644'

YAML does not like "Double Short Decs"

If I can find a way to make YAML accept "Double Short Decs" then I will, since I think that double short decs would be awesome. So what is a "Double Short Dec"? It is when you declare multiple short decs in one ID. Here is a standard short dec; it works great:

vim:
  pkg.installed

The short dec means that there are no arguments to pass, so it is not required to add any arguments, and it can save space.

YAML, though, gets upset when multiple short decs are declared. For the record...

THIS DOES NOT WORK:

vim:
  pkg.installed
  user.present

Similarly declaring a short dec in the same ID dec as a standard dec does not work either...

ALSO DOES NOT WORK:

fred:
  user.present
  ssh_auth.present:
    - name: AAAAB3NzaC...
    - user: fred
    - enc: ssh-dss
    - require:
      - user: fred

The correct way is to define them like this:

vim:
  pkg.installed: []
  user.present: []

fred:
  user.present: []
  ssh_auth.present:
    - name: AAAAB3NzaC...
    - user: fred
    - enc: ssh-dss
    - require:
      - user: fred

Alternatively, they can be defined the "old way", or with multiple "full decs":

vim:
  pkg:
    - installed
  user:
    - present

fred:
  user:
    - present
  ssh_auth:
    - present
    - name: AAAAB3NzaC...
    - user: fred
    - enc: ssh-dss
    - require:
      - user: fred

YAML supports only plain ASCII

According to the YAML specification, only ASCII characters can be used.

Within double-quotes, special characters may be represented with C-style escape sequences starting with a backslash ( \ ).

Examples:

- micro: "\u00b5"
- copyright: "\u00A9"
- A: "\x41"
- alpha: "\u0251"
- Alef: "\u05d0"

A list of usable Unicode characters will help you identify the correct numbers.

Python can also be used to discover the Unicode number for a character:

repr(u"Text with wrong characters i need to figure out")

This shell command can find wrong characters in your SLS files:

find . -name '*.sls' -exec grep --color='auto' -P -n '[^\x00-\x7F]' \{} \;

Alternatively, you can toggle the yaml_utf8 setting in your master configuration file. This is still an experimental setting, but it should handle the proper encoding conversion in Salt after the YAML states are compiled.

Underscores stripped in Integer Definitions

If a definition only includes numbers and underscores, it is parsed by YAML as an integer and all underscores are stripped. To ensure the object becomes a string, it should be surrounded by quotes.

Here's an example:

>>> import yaml
>>> yaml.safe_load('2013_05_10')
20130510
>>> yaml.safe_load('"2013_05_10"')
'2013_05_10'

Automatic datetime conversion

If there is a value in a YAML file formatted 2014-01-20 14:23:23 or similar, YAML will automatically convert this to a Python datetime object. These objects are not msgpack serializable, and so may break core salt functionality. If values such as these are needed in a salt YAML file (specifically a configuration file), they should be surrounded with quotes to force YAML to serialize them as strings:

>>> import yaml
>>> yaml.safe_load('2014-01-20 14:23:23')
datetime.datetime(2014, 1, 20, 14, 23, 23)
>>> yaml.safe_load('"2014-01-20 14:23:23"')
'2014-01-20 14:23:23'

Additionally, numbers formatted like XXXX-XX-XX will also be converted (or YAML will attempt to convert them, and error out if it doesn't think the date is a real one). Thus, for example, if a minion were to have an ID of 4017-16-20 the minion would not start because YAML would complain that the date was out of range. The workaround is the same, surround the offending string with quotes:

>>> import yaml
>>> yaml.safe_load('4017-16-20')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/yaml/__init__.py", line 93, in safe_load
    return load(stream, SafeLoader)
  File "/usr/local/lib/python2.7/site-packages/yaml/__init__.py", line 71, in load
    return loader.get_single_data()
  File "/usr/local/lib/python2.7/site-packages/yaml/constructor.py", line 39, in get_single_data
    return self.construct_document(node)
  File "/usr/local/lib/python2.7/site-packages/yaml/constructor.py", line 43, in construct_document
    data = self.construct_object(node)
  File "/usr/local/lib/python2.7/site-packages/yaml/constructor.py", line 88, in construct_object
    data = constructor(self, node)
  File "/usr/local/lib/python2.7/site-packages/yaml/constructor.py", line 312, in construct_yaml_timestamp
    return datetime.date(year, month, day)
ValueError: month must be in 1..12
>>> yaml.safe_load('"4017-16-20"')
'4017-16-20'

Live Python Debug Output

If the minion or master seems to be unresponsive, a SIGUSR1 can be passed to the processes to display where in the code they are running. If encountering a situation like this, this debug information can be invaluable. First make sure the master or minion is running in the foreground:

salt-master -l debug
salt-minion -l debug

Then pass the signal to the master or minion when it seems to be unresponsive:

killall -SIGUSR1 salt-master
killall -SIGUSR1 salt-minion

Additionally, under BSD and Mac OS X, the debug subroutine is also set up for SIGINFO, which has the advantage that it can be sent with the Ctrl+T shortcut.

When filing an issue or sending questions to the mailing list for a problem with an unresponsive daemon this information can be invaluable.

Salt 0.16.x minions cannot communicate with a 0.17.x master

As of release 0.17.1 you can no longer run different versions of Salt on your Master and Minion servers. This is due to a protocol change for security purposes. The Salt team will continue to attempt to ensure versions are as backwards compatible as possible.

Debugging the Master and Minion

A list of common master and minion troubleshooting steps provides a starting point for resolving issues you may encounter.

Developing Salt

Overview

In its most typical use, Salt is a software application in which clients, called "minions", can be commanded and controlled from a central command server called a "master".

Commands are normally issued to the minions (via the master) by calling a client script simply called 'salt'.

Salt features a pluggable transport system to issue commands from a master to minions. The default transport is ZeroMQ.

Salt Client

Overview

The salt client is run on the same machine as the Salt Master and communicates with the salt-master to issue commands and to receive the results and display them to the user.

The primary abstraction for the salt client is called 'LocalClient'.

When LocalClient wants to publish a command to minions, it connects to the master by issuing a request to the master's ReqServer (TCP: 4506).

The LocalClient system listens to responses for its requests by listening to the master event bus publisher (master_event_pub.ipc).

Salt Master

Overview

The salt-master daemon runs on the designated Salt master and performs functions such as authenticating minions, sending and receiving requests from connected minions, and sending and receiving requests and replies from the 'salt' CLI.

Moving Pieces

When a Salt master starts up, a number of processes are started, all of which are called 'salt-master' in a process-list but have various role categories.

Among those categories are:

  • Publisher
  • EventPublisher
  • MWorker

Publisher

The Publisher process is responsible for sending commands over the designated transport to connected minions. The Publisher is bound to the following:

  • TCP: port 4505
  • IPC: publish_pull.ipc

Each salt minion establishes a connection to the master Publisher.

EventPublisher

The EventPublisher publishes events onto the event bus. It is bound to the following:

  • IPC: master_event_pull.ipc
  • IPC: master_event_pub.ipc

MWorker

Worker processes manage the back-end operations for the Salt Master.

The number of workers is equivalent to the number of 'worker_threads' specified in the master configuration and is always at least one.
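
For reference, this is set with the worker_threads option in the master config file, for example:

# /etc/salt/master
worker_threads: 5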

Workers are bound to the following:

  • IPC: workers.ipc

ReqServer

The Salt request server takes requests and distributes them to available MWorker processes for processing. It also receives replies back from minions.

The ReqServer is bound to the following:

  • TCP: 4506
  • IPC: workers.ipc

Each salt minion establishes a connection to the master ReqServer.

Job Flow

The Salt master works by always publishing commands to all connected minions, and the minions decide whether the command is meant for them by checking themselves against the command target.

The typical lifecycle of a salt job from the perspective of the master might be as follows:

1) A command is issued on the CLI. For example, 'salt my_minion test.ping'.

2) The 'salt' command uses LocalClient to generate a request to the salt master by connecting to the ReqServer on TCP:4506 and issuing the job.

3) The salt-master ReqServer sees the request and passes it to an available MWorker over workers.ipc.

4) A worker picks up the request and handles it. First, it checks to ensure that the requested user has permissions to issue the command. Then, it sends the publish command to all connected minions. For the curious, this happens in ClearFuncs.publish().

5) The worker announces on the master event bus that it is about to publish a job to connected minions. This happens by placing the event on the master event bus (master_event_pull.ipc) where the EventPublisher picks it up and distributes it to all connected event listeners on master_event_pub.ipc.

6) The message to the minions is encrypted and sent to the Publisher via IPC on publish_pull.ipc.

7) Connected minions have a TCP session established with the Publisher on TCP port 4505 where they await commands. When the Publisher receives the job over publish_pull, it sends the jobs across the wire to the minions for processing.

8) After the minions receive the request, they decrypt it and perform any requested work, if they determine that they are targeted to do so.

9) When the minion is ready to respond, it publishes the result of its job back to the master by sending the encrypted result back to the master on TCP 4506 where it is again picked up by the ReqServer and forwarded to an available MWorker for processing. (Again, this happens by passing this message across workers.ipc to an available worker.)

10) When the MWorker receives the job it decrypts it and fires an event onto the master event bus (master_event_pull.ipc). (Again for the curious, this happens in AESFuncs._return().)

11) The EventPublisher sees this event and re-publishes it on the bus to all connected listeners of the master event bus (on master_event_pub.ipc). This is where the LocalClient has been waiting, listening to the event bus for minion replies. It gathers the job and stores the result.

12) When all targeted minions have replied or the timeout has been exceeded, the salt client displays the results of the job to the user on the CLI.

Salt Minion

Overview

The salt-minion is a single process that sits on machines to be managed by Salt. It can either operate as a stand-alone daemon which accepts commands locally via 'salt-call' or it can connect back to a master and receive commands remotely.

When starting up, salt minions connect _back_ to a master defined in the minion config file. They connect to two ports on the master:

  • TCP: 4505

    This is the connection to the master Publisher. It is on this port that the minion receives jobs from the master.

  • TCP: 4506

    This is the connection to the master ReqServer. It is on this port that the minion sends job results back to the master.

Event System

Similar to the master, a salt-minion has its own event system that operates over IPC by default. The minion event system operates on a push/pull system with IPC files at minion_event_<unique_id>_pub.ipc and minion_event_<unique_id>_pull.ipc.

The astute reader might ask why have an event bus at all with a single-process daemon. The answer is that the salt-minion may fork other processes as required to do the work without blocking the main salt-minion process, and this necessitates a mechanism by which those processes can communicate with each other. Secondarily, this provides a common interface to the salt minion: any user with sufficient permissions can read from or write to the bus.

Job Flow

When a salt minion starts up, it attempts to connect to the Publisher and the ReqServer on the salt master. It then attempts to authenticate and once the minion has successfully authenticated, it simply listens for jobs.

Jobs normally come either from the 'salt-call' script run by a local user on the salt minion, or directly from a master.

Master Job Flow

1) A master publishes a job that is received by a minion as outlined by the master's job flow above.

2) The minion is polling its receive socket that's connected to the master Publisher (TCP 4505 on master). When it detects an incoming message, it picks it up from the socket and decrypts it.

3) A new minion process or thread is created and provided with the contents of the decrypted message.

4) The new minion thread starts in the _thread_return() function, which calls out to the requested function contained in the job.

5) The requested function runs and returns a result. [Still in thread.]

6) The result of the function that's run is encrypted and returned to the master's ReqServer (TCP 4506 on master). [Still in thread.]

7) Thread exits. Because the main thread was only blocked for the time that it took to initialize the worker thread, many other requests could have been received and processed during this time.

A Note on ClearFuncs vs. AESFuncs

A common source of confusion is determining when messages are passed in the clear and when they are passed using encryption. There are two rules governing this behaviour:

1) ClearFuncs is used for intra-master communication and during the initial authentication handshake between a minion and master during the key exchange.

2) AESFuncs is used everywhere else.

Contributing

There is a great need for contributions to Salt and patches are welcome! The goal here is to make contributions clear, make sure there is a trail for where the code has come from, and most importantly, to give credit where credit is due!

There are a number of ways to contribute to Salt development.

For details on how to contribute documentation improvements please review Writing Salt Documentation.

Sending a GitHub pull request

Sending pull requests on GitHub is the preferred method for receiving contributions. The workflow advice below mirrors GitHub's own guide and is well worth reading.

  1. Fork saltstack/salt on GitHub.

  2. Make a local clone of your fork.

    git clone git@github.com:my-account/salt.git
    cd salt
    
  3. Add saltstack/salt as a git remote.

    git remote add upstream https://github.com/saltstack/salt.git
    
  4. Create a new branch in your clone.

    Note

    A branch should have one purpose. For example, "Fix bug X," or "Add feature Y". Multiple unrelated fixes and/or features should be isolated into separate branches.

    If you're working on a fix, create your branch from the oldest release branch having the bug. See Which Salt Branch?.

    git fetch upstream
    git checkout -b fix-broken-thing upstream/2015.5
    

    If you're working on a feature, create your branch from the develop branch.

    git fetch upstream
    git checkout -b add-cool-feature upstream/develop
    
  5. Edit and commit changes to your branch.

    vim path/to/file1 path/to/file2
    git diff
    git add path/to/file1 path/to/file2
    git commit
    

    Write a short, descriptive commit title and a longer commit message if necessary.

    Note

    If your change fixes a bug or implements a feature already filed in the issue tracker, be sure to reference the issue number in the commit message body.

    fix broken things in file1 and file2
    
    Fixes #31337.  The issue is now eradicated from file1 and file2.
    
    # Please enter the commit message for your changes. Lines starting
    # with '#' will be ignored, and an empty message aborts the commit.
    # On branch fix-broken-thing
    # Changes to be committed:
    #       modified:   path/to/file1
    #       modified:   path/to/file2
    

    If you get stuck, there are many introductory Git resources on http://help.github.com.

  6. Push your locally-committed changes to your GitHub fork.

    Note

    You may want to rebase before pushing to work out any potential conflicts.

    git fetch upstream
    git rebase upstream/2015.5 fix-broken-thing
    git push --set-upstream origin fix-broken-thing
    

    or,

    git fetch upstream
    git rebase upstream/develop add-cool-feature
    git push --set-upstream origin add-cool-feature
    
  7. Find the branch on your GitHub salt fork.

    https://github.com/my-account/salt/branches/fix-broken-thing

  8. Open a new pull request.

    Click on Pull Request on the right near the top of the page,

    https://github.com/my-account/salt/pull/new/fix-broken-thing

    1. If your branch is a fix for a release branch, choose that as the base branch (e.g. 2015.5),

      https://github.com/my-account/salt/compare/saltstack:2015.5...fix-broken-thing

      If your branch is a feature, choose develop as the base branch,

      https://github.com/my-account/salt/compare/saltstack:develop...add-cool-feature

    2. Review that the proposed changes are what you expect.

    3. Write a descriptive comment. Include links to related issues (e.g. 'Fixes #31337.') in the comment field.

    4. Click Create pull request.

  9. Salt project members will review your pull request and automated tests will run on it.

    If you recognize any test failures as being related to your proposed changes or if a reviewer asks for modifications:

    1. Make the new changes in your local clone on the same local branch.
    2. Push the branch to GitHub again using the same commands as before.
    3. New and updated commits will be added to the pull request automatically.
    4. Feel free to add a comment to the discussion.

Note

Jenkins

Pull requests against saltstack/salt are automatically tested on a variety of operating systems and configurations. On average these tests take 30 minutes. Depending on your GitHub notification settings, you may also receive an email message about the test results.

Test progress and results can be found at http://jenkins.saltstack.com/.

Which Salt branch?

GitHub will open pull requests against Salt's main branch, develop, by default. Ideally features should go into develop and bug fixes should go into the oldest supported release branch affected by the bug. See Sending a GitHub pull request.

If you have a bug fix and have already forked your working branch from develop and do not know how to rebase your commits against another branch, then submit it to develop anyway and we'll be sure to backport it to the correct place.

The current release branch

The current release branch is the most recent stable release. Pull requests containing bug fixes should be made against the release branch.

The branch name will be a date-based name such as 2015.5.

Bug fixes are made on this branch so that minor releases can be cut from this branch without introducing surprises and new features. This approach maximizes stability.

The Salt development team will "merge-forward" any fixes made on the release branch to the develop branch once the pull request has been accepted. This keeps the fix in isolation on the release branch and also keeps the develop branch up-to-date.

Note

Closing GitHub issues from commits

This "merge-forward" strategy requires that the magic keywords to close a GitHub issue appear in the commit message text directly. Only including the text in a pull request will not close the issue.

GitHub will close the referenced issue once the commit containing the magic text is merged into the default branch (develop). Any magic text input only into the pull request description will not be seen at the Git-level when those commits are merged-forward. In other words, only the commits are merged-forward and not the pull request.

The develop branch

The develop branch is unstable and bleeding-edge. Pull requests containing feature additions or non-bug-fix changes should be made against the develop branch.

The Salt development team will back-port bug fixes made to develop to the current release branch if the contributor cannot create the pull request against that branch.

Keeping Salt Forks in Sync

Salt is advancing quickly. It is therefore critical to pull upstream changes into your fork on a regular basis. Nothing is worse than putting hard work into a pull request only to see bunches of merge conflicts because it has diverged too far from upstream.

The following assumes origin is the name of your fork and upstream is the name of the main saltstack/salt repository.

  1. View existing remotes.

    git remote -v
    
  2. Add the upstream remote.

    # For ssh github
    git remote add upstream git@github.com:saltstack/salt.git
    
    # For https github
    git remote add upstream https://github.com/saltstack/salt.git
    
  3. Pull upstream changes into your clone.

    git fetch upstream
    
  4. Update your copy of the develop branch.

    git checkout develop
    git merge --ff-only upstream/develop
    

    If Git complains that a fast-forward merge is not possible, you have local commits.

    • Run git pull --rebase origin develop to rebase your changes on top of the upstream changes.
    • Or, run git branch <branch-name> to create a new branch with your commits. You will then need to reset your develop branch before updating it with the changes from upstream.

    If Git complains that local files will be overwritten, you have changes to files in your working directory. Run git status to see the files in question.

  5. Update your fork.

    git push origin develop
    
  6. Repeat the previous two steps for any other branches you work with, such as the current release branch.

Posting patches to the mailing list

Patches will also be accepted by email. Format patches using git format-patch and send them to the salt-users mailing list. The contributor will then get credit for the patch, and the Salt community will have an archive of the patch and a place for discussion.

Backporting Pull Requests

If a bug is fixed on develop and the bug is also present on a currently-supported release branch it will need to be back-ported to all applicable branches.

Note

Most Salt contributors can skip these instructions

These instructions do not need to be read in order to contribute to the Salt project! The SaltStack team will back-port fixes on behalf of contributors in order to keep the contribution process easy.

These instructions are intended for frequent Salt contributors, advanced Git users, SaltStack employees, or independent souls who wish to back-port changes themselves.

It is often easiest to fix a bug on the oldest supported release branch and then merge that branch forward into develop (as described earlier in this document). When that is not possible the fix must be back-ported, or copied, into any other affected branches.

These steps assume a pull request #1234 has been merged into develop. And upstream is the name of the remote pointing to the main Salt repo.

  1. Identify the oldest supported release branch that is affected by the bug.

  2. Create a new branch for the back-port by reusing the same branch from the original pull request.

    Name the branch bp-<NNNN> and use the number of the original pull request.

    git fetch upstream refs/pull/1234/head:bp-1234
    git checkout bp-1234
    
  3. Find the parent commit of the original pull request.

    The parent commit of the original pull request must be known in order to rebase onto a release branch. The easiest way to find this is on GitHub.

    Open the original pull request on GitHub and find the first commit in the list of commits. Select and copy the SHA for that commit. The parent of that commit can be specified by appending ~1 to the end.

  4. Rebase the new branch on top of the release branch.

    • <release-branch> is the branch identified in step #1.
    • <orig-base> is the SHA identified in step #3 -- don't forget to add ~1 to the end!
    git rebase --onto <release-branch> <orig-base> bp-1234
    

    Note, release branches prior to 2015.5 will not be able to make use of rebase and must use cherry-picking instead.

  5. Push the back-port branch to GitHub and open a new pull request.

    Opening a pull request for the back-port allows for the test suite and normal code-review process.

    git push -u origin bp-1234
    

Issue and Pull Request Labeling System

SaltStack uses several labeling schemes to help facilitate code contributions and bug resolution. See the GitHub Labels and Milestones documentation below for more information.

Deprecating Code

Salt should remain backwards compatible, though sometimes this backwards compatibility needs to be broken because a specific feature and/or solution is no longer necessary or required. At first one might think: let me change this code, it seems that it's not used anywhere else, so it should be safe to remove. Then, once there's a new release, users complain about functionality which was removed even though they were using it. This should, at all costs, be avoided, and, in these cases, that specific code should be deprecated.

Depending on the complexity and usage of a specific piece of code, the deprecation time frame should be properly evaluated. As an example, a deprecation warning which is shown for 2 major releases, for example 0.17.0 and 2014.1.0, gives users enough time to stop using the deprecated code and adapt to the new one.

For example, if you're deprecating the usage of a keyword argument to a function, that specific keyword argument should remain in place for the full deprecation time frame and if that keyword argument is used, a deprecation warning should be shown to the user.

To help in this deprecation task, salt provides salt.utils.warn_until. The idea behind this helper function is to show the deprecation warning until salt reaches the provided version. Once the provided version is reached, salt.utils.warn_until will raise a RuntimeError, making salt stop its execution. This stoppage is unpleasant and will remind the developer that the deprecation limit has been reached and that the code can then be safely removed.

Consider the following example:

def some_function(bar=False, foo=None):
    if foo is not None:
        salt.utils.warn_until(
            (0, 18),
            'The \'foo\' argument has been deprecated and its '
            'functionality removed, as such, its usage is no longer '
            'required.'
        )

Consider that the current salt release is 0.16.0. Whenever foo is passed a value different from None, that warning will be shown to the user. This will continue to happen in every release until salt reaches version 0.18.0, at which point a RuntimeError will be raised, making us aware that the deprecated code should now be removed.
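
The same helper can guard an entire deprecated function. A minimal sketch, where old_function and new_function are hypothetical names:

def old_function(*args, **kwargs):
    # Warn on every call until version 0.18.0 is reached, then raise
    # a RuntimeError so the dead code gets removed.
    salt.utils.warn_until(
        (0, 18),
        'old_function is deprecated, please use new_function instead.'
    )
    return new_function(*args, **kwargs)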

Dunder Dictionaries

Salt provides several special "dunder" dictionaries as a convenience for Salt development. These include __opts__, __context__, __salt__, and others. This document will describe each dictionary and detail where they exist and what information and/or functionality they provide.

__opts__

Available in
  • All loader modules

The __opts__ dictionary contains all of the options passed in the configuration file for the master or minion.

Note

In many places in salt, instead of pulling raw data from the __opts__ dict, configuration data should be pulled from the salt get functions such as config.get, i.e. __salt__['config.get']('foo:bar'). The get functions also allow for dict traversal via the : delimiter. Consider using get functions whenever using __opts__ or __pillar__ and __grains__ (when using grains for configuration data).

The configuration file data made available in the __opts__ dictionary is the configuration data relative to the running daemon. If the modules are loaded and executed by the master, then the master configuration data is available; if the modules are executed by the minion, then the minion configuration is available. Any additional information passed into the respective configuration files is made available in the __opts__ dictionary.
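
As a minimal sketch of the recommended config.get approach from the note above, assuming a hypothetical myapp:timeout setting:

# Traverses nested keys with ':'; falls back to 30 if unset
timeout = __salt__['config.get']('myapp:timeout', 30)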

__salt__

Available in
  • Execution Modules
  • State Modules
  • Returners

__salt__ contains the execution module functions. This allows for all functions to be called as they have been set up by the salt loader.

__salt__['cmd.run']('fdisk -l')
__salt__['network.ip_addrs']()

__grains__

Available in
  • Execution Modules
  • State Modules
  • Returners
  • External Pillar

The __grains__ dictionary contains the grains data generated by the minion that is currently being worked with. In execution modules, state modules and returners this is the grains of the minion running the calls, when generating the external pillar the __grains__ is the grains data from the minion that the pillar is being generated for.
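
As a small illustration, an execution module might branch on a grain:

# e.g. pick a package name based on the minion's os_family grain
pkg_name = 'httpd' if __grains__.get('os_family') == 'RedHat' else 'apache2'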

__pillar__

Available in
  • Execution Modules
  • State Modules
  • Returners

The __pillar__ dictionary contains the pillar for the respective minion.

__context__

__context__ exists in state modules and execution modules.

During a state run the __context__ dictionary persists across all states that are run and then is destroyed when the state ends.

When running an execution module __context__ persists across all module executions until the modules are refreshed, such as when saltutil.sync_all or state.highstate are executed.

A great place to see how to use __context__ is in the cp.py module in salt/modules/cp.py. The fileclient authenticates with the master when it is instantiated and then is used to copy files to the minion. Rather than create a new fileclient for each file that is to be copied down, one instance of the fileclient is instantiated in the __context__ dictionary and is reused for each file. Here is an example from salt/modules/cp.py:

if 'cp.fileclient' not in __context__:
    __context__['cp.fileclient'] = salt.fileclient.get_file_client(__opts__)

Note

Because __context__ may or may not have been destroyed, always be sure to check for the existence of the key in __context__ and generate the key before using it.

External Pillars

Salt provides a mechanism for generating pillar data by calling external pillar interfaces. This document will describe an outline of an ext_pillar module.

Location

Salt expects to find your ext_pillar module in the same location where it looks for other python modules. If the extension_modules option in your Salt master configuration is set, Salt will look for a pillar directory under there and load all the modules it finds. Otherwise, it will look in your Python site-packages salt/pillar directory.

Configuration

The external pillars that are called when a minion refreshes its pillars are controlled by the ext_pillar option in the Salt master configuration. You can pass a single argument, a list of arguments, or a dictionary of arguments to your pillar:

ext_pillar:
  - example_a: some argument
  - example_b:
    - argumentA
    - argumentB
  - example_c:
      keyA: valueA
      keyB: valueB

The Module

Imports and Logging

Import modules your external pillar module needs. You should first include generic modules that come with stock Python:

import logging

And then start logging. This is an idiomatic way of setting up logging in Salt:

log = logging.getLogger(__name__)

Finally, load modules that are specific to what you are doing. You should catch import errors and set a flag that the __virtual__ function can use later.

try:
    import weird_thing
    EXAMPLE_A_LOADED = True
except ImportError:
    EXAMPLE_A_LOADED = False

Options

If you define an __opts__ dictionary, it will be merged into the __opts__ dictionary handed to the ext_pillar function later. This is a good place to put default configuration items. The convention is to name things modulename.option.

__opts__ = { 'example_a.someconfig': 137 }

Initialization

If you define an __init__ function, it will be called with the following signature:

def __init__( __opts__ ):
    # Do init work here
    pass

Note: The __init__ function is run every time a particular minion causes the external pillar to be called, so don't put heavy initialization code here. The __init__ functionality is a side effect of the Salt loader, so it may not be as useful in pillars as it is in other Salt items.

__virtual__

If you define a __virtual__ function, you can control whether or not this module is visible. If it returns False, then Salt ignores this module. If it returns a string, then that string will be how Salt identifies this external pillar in its ext_pillar configuration. If you're not renaming the module, simply return True in the __virtual__ function, which has the same effect as not defining the function; in that case, the name Salt's ext_pillar uses to identify this module is its conventional name in Python.

This is useful to write modules that can be installed on all Salt masters, but will only be visible if a particular piece of software your module requires is installed.

# This external pillar will be known as `example_a`
def __virtual__():
    if EXAMPLE_A_LOADED:
        return True
    return False

# This external pillar will be known as `something_else`
__virtualname__ = 'something_else'

def __virtual__():
    if EXAMPLE_A_LOADED:
        return __virtualname__
    return False

ext_pillar

This is where the real work of an external pillar is done. If this module is active and has a function called ext_pillar, whenever a minion updates its pillar this function is called.

How it is called depends on how it is configured in the Salt master configuration. The minion's ID is passed first, followed by the current pillar dictionary; the pillar contains items that have already been added, starting with the data from pillar_roots, and then from any external pillars that have already run.

Using our example above:

ext_pillar( id, pillar, 'some argument' )                   # example_a
ext_pillar( id, pillar, 'argumentA', 'argumentB' )          # example_b
ext_pillar( id, pillar, keyA='valueA', keyB='valueB' )      # example_c

In the example_a case, pillar will contain the items from the pillar_roots, in example_b pillar will contain that plus the items added by example_a, and in example_c pillar will contain that plus the items added by example_b. In all three cases, id will contain the ID of the minion making the pillar request.

This function should return a dictionary, the contents of which are merged in with all of the other pillars and returned to the minion. Note: this function is called once for each minion that fetches its pillar data.

def ext_pillar( minion_id, pillar, *args, **kwargs ):

    my_pillar = {}

    # Do stuff

    return my_pillar

You shouldn't just add items to pillar and return that, since that will cause Salt to merge data that already exists. Rather, just return the items you are adding or changing. You could, however, use pillar in your module to make some decision based on pillar data that already exists.
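
For example, a sketch that consults the existing pillar but returns only new keys (role and web_config are hypothetical names):

def ext_pillar(minion_id, pillar, *args, **kwargs):
    # Only the returned keys are merged into the minion's pillar
    if pillar.get('role') == 'webserver':
        return {'web_config': {'listen_port': 80}}
    return {}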

This function has access to some useful globals:

__opts__: A dictionary of mostly Salt configuration options. If you had an __opts__ dictionary defined in your module, those values will be included.
__salt__: A dictionary of Salt module functions, useful so you don't have to duplicate functions that already exist. E.g. __salt__['cmd.run']( 'ls -l' ). Note: this runs on the master.
__grains__: A dictionary of the grains of the minion making this pillar call.

Example configuration

As an example, if you wanted to add external pillar via the cmd_json external pillar, add something like this to your master config:

ext_pillar:
  - cmd_json: 'echo {\"arg\":\"value\"}'

Reminder

Just as with traditional pillars, external pillars must be refreshed in order for minions to see any fresh data:

salt '*' saltutil.refresh_pillar

Installing Salt for development

Clone the repository using:

git clone https://github.com/saltstack/salt

Note

tags

Just cloning the repository is enough to work with Salt and make contributions. However, fetching additional tags from git is required to have Salt report the correct version for itself. To do this, first add the git repository as an upstream source:

git remote add upstream https://github.com/saltstack/salt

Fetching tags is done with the git 'fetch' utility:

git fetch --tags upstream

Create a new virtualenv:

virtualenv /path/to/your/virtualenv

Avoid making your virtualenv path too long. On Arch Linux, where Python 3 is the default installation of Python, use the virtualenv2 command instead of virtualenv.

Note

Using system Python modules in the virtualenv

To use already-installed python modules in virtualenv (instead of having pip download and compile new ones), run virtualenv --system-site-packages. Using this method eliminates the requirement to install the salt dependencies again, although it does assume that the listed modules are all installed in the system PYTHONPATH at the time of virtualenv creation.

Activate the virtualenv:

source /path/to/your/virtualenv/bin/activate

Install Salt (and dependencies) into the virtualenv:

pip install M2Crypto    # Don't install on Debian/Ubuntu (see below)
pip install pyzmq PyYAML pycrypto msgpack-python jinja2 psutil
pip install -e ./salt   # the path to the salt git clone from above

Note

Installing M2Crypto

swig and libssl-dev are required to build M2Crypto. To fix the error command 'swig' failed with exit status 1 while installing M2Crypto, try installing it with the following command:

env SWIG_FEATURES="-cpperraswarn -includeall -D__`uname -m`__ -I/usr/include/openssl" pip install M2Crypto

Debian and Ubuntu systems have modified openssl libraries and mandate that a patched version of M2Crypto be installed. This means that M2Crypto needs to be installed via apt:

apt-get install python-m2crypto

This also means that pulling in the M2Crypto installed using apt requires using --system-site-packages when creating the virtualenv.

If you're using a platform other than Debian or Ubuntu, and you are installing M2Crypto via pip instead of a system package, then you will also need the gcc compiler.

Note

Installing psutil

Python header files are required to build this module, otherwise the pip install will fail. If your distribution separates binaries and headers into separate packages, make sure that you have the headers installed. In most Linux distributions which split the headers into their own package, this can be done by installing the python-dev or python-devel package. For other platforms, the package will likely be similarly named.

Note

Installing dependencies on OS X.

You can install needed dependencies on OS X using homebrew or macports. See OS X Installation

Warning

Installing on RedHat-based Distros

If installing from pip (or from source using setup.py install), be advised that the yum-utils package is needed for Salt to manage packages on RedHat-based systems.

Running a self-contained development version

During development it is easiest to be able to run the Salt master and minion that are installed in the virtualenv you created above, and also to have all the configuration, log, and cache files contained in the virtualenv as well.

Copy the master and minion config files into your virtualenv:

mkdir -p /path/to/your/virtualenv/etc/salt
cp ./salt/conf/master ./salt/conf/minion /path/to/your/virtualenv/etc/salt/

Edit the master config file:

  1. Uncomment and change the user: root value to your own user.
  2. Uncomment and change the root_dir: / value to point to /path/to/your/virtualenv.
  3. If you are running version 0.11.1 or older, uncomment, and change the pidfile: /var/run/salt-master.pid value to point to /path/to/your/virtualenv/salt-master.pid.
  4. If you are also running a non-development version of Salt you will have to change the publish_port and ret_port values as well.

Edit the minion config file:

  1. Repeat the edits you made in the master config for the user and root_dir values as well as any port changes.
  2. If you are running version 0.11.1 or older, uncomment, and change the pidfile: /var/run/salt-minion.pid value to point to /path/to/your/virtualenv/salt-minion.pid.
  3. Uncomment and change the master: salt value to point at localhost.
  4. Uncomment and change the id: value to something descriptive like "saltdev". This isn't strictly necessary but it will serve as a reminder of which Salt installation you are working with.
  5. If you changed the ret_port value in the master config because you are also running a non-development version of Salt, then you will have to change the master_port value in the minion config to match.

Note

Using salt-call with a Standalone Minion

If you plan to run salt-call with this self-contained development environment in a masterless setup, you should invoke salt-call with -c /path/to/your/virtualenv/etc/salt so that salt can find the minion config file. Without the -c option, Salt finds its config files in /etc/salt.
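
For example, a masterless test run from inside the virtualenv might look like:

salt-call -c /path/to/your/virtualenv/etc/salt --local test.ping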

Start the master and minion, accept the minion's key, and verify your local Salt installation is working:

cd /path/to/your/virtualenv
salt-master -c ./etc/salt -d
salt-minion -c ./etc/salt -d
salt-key -c ./etc/salt -L
salt-key -c ./etc/salt -A
salt -c ./etc/salt '*' test.ping

Running the master and minion in debug mode can be helpful when developing. To do this, add -l debug to the calls to salt-master and salt-minion. If you would like to log to the console instead of to the log file, remove the -d.

Note

Too long socket path?

Once the minion starts, you may see an error like the following:

zmq.core.error.ZMQError: ipc path "/path/to/your/virtualenv/
var/run/salt/minion/minion_event_7824dcbcfd7a8f6755939af70b96249f_pub.ipc"
is longer than 107 characters (sizeof(sockaddr_un.sun_path)).

This means that the path to the socket the minion is using is too long. This is a system limitation, so the only workaround is to reduce the length of this path. This can be done in a couple different ways:

  1. Create your virtualenv in a path that is short enough.
  2. Edit the sock_dir minion config variable and reduce its length. Remember that this path is relative to the value you set in root_dir.

NOTE: The socket path is limited to 107 characters on Solaris and Linux, and 103 characters on BSD-based systems.

Note

File descriptor limits

Ensure that the system open file limit is raised to at least 2047:

# check your current limit
ulimit -n

# raise the limit. persists only until reboot
# use 'limit descriptors 2047' for c-shell
ulimit -n 2047

To set file descriptors on OSX, refer to the OS X Installation instructions.

Installing Salt from the Python Package Index

If you are installing using easy_install, you will need to define a USE_SETUPTOOLS environment variable, otherwise dependencies will not be installed:

USE_SETUPTOOLS=1 easy_install salt

Editing and previewing the documentation

You need sphinx-build command to build the docs. In Debian/Ubuntu this is provided in the python-sphinx package. Sphinx can also be installed to a virtualenv using pip:

pip install Sphinx==1.3b2

Change to salt documentation directory, then:

cd doc; make html
  • This will build the HTML docs. Run make without any arguments to see the available make targets, which include html, man, and text.
  • The docs are then built within the doc/_build/ folder. To update the docs after making changes, run make again.
  • The docs use reStructuredText for markup. See a live demo at http://rst.ninjs.org/.
  • The help information on each module or state is culled from the python code that runs for that piece. Find them in salt/modules/ or salt/states/.
  • To build the docs on Arch Linux, the python2-sphinx package is required. Additionally, it is necessary to tell make where to find the proper sphinx-build binary, like so:
make SPHINXBUILD=sphinx-build2 html
  • To build the docs on RHEL/CentOS 6, the python-sphinx10 package must be installed from EPEL, and the following make command must be used:
make SPHINXBUILD=sphinx-1.0-build html

Once you've updated the documentation, you can run the following command to launch a simple Python HTTP server to see your changes:

cd _build/html; python -m SimpleHTTPServer

Running unit and integration tests

Run the test suite with following command:

./setup.py test

See here for more information regarding the test suite.

GitHub Labels and Milestones

SaltStack uses several labeling schemes, as well as applying milestones, to triage incoming issues and pull requests in the GitHub Issue Tracker. Most of the labels and milestones are used for internal tracking, but the following definitions might prove useful for the community to discover the best issues to help resolve.

Milestones

Milestones are most often applied to issues, as a milestone is assigned to every issue that has been triaged. However, milestones can also be applied to pull requests. SaltStack uses milestones to track bugs or features that should be included in the next major feature release, or even the next bug-fix release, as well as what issues are ready to be worked on or what might be blocked. All incoming issues must have a milestone associated with them.

Approved
Used to indicate that this issue has all of the needed information and is ready to be worked on.
Blocked
Used to indicate that the issue is not ready to be worked on yet. This typically applies to issues that have been labeled with “Info Needed”, “Question”, “Expected Behavior”, “Won’t Fix for Now”, etc.
Dot or Bug-fix Release
Used to help filter/identify what issues must be fixed before the release such as 2014.7.4 or 2015.2.3. This milestone is often used in conjunction with the Blocker label, but not always.
Feature Release
Similar to the Dot or Bug-fix Release milestone, but for upcoming feature releases such as Boron, Carbon, etc. This milestone is often used in conjunction with the Blocker label, but not always.

Labels

Labels are used to facilitate the resolution of new pull requests and open issues. Most labels are confined to being applied to either issues or pull requests, though some labels may be applied to both.

Issue Labels

All incoming issues should be triaged with at least one label and a milestone. When a new issue comes in, it should be determined if the issue is a bug or a feature request, and either of those labels should be applied accordingly. Bugs and Feature Requests have differing labeling schemes, detailed below, where other labels are applied to them to further help contributors find issues to fix or implement.

There are some labels, such as Question or some of the "Status" labels that may be applied as "stand alone" labels in which more information may be needed or a decision must be reached on how to proceed. (See the "Bug Status Labels" section below.)

Features

The Feature label should be applied when a user is requesting entirely new functionality. This can include new functions, modules, states, modular systems, flags for existing functions, etc. Features do not receive severity or priority labels, as those labels are only used for bugs. However, they may receive "Functional Area" labels or "ZD".

Feature request issues will be prioritized on an "as-needed" basis using milestones during SaltStack's feature release and sprint planning processes.

Bugs

All bugs should have the Bug label as well as a severity, priority, functional area, and a status, as applicable.

Severity

How severe is the bug? SaltStack uses four labels to determine the severity of a bug: Blocker, Critical, High, and Medium. This scale is intended to make the bug-triage process as objective as possible.

Blocker
Should be used sparingly to indicate must-have fixes for the impending release.
Critical
Applied to bugs that have data loss, crashes, hanging, unresponsive system, etc.
High Severity
Any bug report that contains incorrect functionality, bad functionality, a confusing user experience, etc.
Medium Severity
Applied to bugs that are about cosmetic items, spelling, spacing, colors, etc.
Priority

In addition to a severity, a priority is assigned to each bug to give further granularity when searching for bugs to fix. A bug's priority reflects how likely users are to encounter it:

P1
Very likely. Everyone will see the bug.
P2
Somewhat likely. Most will see the bug, but a few will not.
P3
About half will see the bug, and about half will not.
P4
Most will not see the bug. Usually a very specific use case or corner case.

Note

A bug's priority is relative to its functional area. If a bug report, for example, about gitfs includes details indicating that everyone who uses gitfs will run into the bug, then a P1 label will be applied, even though Salt users who do not enable gitfs will never see it.

Functional Areas

All bugs should receive a "Functional Area" label to indicate what region of Salt the bug is mainly seen in. This will help internal developers as well as community members identify areas of expertise to find issues that can be fixed more easily. Functional Area labels can also be applied to Feature Requests.

Functional Area Labels, in alphabetical order, include:

  • Core
  • Documentation
  • Execution Module
  • File Servers
  • Multi-Master
  • Packaging
  • Pillar
  • Platform Mgmt.
  • RAET
  • Returners
  • Salt-API
  • Salt-Cloud
  • Salt-SSH
  • Salt-Syndic
  • State Module
  • Windows
  • ZMQ
Bug Status Labels

Status labels are used to define and track the state a bug is in at any given time. Not all bugs will have a status label, but if a SaltStack employee is able to apply one, he or she will. Status labels are somewhat unique in that they might be the only label on an issue, such as Pending Discussion, Info Needed, or Expected Behavior, until further action can be taken.

Cannot Reproduce
Someone from the SaltStack team has tried to reproduce the bug with the given information but they are unable to replicate the problem. More information will need to be provided from the original issue-filer before proceeding.
Confirmed
A SaltStack engineer has confirmed the reported bug and provided a simple way to reproduce the failure.
Duplicate
The issue has been reported already in another report. A link to the other bug report must be provided. At that point the new issue can be closed. Usually, the earliest bug on file is kept as that typically has the most discussion revolving around the issue, though not always. (This can be a "stand-alone" label.)
Expected Behavior
The issue reported is expected behavior and nothing needs to be fixed. (This can be a "stand-alone" label.)
Fixed Pending Verification
The bug has been fixed and a link to the applicable pull request(s) has been provided, but confirmation is being sought from the community member(s) involved in the bug to test and confirm the fix.
Info Needed
More information about the issue is needed before proceeding, such as a versions report, a sample state, the command the user was running, or the operating system the error was occurring on. (This can be a "stand-alone" label.)
Upstream Bug
The reported bug is something that cannot be fixed in the Salt code base but is instead a bug in another library, such as a bug in ZMQ or Python. When an issue is labeled with Upstream Bug, a bug report in the upstream project must be filed (or found, if a report already exists) and a link to the report must be provided on the issue in Salt for tracking purposes. (This can be a stand-alone label.)
Won't Fix for Now
The SaltStack team has acknowledged the issue at hand is legitimate, but made the call that it’s not something they’re able or willing to fix at this time. These issues may be revisited in the future.
Other

There are a couple of other labels that are helpful in categorizing bugs that are not included in the categories above. These labels can either stand on their own such as Question or can be applied to bugs or feature requests as applicable.

Low Hanging Fruit
Applied to bugs that should be easy to fix. This is useful for new contributors looking for simple ways to get involved in contributing to Salt.
Question
Used when the issue isn’t a bug nor a feature, but the user has a question about expected behavior, how something works, is misunderstanding a concept, etc. This label is typically applied on its own with Blocked milestone.
Regression
Helps with additional filtering for bug fixing. If something previously worked and now does not work, as opposed to something that never worked in the first place, the issue should be treated with greater urgency.
ZD
Stands for “Zendesk” and is used to help track bugs that customers are seeing as well as community members. Bugs with this label should be treated with greater urgency.
Pull Request Labels

SaltStack also applies various labels to incoming pull requests. These are mainly used to help SaltStack engineers easily identify the nature of the changes presented in a pull request and whether or not that pull request is ready to be reviewed and merged into the Salt codebase.

Type of Change

A "* Change" label is applied to each incoming pull request. The type of change label that is applied to a pull request is based on a scale that encompasses the number of lines affected by the change in conjunction with the area of code the change touches (i.e. core code areas vs. execution or state modules).

The conditions given for these labels are recommendations, as the pull request reviewer will also consult their intuition and experience regarding the magnitude of the impact of the proposed changes in the pull request.

Core code areas include: state compiler, crypto engine, master and minion, transport, pillar rendering, loader, transport layer, event system, salt.utils, client, cli, logging, netapi, runner engine, templating engine, top file compilation, file client, file server, mine, salt-ssh, test runner, etc.

  • Minor Change
    • Less than 64 lines changed, or
    • Less than 8 core lines changed
  • Medium Change
    • Less than 256 lines changed, or
    • Less than 64 core lines changed
  • Master Change
    • More than 256 lines changed, or
    • More than 64 core lines changed
  • Expert Change
    • Needs specialized, in-depth review
Back-port Labels

There are two labels that are used to keep track of what pull requests need to be back-ported to an older release branch and which pull requests have already been back-ported.

Bugfix - back-port
Indicates a pull request that needs to be back-ported. Once the back-port is completed, the back-porting pull request is linked to the original pull request and this label is removed.
Bugfix - [Done] back-ported
Indicates a pull request that has been back-ported to another branch. The pull request that is responsible for the backport should be linked to this original pull request.
Testing Labels

There are a couple of labels that the QA team uses to indicate the mergeability of a pull request. If the pull request is legitimately passing or failing tests, then one or more of these labels may be applied.

Lint
If a pull request fails the test run, but the only failures are related to pylint errors, this label will be applied to indicate that pylint needs to be fixed before proceeding.
Pending Changes
Indicates that additional commits should be added to the original pull request before the pull request is merged into the codebase. These changes are unrelated to fixing tests and are generally needed to round out any unfinished pull requests.
Tests Passed
Sometimes the Jenkins test run encounters problems, either tests that are known to have reliability issues or a test VM failed to build, but the problems are not related to the code changed in the pull request. This label is used to indicate that someone has reviewed the test failures and has deemed the failures to be non-pertinent.
Other Pull Request Labels
Awesome
Applied to pull requests that implement a cool new feature or fix a bug in an excellent way.

Labels that Bridge Issues and Pull Requests

Needs Testcase
Used by SaltStack's QA team to identify pain points and to bring special attention to areas that need test coverage, especially areas that have regressed. This label can apply to issues or pull requests, either open or closed. Once tests are written, the pull request containing the tests should be linked to the issue or pull request that originally had the Needs Testcase label. At that point, the Needs Testcase label must be removed to indicate that tests no longer need to be written.
Pending Discussion
If this label is applied to an issue, the issue may or may not be a bug. Enough information was provided about the issue, but other opinions on it are desired before proceeding. (This can be a "stand-alone" label.) If the label is applied to a pull request, it signals that further discussion must occur before a decision is made to either merge the pull request into the code base or to close it altogether.

Logging Internals

TODO

Modular Systems

When first working with Salt, it is not always clear where all of the modular components are and what they do. Salt comes loaded with more modular systems than many users are aware of, making Salt very easy to extend in many places.

The most commonly used modular systems are execution modules and states. But the modular systems extend well beyond the more easily exposed components and are often added to Salt to make the complete system more flexible.

Execution Modules

Execution modules make up the core of the functionality used by Salt to interact with client systems. The execution modules create the core system management library used by all Salt systems, including states, which interact with minion systems.

Execution modules are completely open ended in their execution. They can be used to do anything required on a minion, from installing packages to detecting information about the system. The only constraint on execution modules is that the defined functions must always return a JSON serializable object.
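
As a minimal sketch (the module name, function name, and file path are hypothetical), a custom execution module can be as small as this:

# _modules/myinfo.py -- a hypothetical custom execution module

def uptime():
    '''
    Callable as: salt '*' myinfo.uptime
    The only requirement is a JSON serializable return value.
    '''
    # __salt__ is injected into the module by the Salt loader at runtime
    return __salt__['cmd.run']('uptime')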

For a list of all built-in execution modules, click here

For information on writing execution modules, see this page.

Interactive Debugging

Sometimes debugging with print() and extra logs sprinkled everywhere is not the best strategy.

IPython is a helpful debug tool that has an interactive python environment which can be embedded in python programs.

First, IPython must be installed on the system:

# Debian
apt-get install ipython

# Arch Linux
pacman -Syu ipython2

# RHEL/CentOS (via EPEL)
yum install python-ipython

Now, in the Python module being debugged, add the following lines at the location where the debugger should start:

test = 'test123'
import IPython; IPython.embed_kernel()

After running a Salt command that hits that line, the following will show up in the log file:

[CRITICAL] To connect another client to this kernel, use:
[IPKernelApp] --existing kernel-31271.json

Now on the system that invoked embed_kernel, run the following command from a shell:

# NOTE: use ipython2 instead of ipython for Arch Linux
ipython console --existing

This provides a console that has access to all the vars and functions, and even supports tab-completion.

print(test)
test123

To exit IPython and continue running Salt, press Ctrl-d to log out.

State Modules

State modules are used to define the state interfaces used by Salt States. These modules are restrictive in that they must follow a number of rules to function properly.

Note

State modules define the available routines in sls files. If calling an execution module directly is desired, take a look at the module state.
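
For example, an sls file can call an execution module function directly through the module state; a minimal sketch (the state ID is arbitrary):

ping_all_minions:
  module.run:
    - name: test.ping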

Auth

The auth module system allows for external authentication routines to be easily added into Salt. The auth function needs to be implemented to satisfy the requirements of an auth module. Use the pam module as an example.
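
As a very rough sketch only (the hard-coded credential check is a placeholder, not a real strategy), an external auth module implements an auth function that returns True or False:

def auth(username, password):
    '''
    Return True if the given credentials are valid, False otherwise.
    '''
    # A real module would check against PAM, LDAP, a database, etc.
    return username == 'monty' and password == 'python'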

Fileserver

The fileserver module system is used to create fileserver backends used by the Salt Master. These modules need to implement the functions used in the fileserver subsystem. Use the gitfs module as an example.

Grains

Grain modules define extra routines to populate grains data. All defined public functions will be executed and MUST return a Python dict object. The dict keys will be added to the grains made available to the minion.
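
A minimal custom grains module might look like the following sketch (the module path and grain values are hypothetical); every public function is executed, and each returned dict is merged into the minion's grains:

# _grains/site.py -- a hypothetical custom grains module

def site_grains():
    '''
    Must return a dict; its keys become grains on the minion.
    '''
    return {'datacenter': 'dc1', 'rack': '42'}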

Output

The output modules supply the outputter system with routines to display data in the terminal. These modules are very simple and only require the output function to execute. The default system outputter is the nested module.
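
A bare-bones outputter is just an output function that takes the data and returns the string to print; a hypothetical sketch:

import pprint

def output(data):
    '''
    Return the formatted string that will be displayed in the terminal.
    '''
    return pprint.pformat(data)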

Pillar

Used to define optional external pillar systems. The pillar generated via the filesystem pillar is passed into external pillars. This is commonly used as a bridge to database data for pillar, but is also the backend to the libvirt state used to generate and sign libvirt certificates on the fly.

Renderers

Renderers are the system used to render sls files into salt highdata for the state compiler. They can be as simple as the py renderer and as complex as stateconf and pydsl.

Returners

Returners are used to send data from minions to external sources, commonly databases. A full returner will implement all routines to be supported as an external job cache. Use the redis returner as an example.
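
A bare-bones returner implements a returner function that receives the job return data. The sketch below (the file path is hypothetical) simply appends returns to a local file; a full returner like redis would also implement the external job cache routines:

import json

def returner(ret):
    '''
    `ret` is a dict with keys such as 'id', 'fun', 'jid', and 'return'.
    '''
    with open('/var/log/salt/returns.log', 'a') as fp_:
        fp_.write(json.dumps(ret) + '\n')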

Runners

Runners are purely master-side execution sequences. These range from simple reporting to orchestration engines like the overstate.

Tops

Tops modules are used to convert external data sources into top file data for the state system.
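
External tops modules implement a top function that returns top file data; a minimal sketch (the environment and state names here are hypothetical):

def top(**kwargs):
    '''
    kwargs includes the minion's opts and grains. Return a dict
    mapping environments to lists of states to apply.
    '''
    return {'base': ['core', 'users']}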

Wheel

The wheel system exposes master-side management routines. These routines are primarily intended for the API to enable master configuration.

Package Providers

This page contains guidelines for writing package providers.

Package Functions

One of the most important features of Salt is package management. There is no shortage of package managers, so in the interest of providing a consistent experience in pkg states, there are certain functions that should be present in a package provider. Note that these are subject to change as new features are added or existing features are enhanced.

list_pkgs

This function should declare an empty dict, and then add packages to it by calling pkg_resource.add_pkg, like so:

__salt__['pkg_resource.add_pkg'](ret, name, version)

The last thing that should be done before returning is to execute pkg_resource.sort_pkglist. This function does not presently do anything to the return dict, but will be used in future versions of Salt.

__salt__['pkg_resource.sort_pkglist'](ret)

list_pkgs returns a dictionary of installed packages, with the keys being the package names and the values being the version installed. Example return data:

{'foo': '1.2.3-4',
 'bar': '5.6.7-8'}
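
Putting these pieces together, a complete list_pkgs might look like the following sketch, where 'mypkg list' stands in for the package manager's real query command and each output line is assumed to be 'name version':

def list_pkgs():
    '''
    List installed packages as a dict of name/version pairs.
    '''
    ret = {}
    # 'mypkg list' is a placeholder for the real listing command
    out = __salt__['cmd.run']('mypkg list')
    for line in out.splitlines():
        comps = line.split()
        if len(comps) < 2:
            continue
        __salt__['pkg_resource.add_pkg'](ret, comps[0], comps[1])
    __salt__['pkg_resource.sort_pkglist'](ret)
    return ret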
latest_version

Accepts an arbitrary number of arguments. Each argument is a package name. The return value for a package will be an empty string if the package is not found or if the package is up-to-date. The only case in which a non-empty string is returned is if the package is available for new installation (i.e. not already installed) or if there is an upgrade available.

If only one argument was passed, this function returns a string; otherwise, a dict of name/version pairs is returned.

This function must also accept **kwargs, in order to receive the fromrepo and repo keyword arguments from pkg states. Where supported, these arguments should be used to find the install/upgrade candidate in the specified repository. The fromrepo kwarg takes precedence over repo, so if both of those kwargs are present, the repository specified in fromrepo should be used. However, if repo is used instead of fromrepo, it should still work, to preserve backwards compatibility with older versions of Salt.

version

Like latest_version, accepts an arbitrary number of arguments and returns a string if a single package name was passed, or a dict of name/value pairs if more than one was passed. The only difference is that the return values are the currently-installed versions of whatever packages are passed. If the package is not installed, an empty string is returned for that package.

upgrade_available

Deprecated and destined to be removed. For now, should just do the following:

return __salt__['pkg.latest_version'](name) != ''
install

The following arguments are required and should default to None:

  1. name (for single-package pkg states)
  2. pkgs (for multiple-package pkg states)
  3. sources (for binary package file installation)

The first thing that this function should do is call pkg_resource.parse_targets (see below). This function will convert the SLS input into a more easily parsed data structure. pkg_resource.parse_targets may need to be modified to support your new package provider, as it does things like parsing package metadata which cannot be done for every package management system.

pkg_params, pkg_type = __salt__['pkg_resource.parse_targets'](name,
                                                              pkgs,
                                                              sources)

Two values will be returned to the install function. The first of them will be a dictionary. The keys of this dictionary will be package names, though the values will differ depending on what kind of installation is being done:

  • If name was provided (and pkgs was not), then there will be a single key in the dictionary, and its value will be None. Once the data has been returned, if the version keyword argument was provided, then it should replace the None value in the dictionary.
  • If pkgs was provided, then name is ignored, and the dictionary will contain one entry for each package in the pkgs list. The values in the dictionary will be None if a version was not specified for the package, and the desired version if specified. See the Multiple Package Installation Options section of the pkg.installed state for more info.
  • If sources was provided, then name is ignored, and the dictionary values will be the path/URI for the package.

The second return value will be a string with two possible values: repository or file. The install function can use this value (if necessary) to build the proper command to install the targeted package(s).

Both before and after installing the target(s), you should run list_pkgs to obtain a list of the installed packages. You should then return the output of salt.utils.compare_dicts():

return salt.utils.compare_dicts(old, new)
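
Putting this together, a skeletal install for the same hypothetical package manager might look like the sketch below (assumes import salt.utils at the top of the module):

def install(name=None, pkgs=None, sources=None, **kwargs):
    '''
    Install the targeted package(s) and report the resulting changes.
    '''
    pkg_params, pkg_type = __salt__['pkg_resource.parse_targets'](name,
                                                                  pkgs,
                                                                  sources)
    old = list_pkgs()
    # For a 'file' pkg_type the dict values are paths/URIs rather than
    # versions; a real provider would build its command accordingly.
    targets = list(pkg_params)
    __salt__['cmd.run']('mypkg install {0}'.format(' '.join(targets)))
    new = list_pkgs()
    return salt.utils.compare_dicts(old, new)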
remove

Removes the passed package and returns a list of the packages removed.
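
A corresponding sketch for remove, again using the hypothetical package manager from above:

def remove(name, **kwargs):
    '''
    Remove the named package and return a list of packages removed.
    '''
    old = list_pkgs()
    if name not in old:
        return []
    __salt__['cmd.run']('mypkg remove {0}'.format(name))
    new = list_pkgs()
    return [x for x in old if x not in new]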

Package Repo Functions

There are some functions provided by pkg which are specific to package repositories, and not to packages themselves. When writing modules for new package managers, these functions should be made available as stated below, in order to provide compatibility with the pkgrepo state.

All repo functions should accept a basedir option, which defines which directory repository configuration should be found in. The default for this is dictated by the repo manager that is being used, and rarely needs to be changed.

basedir = '/etc/yum.repos.d'
__salt__['pkg.list_repos'](basedir)
list_repos

Lists the repositories that are currently configured on this system.

__salt__['pkg.list_repos']()

Returns a dictionary, in the following format:

{'reponame': {'config_key_1': 'config value 1',
              'config_key_2': 'config value 2',
              'config_key_3': ['list item 1 (when appropriate)',
                               'list item 2 (when appropriate)']}}
get_repo

Displays all local configuration for a specific repository.

__salt__['pkg.get_repo'](repo='myrepo')

The information is formatted in much the same way as list_repos, but is specific to only one repo.

{'config_key_1': 'config value 1',
 'config_key_2': 'config value 2',
 'config_key_3': ['list item 1 (when appropriate)',
                  'list item 2 (when appropriate)']}
del_repo

Removes the local configuration for a specific repository. Requires a repo argument, which must match the locally configured name. This function returns a string, which informs the user as to whether or not the operation was a success.

__salt__['pkg.del_repo'](repo='myrepo')
mod_repo

Modify the local configuration for one or more options of a configured repo. This is also the way to create new repository configuration on the local system; if a repo is specified which does not yet exist, it will be created.

The options accepted by this function are system-specific; please refer to the documentation for your repo manager for details.

__salt__['pkg.mod_repo'](repo='myrepo', url='http://myurl.com/repo')

Low-Package Functions

In general, the standard package functions as described above will meet your needs. These functions use the system's native repo manager (for instance, yum or the apt tools). In most cases, the repo manager is actually separate from the package manager. For instance, yum is usually a front-end for rpm, and apt is usually a front-end for dpkg. When possible, the package functions that use those package managers directly should do so through the low package functions.

It is normal and sane for pkg to make calls to lowpkg, but lowpkg must never make calls to pkg. This affects functions which are required by both pkg and lowpkg, but where the technique available to pkg is more performant than what is available to lowpkg. In such cases, the lowpkg function that requires that technique must still use the lowpkg version.

list_pkgs

Returns a dict of packages installed, including the package name and version. Can accept a list of packages; if none are specified, then all installed packages will be listed.

installed = __salt__['lowpkg.list_pkgs']('foo', 'bar')

Example output:

{'foo': '1.2.3-4',
 'bar': '5.6.7-8'}
verify

Many (but not all) package management systems provide a way to verify that the files installed by the package manager have or have not changed. This function accepts a list of packages; if none are specified, all packages will be included.

installed = __salt__['lowpkg.verify']('httpd')

Example output:

{'/etc/httpd/conf/httpd.conf': {'mismatch': ['size', 'md5sum', 'mtime'],
                                'type': 'config'}}
file_list

Lists all of the files installed by the packages specified. If no packages are specified, then all files for all known packages are returned.

installed = __salt__['lowpkg.file_list']('httpd', 'apache')

This function does not return which files belong to which packages; all files are returned as one giant list (hence the file_list function name). However, this information is still returned inside of a dict, so that any errors can be reported to the user in a sane manner.

{'errors': ['package apache is not installed'],
  'files': ['/etc/httpd',
            '/etc/httpd/conf',
            '/etc/httpd/conf.d',
            '...SNIP...']}
file_dict

Lists all of the files installed by the packages specified. If no packages are specified, then all files for all known packages are returned.

installed = __salt__['lowpkg.file_dict']('httpd', 'apache', 'kernel')

Unlike file_list, this function will break down which files belong to which packages. It will also return errors in the same manner as file_list.

{'errors': ['package apache is not installed'],
 'packages': {'httpd': ['/etc/httpd',
                        '/etc/httpd/conf',
                        '...SNIP...'],
              'kernel': ['/boot/.vmlinuz-2.6.32-279.el6.x86_64.hmac',
                         '/boot/System.map-2.6.32-279.el6.x86_64',
                         '...SNIP...']}}

Reporting Bugs

Salt uses GitHub to track open issues and feature requests.

To file a bug, please navigate to the new issue page for the Salt project.

In an issue report, please include the following information:

  • The output of salt --versions-report from the relevant machines. This can also be gathered remotely by using salt <my_tgt> test.versions_report.
  • A description of the problem including steps taken to cause the issue to occur and the expected behavior.
  • Any steps taken to attempt to remediate the problem.
  • Any configuration options set in a configuration file that may be relevant.
  • A reproducible test case. This may be as simple as an SLS file that illustrates a problem, or it may be a link to a repository that contains a number of SLS files that can be used together to reproduce a problem. If the problem is transitory, any information that can be used to try to reproduce the problem is helpful.
  • [Optional] The output of each salt component (master/minion/CLI) running with the -l debug flag set.

Note

Please be certain to scrub any logs or SLS files for sensitive data!

Community Projects That Use Salt

Below is a list of repositories that show real world Salt applications that you can use to get started. Please note that these projects do not adhere to any standards and express a wide variety of ideas and opinions on how an action can be completed with Salt.

https://github.com/terminalmage/djangocon2013-sls

https://github.com/jesusaurus/hpcs-salt-state

https://github.com/gravyboat/hungryadmin-sls

https://github.com/wunki/django-salted

Salt Topology

Salt is based on a powerful, asynchronous, network topology using ZeroMQ. Many ZeroMQ systems are in place to enable communication. The central idea is to have the fastest communication possible.

Servers

The Salt Master runs 2 network services. First is the ZeroMQ PUB system. This service by default runs on port 4505 and can be configured via the publish_port option in the master configuration.

Second is the ZeroMQ REP system. This is a separate interface used for all bi-directional communication with minions. By default this system binds to port 4506 and can be configured via the ret_port option in the master.
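
For example, both ports can be changed in the master configuration file; the values shown below are the defaults:

# /etc/salt/master
publish_port: 4505
ret_port: 4506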

PUB/SUB

The commands sent out via the salt client are broadcast to the minions via ZeroMQ PUB/SUB. Minions maintain a connection back to the Salt Master, and all connected minions are informed at once to download the command data. The command data is kept extremely small (usually less than 1K) so it is not a burden on the network.

Return

The PUB/SUB system is a one way communication, so once a publish is sent out the PUB interface on the master has no further communication with the minion. The minion, after running the command, then sends the command's return data back to the master via the ret_port.

Translating Documentation

If you wish to help translate the Salt documentation to your language, please head over to the Transifex website and signup for an account.

Once registered, head over to the Salt Translation Project, and either click on Request Language if you can't find yours, or, select the language for which you wish to contribute and click Join Team.

Transifex provides some useful reading resources on their support domain, namely, some useful articles directed to translators.

Building A Localized Version of the Documentation

While you're working on your translation on Transifex, you might want to have a look at how it's rendering.

Install The Transifex Client

To interact with the Transifex web service you will need to install the transifex-client:

pip install transifex-client
Configure The Transifex Client

Once installed, you will need to set it up on your computer. We created a script to help you with that:

.scripts/setup-transifex-config
Download Remote Translations

There's a little script which simplifies the download process of the translations (which isn't that complicated in the first place). So, let's assume you're translating pt_PT, Portuguese (Portugal). To download the translations, execute the following from the doc/ directory of your Salt checkout:

make download-translations SPHINXLANG=pt_PT

To download pt_PT, Portuguese (Portugal), and nl, Dutch, you can use the helper script directly:

.scripts/download-translation-catalog pt_PT nl
Build Localized Documentation

After the download process finishes, which might take a while, the next step is to build a localized version of the documentation. Following the pt_PT example above:

make html SPHINXLANG=pt_PT
View Localized Documentation

Open your browser, point it to the local documentation path, and check the localized output you've just built.

Running The Tests

There are requirements, in addition to Salt's requirements, which need to be installed in order to run the test suite. Install the requirements from one of the files below, depending on the relevant Python version:

pip install -r requirements/dev_python26.txt
pip install -r requirements/dev_python27.txt

Note

In Salt 0.17, testing libraries were migrated into their own repo. To install them:

pip install git+https://github.com/saltstack/salt-testing.git#egg=SaltTesting

Failure to install SaltTesting will result in import errors similar to the following:

ImportError: No module named salttesting

Once all of the requirements are installed, use tests/runtests.py to run all of the tests included in Salt's test suite. For more information, see --help.

An alternative way of invoking the test suite is available in setup.py:

./setup.py test

Instead of running the entire test suite, there are several ways to run only specific groups of tests or individual tests:

  • Run unit tests only: ./tests/runtests.py --unit-tests
  • Run unit and integration tests for states: ./tests/runtests.py --state
  • Run integration tests for an individual module: ./tests/runtests.py -n integration.modules.virt
  • Run unit tests for an individual module: ./tests/runtests.py -n unit.modules.virt_test
  • Run an individual test by using the class and test name (this example is for the test_default_kvm_profile test in integration.modules.virt): ./tests/runtests.py -n integration.modules.virt.VirtTest.test_default_kvm_profile

Running Unit Tests Without Integration Test Daemons

Since the unit tests do not require a master or minion to execute, it is often useful to be able to run unit tests individually, or as a whole group, without having to start up the integration testing daemons. Starting up the master, minion, and syndic daemons takes a lot of time before the tests can even start running, and is unnecessary for unit tests. To run unit tests without invoking the integration test daemons, simply remove the tests/ portion of the runtests.py command:

./runtests.py --unit

All of the other options to run individual tests, entire classes of tests, or entire test modules still apply.

Running Destructive Integration Tests

Salt is used to change the settings and behavior of systems. In order to effectively test Salt's functionality, some integration tests are written to make actual changes to the underlying system. These tests are referred to as "destructive tests". Some examples of destructive tests are testing the addition of a user or the installation of packages. By default, destructive tests are disabled and will be skipped.

Generally, destructive tests should clean up after themselves by attempting to restore the system to its original state. For instance, if a new user is created during a test, the user should be deleted after the related test(s) have completed. However, no guarantees are made that test clean-up will complete successfully. Therefore, running destructive tests should be done with caution.

Note

Running destructive tests will change the underlying system. Use caution when running destructive tests.

To run tests marked as destructive, set the --run-destructive flag:

./tests/runtests.py --run-destructive

Running Cloud Provider Tests

Salt's testing suite also includes integration tests to assess the successful creation and deletion of cloud instances using Salt-Cloud for providers supported by Salt-Cloud.

The cloud provider tests are off by default and run on sample configuration files provided in tests/integration/files/conf/cloud.providers.d/. In order to run the cloud provider tests, valid credentials, which differ per provider, must be supplied. Each credential item that must be supplied is indicated by an empty string value and should be edited by the user before running the tests. For example, DigitalOcean requires a client key and an api key to operate. Therefore, the default cloud provider configuration file for DigitalOcean looks like this:

digitalocean-config:
  provider: digital_ocean
  client_key: ''
  api_key: ''
  location: New York 1

As indicated by the empty string values, the client_key and the api_key must be provided:

digitalocean-config:
  provider: digital_ocean
  client_key: wFGEwgregeqw3435gDger
  api_key: GDE43t43REGTrkilg43934t34qT43t4dgegerGEgg
  location: New York 1

Note

When providing credential information in cloud provider configuration files, do not include the single quotes.

Once all of the valid credentials for the cloud provider have been supplied, the cloud provider tests can be run by setting the --cloud-provider-tests flag:

./tests/runtests.py --cloud-provider-tests

Running The Tests In A Docker Container

The test suite can be executed within a Docker container using the --docked option flag. The Docker container must be properly configured on the system invoking the tests, and the container must have access to the internet.

Here's a simple usage example:

tests/runtests.py --docked=ubuntu-12.04 -v

The full docker container repository can also be provided:

tests/runtests.py --docked=salttest/ubuntu-12.04 -v

The SaltStack team is creating some containers which will have the necessary dependencies pre-installed. Running the test suite on a container allows destructive tests to run without making changes to the main system. It also enables the test suite to run under a different distribution than the one the main system is currently using.

The current list of test suite images is on Salt's docker repository.

Custom docker containers can be provided by submitting a pull request against Salt's docker Salt test containers repository.

Automated Test Runs

SaltStack maintains a Jenkins server to allow for the execution of tests across supported platforms. The tests executed from Salt's Jenkins server create fresh virtual machines for each test run, then execute destructive tests on the new, clean virtual machine.

When a pull request is submitted to Salt's repository on GitHub, Jenkins runs Salt's test suite on a couple of virtual machines to gauge the pull request's viability to merge into Salt's develop branch. If these initial tests pass, the pull request can then be merged into Salt's develop branch by one of Salt's core developers, at their discretion. If the initial tests fail, core developers may request changes to the pull request. If the failure is unrelated to the changes in question, core developers may merge the pull request despite the initial failure.

Once the pull request is merged into Salt's develop branch, a new set of Jenkins virtual machines will begin executing the test suite. The develop branch tests have many more virtual machines to provide more comprehensive results.

There are a few other groups of virtual machines that Jenkins tests against, including past and current release branches. For a full list of currently running test environments, go to http://jenkins.saltstack.com.

Using Salt-Cloud on Jenkins

For testing Salt on Jenkins, SaltStack uses Salt-Cloud to spin up virtual machines. The script using Salt-Cloud to accomplish this is open source and can be found here: https://github.com/saltstack/salt/blob/develop/tests/jenkins.py

Writing Tests

The salt testing infrastructure is divided into two classes of tests, integration tests and unit tests. These terms may be defined differently in other contexts, but for salt they are defined this way:

  • Unit Test: Tests which validate isolated code blocks and do not require external interfaces such as salt-call or any of the salt daemons.
  • Integration Test: Tests which validate externally accessible features.

Salt testing uses unittest2 (a backport of the standard library's unittest module) and MagicMock.

Naming Conventions

Any function in either integration test files or unit test files that is doing the actual testing, such as functions containing assertions, must start with test_:

def test_user_present(self):

When functions in test files are not prepended with test_, the function acts as a normal helper function and is not run as a test by the test suite.

Integration Tests

The integration tests start up a number of salt daemons to test functionality in a live environment. These daemons include 2 salt masters, 1 syndic, and 2 minions. This allows the syndic interface to be tested and master/minion communication to be verified. All of the integration tests are executed as live salt commands sent through the started daemons.

Integration tests are particularly good at testing modules, states, and shell commands.

Unit Tests

Unit tests are good for ensuring consistent results for functions that do not require more than a few mocks.

Mocking all external dependencies for unit tests is encouraged but not required as sometimes the isolation provided by completely mocking the external dependencies is not worth the effort of mocking those dependencies.

Overly detailed mocking can also result in decreased test readability and brittleness as the tests are more likely to fail when the code or its dependencies legitimately change. In these cases, it is better to add dependencies to the test runner dependency state, https://github.com/saltstack/salt-jenkins/blob/master/git/salt.sls.

Integration Tests

The Salt integration tests come with a number of classes and methods which allow for components to be easily tested. These classes are generally inherited from and provide specific methods for hooking into the running integration test environment created by the integration tests.

Because integration tests validate against a running environment, they are generally the preferred means of writing tests.

The integration system is all located under tests/integration in the Salt source tree. Each directory within tests/integration corresponds to a directory in Salt's tree structure. For example, the integration tests for the test.py Salt module that is located in salt/modules should also be named test.py and reside in tests/integration/modules.

Adding New Directories

If the corresponding Salt directory does not exist within tests/integration, the new directory must be created along with the appropriate test file to maintain Salt's testing directory structure.

In order for Salt's test suite to recognize tests within the newly created directory, options to run the new integration tests must be added to tests/runtests.py. Examples of the necessary options that must be added can be found here: https://github.com/saltstack/salt/blob/develop/tests/runtests.py. The functions that need to be edited are setup_additional_options, validate_options, and run_integration_tests.

Integration Classes

The integration classes are located in tests/integration/__init__.py and can be extended therein. There are three classes available to extend:

ModuleCase

Used to define executions run via the master to minions and to call single modules and states.

The available methods are as follows:

run_function:
Run a single salt function and condition the return down to match the behavior of the raw function call. This will run the command and only return the results from a single minion to verify.
state_result:
Return the result data from a single state return
run_state:
Run the state.single command and return the state return structure
SyndicCase

Used to execute remote commands via a syndic, only used to verify the capabilities of the Syndic.

The available methods are as follows:

run_function:
Run a single salt function and condition the return down to match the behavior of the raw function call. This will run the command and only return the results from a single minion to verify.
ShellCase

Shell out to the scripts which ship with Salt.

The available methods are as follows:

run_script:
Execute a salt script with the given argument string
run_salt:
Execute the salt command, pass in the argument string as it would be passed on the command line.
run_run:
Execute the salt-run command, pass in the argument string as it would be passed on the command line.
run_run_plus:
Execute Salt run and the salt run function and return the data from each in a dict
run_key:
Execute the salt-key command, pass in the argument string as it would be passed on the command line.
run_cp:
Execute salt-cp, pass in the argument string as it would be passed on the command line.
run_call:
Execute salt-call, pass in the argument string as it would be passed on the command line.
Examples
Module Example via ModuleCase Class

Import the integration module; this module is already added to the python path by the test execution. Inherit from the integration.ModuleCase class.

Now the workhorse method run_function can be used to test a module:

import os
import integration


class TestModuleTest(integration.ModuleCase):
    '''
    Validate the test module
    '''
    def test_ping(self):
        '''
        test.ping
        '''
        self.assertTrue(self.run_function('test.ping'))

    def test_echo(self):
        '''
        test.echo
        '''
        self.assertEqual(self.run_function('test.echo', ['text']), 'text')
Shell Example via ShellCase

Validating the shell commands can be done via shell tests:

import sys
import shutil
import tempfile

import integration

class KeyTest(integration.ShellCase):
    '''
    Test salt-key script
    '''

    _call_binary_ = 'salt-key'

    def test_list(self):
        '''
        test salt-key -L
        '''
        data = self.run_key('-L')
        expect = [
                'Unaccepted Keys:',
                'Accepted Keys:',
                'minion',
                'sub_minion',
                'Rejected:', '']
        self.assertEqual(data, expect)

This example verifies that the salt-key command executes and returns as expected by making use of the run_key method.

Integration Test Files

Since using Salt largely involves configuring states, editing files, and changing system data, the integration test suite contains a directory named files to aid in testing functions that require files. Various Salt integration tests use these example files to test against instead of altering system files and data.

Each directory within tests/integration/files contains files that accomplish different tasks, based on the needs of the integration tests using those files. For example, tests/integration/files/ssh is used to bootstrap the test runner for salt-ssh testing, while tests/integration/files/pillar contains files storing data needed to test various pillar functions.

The tests/integration/files directory also includes an integration state tree. The integration state tree can be found at tests/integration/files/file/base.

The following example demonstrates how integration files can be used with ModuleCase to test states:

import os
import shutil
import integration

HFILE = os.path.join(integration.TMP, 'hosts')

class HostTest(integration.ModuleCase):
    '''
    Validate the host state
    '''

    def setUp(self):
        shutil.copyfile(os.path.join(integration.FILES, 'hosts'), HFILE)
        super(HostTest, self).setUp()

    def tearDown(self):
        if os.path.exists(HFILE):
            os.remove(HFILE)
        super(HostTest, self).tearDown()

    def test_present(self):
        '''
        host.present
        '''
        name = 'spam.bacon'
        ip = '10.10.10.10'
        ret = self.run_state('host.present', name=name, ip=ip)
        result = self.state_result(ret)
        self.assertTrue(result)
        with open(HFILE) as fp_:
            output = fp_.read()
            self.assertIn('{0}\t\t{1}'.format(ip, name), output)

To access the integration files, a variable named integration.FILES points to the tests/integration/files directory. This is where the example hosts file used by the test above resides.

In addition to the static files in the integration state tree, the location integration.TMP can also be used to store temporary files that the test system will clean up when the execution finishes.

Destructive vs Non-Destructive Tests

Since Salt is used to change the settings and behavior of systems, one testing approach is to run tests that make actual changes to the underlying system. This is where the concept of destructive integration tests comes into play. Tests can be written to alter the system they are running on. This capability is what fills in the gap needed to properly test aspects of system management like package installation.

Any test that changes the underlying system in any way, such as creating or deleting users, installing packages, or changing permissions should include the @destructive decorator to signal system changes and should be written with care. System changes executed within a destructive test should also be restored once the related tests have completed. For example, if a new user is created to test a module, the same user should be removed after the test is completed to maintain system integrity.

To write a destructive test, import and use the destructiveTest decorator for the test method:

import os

import integration
from salttesting import skipIf
from salttesting.helpers import destructiveTest

class DestructiveExampleModuleTest(integration.ModuleCase):
    '''
    Demonstrate a destructive test
    '''

    @destructiveTest
    @skipIf(os.geteuid() != 0, 'you must be root to run this test')
    def test_user_not_present(self):
        '''
        This is a DESTRUCTIVE TEST it creates a new user on the minion.
        And then destroys that user.
        '''
        ret = self.run_state('user.present', name='salt_test')
        self.assertSaltTrueReturn(ret)
        ret = self.run_state('user.absent', name='salt_test')
        self.assertSaltTrueReturn(ret)
Cloud Provider Tests

Cloud provider integration tests are used to assess Salt-Cloud's ability to create and destroy cloud instances for various supported cloud providers. Cloud provider tests inherit from the ShellCase Integration Class.

Any new cloud provider test files should be added to the tests/integration/cloud/providers/ directory. Each cloud provider test file also requires a sample cloud profile and cloud provider configuration file in the integration test file directory located at tests/integration/files/conf/cloud.*.d/.

The following is an example of the default profile configuration file for Digital Ocean, located at: tests/integration/files/conf/cloud.profiles.d/digital_ocean.conf:

digitalocean-test:
  provider: digitalocean-config
  image: Ubuntu 14.04 x64
  size: 512MB

Each cloud provider requires different configuration credentials. Therefore, sensitive information such as API keys or passwords should be omitted from the cloud provider configuration file and replaced with an empty string. The necessary credentials can be provided by the user by editing the provider configuration file before running the tests.

The following is an example of the default provider configuration file for Digital Ocean, located at: tests/integration/files/conf/cloud.providers.d/digital_ocean.conf:

digitalocean-config:
  provider: digital_ocean
  client_key: ''
  api_key: ''
  location: New York 1

In addition to providing the necessary cloud profile and provider files in the integration test suite file structure, appropriate checks for whether the configuration files exist and contain valid information are also required in the test class's setUp function:

# assumes the test file imports os and integration, along with a
# cloud_providers_config helper for parsing provider configuration

class LinodeTest(integration.ShellCase):
    '''
    Integration tests for the Linode cloud provider in Salt-Cloud
    '''

    def setUp(self):
        '''
        Sets up the test requirements
        '''
        super(LinodeTest, self).setUp()

        # check if appropriate cloud provider and profile files are present
        profile_str = 'linode-config:'
        provider = 'linode'
        providers = self.run_cloud('--list-providers')
        if profile_str not in providers:
            self.skipTest(
                'Configuration file for {0} was not found. Check {0}.conf files '
                'in tests/integration/files/conf/cloud.*.d/ to run these tests.'
                .format(provider)
            )

        # check if apikey and password are present
        path = os.path.join(integration.FILES,
                            'conf',
                            'cloud.providers.d',
                            provider + '.conf')
        config = cloud_providers_config(path)
        api = config['linode-config']['linode']['apikey']
        password = config['linode-config']['linode']['password']
        if api == '' or password == '':
            self.skipTest(
                'An api key and password must be provided to run these tests. Check '
                'tests/integration/files/conf/cloud.providers.d/{0}.conf'.format(
                    provider
                )
            )

Repeatedly creating and destroying instances on cloud providers can be costly. Therefore, cloud provider tests are off by default and do not run automatically. To run the cloud provider tests, the --cloud-provider-tests flag must be provided:

./tests/runtests.py --cloud-provider-tests

Since cloud provider tests do not run automatically, all provider tests must be preceded with the @expensiveTest decorator. The expensive test decorator is necessary because it signals to the test suite that the --cloud-provider-tests flag is required to run the cloud provider tests.

To write a cloud provider test, import and use the expensiveTest decorator for the test function:

from salttesting.helpers import expensiveTest

@expensiveTest
def test_instance(self):
    '''
    Test creating an instance on Linode
    '''
    name = 'linode-testing'

    # create the instance
    instance = self.run_cloud('-p linode-test {0}'.format(name))
    expected_str = '        {0}'.format(name)

    # check if instance with salt installed returned as expected
    try:
        self.assertIn(expected_str, instance)
    except AssertionError:
        self.run_cloud('-d {0} --assume-yes'.format(name))
        raise

    # delete the instance
    delete = self.run_cloud('-d {0} --assume-yes'.format(name))
    expected_str = '            True'
    try:
        self.assertIn(expected_str, delete)
    except AssertionError:
        raise
Writing Unit Tests
Introduction

Like many software projects, Salt has two broad-based testing approaches -- integration testing and unit testing. While integration testing focuses on the interaction between components in a sandboxed environment, unit testing focuses on the singular implementation of individual functions.

Preparing to Write a Unit Test

This guide assumes you've followed the directions for setting up salt testing.

Unit tests should be written to the following specifications:

  • Each raise and return statement needs to be independently tested.
  • Unit tests for salt/.../<module>.py are contained in a file called tests/unit/.../<module>_test.py, e.g. the tests for salt/modules/fib.py are in tests/unit/modules/fib_test.py.
  • Test functions are named test_<fcn>_<test-name> where <fcn> is the function being tested and <test-name> describes the raise or return being tested.
  • A reasonable effort needs to be made to mock external resources used in the code being tested, such as APIs, function calls, external data either globally available or passed in through function arguments, file data, etc.
  • Test functions should contain only one assertion and all necessary mock code and data for that assertion.

Most commonly, the following imports are necessary to create a unit test:

# Import Salt Testing libs
from salttesting import skipIf, TestCase
from salttesting.helpers import ensure_in_syspath

If you need mock support in your tests, please also import:

from salttesting.mock import NO_MOCK, NO_MOCK_REASON, MagicMock, patch, call
A Simple Example

Let's assume that we're testing a very basic function in an imaginary Salt execution module: a module called fib.py has a function called calculate(num_of_results), which produces a list of sequential Fibonacci numbers of the requested length.

A unit test for this function would commonly be placed in a file called tests/unit/modules/fib_test.py. The convention is to place unit tests for Salt execution modules in tests/unit/modules/ and to name the test modules with a _test.py suffix.

Tests are grouped around test cases, which are logically grouped sets of tests against a piece of functionality in the tested software. Test cases are created as Python classes in the unit test module. To return to our example, here's how we might write the skeleton for testing fib.py:

# Import Salt Testing libs
from salttesting import TestCase

# Import Salt execution module to test
from salt.modules import fib

# Create test case class and inherit from Salt's customized TestCase
class FibTestCase(TestCase):
    '''
    This class contains a set of functions that test salt.modules.fib.
    '''
    def test_fib(self):
        '''
        To create a unit test, we should prefix the name with `test_' so
        that it's recognized by the test runner.
        '''
        fib_five = (0, 1, 1, 2, 3)
        self.assertEqual(fib.calculate(5), fib_five)

At this point, the test can now be run, either individually or as a part of a full run of the test runner. To ease development, a single test can be executed:

tests/runtests.py -v -n unit.modules.fib_test

This will report the status of the test: success, failure, or error. The -v flag increases output verbosity.

To review the results of a particular run, take a note of the log location given in the output for each test:

Logging tests on /var/folders/nl/d809xbq577l3qrbj3ymtpbq80000gn/T/salt-runtests.log
Evaluating Truth

A longer discussion on the types of assertions one can make can be found by reading Python's documentation on unit testing.

Tests Using Mock Objects

In many cases, the purpose of a Salt module is to interact with some external system, whether it be to control a database, manipulate files on a filesystem, or something else. In these varied cases, it's necessary to design a unit test which can test the function while replacing functions which might actually call out to external systems. One might think of this as "blocking the exits" for code under test, redirecting the calls to external systems to our own code, which produces known results for the duration of the test.

To achieve this behavior, Salt makes heavy use of the MagicMock package.

To understand how one might integrate Mock into writing a unit test for Salt, let's imagine a scenario in which we're testing an execution module that's designed to operate on a database. Furthermore, let's imagine two separate methods, here presented in pseudo-code, in an imaginary execution module called db.py.

def create_user(username):
    qry = 'CREATE USER {0}'.format(username)
    execute_query(qry)

def execute_query(qry):
    # Connect to a database and actually do the query...
    pass

Here, let's imagine that we want to create a unit test for the create_user function. In doing so, we want to avoid any calls out to an external system and so while we are running our unit tests, we want to replace the actual interaction with a database with a function that can capture the parameters sent to it and return pre-defined values. Therefore, our task is clear -- to write a unit test which tests the functionality of create_user while also replacing 'execute_query' with a mocked function.

To begin, we set up the skeleton of our class much like we did before, but with additional imports for MagicMock:

# Import Salt Testing libs
from salttesting import skipIf, TestCase

# Import Salt execution module to test
from salt.modules import db

# Import Mock libraries
from salttesting.mock import NO_MOCK, NO_MOCK_REASON, MagicMock, patch, call

# Create test case class and inherit from Salt's customized TestCase
# Skip this test case if we don't have access to mock!
@skipIf(NO_MOCK, NO_MOCK_REASON)
class DbTestCase(TestCase):
    def test_create_user(self):
        # First, we replace 'execute_query' with our own mock function
        db.execute_query = MagicMock()

        # Now that the exits are blocked, we can run the function under test.
        db.create_user('testuser')

        # We could now query our mock object to see which calls were made
        # to it.
        ## print db.execute_query.mock_calls

        # Construct a call object that simulates the way we expected
        # execute_query to have been called.
        expected_call = call('CREATE USER testuser')

        # Compare the expected call with the list of actual calls.  The
        # test will succeed or fail depending on the output of this
        # assertion.
        db.execute_query.assert_has_calls([expected_call])
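
Mock also offers more direct assertion helpers; under the same setup, an equivalent check is:

db.execute_query.assert_called_once_with('CREATE USER testuser')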

Modifying __salt__ In Place

At times, it becomes necessary to make modifications to a module's view of functions in its own __salt__ dictionary. Luckily, this process is quite easy.

Below is an example that uses MagicMock's patch functionality to insert a function into __salt__ that's actually a MagicMock instance.

def show_patch(self):
    with patch.dict(my_module.__salt__,
                    {'function.to_replace': MagicMock()}):
        # From this scope, carry on with testing, with a modified __salt__!
        pass

A More Complete Example

Consider the following function from salt/modules/linux_sysctl.py.

def get(name):
    '''
    Return a single sysctl parameter for this minion

    CLI Example:

    .. code-block:: bash

        salt '*' sysctl.get net.ipv4.ip_forward
    '''
    cmd = 'sysctl -n {0}'.format(name)
    out = __salt__['cmd.run'](cmd)
    return out

This function is very simple, comprising only four source lines of code and having only one return statement, so we know only one test is needed. There are also two inputs to the function, the name function argument and the call to __salt__['cmd.run'](), both of which need to be appropriately mocked.

Mocking a function parameter is straightforward, whereas mocking a function call will require, in this case, the use of MagicMock. For added isolation, we will also redefine the __salt__ dictionary such that it only contains 'cmd.run'.

# Import Salt Libs
from salt.modules import linux_sysctl

# Import Salt Testing Libs
from salttesting import skipIf, TestCase
from salttesting.helpers import ensure_in_syspath
from salttesting.mock import (
    MagicMock,
    patch,
    NO_MOCK,
    NO_MOCK_REASON
)

ensure_in_syspath('../../')

# Globals
linux_sysctl.__salt__ = {}


@skipIf(NO_MOCK, NO_MOCK_REASON)
class LinuxSysctlTestCase(TestCase):
    '''
    TestCase for salt.modules.linux_sysctl module
    '''

    def test_get(self):
        '''
        Tests the return of get function
        '''
        mock_cmd = MagicMock(return_value=1)
        with patch.dict(linux_sysctl.__salt__, {'cmd.run': mock_cmd}):
            self.assertEqual(linux_sysctl.get('net.ipv4.ip_forward'), 1)


if __name__ == '__main__':
    from integration import run_tests
    run_tests(LinuxSysctlTestCase, needs_daemon=False)

Since get() has only one raise or return statement and that statement is a success condition, the test function is simply named test_get(). As described, the single function parameter, name, is mocked with net.ipv4.ip_forward, and __salt__['cmd.run'] is replaced by a MagicMock function object. We are only interested in the return value of __salt__['cmd.run'], which MagicMock allows to be specified via return_value=1. Finally, the test itself tests for equality between the return value of get() and the expected return of 1. This assertion is expected to succeed because get() will determine its return value from __salt__['cmd.run'], which we have mocked to return 1.

A Complex Example

Now consider the assign() function from the same salt/modules/linux_sysctl.py source file.

def assign(name, value):
    '''
    Assign a single sysctl parameter for this minion

    CLI Example:

    .. code-block:: bash

        salt '*' sysctl.assign net.ipv4.ip_forward 1
    '''
    value = str(value)
    sysctl_file = '/proc/sys/{0}'.format(name.replace('.', '/'))
    if not os.path.exists(sysctl_file):
        raise CommandExecutionError('sysctl {0} does not exist'.format(name))

    ret = {}
    cmd = 'sysctl -w {0}="{1}"'.format(name, value)
    data = __salt__['cmd.run_all'](cmd)
    out = data['stdout']
    err = data['stderr']

    # Example:
    #    # sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    #    net.ipv4.tcp_rmem = 4096 87380 16777216
    regex = re.compile(r'^{0}\s+=\s+{1}$'.format(re.escape(name),
                                                 re.escape(value)))

    if not regex.match(out) or 'Invalid argument' in str(err):
        if data['retcode'] != 0 and err:
            error = err
        else:
            error = out
        raise CommandExecutionError('sysctl -w failed: {0}'.format(error))
    new_name, new_value = out.split(' = ', 1)
    ret[new_name] = new_value
    return ret

This function contains two raise statements and one return statement, so we know that we will need (at least) three tests. It has two function arguments and many references to non-builtin functions. In the tests below you will see that the mock library's patch() helper may be used as a context manager or as a decorator.

There are three test functions, one for each raise and return statement in the source function. Each function is self-contained and contains all and only the mocks and data needed to test the raise or return statement it is concerned with.

# Import Salt Libs
from salt.modules import linux_sysctl
from salt.exceptions import CommandExecutionError

# Import Salt Testing Libs
from salttesting import skipIf, TestCase
from salttesting.helpers import ensure_in_syspath
from salttesting.mock import (
    MagicMock,
    patch,
    NO_MOCK,
    NO_MOCK_REASON
)

ensure_in_syspath('../../')

# Globals
linux_sysctl.__salt__ = {}


@skipIf(NO_MOCK, NO_MOCK_REASON)
class LinuxSysctlTestCase(TestCase):
    '''
    TestCase for salt.modules.linux_sysctl module
    '''

    @patch('os.path.exists', MagicMock(return_value=False))
    def test_assign_proc_sys_failed(self):
        '''
        Tests if /proc/sys/<kernel-subsystem> exists or not
        '''
        cmd = {'pid': 1337, 'retcode': 0, 'stderr': '',
               'stdout': 'net.ipv4.ip_forward = 1'}
        mock_cmd = MagicMock(return_value=cmd)
        with patch.dict(linux_sysctl.__salt__, {'cmd.run_all': mock_cmd}):
            self.assertRaises(CommandExecutionError,
                              linux_sysctl.assign,
                              'net.ipv4.ip_forward', 1)

    @patch('os.path.exists', MagicMock(return_value=True))
    def test_assign_cmd_failed(self):
        '''
        Tests if the assignment was successful or not
        '''
        cmd = {'pid': 1337, 'retcode': 0, 'stderr':
               'sysctl: setting key "net.ipv4.ip_forward": Invalid argument',
               'stdout': 'net.ipv4.ip_forward = backward'}
        mock_cmd = MagicMock(return_value=cmd)
        with patch.dict(linux_sysctl.__salt__, {'cmd.run_all': mock_cmd}):
            self.assertRaises(CommandExecutionError,
                              linux_sysctl.assign,
                              'net.ipv4.ip_forward', 'backward')

    @patch('os.path.exists', MagicMock(return_value=True))
    def test_assign_success(self):
        '''
        Tests the return of successful assign function
        '''
        cmd = {'pid': 1337, 'retcode': 0, 'stderr': '',
               'stdout': 'net.ipv4.ip_forward = 1'}
        ret = {'net.ipv4.ip_forward': '1'}
        mock_cmd = MagicMock(return_value=cmd)
        with patch.dict(linux_sysctl.__salt__, {'cmd.run_all': mock_cmd}):
            self.assertEqual(linux_sysctl.assign(
                'net.ipv4.ip_forward', 1), ret)

if __name__ == '__main__':
    from integration import run_tests
    run_tests(LinuxSysctlTestCase, needs_daemon=False)

RAET

Reliable Asynchronous Event Transport Protocol

See also

RAET Overview

Protocol

Layering:

OSI Layers

7: Application: Format: Data (stack-to-application interface, buffering, etc.)
6: Presentation: Format: Data (encrypt/decrypt, convert to machine-independent format)
5: Session: Format: Data (interhost communications, authentication, groups)
4: Transport: Format: Segments (reliable delivery of messages, transactions, segmentation, error checking)
3: Network: Format: Packets/Datagrams (addressing, routing)
2: Link: Format: Frames (reliable per-frame communications connection, media access controller)
1: Physical: Format: Bits (transceiver communication connection, not reliable)

Link: hidden from RAET.
Network: IP host address and UDP port.
Transport: RAET transactions, service kind, tail error checking. Could include header signing as part of transport's reliable delivery serialization of the header.
Session: session ID, key exchange for signing. Grouping is Road (like 852 channel).
Presentation: encrypt/decrypt body, serialize/deserialize body.
Application: body data dictionary.

Header signing spans both the Transport and Session layers.

Packet

Header: ASCII-safe JSON. Header termination: an empty line given by a double carriage return/linefeed pair, \r\n\r\n (decimal 13 10 13 10, hex 0D 0A 0D 0A).

In JSON, carriage return and newline characters cannot appear in an encoded string unless they are escaped with a backslash, so this four-byte combination is illegal in valid JSON that does not contain multi-byte Unicode characters.

This means the header must be ASCII-safe: no multi-byte UTF-8 strings are allowed in the header.

Following the header terminator is a variable-length signature block. This is binary, and its length is provided in the header.

Following the signature block is the packet body or data. This may be either JSON or packed binary; the format is given in the JSON header.

Finally, there is an optional tail block for error checking or encryption details.
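
As an illustration of this layout, the following sketch (not the actual RAET implementation) splits a raw packet into its parts, assuming the 'nl' (auth header length) and 'bl' (body length) fields described in the next section:

import json

def split_packet(raw):
    '''
    Illustrative only. Split a packet into (header, signature, body,
    tail), assuming an ASCII-safe JSON header terminated by \r\n\r\n,
    a binary signature block of 'nl' bytes, and a body of 'bl' bytes.
    '''
    head, _, rest = raw.partition(b'\r\n\r\n')
    header = json.loads(head.decode('ascii'))
    nl = header.get('nl', 0)      # auth header (signature) length
    bl = header.get('bl', 0)      # body length
    signature = rest[:nl]
    body = rest[nl:nl + bl]
    tail = rest[nl + bl:]         # optional tail block
    return header, signature, body, tail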

Header Fields

In UDP header:

sh = source host
sp = source port
dh = destination host
dp = destination port

In RAET header:

hk = header kind
hl = header length

vn = version number

sd = Source Device ID
dd = Destination Device ID
cf = Corresponder Flag
mf = Multicast Flag

si = Session ID
ti = Transaction ID

sk = Service Kind
pk = Packet Kind
bf = Burst Flag (send all segments or ordered packets without interleaved acks)

oi = Order Index
dt = DateTime Stamp

sn = Segment Number
sc = Segment Count

pf = Pending Segment Flag
af = All Flag (resend all segments, not just one)

nk = Auth header kind
nl = Auth header length

bk = body kind
bl = body length

tk = tail kind
tl = tail length

fg = flags packed, default '00' hex string; a 2-byte hex string with bits (0, 0, af, pf, 0, bf, mf, cf), where the zeros are TBD flags

Session Bootstrap

The minion sends a packet with a SID of zero containing the public key of the minion's public/private key pair. The master acks the packet with a SID of zero to let the minion know it received the request.

Some time later, the master sends a packet with a SID of zero that accepts the minion.

Minion

Session

Sessions are important for security. The goal is to open one session and then run multiple transactions within that session.

Session ID (SID, sid)

A GUID hash guarantees uniqueness, since there is no guarantee of nonvolatile storage and we do not want to require file storage to keep the last session ID used.
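
For example, Python's standard library can produce such an ID without any persistent state (illustrative only):

import uuid

# A random UUID has a negligible collision probability, so the last
# session ID used never needs to be stored across restarts.
session_id = uuid.uuid4().hex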

Service Types or Modular Services

Four Service Types

  1. One or more, maybe (unacknowledged repeat); "maybe" means no guarantee

  2. Exactly one at most (ack with retries) (duplicate detection idempotent)

    at most means a fixed number of retries has a finite probability of failing: B1) finite retries; B2) infinite retries with exponential back-off up to a maximum delay

  3. Exactly one of sequence at most (sequence numbered)

    Receiver requests retry of missing packet with same B1 or B2 retry type

  4. End to End (Application layer Request Response)

    This is two B sub transactions

Initially unicast messaging Eventually support for Multicast

The use case for C) is to fragment large packets: once a UDP packet exceeds the frame size, its reliability goes way down, so it is more reliable to fragment large packets.

A better approach might be to have more modularity. Service levels:

  1. Maybe one or more
    1. Fire and forget

      no transaction either side

    2. Repeat, no ack, no dupdet

      repeat counter send side, no transaction on receive side

    3. Repeat, no Ack, dupdet

      repeat counter send side, dup detection transaction receive side

  2. More or Less Once
    1. retry finite, ack no dupdet

      retry timer send side, finite number of retries; ack receive side, no dupdet

  3. At most Once
    1. retry finite, ack, dupdet

      retry timer send side, finite number of retries; ack receive side, dupdet

  4. Exactly once
    1. ack retry

      retry timer send side, ack and duplicate detection receive side; infinite retries with exponential backoff (see the sketch after this list)

  5. Sequential sequence number
    1. reorder escrow
    2. Segmented packets
  6. request response to application layer
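
As a sketch of the sender side of the "exactly once" level above -- infinite redos with exponential backoff up to a maximum delay -- where send and ack_received are caller-supplied callables and the delay values are arbitrary:

import time

def send_until_acked(send, ack_received, delay=0.1, max_delay=30.0):
    # Redo timer: resend forever, doubling the delay up to a cap,
    # until the receiver acknowledges the packet.
    while True:
        send()
        time.sleep(delay)
        if ack_received():
            return
        delay = min(delay * 2, max_delay)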

Service Features

  1. repeats
  2. ack retry transaction id
  3. sequence number: duplicate detection, out-of-order detection, sequencing
  4. rep-req

Always include a transaction ID, since there are multiple transactions on the same port; duplicate detection then comes for free if the transaction is kept alive. The candidate service types:

A) Maybe one or more
B1) At least one
B2) Exactly one
C) One of sequence
D) End to end

A) The sender creates a transaction ID for the number of repeats, but the receiver does not keep the transaction alive.

B1) The sender creates a transaction ID and keeps it for retries. The receiver keeps it to send the ack, then kills it, so a retry could be an undetected duplicate.

B2) The sender creates a transaction ID and keeps it for retries. The receiver keeps the TID for acks on any retries, so there are no duplicates.

C) The sender creates a TID and sequence number. The receiver checks for an out-of-order sequence and can request a retry.

D) The application layer sends the response. The question is whether to keep the transaction open or have the response be a new transaction. The latter would require a rep-req ID, so we might as well use the same transaction ID and just keep it alive until the response arrives.

There is little advantage to B1 over B2, which avoids duplicates.

So 4 service types

  1. Maybe one or more (unacknowledged repeat)
  2. Exactly One (At most one) (ack with retry) (duplicate detection idempotent)
  3. One of Sequence (sequence numbered)
  4. End to End

Also multicast or unicast

Modular Transaction Table

Sender side:

  • Transaction ID plus transaction source (sender- or receiver-generated transaction ID)
  • Repeat counter
  • Retry timer and retry counter (finite retries)
  • Redo timer (infinite redos with exponential backoff)
  • Sequence number without acks (look for resend requests)
  • Sequence with ack (wait for ack before sending next in sequence)
  • Segmentation

Receiver side:

  • Nothing, just accept the packet
  • Acknowledge (can delete transaction after acknowledge), no duplicate detection
  • Transaction timeout (keep transaction until timeout)
  • Duplicate detection: save transaction ID, with a duplicate-detection timeout
  • Request resend of a missing packet in sequence
  • Sequence reordering with escrow timeout (wait for escrow before requesting resend)
  • Unsegmentation (request resends of missing segments)
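
A receiver-side duplicate-detection table along these lines might be sketched as follows (the timeout value is arbitrary):

import time

class DupTable(object):
    '''
    Remember recently seen transaction IDs so that a retry of an
    already-processed packet can be detected and dropped.
    '''
    def __init__(self, timeout=60.0):
        self.timeout = timeout
        self.seen = {}  # (source, tid) -> time last seen

    def is_duplicate(self, source, tid):
        now = time.time()
        # Expire entries older than the duplicate-detection timeout.
        for key, stamp in list(self.seen.items()):
            if now - stamp > self.timeout:
                del self.seen[key]
        duplicate = (source, tid) in self.seen
        self.seen[(source, tid)] = now
        return duplicate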

SaltStack Git Policy

The SaltStack team follows a git policy to maintain stability and consistency with the repository.

The git policy has been developed to encourage contributions and make contributing to Salt as easy as possible. Code contributors to SaltStack projects DO NOT NEED TO READ THIS DOCUMENT, because all contributions come into SaltStack via a single gateway to make it as easy as possible for contributors to give us code.

The primary rule of git management in SaltStack is to make life easy on contributors and developers to send in code. Simplicity is always a goal!

New Code Entry

All new SaltStack code is posted to the develop branch, which is the single point of entry. The only exception is when a bugfix to develop cannot be cleanly merged into a release branch and the bugfix needs to be rewritten for the release branch.

Release Branching

SaltStack maintains two types of releases, Feature Releases and Point Releases. A feature release is managed by incrementing the first or second release point number, so 0.10.5 -> 0.11.0 signifies a feature release and 0.11.0 -> 0.11.1 signifies a point release. A hypothetical 0.42.7 -> 1.0.0 would also signify a feature release.

Feature Release Branching

Each feature release is maintained in a dedicated git branch derived from the last applicable release commit on develop. All file changes relevant to the feature release will be completed in the develop branch prior to the creation of the feature release branch. The feature release branch is named after the first two numbers of the release, so the release branch for the 0.11.0 series is named 0.11.

A feature release branch is created with the following command:

# git checkout -b 0.11 # From the develop branch
# git push origin 0.11

Point Releases

Each point release is derived from its parent release branch. Constructing point releases is a critical aspect of Salt development and is managed by members of the core development team. Point releases comprise bug and security fixes which are cherry-picked from develop onto the aforementioned release branch. When a core developer accepts a pull request, a determination needs to be made as to whether the commits in the pull request need to be backported to the release branch. Some simple criteria are used to make this determination:

  • Is this commit fixing a bug? Backport
  • Does this commit change or add new features in any way? Don't backport
  • Is this a PEP8 or code cleanup commit? Don't backport
  • Does this commit fix a security issue? Backport

Determining when a point release is going to be made is up to the project leader (Thomas Hatch). Generally point releases are made every 1-2 weeks or if there is a security fix they can be made sooner.

The point release is only designated by tagging the commit on the release branch with the release number using the existing convention (version 0.11.1 is tagged with v0.11.1). From the tag point, a new source tarball is generated and published to PyPI, and a release announcement is made.
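
For example, tagging a hypothetical 0.11.1 point release follows the same convention as the branching commands shown above:

# git checkout 0.11
# git tag -a v0.11.1
# git push origin v0.11.1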

Salt Conventions

Writing Salt Documentation

Salt's documentation is built using the Sphinx documentation system. It can be built in a large variety of output formats including HTML, PDF, ePub, and manpage.

All the documentation is contained in the main Salt repository. Speaking broadly, most of the narrative documentation is contained within the https://github.com/saltstack/salt/blob/develop/doc subdirectory and most of the reference and API documentation is written inline with Salt's Python code and extracted using a Sphinx extension.

Style

The Salt project recommends the IEEE style guide as a general reference for writing guidelines. Those guidelines are not strictly enforced but rather serve as an excellent resource for technical writing questions. The NCBI style guide is another very approachable resource.

Point-of-view

Use third-person perspective and avoid "I", "we", "you" forms of address. Identify the addressee specifically, e.g., "users should", "the compiler does", etc.

Active voice

Use active voice and present-tense. Avoid filler words.

Title capitalization

Document titles and section titles within a page should follow normal sentence capitalization rules. Words that are capitalized as part of a regular sentence should be capitalized in a title and otherwise left as lowercase. Punctuation can be omitted unless it aids the intent of the title (e.g., exclamation points or question marks).

For example:

This is a main heading
======================

Paragraph.

This is an exciting sub-heading!
--------------------------------

Paragraph.

Serial Commas

According to Wikipedia: In English punctuation, a serial comma or series comma (also called Oxford comma and Harvard comma) is a comma placed immediately before the coordinating conjunction (usually and, or, or nor) in a series of three or more terms. For example, a list of three countries might be punctuated either as "France, Italy, and Spain" (with the serial comma), or as "France, Italy and Spain" (without the serial comma).

When writing a list that includes three or more items, the serial comma should always be used.

Documenting modules

Documentation for Salt's various module types is inline in the code. During the documentation build process it is extracted and formatted into the final HTML, PDF, etc format.

Inline documentation

Python has special multi-line strings called docstrings as the first element in a function or class. These strings allow documentation to live alongside the code and can contain special formatting. For example:

def myfunction(value):
    '''
    Upper-case the given value

    Usage:

    .. code-block:: python

        val = 'a string'
        new_val = myfunction(val)
        print(new_val) # 'A STRING'

    :param value: a string
    :return: a copy of ``value`` that has been upper-cased
    '''
    return value.upper()

Specify a release for additions or changes

New functions or changes to existing functions should include a marker that denotes what Salt release will be affected. For example:

def myfunction(value):
    '''
    Upper-case the given value

    .. versionadded:: 2014.7.0

    <...snip...>
    '''
    return value.upper()

For changes to a function:

def myfunction(value, strip=False):
    '''
    Upper-case the given value

    .. versionchanged:: Boron
        Added a flag to also strip whitespace from the string.

    <...snip...>
    '''
    if strip:
        return value.upper().strip()
    return value.upper()

Adding module documentation to the index

Each module type has an index listing all modules of that type. For example: Full list of builtin execution modules, Full list of builtin state modules, Full list of builtin renderer modules. New modules must be added to the index manually.

  1. Edit the file for the module type: execution modules, state modules, renderer modules, etc.
  2. Add the new module to the alphabetized list.
  3. Build the documentation which will generate an .rst file for the new module in the same directory as the index.rst.
  4. Commit the changes to index.rst and the new .rst file and send a pull request.

Cross-references

The Sphinx documentation system contains a wide variety of cross-referencing capabilities.

Glossary entries

Link to glossary entries using the term role. A cross-reference should be added the first time a Salt-specific term is used in a document.

A common way to encapsulate master-side functionality is by writing a
custom :term:`Runner Function`. Custom Runner Functions are easy to write.

Index entries

Sphinx automatically generates many kinds of index entries, but it is occasionally useful to manually add items to the index.

One method is to use the index directive above the document or section that should appear in the index.

.. index:: ! Event, event bus, event system
    see: Reactor; Event

Another method is to use the index role inline with the text that should appear in the index. The index entry is created and the target text is left otherwise intact.

Information about the :index:`Salt Reactor`
-------------------------------------------

Paragraph.

Documents and sections

Each document should contain a unique top-level label of the form:

.. _my-page:

My page
=======

Paragraph.

Unique labels can be linked using the ref role. This allows cross-references to survive document renames or movement.

For more information see :ref:`my-page`.

Note, the :doc: role should not be used to link documents together.

Modules

Cross-references to Salt modules can be added using Sphinx's Python domain roles. For example, to create a link to the test.ping function:

A useful execution module to test active communication with a minion is the
:py:func:`test.ping <salt.modules.test.ping>` function.

Salt modules can be referenced as well:

The :py:mod:`test module <salt.modules.test>` contains many useful
functions for inspecting an active Salt connection.

The same syntax works for all module types:

One of the workhorse state module functions in Salt is the
:py:func:`file.managed <salt.states.file.managed>` function.

Settings

Individual settings in the Salt Master or Salt Minion configuration files are cross-referenced using two custom roles, conf_master and conf_minion.

The :conf_minion:`minion ID <id>` setting is a unique identifier for a
single minion.

Building the documentation

  1. Install Sphinx using a system package manager or pip. The package name is often of the form python-sphinx. There are no other dependencies.

  2. Build the documentation using the provided Makefile or .bat file on Windows.

    cd /path/to/salt/doc
    make html
    
  3. The generated documentation will be written to the doc/_build/<format> directory.

  4. A useful method of viewing the HTML documentation locally is to start Python's built-in HTTP server:

    cd /path/to/salt/doc/_build/html
    python -m SimpleHTTPServer
    

    Then pull up the documentation in a web browser at http://localhost:8000/.
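
    With Python 3, the equivalent built-in server is started with python -m http.server.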

Salt Formulas

Formulas are pre-written Salt States. They are as open-ended as Salt States themselves and can be used for tasks such as installing a package, configuring and starting a service, setting up users or permissions, and many other common tasks.

All official Salt Formulas are found as separate Git repositories in the "saltstack-formulas" organization on GitHub:

https://github.com/saltstack-formulas

As a simple example, to install the popular Apache web server (using the normal defaults for the underlying distro) simply include the apache-formula from a top file:

base:
  'web*':
    - apache

Installation

Each Salt Formula is an individual Git repository designed as a drop-in addition to an existing Salt State tree. Formulas can be installed in the following ways.

Adding a Formula as a GitFS remote

One design goal of Salt's GitFS fileserver backend was to facilitate reusable States. GitFS is a quick and natural way to use Formulas.

  1. Install and configure GitFS.

  2. Add one or more Formula repository URLs as remotes in the gitfs_remotes list in the Salt Master configuration file:

    gitfs_remotes:
      - https://github.com/saltstack-formulas/apache-formula
      - https://github.com/saltstack-formulas/memcached-formula
    

    We strongly recommend forking a formula repository into your own GitHub account to avoid unexpected changes to your infrastructure.

    Many Salt Formulas are highly active repositories so pull new changes with care. Plus any additions you make to your fork can be easily sent back upstream with a quick pull request!

  3. Restart the Salt master.

Adding a Formula directory manually

Formulas are simply directories that can be copied onto the local file system by using Git to clone the repository or by downloading and expanding a tarball or zip file of the repository. The directory structure is designed to work with file_roots in the Salt master configuration.

  1. Clone or download the repository into a directory:

    mkdir -p /srv/formulas
    cd /srv/formulas
    git clone https://github.com/saltstack-formulas/apache-formula.git
    
    # or
    
    mkdir -p /srv/formulas
    cd /srv/formulas
    wget https://github.com/saltstack-formulas/apache-formula/archive/master.tar.gz
    tar xf master.tar.gz
    
  2. Add the new directory to file_roots:

    file_roots:
      base:
        - /srv/salt
        - /srv/formulas/apache-formula
    
  3. Restart the Salt Master.

Usage

Each Formula is intended to be immediately usable with sane defaults without any additional configuration. Many formulas are also configurable by including data in Pillar; see the pillar.example file in each Formula repository for available options.

Including a Formula in an existing State tree

A Formula may be included in an existing sls file. This is often useful when a state you are writing needs to require or extend a state defined in the formula.

Here is an example of a state that uses the epel-formula in a require declaration, which directs Salt not to install the python26 package until after the EPEL repository has been installed:

include:
  - epel

python26:
  pkg.installed:
    - require:
      - pkg: epel

Including a Formula from a Top File

Some Formulas perform completely standalone installations that are not referenced from other state files. It is usually cleanest to include these Formulas directly from a Top File.

For example, the easiest way to set up an OpenStack deployment on a single machine is to include the openstack-standalone-formula directly from a top.sls file:

base:
  'myopenstackmaster':
    - openstack

Quickly deploying OpenStack across several dedicated machines could also be done directly from a Top File and may look something like this:

base:
  'controller':
    - openstack.horizon
    - openstack.keystone
  'hyper-*':
    - openstack.nova
    - openstack.glance
  'storage-*':
    - openstack.swift

Configuring Formula using Pillar

Salt Formulas are designed to work out of the box with no additional configuration. However, many Formulas support additional configuration and customization through Pillar. Examples of available options can be found in a file named pillar.example in the root directory of each Formula repository.

Using Formula with your own states

Remember that Formulas are regular Salt States and can be used with all of Salt's normal state mechanisms. Formulas can be required from other States with require declarations, they can be modified using extend, and they can be made to watch other states with the _in versions of requisites.

The following example uses the stock apache-formula alongside a custom state to create a vhost on a Debian/Ubuntu system and to reload the Apache service whenever the vhost is changed.

# Include the stock, upstream apache formula.
include:
  - apache

# Use the watch_in requisite to cause the apache service state to reload
# apache whenever the my-example-com-vhost state changes.
my-example-com-vhost:
  file:
    - managed
    - name: /etc/apache2/sites-available/my-example-com
    - watch_in:
      - service: apache

Don't be shy to read through the source for each Formula!

Reporting problems & making additions

Each Formula is a separate repository on GitHub. If you encounter a bug with a Formula please file an issue in the respective repository! Send fixes and additions as a pull request. Add tips and tricks to the repository wiki.

Writing Formulas

Each Formula is a separate repository in the saltstack-formulas organization on GitHub.

Note

Get involved creating new Formulas

The best way to create new Formula repositories for now is to create a repository in your own account on GitHub and notify a SaltStack employee when it is ready. We will add you to the contributors team on the saltstack-formulas organization and help you transfer the repository over. Ping a SaltStack employee on IRC (#salt on Freenode) or send an email to the salt-users mailing list.

There are a lot of repositories in that organization! Team members can manage which repositories they are subscribed to on GitHub's watching page: https://github.com/watching.

Style

Maintainability, readability, and reusability are all marks of a good Salt sls file. This section contains several suggestions and examples.

# Deploy the stable master branch unless version overridden by passing
# Pillar at the CLI or via the Reactor.

deploy_myapp:
  git.latest:
    - name: git@github.com/myco/myapp.git
    - version: {{ salt.pillar.get('myapp:version', 'master') }}

Use a descriptive State ID

The ID of a state is used as a unique identifier that may be referenced via other states in requisites. It must be unique across the whole state tree (it is a key in a dictionary, after all).

In addition a state ID should be descriptive and serve as a high-level hint of what it will do, or manage, or change. For example, deploy_webapp, or apache, or reload_firewall.
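
For instance, another state can then point at that ID in a requisite (the IDs here are hypothetical):

deploy_webapp:
  git.latest:
    - name: git@github.com/myco/webapp.git
    - require:
      - service: apache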

Use module.function notation

So-called "short-declaration" notation is preferred for referencing state modules and state functions. It provides a consistent pattern of module.function shared between Salt States, the Reactor, Overstate, Salt Mine, the Scheduler, as well as with the CLI.

# Do
apache:
  pkg.installed:
    - name: httpd

# Don't
apache:
  pkg:
    - installed
    - name: httpd

Salt's state compiler will transform "short-decs" into the longer format when compiling the human-friendly highstate structure into the machine-friendly lowstate structure.

Specify the name parameter

Use a unique and permanent identifier for the state ID and reserve name for data with variability.

The name declaration is a required parameter for all state functions. The state ID will implicitly be used as name if it is not explicitly set in the state.

In many state functions the name parameter is used for data that varies such as OS-specific package names, OS-specific file system paths, repository addresses, etc. Any time the ID of a state changes all references to that ID must also be changed. Use a permanent ID when writing a state the first time to future-proof that state and allow for easier refactors down the road.
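
A minimal sketch of this separation, with the package names assumed for illustration:

# The state ID stays permanent; only the data in name varies.
install_apache:
  pkg.installed:
    - name: {{ 'httpd' if grains.os_family == 'RedHat' else 'apache2' }}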

Comment state files

YAML allows comments at varying indentation levels. It is a good practice to comment state files. Use vertical whitespace to visually separate different concepts or actions.

# Start with a high-level description of the current sls file.
# Explain the scope of what it will do or manage.

# Comment individual states as necessary.
update_a_config_file:
  # Provide details on why an unusual choice was made. For example:
  #
  # This template is fetched from a third-party and does not fit our
  # company norm of using Jinja. This must be processed using Mako.
  file.managed:
    - name: /path/to/file.cfg
    - source: salt://path/to/file.cfg.template
    - template: mako

  # Provide a description or explanation that did not fit within the state
  # ID. For example:
  #
  # Update the application's last-deployed timestamp.
  # This is a workaround until Bob configures Jenkins to automate RPM
  # builds of the app.
  cmd.run:
    # FIXME: Joe needs this to run on Windows by next quarter. Switch these
    # from shell commands to Salt's file.managed and file.replace state
    # modules.
    - name: |
        touch /path/to/file_last_updated
        sed -e 's/foo/bar/g' /path/to/file_environment
    - onchanges:
      - file: update_a_config_file

Be careful to use Jinja comments for commenting Jinja code and YAML comments for commenting YAML code.

# BAD EXAMPLE
# The Jinja in this YAML comment is still executed!
# {% set apache_is_installed = 'apache' in salt.pkg.list_pkgs() %}

# GOOD EXAMPLE
# The Jinja in this Jinja comment will not be executed.
{# {% set apache_is_installed = 'apache' in salt.pkg.list_pkgs() %} #}

Easy on the Jinja!

Jinja templating provides vast flexibility and power when building Salt sls files. It can also create an unmaintainable tangle of logic and data. Speaking broadly, Jinja is best used when kept apart from the states (as much as is possible).

Below are guidelines and examples of how Jinja can be used effectively.

Know the evaluation and execution order

High-level knowledge of how Salt states are compiled and run is useful when writing states.

The default renderer setting in Salt is Jinja piped to YAML. Each is a separate step. Each step is not aware of the previous or following step. Jinja is not YAML aware, YAML is not Jinja aware; they cannot share variables or interact.

  • Whatever the Jinja step produces must be valid YAML.
  • Whatever the YAML step produces must be a valid highstate data structure. (This is also true of the final step for any of the alternate renderers in Salt.)
  • Highstate can be thought of as a human-friendly data structure; easy to write and easy to read.
  • Salt's state compiler validates the highstate and compiles it to low state.
  • Low state can be thought of as a machine-friendly data structure. It is a list of dictionaries that each map directly to a function call.
  • Salt's state system finally starts and executes on each "chunk" in the low state. Remember that requisites are evaluated at runtime.
  • The return for each function call is added to the "running" dictionary which is the final output at the end of the state run.

The full evaluation and execution order:

Jinja -> YAML -> Highstate -> low state -> execution
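
For instance, the low state chunk compiled from the short apache example used elsewhere in this document looks roughly like the following (illustrative; some bookkeeping keys omitted):

{'__env__': 'base',
 '__id__': 'apache',
 '__sls__': 'apache',
 'state': 'pkg',
 'fun': 'installed',
 'name': 'httpd',
 'order': 10000}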

Avoid changing the underlying system with Jinja

Avoid calling commands from Jinja that change the underlying system. Commands run via Jinja do not respect Salt's dry-run mode (test=True)! This is usually in conflict with the idempotent nature of Salt states unless the command being run is also idempotent.

Inspect the local system

A common use for Jinja in Salt states is to gather information about the underlying system. The grains dictionary available in the Jinja context is a great example of common data points that Salt itself has already gathered. Less common values are often found by running commands. For example:

{% set is_selinux_enabled = salt.cmd.run('sestatus') == '1' %}

This is usually best done with a variable assignment in order to separate the data from the state that will make use of the data.

Gather external data

One of the most common uses for Jinja is to pull external data into the state file. External data can come from anywhere like API calls or database queries, but it most commonly comes from flat files on the file system or Pillar data from the Salt Master. For example:

{% set some_data = salt.pillar.get('some_data', {'sane default': True}) %}

{# or #}

{% import_json 'path/to/file.json' as some_data %}

{# or #}

{% import_text 'path/to/ssh_key.pub' as ssh_pub_key %}

{# or #}

{% from 'path/to/other_file.jinja' import some_data with context %}

This is usually best done with a variable assignment in order to separate the data from the state that will make use of the data.

Light conditionals and looping

Jinja is extremely powerful for programmatically generating Salt states. It is also easy to overuse. As a rule of thumb, if it is hard to read it will be hard to maintain!

Separate Jinja control-flow statements from the states as much as is possible to create readable states. Limit Jinja within states to simple variable lookups.

Below is a simple example of a readable loop:

{% for user in salt.pillar.get('list_of_users', []) %}

{# Ensure unique state IDs when looping. #}
{{ user.name }}-{{ loop.index }}:
  user.present:
    - name: {{ user.name }}
    - shell: {{ user.shell }}

{% endfor %}

Avoid putting Jinja conditionals within Salt states where possible. Readability suffers and the correct YAML indentation is difficult to see in the surrounding visual noise. Parameterization (discussed below) and variables are both useful techniques to avoid this. For example:

{# ---- Bad example ---- #}

apache:
  pkg.installed:
    {% if grains.os_family == 'RedHat' %}
    - name: httpd
    {% elif grains.os_family == 'Debian' %}
    - name: apache2
    {% endif %}

{# ---- Better example ---- #}

{% if grains.os_family == 'RedHat' %}
{% set name = 'httpd' %}
{% elif grains.os_family == 'Debian' %}
{% set name = 'apache2' %}
{% endif %}

apache:
  pkg.installed:
    - name: {{ name }}

{# ---- Good example ---- #}

{% set name = {
    'RedHat': 'httpd',
    'Debian': 'apache2',
}.get(grains.os_family) %}

apache:
  pkg.installed:
    - name: {{ name }}

Dictionaries are useful to effectively "namespace" a collection of variables. This is useful with parameterization (discussed below). Dictionaries are also easily combined and merged. And they can be directly serialized into YAML which is often easier than trying to create valid YAML through templating. For example:

{# ---- Bad example ---- #}

haproxy_conf:
  file.managed:
    - name: /etc/haproxy/haproxy.cfg
    - template: jinja
    {% if 'external_loadbalancer' in grains.roles %}
    - source: salt://haproxy/external_haproxy.cfg
    {% elif 'internal_loadbalancer' in grains.roles %}
    - source: salt://haproxy/internal_haproxy.cfg
    {% endif %}
    - context:
        {% if 'external_loadbalancer' in grains.roles %}
        ssl_termination: True
        {% elif 'internal_loadbalancer' in grains.roles %}
        ssl_termination: False
        {% endif %}

{# ---- Better example ---- #}

{% load_yaml as haproxy_defaults %}
common_settings:
  bind_port: 80

internal_loadbalancer:
  source: salt://haproxy/internal_haproxy.cfg
  settings:
    bind_port: 8080
    ssl_termination: False

external_loadbalancer:
  source: salt://haproxy/external_haproxy.cfg
  settings:
    ssl_termination: True
{% endload %}

{% if 'external_loadbalancer' in grains.roles %}
{% set haproxy = haproxy_defaults['external_loadbalancer'] %}
{% elif 'internal_loadbalancer' in grains.roles %}
{% set haproxy = haproxy_defaults['internal_loadbalancer'] %}
{% endif %}

{% do haproxy.settings.update(haproxy_defaults.common_settings) %}

haproxy_conf:
  file.managed:
    - name: /etc/haproxy/haproxy.cfg
    - template: jinja
    - source: {{ haproxy.source }}
    - context: {{ haproxy.settings | yaml() }}

There is still room for improvement in the above example. For example, extracting into an external file or replacing the if-elif conditional with a function call to filter the correct data more succinctly. However, the state itself is simple and legible, the data is separate and also simple and legible. And those suggested improvements can be made at some future date without altering the state at all!

Avoid heavy logic and programming

Jinja is not Python. It was made by Python programmers and shares many semantics and some syntax, but it does not allow for arbitrary Python function calls or Python imports. Jinja is a fast and efficient templating language but the syntax can be verbose and visually noisy.

Once Jinja use within an sls file becomes slightly complicated -- long chains of if-elif-elif-else statements, nested conditionals, complicated dictionary merges, wanting to use sets -- instead consider using a different Salt renderer, such as the Python renderer. As a rule of thumb, if it is hard to read it will be hard to maintain -- switch to a format that is easier to read.

Using alternate renderers is very simple to do using Salt's "she-bang" syntax at the top of the file. The Python renderer must simply return the correct highstate data structure. The following example is a state tree of two sls files, one simple and one complicated.

/srv/salt/top.sls:

base:
  '*':
    - common_configuration
    - roles_configuration

/srv/salt/common_configuration.sls:

common_users:
  user.present:
    - names: [larry, curly, moe]

/srv/salt/roles_configuration.sls:

#!py
def run():
    list_of_roles = set()

    # This example has the minion id in the form 'web-03-dev'.
    # Easily access the grains dictionary:
    try:
        app, instance_number, environment = __grains__['id'].split('-')
        instance_number = int(instance_number)
    except ValueError:
        app, instance_number, environment = ['Unknown', 0, 'dev']

    list_of_roles.add(app)

    if app == 'web' and environment == 'dev':
        list_of_roles.add('primary')
        list_of_roles.add('secondary')
    elif app == 'web' and environment == 'staging':
        if instance_number == 0:
            list_of_roles.add('primary')
        else:
            list_of_roles.add('secondary')

    # Easily cross-call Salt execution modules:
    if __salt__['myutils.query_valid_ec2_instance']():
        list_of_roles.add('is_ec2_instance')

    return {
        'set_roles_grains': {
            'grains.present': [
                {'name': 'roles'},
                {'value': list(list_of_roles)},
            ],
        },
    }

Jinja Macros

In Salt sls files Jinja macros are useful for one thing and one thing only: creating mini templates that can be reused and rendered on demand. Do not fall into the trap of thinking of macros as functions; Jinja is not Python (see above).

Macros are useful for creating reusable, parameterized states. For example:

{% macro user_state(state_id, user_name, shell='/bin/bash', groups=[]) %}
{{ state_id }}:
  user.present:
    - name: {{ user_name }}
    - shell: {{ shell }}
    - groups: {{ groups | json() }}
{% endmacro %}

{% for user_info in salt.pillar.get('my_users', []) %}
{{ user_state('user_number_' ~ loop.index, **user_info) }}
{% endfor %}

Macros are also useful for creating one-off "serializers" that can accept a data structure and write that out as a domain-specific configuration file. For example, the following macro could be used to write a php.ini config file:

/srv/salt/php.sls:

php_ini:
  file.managed:
    - name: /etc/php.ini
    - source: salt://php.ini.tmpl
    - template: jinja
    - context:
        php_ini_settings: {{ salt.pillar.get('php_ini', {}) | json() }}

/srv/pillar/php.sls:

php_ini:
  PHP:
    engine: 'On'
    short_open_tag: 'Off'
    error_reporting: 'E_ALL & ~E_DEPRECATED & ~E_STRICT'

/srv/salt/php.ini.tmpl:

{% macro php_ini_serializer(data) %}
{% for section_name, name_val_pairs in data.items() %}
[{{ section_name }}]
{% for name, val in name_val_pairs.items() -%}
{{ name }} = "{{ val }}"
{% endfor %}
{% endfor %}
{% endmacro %}

; File managed by Salt at <{{ source }}>.
; Your changes will be overwritten.

{{ php_ini_serializer(php_ini_settings) }}

Abstracting static defaults into a lookup table

Separate the data that a state uses from the state itself to increase the flexibility and reusability of the state.

An obvious and common example of this is platform-specific package names and file system paths. Another example is sane defaults for an application, or common settings within a company or organization. Organizing such data as a dictionary (aka hash map, lookup table, associative array) often provides a lightweight namespacing and allows for quick and easy lookups. In addition, using a dictionary allows for easily merging and overriding static values within a lookup table with dynamic values fetched from Pillar.

A strong convention in Salt Formulas is to place platform-specific data, such as package names and file system paths, into a file named map.jinja that is placed alongside the state files.

The following is an example from the MySQL Formula. The grains.filter_by function performs a lookup on that table using the os_family grain (by default).

The result is that the mysql variable is assigned to a subset of the lookup table for the current platform. This allows states to reference, for example, the name of a package without worrying about the underlying OS. The syntax for referencing a value is a normal dictionary lookup in Jinja, such as {{ mysql['service'] }} or the shorthand {{ mysql.service }}.

map.jinja:

{% set mysql = salt['grains.filter_by']({
    'Debian': {
        'server': 'mysql-server',
        'client': 'mysql-client',
        'service': 'mysql',
        'config': '/etc/mysql/my.cnf',
        'python': 'python-mysqldb',
    },
    'RedHat': {
        'server': 'mysql-server',
        'client': 'mysql',
        'service': 'mysqld',
        'config': '/etc/my.cnf',
        'python': 'MySQL-python',
    },
    'Gentoo': {
        'server': 'dev-db/mysql',
        'client': 'dev-db/mysql',
        'service': 'mysql',
        'config': '/etc/mysql/my.cnf',
        'python': 'dev-python/mysql-python',
    },
}, merge=salt['pillar.get']('mysql:lookup')) %}

Values defined in the map file can be fetched for the current platform in any state file using the following syntax:

{% from "mysql/map.jinja" import mysql with context %}

mysql-server:
  pkg.installed:
    - name: {{ mysql.server }}
  service.running:
    - name: {{ mysql.service }}
Collecting common values

Common values can be collected into a base dictionary. This minimizes repetition of identical values in each of the lookup_dict sub-dictionaries. Now only the values that differ from the base must be specified in the alternates:

map.jinja:

{% set mysql = salt['grains.filter_by']({
    'default': {
        'server': 'mysql-server',
        'client': 'mysql-client',
        'service': 'mysql',
        'config': '/etc/mysql/my.cnf',
        'python': 'python-mysqldb',
    },
    'Debian': {
    },
    'RedHat': {
        'client': 'mysql',
        'service': 'mysqld',
        'config': '/etc/my.cnf',
        'python': 'MySQL-python',
    },
    'Gentoo': {
        'server': 'dev-db/mysql',
        'client': 'dev-db/mysql',
        'python': 'dev-python/mysql-python',
    },
},
merge=salt['pillar.get']('mysql:lookup'), default='default') %}

Overriding values in the lookup table

Allow static values within lookup tables to be overridden. This is a simple pattern which once again increases flexibility and reusability for state files.

The merge argument in filter_by specifies the location of a dictionary in Pillar that can be used to override values returned from the lookup table. If the value exists in Pillar it will take precedence.

This is useful when software or configuration files are installed to non-standard locations or on unsupported platforms. For example, the following Pillar would replace the config value from the call above.

mysql:
  lookup:
    config: /usr/local/etc/mysql/my.cnf

Note

Protecting Expansion of Content with Special Characters

When templating, keep in mind that YAML does have special characters for quoting, flows, and other special structure and content. When a Jinja substitution may contain special characters that will be incorrectly parsed by YAML, care must be taken. It is a good policy to use the yaml_encode or the yaml_dquote Jinja filters:

{%- set foo = 7.7 %}
{%- set bar = none %}
{%- set baz = true %}
{%- set zap = 'The word of the day is "salty".' %}
{%- set zip = '"The quick brown fox . . ."' %}

foo: {{ foo|yaml_encode }}
bar: {{ bar|yaml_encode }}
baz: {{ baz|yaml_encode }}
zap: {{ zap|yaml_encode }}
zip: {{ zip|yaml_dquote }}

The above will be rendered as below:

foo: 7.7
bar: null
baz: true
zap: "The word of the day is \"salty\"."
zip: "\"The quick brown fox . . .\""

The filter_by function performs a simple dictionary lookup but also allows for fetching data from Pillar and overriding data stored in the lookup table. That same workflow can be easily performed without using filter_by; other dictionaries besides data from Pillar can also be used.

{% set lookup_table = {...} %}
{% do lookup_table.update(salt.pillar.get('my:custom:data', {})) %}

When to use lookup tables

The map.jinja file is only a convention within Salt Formulas. This greater pattern is useful for a wide variety of data in a wide variety of workflows. This pattern is not limited to pulling data from a single file or data source. This pattern is useful in States, Pillar, the Reactor, and Overstate as well.

Working with a data structure instead of, say, a config file allows the data to be cobbled together from multiple sources (local files, remote Pillar, database queries, etc), combined, overridden, and searched.

Below are a few examples of what lookup tables may be useful for and how they may be used and represented.

Platform-specific information

An obvious pattern and one used heavily in Salt Formulas is extracting platform-specific information such as package names and file system paths in a file named map.jinja. The pattern is explained in detail above.

Sane defaults

Application settings can be a good fit for this pattern. Store default settings along with the states themselves and keep overrides and sensitive settings in Pillar. Combine both into a single dictionary and then write the application config or settings file.

The example below stores most of the Apache Tomcat server.xml file alongside the Tomcat states and then allows values to be updated or augmented via Pillar. (This example uses the BadgerFish format for transforming JSON to XML.)

/srv/salt/tomcat/defaults.yaml:

Server:
  '@port': '8005'
  '@shutdown': SHUTDOWN
  GlobalNamingResources:
    Resource:
      '@auth': Container
      '@description': User database that can be updated and saved
      '@factory': org.apache.catalina.users.MemoryUserDatabaseFactory
      '@name': UserDatabase
      '@pathname': conf/tomcat-users.xml
      '@type': org.apache.catalina.UserDatabase
  # <...snip...>

/srv/pillar/tomcat.sls:

appX:
  server_xml_overrides:
    Server:
      Service:
        '@name': Catalina
        Connector:
          '@port': '8009'
          '@protocol': AJP/1.3
          '@redirectPort': '8443'
          # <...snip...>

/srv/salt/tomcat/server_xml.sls:

{% import_yaml 'tomcat/defaults.yaml' as server_xml_defaults %}
{% set server_xml_final_values = salt.pillar.get(
    'appX:server_xml_overrides',
    default=server_xml_defaults,
    merge=True)
%}

appX_server_xml:
  file.serialize:
    - name: /etc/tomcat/server.xml
    - dataset: {{ server_xml_final_values | json() }}
    - formatter: xml_badgerfish

The file.serialize state can provide a shorthand for creating some files from data structures. There are also many examples within Salt Formulas of creating one-off "serializers" (often as Jinja macros) that reformat a data structure to a specific config file format. For example, `Nginx vhosts`__ or the `php.ini`__.

__ https://github.com/saltstack-formulas/nginx-formula/blob/5cad4512/nginx/ng/vhosts_config.sls
__ https://github.com/saltstack-formulas/php-formula/blob/82e2cd3a/php/ng/files/php.ini

Environment specific information

A single state can be reused when it is parameterized as described in the section below, by separating the data the state will use from the state that performs the work. This can be the difference between deploying Application X and Application Y, or the difference between production and development. For example:

/srv/salt/app/deploy.sls:

{# Load the map file. #}
{% import_yaml 'app/defaults.yaml' as app_defaults %}

{# Extract the relevant subset for the app configured on the current
   machine (configured via a grain in this example). #}
{% set app = app_defaults.get(salt.grains.get('role')) %}

{# Allow values from Pillar to (optionally) update values from the lookup
   table. #}
{% do app.update(salt.pillar.get('myapp', {})) %}

deploy_application:
  git.latest:
    - name: {{ app.repo_url }}
    - version: {{ app.version }}
    - target: {{ app.target }}

myco/myapp/deployed:
  event.send:
    - data:
        version: {{ app.version }}
    - onchanges:
      - git: deploy_application

/srv/salt/app/defaults.yaml:

appX:
  repo_url: git@github.com/myco/appX.git
  target: /var/www/appX
  version: master
appY:
  repo_url: git@github.com/myco/appY.git
  target: /var/www/appY
  version: v1.2.3.4

Single-purpose SLS files

Each sls file in a Formula should strive to do a single thing. This increases the reusability of this file by keeping unrelated tasks from getting coupled together.

As an example, the base Apache formula should only install the Apache httpd server and start the httpd service. This is the basic, expected behavior when installing Apache. It should not perform additional changes, such as setting the Apache configuration file or creating vhosts.

If a formula is single-purpose as in the example above, other formulas, and also other states can include and use that formula with Requisites and Other Global State Arguments without also including undesirable or unintended side-effects.

The following is a best-practice example for a reusable Apache formula. (This skips platform-specific options for brevity. See the full apache-formula for more.)

# apache/init.sls
apache:
  pkg.installed:
    [...]
  service.running:
    [...]

# apache/mod_wsgi.sls
include:
  - apache

mod_wsgi:
  pkg.installed:
    [...]
    - require:
      - pkg: apache

# apache/conf.sls
include:
  - apache

apache_conf:
  file.managed:
    [...]
    - watch_in:
      - service: apache

To illustrate a bad example, say the above Apache formula installed Apache and also created a default vhost. The mod_wsgi state would not be able to include the Apache formula to create that dependency tree without also installing the unneeded default vhost.

Formulas should be reusable. Avoid coupling unrelated actions together.

Parameterization

Parameterization is a key feature of Salt Formulas and also for Salt States. Parameterization allows a single Formula to be reused across many operating systems; to be reused across production, development, or staging environments; and to be reused by many people all with varying goals.

Writing states and specifying their ordering and dependencies is the part that takes the longest to write and to test. Filling those states out with data such as users or package names or file locations is the easy part. How many users, what those users are named, or where the files live are all implementation details that should be parameterized. This separation between a state and the data that populates a state creates a reusable formula.

In the example below the data that populates the state can come from anywhere -- it can be hard-coded at the top of the state, it can come from an external file, it can come from Pillar, it can come from an execution function call, or it can come from a database query. Production data will vary from development data, which will in turn vary from one company to another; however, the state itself stays the same.

{% set user_list = [
    {'name': 'larry', 'shell': 'bash'},
    {'name': 'curly', 'shell': 'bash'},
    {'name': 'moe', 'shell': 'zsh'},
] %}

{# or #}

{% set user_list = salt['pillar.get']('user_list') %}

{# or #}

{% import_json "default_users.json" as user_list %}

{# or #}

{% set user_list = salt['acme_utils.get_user_list']() %}

{% for user in user_list %}
{{ user.name }}:
  user.present:
    - name: {{ user.name }}
    - shell: {{ user.shell }}
{% endfor %}
Configuration

Formulas should strive to use the defaults of the underlying platform, followed by defaults from the upstream project, followed by sane defaults for the formula itself.

As an example, a formula to install Apache should not change the default Apache configuration file installed by the OS package. However, the Apache formula should include a state to change or override the default configuration file.
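
For example, a sketch of such an opt-in override (the pillar key and the configuration path are illustrative), building on the apache/conf.sls pattern shown earlier:

# apache/conf.sls (sketch)
include:
  - apache

{% set conf_source = salt['pillar.get']('apache:config:source') %}
{% if conf_source %}
apache_conf:
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: {{ conf_source }}
    - template: jinja
    - watch_in:
      - service: apache
{% endif %}

When no pillar value is supplied, the state renders to nothing and the packaged default configuration is left untouched.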

Pillar overrides

Pillar lookups must use the safe get() and must provide a default value. Create local variables using the Jinja set construct to increase readability and to avoid potentially hundreds or thousands of function calls across a large state tree.

{% from "apache/map.jinja" import apache with context %}
{% set settings = salt['pillar.get']('apache', {}) %}

mod_status:
  file.managed:
    - name: {{ apache.conf_dir }}/mod_status.conf
    - source: {{ settings.get('mod_status_conf', 'salt://apache/mod_status.conf') }}
    - template: {{ settings.get('template_engine', 'jinja') }}

Any default values used in the Formula must also be documented in the pillar.example file in the root of the repository. Comments should be used liberally to explain the intent of each configuration value. In addition, users should be able to copy and paste the contents of this file into their own Pillar to make any desired changes.
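
For the mod_status example above, a minimal pillar.example sketch might look like this (the keys mirror the pillar.get calls in the state; the values are illustrative):

# pillar.example
apache:
  # Alternate source for the mod_status configuration file.
  mod_status_conf: salt://site/files/mod_status.conf
  # Template engine used to render the configuration file.
  template_engine: jinja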

Scripting

Remember that both State files and Pillar files can easily call out to Salt execution modules and have access to all the system grains as well.

{% if '/storage' in salt['mount.active']() %}
/usr/local/etc/myfile.conf:
  file:
    - symlink
    - target: /storage/myfile.conf
{% endif %}

Jinja macros to encapsulate logic or conditionals are discouraged in favor of writing custom execution modules in Python.
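
As a sketch of that advice, the mount check from the previous example could live in a small custom execution module (the site_utils module and its function are hypothetical) rather than in a Jinja macro:

# _modules/site_utils.py (hypothetical custom execution module)
def storage_mounted():
    '''
    Return True if /storage is an active mount on this minion.
    '''
    return '/storage' in __salt__['mount.active']()

The template then reduces to {% if salt['site_utils.storage_mounted']() %}, and the logic itself can be unit tested in Python.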

Repository structure

A basic Formula repository should have the following layout:

foo-formula
|-- foo/
|   |-- map.jinja
|   |-- init.sls
|   `-- bar.sls
|-- CHANGELOG.rst
|-- LICENSE
|-- pillar.example
|-- README.rst
`-- VERSION

See also

template-formula

The template-formula repository has a pre-built layout that serves as the basic structure for a new formula repository. Just copy the files from there and edit them.

README.rst

The README should detail each available .sls file by explaining what it does, whether it has any dependencies on other formulas, whether it has a target platform, and any other installation or usage instructions or tips.

A sample skeleton for the README.rst file:

===
foo
===

Install and configure the FOO service.

.. note::

    See the full `Salt Formulas installation and usage instructions
    <http://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html>`_.

Available states
================

.. contents::
    :local:

``foo``
-------

Install the ``foo`` package and enable the service.

``foo.bar``
-----------

Install the ``bar`` package.
CHANGELOG.rst

The CHANGELOG.rst file should detail the individual versions, their release dates, and a set of bullet points for each version highlighting the overall changes in a given version of the formula.

A sample skeleton for the CHANGELOG.rst file:

foo formula
===========

0.0.2 (2013-01-01)
------------------

- Re-organized formula file layout
- Fixed filename used for upstart logger template
- Allow for pillar message to have default if none specified
Versioning

Formulas are versioned according to Semantic Versioning, http://semver.org/.

Note

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.

Formula versions are tracked using Git tags as well as the VERSION file in the formula repository. The VERSION file should contain the currently released version of the particular formula.
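
For example, cutting version 0.9.1 of a formula might look like the following (illustrative commands):

echo 0.9.1 > VERSION
git add VERSION CHANGELOG.rst
git commit -m 'Release 0.9.1'
git tag -a v0.9.1 -m 'Release 0.9.1'
git push origin master --tags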

Testing Formulas

A smoke-test for invalid Jinja, invalid YAML, or an invalid Salt state structure can be performed with the state.show_sls function:

salt '*' state.show_sls apache

Salt Formulas can then be tested by running each .sls file via state.sls and checking the output for the success or failure of each state in the Formula. This should be done for each supported platform.
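
For example, to smoke-test and then dry-run the conf state of the Apache formula (test=True reports what would change without applying anything; the targeting is illustrative):

salt '*' state.show_sls apache.conf
salt '*' state.sls apache.conf test=True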

SaltStack Packaging Guide

Since Salt provides a powerful toolkit for system management and automation, the package can be split into a number of sub-tools. While packaging Salt as a single package containing all components is perfectly acceptable, split packages should follow this convention.

Patching Salt For Distributions

The occasion may arise where the Salt source and default configurations need to be patched. It is preferable if Salt is only patched to include platform-specific additions or to fix release-time bugs. Configuration settings and operations should remain at their defaults, as changes here degrade the experience for users moving across distributions.

In the event where a packager finds a need to change the default configuration it is advised to add the files to the master.d or minion.d directories.
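
For instance, rather than patching conf/minion directly, a package might ship a drop-in such as the following (the path and value are illustrative):

# /etc/salt/minion.d/10-distro.conf
# Distribution-specific override; the stock conf/minion stays pristine.
log_file: /var/log/salt/minion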

Source Files

Release packages should always be built from the source tarball distributed via pypi. Release packages should NEVER use a git checkout as the source for distribution.

Single Package

Shipping Salt as a single package, where the minion, master, and all tools are together, is perfectly acceptable and is practiced by distributions such as FreeBSD.

Split Package

Salt should always be split in a standard way, with standard dependencies; this lowers cross-distribution confusion about which components are shipped with which packages. As of Salt 2014.1.0, these packages can be defined from the Salt source:

Salt Common

The salt-common or salt package should contain the files provided by the salt python package, or all files distributed from the salt/ directory in the source distribution packages. The documentation contained under the doc/ directory can be a part of this package, but splitting out a doc package is preferred. Since salt-call is the entry point to utilize the libs and is useful for all salt packages, it is included in the salt-common package.

Name
  • salt OR salt-common
Files
  • salt/*
  • man/salt.7
  • scripts/salt-call
  • tests/*
  • man/salt-call.1
Depends
  • Python 2.6-2.7
  • PyYAML
  • Jinja2
Salt Master

The salt-master package contains the applicable scripts, related man pages and init information for the given platform.

Name
  • salt-master
Files
  • scripts/salt-master
  • scripts/salt
  • scripts/salt-run
  • scripts/salt-key
  • scripts/salt-cp
  • pkg/<master init data>
  • man/salt.1
  • man/salt-master.1
  • man/salt-run.1
  • man/salt-key.1
  • man/salt-cp.1
  • conf/master
Depends
  • Salt Common
  • ZeroMQ >= 3.2
  • PyZMQ >= 2.10
  • PyCrypto
  • M2Crypto
  • Python MessagePack (Messagepack C lib, or msgpack-pure)
Salt Syndic

The Salt Syndic package can be rolled completely into the Salt Master package. Platforms which start services as part of the package deployment need to maintain a separate salt-syndic package (primarily Debian based platforms).

The Syndic package need not depend on anything more than the Salt Master package, since the master will bring in all needed dependencies; otherwise, fall back to the platform-specific packaging guidelines.

Name
  • salt-syndic
Files
  • scripts/salt-syndic
  • pkg/<syndic init data>
  • man/salt-syndic.1
Depends
  • Salt Common
  • Salt Master
  • ZeroMQ >= 3.2
  • PyZMQ >= 2.10
  • PyCrypto
  • M2Crypto
  • Python MessagePack (Messagepack C lib, or msgpack-pure)
Salt Minion

The Minion is a standalone package and should not be split beyond the salt-minion and salt-common packages.

Name
  • salt-minion
Files
  • scripts/salt-minion
  • pkg/<minion init data>
  • man/salt-minion.1
  • conf/minion
Depends
  • Salt Common
  • ZeroMQ >= 3.2
  • PyZMQ >= 2.10
  • PyCrypto
  • M2Crypto
  • Python MessagePack (Messagepack C lib, or msgpack-pure)
Salt SSH

Since Salt SSH does not require the same dependencies as the minion and master, it should be split out.

Name
  • salt-ssh
Files
  • scripts/salt-ssh
  • man/salt-ssh.1
Depends
  • Salt Common
  • Python MessagePack (Messagepack C lib, or msgpack-pure)
Salt Cloud

As of Salt 2014.1.0 Salt Cloud is included in the same repo as Salt. This can be split out into a separate package or it can be included in the salt-master package.

Name
  • salt-cloud
Files
  • scripts/salt-cloud
  • man/salt-cloud.1
  • conf/cloud*
Depends
  • Salt Common
  • apache libcloud >= 0.14.0
Salt Doc

Whether to ship a separate documentation package is largely a distribution decision. A completely split packaging scheme will break the documentation out into its own package, but some platform conventions do not prefer this. If the documentation is not split out, it should be included with the Salt Common package.

Name
  • salt-doc
Files
  • doc/*
Optional Depends
  • Salt Common
  • Python Sphinx
  • Make

Salt Release Process

The goal for Salt projects is to cut a new feature release every four to six weeks. This document outlines the process for these releases, and the subsequent bug fix releases which follow.

Feature Release Process

When a new release is ready to be cut, the person responsible for cutting the release will take the following steps (written using the 0.16 release as an example):

  1. All open issues on the release milestone should be moved to the next release milestone. (e.g. from the 0.16 milestone to the 0.17 milestone)
  2. Release notes should be created documenting the major new features and bugfixes in the release.
  3. Create an annotated tag with only the major and minor version numbers, preceded by the letter v. (e.g. v0.16) This tag will reside on the develop branch.
  4. Create a branch for the new release, using only the major and minor version numbers. (e.g. 0.16)
  5. On this new branch, create an annotated tag for the first revision release, which is generally a release candidate. It should be preceded by the letter v. (e.g. v0.16.0RC) See the tagging sketch after this list.
  6. The release should be packaged from this annotated tag and uploaded to PyPI as well as the GitHub releases page for this tag.
  7. The packagers should be notified on the salt-packagers mailing list so they can create packages for all the major operating systems. (note that release candidates should go in the testing repositories)
  8. After the packagers have been given a few days to compile the packages, the release is announced on the salt-users mailing list.
  9. Log into RTD and add the new release there. (This must be done manually.)
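
A sketch of the tagging and branching steps above (version numbers are illustrative):

git checkout develop
git tag -a v0.16 -m 'Salt 0.16'           # step 3: feature tag on develop
git checkout -b 0.16                      # step 4: create the release branch
git tag -a v0.16.0RC -m 'Salt 0.16.0RC'   # step 5: first revision tag
git push origin 0.16 --tags
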
Maintenance and Bugfix Releases

Once a release has been cut, regular cherry-picking sessions should begin to cherry-pick any bugfixes from the develop branch to the release branch (e.g. 0.16). Once major bugs have been fixed and cherry-picked, a bugfix release can be cut:

  1. On the release branch (i.e. 0.16), create an annotated tag for the revision release. It should be preceded by the letter v. (e.g. v0.16.2) Release candidates are unnecessary for bugfix releases.
  2. The release should be packaged from this annotated tag and uploaded to PyPI.
  3. The packagers should be notified on the salt-packagers mailing list so they can create packages for all the major operating systems.
  4. After the packagers have been given a few days to compile the packages, the release is announced on the salt-users mailing list.
Cherry-Picking Process for Bugfixes

Bugfixes should be made on the develop branch. If the bug also applies to the current release branch, then on the pull request against develop, the user should mention @basepi and ask for the pull request to be cherry-picked. If it is verified that the fix is a bugfix, then the Bugfix -- Cherry-Pick label will be applied to the pull request. When those commits are cherry-picked, the label will be switched to the Bugfix -- [Done] Cherry-Pick label. This allows easy recognition of which pull requests have been cherry-picked, and which are still pending to be cherry-picked. All cherry-picked commits will be present in the next release.

Features will not be cherry-picked, and will be present in the next feature release.

Salt Coding Style

Salt is developed with a certain coding style; while the style is predominantly PEP 8, it is not completely PEP 8. It is also noteworthy that a few development techniques are employed which should be adhered to. In the end, the code is made to be "Salty".

Most importantly though, we will accept code that violates the coding style and KINDLY ask the contributor to fix it, or go ahead and fix the code on behalf of the contributor. Coding style is NEVER grounds to reject code contributions, and is never grounds to talk down to another member of the community (There are no grounds to treat others without respect, especially people working to improve Salt)!!

Linting

Most Salt style conventions are codified in Salt's .pylintrc file. This file is found in the root of the Salt project and can be passed as an argument to the pylint program as follows:

pylint --rcfile=/path/to/salt/.pylintrc salt/dir/to/lint
Strings

Salt follows a few rules when formatting strings:

Single Quotes

In Salt, all strings use single quotes unless there is a good reason not to. This means that docstrings use single quotes, standard strings use single quotes etc.:

def foo():
    '''
    A function that does things
    '''
    name = 'A name'
    return name
Formatting Strings

All strings which require formatting should use the .format string method:

data = 'some text'
more = '{0} and then some'.format(data)

Make sure to use indices or identifiers in the format brackets, since empty brackets are not supported by Python 2.6.

Please do NOT use printf formatting.

Docstring Conventions

Docstrings should always open with a newline after the triple quotes; docutils takes care of the whitespace, and it makes the code cleaner and more vertical:

GOOD:

def bar():
    '''
    Here lies a docstring with a newline after the quotes and is the salty
    way to handle it! Vertical code is the way to go!
    '''
    return

BAD:

def baz():
    '''This is not ok!'''
    return

When adding a new function or state, where possible try to use a versionadded directive to denote when the function or state was added.

def new_func(msg=''):
    '''
    .. versionadded:: 0.16.0

    Prints what was passed to the function.

    msg : None
        The string to be printed.
    '''
    print msg

If you are uncertain what version should be used, either consult a core developer in IRC or bring this up when opening your pull request and a core developer will add the proper version once your pull request has been merged. Bugfixes will be available in a bugfix release (i.e. 0.17.1, the first bugfix release for 0.17.0), while new features are held for feature releases, and this will affect what version number should be used in the versionadded directive.

Similar to the above, when an existing function or state is modified (for example, when an argument is added), then under the explanation of that new argument a versionadded directive should be used to note the version in which the new argument was added. If an argument's function changes significantly, the versionchanged directive can be used to clarify this:

def new_func(msg='', signature=''):
    '''
    .. versionadded:: 0.16.0

    Prints what was passed to the function.

    msg : None
        The string to be printed. Will be prepended with 'Greetings! '.

        .. versionchanged:: 0.17.1

    signature : None
        An optional signature.

        .. versionadded:: 0.17.0
    '''
    print 'Greetings! {0}\n\n{1}'.format(msg, signature)
Dictionaries

Dictionaries should be initialized using {} instead of dict().
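
That is (a trivial illustration):

# GOOD
opts = {}

# DISCOURAGED
opts = dict()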

See here for an in-depth discussion of this topic.

Imports

Salt code prefers importing modules and not explicit functions. This is both a style and functional preference. The functional preference originates around the fact that the module import system used by pluggable modules will include callable objects (functions) that exist in the direct module namespace. This is not only messy, but may unintentionally expose third-party python libraries to the Salt interface and pose a security problem.

To say this more directly with an example, this is GOOD:

import os

def minion_path():
    path = os.path.join(self.opts['cachedir'], 'minions')
    return path

This on the other hand is DISCOURAGED:

from os.path import join

def minion_path():
    path = join(self.opts['cachedir'], 'minions')
    return path

The exception to this rule is the importing of exceptions; directly importing exceptions is generally preferred:

This is a good way to import exceptions:

from salt.exceptions import CommandExecutionError
Absolute Imports

Although absolute imports may seem like a good idea, please do not use them. Extra care would be necessary all over Salt's code in order for absolute imports to work as expected. It has been tried before: by renaming salt.modules.sysmod to salt.modules.sys, all other Salt modules which needed to import the standard library sys would also have had to import absolute_import, which should be avoided.

Vertical is Better

When writing Salt code, vertical code is generally preferred. This is not a hard rule but more of a guideline. As PEP 8 specifies, Salt code should not exceed 79 characters on a line, but it is preferred to separate code out into more newlines in some cases for better readability:

import os

os.chmod(
        os.path.join(self.opts['sock_dir'],
            'minion_event_pub.ipc'),
        448
        )

This preference for line breaks is also apparent when constructing a function with many arguments, something very common in state functions, for instance:

def managed(name,
        source=None,
        source_hash='',
        user=None,
        group=None,
        mode=None,
        template=None,
        makedirs=False,
        context=None,
        replace=True,
        defaults=None,
        env=None,
        backup='',
        **kwargs):

Note

Making function and class definitions vertical is only required if the arguments are longer than 80 characters. Otherwise, the formatting is optional and both are acceptable.

Line Length

For function definitions and function calls, Salt adheres to the PEP-8 specification of at most 79 characters per line.

For lines that are not function definitions or function calls, please adopt a soft limit of 120 characters per line. If breaking the line reduces the code's readability, don't break it. Still, try to avoid passing that 120-character limit and remember: vertical is better... unless it isn't.

Indenting

Some confusion exists in the python world about indenting things like function calls; the above examples use 8 spaces when indenting comma-delimited constructs.

The confusion arises because the pep8 program INCORRECTLY flags this as wrong, whereas PEP 8, the document, only cites using 4 spaces here as wrong, since that fails to differentiate the continuation from a new indent level.

Right:

def managed(name,
        source=None,
        source_hash='',
        user=None)

WRONG:

def managed(name,
    source=None,
    source_hash='',
    user=None)

Lining up the indent is also correct:

def managed(name,
            source=None,
            source_hash='',
            user=None)

This also applies to function calls and other hanging indents.

pep8 and Flake8 (and, by extension, the vim plugin Syntastic) will complain about the double indent for hanging indents. This is a known conflict between pep8 (the script) and the actual PEP 8 standard. It is recommended that this particular warning be ignored with the following lines in ~/.config/flake8:

[flake8]
ignore = E226,E241,E242,E126

Make sure your Flake8/pep8 are up to date. The first three errors are ignored by default and are present here to keep the behavior the same. This will also work for pep8 without the Flake8 wrapper -- just replace all instances of 'flake8' with 'pep8', including the filename.

Code Churn

Many pull requests have been submitted that only churn code in the name of PEP 8. Code churn is a leading source of bugs and is strongly discouraged. While style fixes are encouraged, they should be isolated to a single file per commit, and the changes should be legitimate. If there are any questions about whether a style change is legitimate, please reference this document and the official PEP 8 (http://legacy.python.org/dev/peps/pep-0008/) document before changing code. Many claims that a change is PEP 8 have been invalid; please double check before committing fixes.

Release notes

See the version numbers page for more information about the version numbering scheme.

Latest Stable Release

Salt 2015.5.1 Release Notes

Previous Releases

Salt 2015.5.0 Release Notes - Codename Lithium

The 2015.5.0 feature release of Salt is focused on hardening Salt and mostly on improving existing systems. A few major additions are present, primarily the new Beacon system. Most enhancements have been focused around improving existing features and interfaces.

As usual the release notes are not exhaustive and primarily include the most notable additions and improvements. Hundreds of bugs have been fixed and many modules have been substantially updated and added.

Warning

In order to fix potential shell injection vulnerabilities in salt modules, a change has been made to the various cmd module functions. These functions now default to python_shell=False, which means that the commands will not be sent to an actual shell.

The largest side effect of this change is that "shellisms", such as pipes, will not work by default. The modules shipped with Salt have been audited to fix any issues that might have arisen from this change. Additionally, the cmd state module is unaffected, and use of cmd.run in Jinja is also unaffected. cmd.run calls on the CLI will also allow shellisms.

However, custom execution modules which use shellisms in cmd calls will break, unless you pass python_shell=True to these calls.
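
For example, a minimal sketch of such a call inside a custom execution module:

# Pipes are a "shellism"; opt back in to shell interpretation explicitly:
output = __salt__['cmd.run']('ps aux | grep salt', python_shell=True)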

As a temporary workaround, you can set cmd_safe: False in your minion and master configs. This will revert the default, but is also less secure, as it will allow shell injection vulnerabilities to be written in custom code. We recommend you only set this setting for as long as it takes to resolve these issues in your custom code, then remove the override.

Note

Starting in this version of salt, pillar_opts defaults to False instead of True. This means that master opts will not be present in minion pillar, and as a result, config.get calls will not include master opts.

We recommend pillar is used for configuration options which need to make it to the minion.

Beacons

The beacon system allows the minion to hook into system processes and continually translate external events into the salt event bus. The primary example of this is the inotify beacon. This beacon uses inotify to watch configured files or directories on the minion for changes, creation, deletion etc.

This allows for the changes to be sent up to the master where the reactor can respond to changes.
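
A minimal minion-config sketch for the inotify beacon (the watched path is illustrative; see the beacon documentation for the available mask options):

beacons:
  inotify:
    /etc/important_file: {}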

Sudo Minion Settings

It is now possible to run the minion as a non-root user and for the minion to execute commands via sudo. Simply add sudo_user: root to the minion config, run the minion as a non-root user and grant that user sudo rights to execute salt-call.
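
Concretely, a sketch (the saltuser account name and the salt-call path are illustrative):

# /etc/salt/minion -- execute commands as root via sudo:
sudo_user: root

# /etc/sudoers.d/salt -- let the account running salt-minion invoke salt-call:
saltuser ALL=(ALL) NOPASSWD: /usr/bin/salt-call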

Lazy Loader

The Lazy Loader is a significant overhaul of Salt's module loader system. The Lazy Loader will lazily load modules on access instead of loading them all at start. In addition to a major performance improvement, this "sandboxes" modules so a bad/broken import of a single module will only affect jobs that require access to the broken module. (issue #20274)

Enhanced Active Directory Support

The eauth system for LDAP has been extended to support Microsoft Active Directory out of the box. This includes Active Directory and LDAP group support for eauth.

Salt LXC Enhancements

The LXC systems have been overhauled to be more consistent and to fix many bugs.

This overhaul makes using LXC with Salt much easier and substantially improves the underlying capabilities of Salt's LXC integration.

Salt SSH
  • Additional configuration options and command line flags have been added to configure the scan roster on the fly
  • Added support for state.single in salt-ssh
  • Added support for publish.publish, publish.full_data, and publish.runner in salt-ssh
  • Added support for mine.get in salt-ssh
New Windows Installer

The new Windows installer changes how Salt is installed on Windows. The old installer used bbfreeze to create an isolated python environment to execute in. This made adding modules and python libraries difficult. The new installer sets up a more flexible python environment making it easy to manage the python install and add python modules.

Instead of frozen packages, a full python implementation resides in the bin directory (C:\salt\bin). By executing pip or easy_install from within the Scripts directory (C:\salt\bin\Scripts) you can install any additional python modules you may need for your custom environment.
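
For example, from a command prompt on the minion (the module being installed is only an example):

cd C:\salt\bin\Scripts
pip install requests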

The .exe's that once resided at the root of the salt directory (C:\salt) have been replaced by .bat files and should function the same way as the .exe's in previous versions.

The new Windows Installer will not replace the minion config file and key if they already exist on the target system. Only the salt program files will be replaced. C:\salt\conf and C:\salt\var will remain unchanged.

Removed Requests Dependency

The hard dependency on the requests library has been removed. Requests is still required by a number of cloud modules but is no longer required for normal Salt operations.

This removal fixes issues that were introduced with requests and salt-ssh, as well as issues users experienced from the many different packaging methods used by requests package maintainers.

Python 3 Updates

While Salt does not YET run on Python 3, it has been updated to INSTALL on Python 3, taking us one step closer. What remains is getting the test suite to the point where it can run on Python 3 so that we can verify compatibility.

RAET Additions

The RAET support continues to improve. RAET now supports multi-master and many bugs and performance issues have been fixed. RAET is much closer to being a first class citizen.

Modified File Detection

A number of functions have been added to the RPM-based package managers to detect and diff files that are modified from the original package installs. This can be found in the new pkg.modified functions.
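
For example, on RPM-based minions (the package name is illustrative):

salt '*' pkg.modified
salt '*' pkg.modified httpd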

Reactor Update

Fix an infinite recursion problem for runner/wheel reactor jobs by passing a "user" (Reactor) to all jobs that the reactor starts. The reactor skips all events created by that username -- thereby only reacting to events not caused by itself. Because of this, runner and wheel executions initiated by the reactor will have user "Reactor" in the job cache.

Misc Fixes/Additions
  • SDB driver for etcd. (issue #22043)
  • Add only_upgrade argument to apt-based pkg.install to only install a package version if the package is already installed. (Great for security updates!)
  • Joyent now requires a keyname to be specified in the provider configuration. This change was necessitated upstream by the 7.0+ API.
  • Add args argument to cmd.script_retcode to match cmd.script in the cmd module. (issue #21122)
  • Fixed bug where TCP keepalive was not being sent on the defined interval on the return port (4506) from minion to master. (issue #21465)
  • LocalClient may now optionally raise SaltClientError exceptions. If using this class directly, checking for and handling this exception is recommended. (issue #21501)
  • The SAuth object is now a singleton, meaning authentication state is global (per master) on each minion. This reduces sign-ins of minions from 3 to 1 per startup.
  • The Nested outputter has been optimized; it is now much faster.
  • Extensive fileserver backend updates.
Deprecations
  • Removed parameter keyword argument from eselect.exec_action execution module.

  • Removed runas parameter from the following pip execution module functions: install, uninstall, freeze, list_, list_upgrades, upgrade_available, upgrade. Please migrate to user.

  • Removed runas parameter from the following pip state module functions: installed, removed, uptodate. Please migrate to user.

  • Removed quiet option from all functions in cmdmod execution module. Please use output_loglevel=quiet instead.

  • Removed parameter argument from eselect.set_ state. Please migrate to module_parameter or action_parameter.

  • The salt_events table schema has changed to include an additional field called master_id to distinguish between events flowing into a database from multiple masters. If event_return is enabled in the master config, the database schema must first be updated to add the master_id field. This alteration can be accomplished as follows:

    ALTER TABLE salt_events ADD master_id VARCHAR(255) NOT NULL;

Known Issues
  • In multi-master mode, a minion may become temporarily unresponsive if modules or pillars are refreshed at the same time that one or more masters are down. This can be worked around by setting 'auth_timeout' and 'auth_tries' down to shorter periods.

Salt 2015.5.1 Release Notes

Release: 2015-05-20

Version 2015.5.1 is a bugfix release for 2015.5.0.

Changes:

  • salt.runners.cloud.action() has changed the fun keyword argument to func. Please update any calls to this function in the cloud runner. (See the sketch below.)
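
That is, a before/after sketch (the instance name is illustrative):

# Before 2015.5.1:
salt-run cloud.action fun=show_instance instance=web1
# From 2015.5.1 on:
salt-run cloud.action func=show_instance instance=web1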

Extended Changelog Courtesy of Todd Stansell (https://github.com/tjstansell/salt-changelogs):

PR #23989: (rallytime) Backport #23980 to 2015.5

@ 2015-05-20T19:33:41Z

  • PR #23980: (iggy) template: jinja2 -> jinja | refs: #23989
  • 117ecb1 Merge pull request #23989 from rallytime/bp-23980
  • 8f8557c template: jinja2 -> jinja
PR #23988: (rallytime) Backport #23977 to 2015.5

@ 2015-05-20T19:13:36Z

  • PR #23977: (ionutbalutoiu) Fixed glance image_create | refs: #23988
  • d4f1ba0 Merge pull request #23988 from rallytime/bp-23977
  • 46fc7c6 Fixed glance image_create
PR #23986: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5

@ 2015-05-20T18:41:33Z

  • PR #23965: (hvnsweeting) handle all exceptions gitpython can raise
  • 9566e7d Merge pull request #23986 from basepi/merge-forward-2015.5
  • 0b78156 Merge remote-tracking branch 'upstream/2014.7' into merge-forward-2015.5
  • 314e4db Merge pull request #23965 from hvnsweeting/20147-fix-gitfs-gitpython-exception
  • 2576301 handle all exception gitpython can raise
PR #23985: (UtahDave) Add 2014.7.5-2 and 2015.5.0-2 Windows installer download links

@ 2015-05-20T18:32:44Z

  • 9d1130e Merge pull request #23985 from UtahDave/2015.5local
  • 10338d0 Add links to Windows 2015.5.0-2 install downloads
  • b84f975 updated Windows 2014.7.5-2 installer download link
PR #23983: (rallytime) Versionadded tags for https_user and https_pass args new in 2015.5.0

@ 2015-05-20T18:05:27Z

  • ca7729d Merge pull request #23983 from rallytime/versionadded_git_options
  • 14eae22 Versionadded tags for https_user and https_pass args new in 2015.5.0
PR #23970: (jayeshka) adding system unit test case

@ 2015-05-20T17:12:57Z

  • b06df57 Merge pull request #23970 from jayeshka/system-unit-test
  • 89eb008 adding system unit test case
PR #23967: (jayeshka) adding states/memcached unit test case

@ 2015-05-20T17:12:26Z

  • 38d5f75 Merge pull request #23967 from jayeshka/memcached-states-unit-test
  • 8ef9240 adding states/memcached unit test case
PR #23966: (jayeshka) adding states/modjk unit test case

@ 2015-05-20T17:11:48Z

  • 868e807 Merge pull request #23966 from jayeshka/modjk-states-unit-test
  • 422a964 adding states/modjk unit test case
PR #23942: (jacobhammons) Updates to sphinx saltstack2 doc theme

@ 2015-05-20T15:43:54Z

  • 6316490 Merge pull request #23942 from jacobhammons/2015.5
  • 31023c8 Updates to sphinx saltstack2 doc theme
PR #23874: (joejulian) Validate keyword arguments to be valid

@ 2015-05-20T04:53:40Z

  • ISSUE #23872: (joejulian) create_ca_signed_cert can error if dereferenced dict is used for args | refs: #23874
  • 587957b Merge pull request #23874 from joejulian/2015.5_tls_validate_kwargs
  • 30102ac Fix py3 and ordering inconsistency problems.
  • 493f7ad Validate keyword arguments to be valid
PR #23960: (rallytime) Backport #22114 to 2015.5

@ 2015-05-20T04:37:09Z

  • PR #22114: (dmyerscough) Fixing KeyError when there are no additional pages | refs: #23960
  • 00c5c22 Merge pull request #23960 from rallytime/bp-22114
  • f3e1d63 Catch KeyError
  • 306b1ea Fixing KeyError
  • 6b2cda2 Fix PEP8 complaint
  • 239e50f Fixing KeyError when there are no additional pages
PR #23961: (rallytime) Backport #23944 to 2015.5

@ 2015-05-20T04:35:41Z

  • PR #23944: (ryan-lane) Add missing loginclass argument to _changes call | refs: #23961
  • 4648b46 Merge pull request #23961 from rallytime/bp-23944
  • 970d19a Add missing loginclass argument to _changes call
PR #23948: (jfindlay) augeas.change state now returns changes as a dict

@ 2015-05-20T04:00:10Z

  • 0cb5cd3 Merge pull request #23948 from jfindlay/augeas_changes
  • f09b80a augeas.change state now returns changes as a dict
PR #23957: (rallytime) Backport #23951 to 2015.5

@ 2015-05-20T03:04:24Z

  • PR #23951: (ryan-lane) Do not check perms in file.copy if preserve | refs: #23957
  • 2d185f7 Merge pull request #23957 from rallytime/bp-23951
  • 996b431 Update file.py
  • 85d461f Do not check perms in file.copy if preserve
  • PR #23956: (rallytime) Backport #23906 to 2015.5 @ 2015-05-20T03:04:14Z

    • ISSUE #23839: (gladiatr72) wonky loader syndrome | refs: #23906
    • ISSUE #23373: (tnypex) reactor/orchestrate race condition on salt['pillar.get'] | refs: #23906
    • PR #23906: (gladiatr72) Added exception handler to trap the RuntimeError raised when | refs: #23956
    • ebff1ff Merge pull request #23956 from rallytime/bp-23906
    • 9d87fd3 add proper marker for format argument
    • 197688e Added exception handler to trap the RuntimeError raised when Depends.enforce_dependency() class method fires unsuccessfully. There appears to be no synchronization within the Depends decorator class wrt the class global dependency_dict which results in incomplete population of any loader instantiation occuring at the time of one of these exceptions.
  • PR #23955: (rallytime) Backport #19305 to 2015.5 @ 2015-05-20T03:03:55Z

    • ISSUE #19852: (TaiSHiNet) DigitalOcean APIv2 can't delete machines when there is only 1 page | refs: #23955
    • ISSUE #19304: (TaiSHiNet) DigitalOcean API v2 cannot delete VMs on 2nd page | refs: #19305
    • PR #19305: (TaiSHiNet) Fixes droplet listing past page 1 | refs: #23955
    • da3f919 Merge pull request #23955 from rallytime/bp-19305
    • bbf2429 Fixes droplet listing past page 1
  • PR #23940: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-19T22:37:58Z

    • ISSUE #23820: (UtahDave) 2014.7.5 schedule error | refs: #23881
    • ISSUE #22131: (quixoten) "unexpected keyword argument 'merge'" on 2014.7.2 (salt-ssh) | refs: #23887
    • PR #23939: (basepi) Add extended changelog to 2014.7.6 release notes
    • PR #23887: (basepi) [2014.7] Bring salt-ssh pillar.get in line with mainline pillar.get
    • PR #23881: (garethgreenaway) Fixes to schedule module in 2014.7
    • 02a78fc Merge pull request #23940 from basepi/merge-forward-2015.5
    • 36f0065 Merge remote-tracking branch 'upstream/2014.7' into merge-forward-2015.5
      • 9133912 Merge pull request #23939 from basepi/v2014.7.6release
        • 32b65dc Add extended changelog to 2014.7.6 release notes
      • 0031ca2 Merge pull request #23881 from garethgreenaway/23820_2014_7_schedule_list_issue
        • b207f2a Missing continue in the list function when deleting unused attributes.
      • 63bd21e Merge pull request #23887 from basepi/salt-ssh.pillar.get.22131
        • bc84502 Bring salt-ssh pillar.get in line with mainline pillar.get
  • PR #23932: (rallytime) Backport #23908 to 2015.5 @ 2015-05-19T21:41:28Z

    • PR #23908: (nleib) fix connection function to mongo | refs: #23932
    • ee4c01b Merge pull request #23932 from rallytime/bp-23908
    • 5d520c9 fix connection function to mongo
  • PR #23931: (rallytime) Backport #23880 to 2015.5 @ 2015-05-19T21:41:18Z

    • PR #23880: (bastiaanb) if setting client_config_dir to '~', expand path | refs: #23931
    • 70bd407 Merge pull request #23931 from rallytime/bp-23880
    • 8ce59a2 if setting client_config_dir to '~', expand path
  • PR #23898: (kiorky) Lxc profiles | refs: #23897 @ 2015-05-19T21:08:28Z

    • 5bdbf0a Merge pull request #23898 from makinacorpus/lxc_profiles
    • d9051a0 lxc: systemd support
    • e8d674f lxc: chroot fallback toggle
    • e2887a0 lxc: sync func name with develop
    • e96e345 lxc more fixes (lxc.set_dns)
    • fdb6424 lxc: Fix salt config (no more a kwarg)
    • 63e63fa repair salt cloud lxc api on develop
    • 80eabe2 lxc salt cloud doc
    • 73f229d lxc: unificate saltconfig/master/master_port
    • 0bc1f08 lxc: refactor a bit saltcloud/lxc interface
    • 7a80370 lxc: get networkprofile from saltcloud
    • 47acb2e lxc: default net profile has now correct options
    • 7eadf48 lxc: select the appropriate default bridge
  • PR #23922: (garethgreenaway) Fixes to debian_ip.py @ 2015-05-19T18:50:53Z

    • ISSUE #23900: (hashi825) salt ubuntu network building issue 2015.5.0 | refs: #23922
    • b818f72 Merge pull request #23922 from garethgreenaway/23900_2015_5_bonding_interface_fixes
    • 0bba536 Fixing issue reported when using bonded interfaces on Ubuntu. Attributes should be bond-, but the code was attempting to split just on bond_. Fix accounts for both, but the debian_ip.py module will write out bond attributes with bond-
  • PR #23925: (jpic) Fixed wrong path in LXC cloud documentation @ 2015-05-19T18:23:56Z

    • PR #23924: (jpic) Fixed wrong path in LXC cloud documentation | refs: #23925
    • b1c98a3 Merge pull request #23925 from jpic/fix/wrong_lxc_path
    • a4bcd75 Fixed wrong path in LXC cloud documentation
  • PR #23894: (whiteinge) Add __all__ attribute to Mock class for docs @ 2015-05-19T17:17:35Z

    • 7f6a716 Merge pull request #23894 from whiteinge/doc-mock__all__
    • 6eeca46 Add __all__ attribute to Mock class for docs
  • PR #23884: (jfindlay) Fix locale.set_locale on debian @ 2015-05-19T15:51:22Z

    • ISSUE #23767: (chrimi) Salt system.locale fails on non existent default locale | refs: #23884
    • 8108a9b Merge pull request #23884 from jfindlay/fix_locale
    • 91c2d51 use append_if_not_found in locale.set_locale
    • e632603 (re)generate /etc/default/locale
  • PR #23866: (jfindlay) backport #23834, change portage.dep.strip_empty to list comprehension @ 2015-05-19T15:50:43Z

    • PR #23834: (Arabus) Avoid deprecation warning from portage.dep.strip_empty() | refs: #23866
    • 6bae12f Merge pull request #23866 from jfindlay/flag_strip
    • aa032cc replace portage.dep.strip_empty() with list comprehension
    • 7576872 Proper replacement for portage.dep.strip_empty() with list comprehension, pep8fix
    • 2851a5c Switch portage.dep.strip_empty(...) to filter(None,...) to avoid deprecation warning and do essentially the same
  • PR #23917: (corywright) Split debian bonding options on dash instead of underscore @ 2015-05-19T15:44:35Z

    • ISSUE #23904: (mbrgm) Network config bonding section cannot be parsed when attribute names use dashes | refs: #23917
    • a67a008 Merge pull request #23917 from corywright/issue23904
    • c06f8cf Split debian bonding options on dash instead of underscore
  • PR #23909: (jayeshka) 'str' object has no attribute 'capitalized' @ 2015-05-19T15:41:53Z

    • e8fcd09 Merge pull request #23909 from jayeshka/file-exe-module
    • e422d9d 'str' object has no attribute 'capitalized'
  • PR #23903: (garethgreenaway) Adding docs for missing schedule state module parameters. @ 2015-05-19T06:29:34Z

    • c73bf38 Merge pull request #23903 from garethgreenaway/missing_docs_schedule_state
    • acd8ab9 Adding docs for missing schedule state module parameters.
  • PR #23806: (kiorky) Lxc seeding | refs: #23807 @ 2015-05-18T23:18:33Z

    • ff3cc7d Merge pull request #23806 from makinacorpus/lxc_seeding
    • 61b7aad runners/lxc: optim
  • PR #23892: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-18T23:07:57Z

    • PR #23891: (basepi) Update the release notes index page
    • PR #23888: (basepi) Update the 2014.7.6 release notes with CVE details
    • PR #23871: (rallytime) Backport #23848 to 2014.7
    • PR #23848: (dumol) Updated installation docs for SLES 12. | refs: #23871
    • 5f1a93d Merge pull request #23892 from basepi/merge-forward-2015.5
    • c2eed77 Merge remote-tracking branch 'upstream/2014.7' into merge-forward-2015.5
    • 17c5810 Merge pull request #23891 from basepi/releasenotes
      • dec153b Update the release notes index page
    • a93e58f Merge pull request #23888 from basepi/v2014.7.6release
      • 49921b6 Update the 2014.7.6 release notes with CVE details
    • 5073028 Merge pull request #23871 from rallytime/bp-23848
      • 379c09c Updated for SLES 12.
  • PR #23875: (rallytime) Backport #23838 to 2015.5 @ 2015-05-18T22:28:55Z

    • PR #23838: (gtmanfred) add refresh_beacons and sync_beacons | refs: #23875
    • 66d1335 Merge pull request #23875 from rallytime/bp-23838
    • 3174227 Add versionadded directives to new beacon saltutil functions
    • 4a94b2c add refresh_beacons and sync_beacons
  • PR #23876: (rallytime) Switch digital ocean tests to v2 driver @ 2015-05-18T22:17:13Z

    • d294cf2 Merge pull request #23876 from rallytime/switch_digital_ocean_tests_v2
    • dce9b54 Remove extra line
    • 4acf58e Switch digital ocean tests to v2 driver
  • PR #23882: (garethgreenaway) Fixes to scheduler in 2015.5 @ 2015-05-18T22:09:24Z

    • ISSUE #23792: (neogenix) Salt Scheduler Incorrect Response (True, should be False) | refs: #23882
    • b97a48c Merge pull request #23882 from garethgreenaway/23792_2015_5_wrong_return_code
    • 37dbde6 Job already exists in schedule, should return False.
  • PR #23868: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-18T18:35:54Z

    • ISSUE #20198: (jcftang) virt.get_graphics, virt.get_nics are broken, in turn breaking other things | refs: #23809
    • PR #23823: (gtmanfred) add link local for ipv6
    • PR #23810: (rallytime) Backport #23757 to 2014.7
    • PR #23809: (rallytime) Fix virtualport section of virt.get_nics loop
    • PR #23802: (gtmanfred) if it is ipv6 ip_to_int will fail
    • PR #23757: (clan) use abspath, do not eliminating symlinks | refs: #23810
    • PR #23573: (techhat) Scan all available networks for public and private IPs | refs: #23802
    • PR #21487: (rallytime) Backport #21469 to 2014.7 | refs: #23809
    • PR #21469: (vdesjardins) fixes #20198: virt.get_graphics and virt.get_nics calls in module virt | refs: #21487
    • 61c922e Merge pull request #23868 from basepi/merge-forward-2015.5
    • c9ed233 Merge remote-tracking branch 'upstream/2014.7' into merge-forward-2015.5
    • aee00c8 Merge pull request #23810 from rallytime/bp-23757
      • fb32c32 use abspath, do not eliminating symlinks
    • 6b3352b Merge pull request #23809 from rallytime/virt_get_nics_fix
      • 0616fb7 Fix virtualport section of virt.get_nics loop
    • 188f03f Merge pull request #23823 from gtmanfred/2014.7
      • 5ef006d add link local for ipv6
    • f3ca682 Merge pull request #23802 from gtmanfred/2014.7
      • 2da98b5 if it is ipv6 ip_to_int will fail
  • PR #23863: (rahulhan) Adding states/timezone.py unit test @ 2015-05-18T17:02:19Z

    • 433f873 Merge pull request #23863 from rahulhan/states_timezone_unit_test
    • 72fcabc Adding states/timezone.py unit test
  • PR #23862: (rahulhan) Adding states/tomcat.py unit tests @ 2015-05-18T17:02:10Z

    • 37b3ee5 Merge pull request #23862 from rahulhan/states_tomcat_unit_test
    • 65d7752 Adding states/tomcat.py unit tests
  • PR #23860: (rahulhan) Adding states/test.py unit tests @ 2015-05-18T17:01:49Z

    • dde7207 Merge pull request #23860 from rahulhan/states_test_unit_test
    • 1f4cf86 Adding states/test.py unit tests
  • PR #23859: (rahulhan) Adding states/sysrc.py unit tests @ 2015-05-18T17:01:46Z

    • 3c9b813 Merge pull request #23859 from rahulhan/states_sysrc_unit_test
    • 6a903b0 Adding states/sysrc.py unit tests
  • PR #23812: (rallytime) Backport #23790 to 2015.5 @ 2015-05-18T15:30:34Z

    • PR #23790: (aboe76) updated suse spec file to version 2015.5.0 | refs: #23812
    • 4cf30a7 Merge pull request #23812 from rallytime/bp-23790
    • 3f65631 updated suse spec file to version 2015.5.0
  • PR #23811: (rallytime) Backport #23786 to 2015.5 @ 2015-05-18T15:30:27Z

    • PR #23786: (kaithar) Log the error generated that causes returns.mysql.returner to except. | refs: #23811
    • c6f939a Merge pull request #23811 from rallytime/bp-23786
    • 346f30b Log the error generated that causes returns.mysql.returner to except.
  • PR #23850: (jayeshka) adding sysbench unit test case @ 2015-05-18T15:28:04Z

    • ce60582 Merge pull request #23850 from jayeshka/sysbench-unit-test
    • 280abde adding sysbench unit test case
  • PR #23843: (The-Loeki) Fix erroneous virtual:physical core grain detection @ 2015-05-18T15:24:22Z

    • 060902f Merge pull request #23843 from The-Loeki/patch-1
    • 9e2cf60 Fix erroneous virtual:physical core grain detection
  • PR #23816: (Snergster) Doc for #23685 Added prereq, caution, and additional mask information @ 2015-05-18T15:18:03Z

    • ISSUE #23815: (Snergster) [beacons] inotify errors on subdir creation | refs: #23816
    • 3257a9b Merge pull request #23816 from Snergster/23685-doc-fix
    • 0fca49d Added prereq, caution, and additional mask information
  • PR #23832: (ahus1) make saltify provider use standard boostrap procedure @ 2015-05-18T02:18:29Z

    • PR #23829: (ahus1) make saltify provider use standard boostrap procedure | refs: #23832
    • 3df3b85 Merge pull request #23832 from ahus1/ahus1_saltify_bootstrap_2015.5
    • f5b1734 fixing problem in unit test
    • cba47f6 make saltify to use standard boostrap procedure, therefore providing all options like master_sign_pub_file
  • PR #23791: (optix2000) Psutil compat @ 2015-05-16T04:05:54Z

    • 8ec4fb2 Merge pull request #23791 from optix2000/psutil_compat
    • 5470cf5 Fix pylint errors and sloppy inline comments
    • 64634b6 Update psutil.pid_list to use psutil.pids
    • 5dd6d69 Fix imports that aren't in __all__
    • 8a1da33 Fix test cases by mocking psutil_compat
    • 558798d Fix net_io_counters deprecation issue
    • 8140f92 Override unecessary pylint errors
    • 7d02ad4 Fix some of the mock names for the new API
    • 9b3023e Fix overloaded getters/setters. Fix line lengths
    • 180eb87 Fix whitespace
    • f8edf72 Use new psutil API in ps module
    • e48982f Fix version checking in psutil_compat
    • 93ee411 Create compatability psutil. psutil 3.0 drops 1.0 API, but we still support old psutil versions.
  • PR #23782: (terminalmage) Replace "command -v" with "which" and get rid of spurious log messages @ 2015-05-16T04:03:10Z

    • 405517b Merge pull request #23782 from terminalmage/issue23772
    • 0f6f239 More ignore_retcode to suppress spurious log msgs
    • b4c48e6 Ignore return code in lxc.attachable
    • 08658c0 Replace "command -v" with "which"
  • PR #23783: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-15T21:38:51Z

    • ISSUE #22959: (highlyunavailable) Windows Salt hangs if file.directory is trying to write to a drive that doesn't exist
    • ISSUE #22332: (rallytime) [salt-ssh] Add a check for host in /etc/salt/roster | refs: #23748
    • ISSUE #16424: (stanvit) salt-run cloud.create fails with saltify
    • PR #23748: (basepi) [2014.7] Log salt-ssh roster render errors more assertively and verbosely
    • PR #23731: (twangboy) Fixes #22959: Trying to add a directory to an unmapped drive in windows
    • PR #23730: (rallytime) Backport #23729 to 2014.7
    • PR #23729: (rallytime) Partially merge #23437 (grains fix) | refs: #23730
    • PR #23688: (twangboy) Added inet_pton to utils/validate/net.py for ip.set_static_ip in windows
    • PR #23488: (cellscape) LXC cloud fixes
    • PR #23437: (cedwards) Grains item patch | refs: #23729
    • cb2eb40 Merge pull request #23783 from basepi/merge-forward-2015.5
    • 9df51ca __opts__.get
    • 51d23ed Merge remote-tracking branch 'upstream/2014.7' into merge-forward-2015.5
      • d9af0c3 Merge pull request #23488 from cellscape/lxc-cloud-fixes
        • 64250a6 Remove profile from opts after creating LXC container
        • c4047d2 Set destroy=True in opts when destroying cloud instance
        • 9e1311a Store instance names in opts when performing cloud action
        • 934bc57 Correctly pass custom env to lxc-attach
        • 7fb85f7 Preserve test=True option in cloud states
        • 9771b5a Fix detection of absent LXC container in cloud state
        • fb24f0c Report failure when failed to create/clone LXC container
        • 2d9aa2b Avoid shadowing variables in lxc module
        • 792e102 Allow to override profile options in lxc.cloud_init_interface
        • 42bd64b Return changes on successful lxc.create from salt-cloud
        • 4409eab Return correct result when creating cloud LXC container
        • 377015c Issue #16424: List all providers when creating salt-cloud instance without profile
      • 808bbe1 Merge pull request #23748 from basepi/salt-ssh.roster.host.check
        • bc53e04 Log entire exception for render errors in roster
        • 753de6a Log render errors in roster to error level
        • e01a7a9 Always let the real YAML error through
      • 72cf360 Merge pull request #23731 from twangboy/fix_22959
        • 88e5495 Fixes #22959: Trying to add a directory to an unmapped drive in windows
      • 2610195 Merge pull request #23730 from rallytime/bp-23729
        • 1877cae adding support for nested grains to grains.item
      • 3e9df88 Merge pull request #23688 from twangboy/fix_23415
        • 6a91169 Fixed unused-import pylint error
        • 5e25b3f fixed pylint errors
        • 1a96766 Added inet_pton to utils/validate/net.py for ip.set_static_ip in windows
  • PR #23781: (jfindlay) fix unit test mock errors on arch @ 2015-05-15T19:40:07Z

    • 982f873 Merge pull request #23781 from jfindlay/fix_locale_tests
    • 14c711e fix unit test mock errors on arch
  • PR #23740: (jfindlay) Binary write @ 2015-05-15T18:10:44Z

    • ISSUE #23566: (rks2286) Salt-cp corrupting the file after transfer to minion | refs: #23740
    • 916b1c4 Merge pull request #23740 from jfindlay/binary_write
    • 626930a update incorrect comment wording
    • a978f5c always use binary file write mode on windows
  • PR #23736: (jfindlay) always load pip execution module @ 2015-05-15T18:10:16Z

    • ISSUE #23682: (chrish42) Pip module requires system pip, even when not used (with env_bin) | refs: #23736
    • 348645e Merge pull request #23736 from jfindlay/fix_pip
    • b8867a8 update pip tests
    • 040bbc4 only check pip version in one place
    • 6c453a5 check for executable status of bin_env
    • 3337257 always load the pip module as pip could be anywhere
  • PR #23770: (cellscape) Fix cloud LXC container destruction @ 2015-05-15T17:38:59Z

    • 10cedfb Merge pull request #23770 from cellscape/fix-cloud-lxc-destruction
    • 4f6021c Fix cloud LXC container destruction
  • PR #23759: (lisa2lisa) fixed the problem for not beable to revoke ., for more detail https… @ 2015-05-15T17:38:38Z

  • PR #23769: (cellscape) Fix file_roots CA lookup in salt.utils.http.get_ca_bundle @ 2015-05-15T16:21:49Z

    • 10615ff Merge pull request #23769 from cellscape/utils-http-ca-file-roots
    • 8e90f32 Fix file_roots CA lookup in salt.utils.http.get_ca_bundle
  • PR #23765: (jayeshka) adding states/makeconf unit test case @ 2015-05-15T14:29:43Z

    • fd8a1b7 Merge pull request #23765 from jayeshka/makeconf_states-unit-test
    • 26e31af adding states/makeconf unit test case
  • PR #23760: (ticosax) [doc] document refresh argument @ 2015-05-15T14:23:47Z

    • ee13b08 Merge pull request #23760 from ticosax/2015.5
    • e3ca859 document refresh argument
  • PR #23766: (jayeshka) adding svn unit test case @ 2015-05-15T14:23:18Z

    • a017f72 Merge pull request #23766 from jayeshka/svn-unit-test
    • 19939cf adding svn unit test case
  • PR #23751: (rallytime) Backport #23737 to 2015.5 @ 2015-05-15T03:58:37Z

    • ISSUE #23734: (bradthurber) 2015.5.0 modules/archive.py ZipFile instance has no attribute '__exit__' - only python 2.6? | refs: #23737
    • PR #23737: (bradthurber) fix for 2015.5.0 modules/archive.py ZipFile instance has no attribute… | refs: #23751
    • 0ed9d45 Merge pull request #23751 from rallytime/bp-23737
    • 8d1eb32 fix for 2015.5.0 modules/archive.py ZipFile instance has no attribute '__exit__' - only python 2.6? #23734
  • PR #23710: (kiorky) Get more useful output from stateful commands @ 2015-05-14T21:58:10Z

    • ISSUE #23709: (kiorky) cmdmod: enhancement is really needed for stateful commands | refs: #23710
    • d73984e Merge pull request #23710 from makinacorpus/i23709
    • c706909 Get more useful output from stateful commands
  • PR #23724: (rallytime) Backport #23609 to 2015.5 @ 2015-05-14T19:34:22Z

    • PR #23609: (kaidokert) file_map: chown created directories if not root #23608 | refs: #23724
    • cdf421b Merge pull request #23724 from rallytime/bp-23609
    • fe3a762 file_map: chmod created directories if not root
  • PR #23723: (rallytime) Backport #23568 to 2015.5 @ 2015-05-14T19:34:11Z

    • PR #23568: (techhat) Allow Salt Cloud to use either SCP or SFTP, as configured | refs: #23723
    • 94f9099 Merge pull request #23723 from rallytime/bp-23568
    • bbec34a Allow Salt Cloud to use either SCP or SFTP, as configured
  • PR #23725: (rallytime) Backport #23691 to 2015.5 @ 2015-05-14T19:32:30Z

    • PR #23691: (dennisjac) add initial configuration documentation for varstack pillar | refs: #23725
    • 137e5ee Merge pull request #23725 from rallytime/bp-23691
    • 28a846e add initial configuration documentation for varstack pillar
  • PR #23722: (rallytime) Backport #23472 to 2015.5 @ 2015-05-14T19:31:52Z

    • PR #23472: (techhat) Allow neutron network list to be used as pillar data | refs: #23722
    • 0c00995 Merge pull request #23722 from rallytime/bp-23472
    • c3d0f39 Change versionadded tag for backport
    • 023e88f Allow neutron network list to be used as pillar data
  • PR #23727: (jfindlay) fix npm execution module stacktrace @ 2015-05-14T18:14:12Z

    • ISSUE #23657: (arthurlogilab) [salt-cloud lxc] NameError: global name '__salt__' is not defined | refs: #23727 #23898 #23897
    • cbf4ca8 Merge pull request #23727 from jfindlay/npm_salt
    • 05392f2 fix npm execution module stacktrace
  • PR #23718: (rahulhan) Adding states/user.py unit tests @ 2015-05-14T17:15:38Z

    • ef536d5 Merge pull request #23718 from rahulhan/states_user_unit_tests
    • aad27db Adding states/user.py unit tests
  • PR #23720: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-14T17:13:02Z

    • ISSUE #23604: (Azidburn) service.dead on systemd Minion create an Error Message | refs: #23607
    • ISSUE #23548: (kkaig) grains.list_present produces incorrect (?) output | refs: #23674
    • ISSUE #23403: (iamfil) salt.runners.cloud.action fun parameter is replaced | refs: #23680
    • PR #23680: (cachedout) Rename kwarg in cloud runner
    • PR #23674: (cachedout) Handle lists correctly in grains.list_prsesent
    • PR #23672: (twangboy) Fix user present
    • PR #23670: (rallytime) Backport #23607 to 2014.7
    • PR #23607: (Azidburn) Fix for #23604. No error reporting. Exitcode !=0 are ok | refs: #23670
    • a529d74 Merge pull request #23720 from basepi/merge-forward-2015.5
    • 06a3ebd Merge remote-tracking branch 'upstream/2014.7' into merge-forward-2015.5
    • 1b86460 Merge pull request #23680 from cachedout/issue_23403
      • d5986c2 Rename kwarg in cloud runner
    • cd64af0 Merge pull request #23674 from cachedout/issue_23548
      • da8a2f5 Handle lists correctly in grains.list_prsesent
    • d322a19 Merge pull request #23672 from twangboy/fix_user_present
    • 43f7025 Merge pull request #23670 from rallytime/bp-23607
      • ed30dc4 Fix for #23604. No error reporting. Exitcode !=0 are ok
  • PR #23704: (jayeshka) adding states/lvs_server unit test case @ 2015-05-14T14:22:10Z

    • 13facbf Merge pull request #23704 from jayeshka/lvs_server_states-unit-test
    • da323da adding states/lvs_server unit test case
  • PR #23703: (jayeshka) adding states/lvs_service unit test case @ 2015-05-14T14:21:23Z

    • f95ca31 Merge pull request #23703 from jayeshka/lvs_service_states-unit-test
    • 66717c8 adding states/lvs_service unit test case
  • PR #23702: (jayeshka) Remove superfluous return statement. @ 2015-05-14T14:20:42Z

    • 07e987e Merge pull request #23702 from jayeshka/fix_lvs_service
    • ecff218 fix lvs_service
  • PR #23686: (jfindlay) remove superflous return statement @ 2015-05-14T14:20:18Z

    • 39973d4 Merge pull request #23686 from jfindlay/fix_lvs_server
    • 5aaeb73 remove superflous return statement
  • PR #23690: (rallytime) Backport #23424 to 2015.5 @ 2015-05-13T23:04:36Z

    • PR #23424: (jtand) Added python_shell=True for refresh_db in pacman.py | refs: #23690
    • be7c7ef Merge pull request #23690 from rallytime/bp-23424
    • 94574b7 Added python_shell=True for refresh_db in pacman.py
  • PR #23681: (cachedout) Start on 2015.5.1 release notes @ 2015-05-13T19:44:22Z

    • 1a0db43 Merge pull request #23681 from cachedout/2015_5_1_release_notes
    • bdbbfa6 Start on 2015.5.1 release notes
  • PR #23679: (jfindlay) Merge #23616 @ 2015-05-13T19:03:53Z

    • PR #23616: (Snergster) virtual returning none warning fixed in dev but missed in 2015.5 | refs: #23679
    • b54075a Merge pull request #23679 from jfindlay/merge_23616
    • 6e15e19 appease pylint's blank line strictures
    • 8750680 virtual returning none warning fixed in dev but missed in 2015.5
  • PR #23675: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-13T18:35:54Z

    • ISSUE #23611: (hubez) master_type set to 'failover' but 'master' is not of type list but of type <type 'str'> | refs: #23637
    • ISSUE #23479: (danielmorlock) Typo in pkg.removed for Gentoo? | refs: #23558
    • ISSUE #23452: (michaelforge) minion crashed with empty grain | refs: #23639
    • ISSUE #23411: (dr4Ke) grains.append should work at any level of a grain | refs: #23440
    • ISSUE #23355: (dr4Ke) salt-ssh: 'sources: salt://' files from 'pkg' state are not included in salt_state.tgz | refs: #23530
    • ISSUE #23110: (martinhoefling) Copying files from gitfs in file.recurse state fails
    • ISSUE #23004: (b18) 2014.7.5 - Windows - pkg.list_pkgs - "nxlog" never shows up in output. | refs: #23433
    • ISSUE #22908: (karanjad) Add failhard option to salt orchestration | refs: #23389
    • ISSUE #22141: (Deshke) grains.get_or_set_hash render error if hash begins with "%" | refs: #23640
    • PR #23661: (rallytime) Merge #23640 with whitespace fix
    • PR #23640: (cachedout) Add warning to get_or_set_hash about reserved chars | refs: #23661
    • PR #23639: (cachedout) Handle exceptions raised by __virtual__
    • PR #23637: (cachedout) Convert str master to list
    • PR #23606: (twangboy) Fixed checkbox for starting service and actually starting it
    • PR #23595: (rallytime) Backport #23549 to 2014.7
    • PR #23594: (rallytime) Backport #23496 to 2014.7
    • PR #23593: (rallytime) Backport #23442 to 2014.7
    • PR #23592: (rallytime) Backport #23389 to 2014.7
    • PR #23573: (techhat) Scan all available networks for public and private IPs | refs: #23802
    • PR #23558: (jfindlay) reorder emerge command line
    • PR #23554: (jleroy) Debian: Hostname always updated
    • PR #23551: (dr4Ke) grains.append unit tests, related to #23474
    • PR #23549: (vr-jack) Update __init__.py | refs: #23595
    • PR #23537: (t0rrant) Update changelog
    • PR #23530: (dr4Ke) salt-ssh state: fix including all salt:// references
    • PR #23496: (martinhoefling) Fix for issue #23110 | refs: #23594
    • PR #23474: (dr4Ke) Fix grains.append in nested dictionnary grains #23411
    • PR #23442: (clan) add directory itself to keep list | refs: #23593
    • PR #23440: (dr4Ke) fix grains.append in nested dictionnary grains #23411 | refs: #23474
    • PR #23433: (twangboy) Obtain all software from the registry
    • PR #23389: (cachedout) Correct fail_hard typo | refs: #23592
    • e480f13 Merge pull request #23675 from basepi/merge-forward-2015.5
    • bd63548 Merge remote-tracking branch 'upstream/2014.7' into merge-forward-2015.5
      • 0f006ac Merge pull request #23661 from rallytime/merge-23640
        • 4427f42 Whitespace fix
        • dd91154 Add warning to get_or_set_hash about reserved chars
      • 84e2ef8 Merge pull request #23639 from cachedout/issue_23452
        • d418b49 Syntax error!
        • 45b4015 Handle exceptions raised by __virtual__
      • bd9b94b Merge pull request #23637 from cachedout/issue_23611
        • 56cb1f5 Fix typo
        • f6fcf19 Convert str master to list
      • f20c0e4 Merge pull request #23595 from rallytime/bp-23549
        • 6efcac0 Update __init__.py
      • 1acaf86 Merge pull request #23594 from rallytime/bp-23496
        • d5ae1d2 Fix for issue #23110 This resolves issues when the freshly created directory is removed by fileserver.update.
      • 2c221c7 Merge pull request #23593 from rallytime/bp-23442
        • 39869a1 check w/ low['name'] only
        • 304cc49 another fix for file defined w/ id, but require name
        • 8814d41 add directory itself to keep list
      • fadd1ef Merge pull request #23606 from twangboy/fix_installer
        • 038331e Fixed checkbox for starting service and actually starting it
    • acdd3fc Fix lint
    • 680e88f Merge remote-tracking branch 'upstream/2014.7' into merge-forward-2015.5
      • 10b3f0f Merge pull request #23592 from rallytime/bp-23389
        • 734cc43 Correct fail_hard typo
      • cd34b9b Merge pull request #23573 from techhat/novaquery
        • f92db5e Linting
        • 26e00d3 Scan all available networks for public and private IPs
      • 2a72cd7 Merge pull request #23558 from jfindlay/fix_ebuild
        • 45404fb reorder emerge command line
      • a664a3c Merge pull request #23530 from dr4Ke/fix_salt-ssh_to_include_pkg_sources
        • 5df6a80 fix pylint warning
        • d0549e5 salt-ssh state: fix including all salt:// references
      • 55c3869 Merge pull request #23433 from twangboy/list_pkgs_fix
        • 8ab5b1b Fix pylint error
        • 2d11d65 Obtain all software from the registry
      • 755bed0 Merge pull request #23554 from jleroy/debian-hostname-fix
        • 5ff749e Debian: Hostname always updated
      • 6ec87ce Merge pull request #23551 from dr4Ke/grains.append_unit_tests
        • ebff9df fix pylint errors
        • c495404 unit tests for grains.append module function
        • 0c9a323 use MagickMock
        • c838a22 unit tests for grains.append module function
      • e96c5c5 Merge pull request #23474 from dr4Ke/fix_grains.append_nested
        • a01a5bb grains.get, parameter delimititer, versionadded: 2014.7.6
        • b39f504 remove debugging output
        • b6e15e2 fix grains.append in nested dictionnary grains #23411
      • ab7e1ae Merge pull request #23537 from t0rrant/patch-1
        • 8e03cc9 Update changelog
  • PR #23669: (rallytime) Backport #23586 to 2015.5 @ 2015-05-13T18:27:11Z

    • PR #23586: (Lothiraldan) Fix salt.state.file._unify_sources_and_hashes when sources is used without sources_hashes | refs: #23669
    • 0dad6be Merge pull request #23669 from rallytime/bp-23586
    • ef4c6ad Remove another unused import
    • 73cfda7 Remove unused import
    • 52b68d6 Use the zip_longest from six module for python 3 compatiblity
    • 18d5ff9 Fix salt.state.file._unify_sources_and_hashes when sources is used without sources_hashes
  • PR #23662: (rallytime) Merge #23642 with pylint fix @ 2015-05-13T15:46:51Z

    • PR #23642: (cachedout) Let saltmod handle lower-level exceptions gracefully | refs: #23662
    • fabef75 Merge pull request #23662 from rallytime/merge-23642
    • aa7bbd8 Remove unused import
    • 9e66d4c Let saltmod handle lower-level exceptions gracefully
  • PR #23622: (jfindlay) merge #23508 @ 2015-05-13T15:36:49Z

    • PR #23508: (cro) Port mysql returner to postgres using jsonb datatype | refs: #23622
    • 072b927 Merge pull request #23622 from jfindlay/pgjsonb
    • 454322c appease pylint's proscription on blank line excess
    • 57c6171 Get time with timezone correct also in job return.
    • e109d0f Get time with timezone correct.
    • 21e06b9 Fix SQL, remove unneeded imports.
    • 653f360 Stop making changes in 2 places.
    • d6daaa0 Typo.
    • 7d748bf SSL is handled differently by Pg, so don't set it here.
    • cc7c377 Fill alter_time field in salt_events with current time with timezone.
    • 43defe9 Port mysql module to Postgres using jsonb datatypes
  • PR #23651: (jayeshka) adding solr unit test case @ 2015-05-13T15:26:15Z

    • c1bdd4d Merge pull request #23651 from jayeshka/solr-unit-test
    • 6e05148 adding solr unit test case
  • PR #23649: (jayeshka) adding states/libvirt unit test case @ 2015-05-13T15:24:48Z

    • ee43411 Merge pull request #23649 from jayeshka/libvirt_states-unit-test
    • 0fb923a adding states/libvirt unit test case
  • PR #23648: (jayeshka) adding states/linux_acl unit test case @ 2015-05-13T15:24:11Z

    • c7fc466 Merge pull request #23648 from jayeshka/linux_acl_states-unit-test
    • 3f0ab29 removed error.
    • 11081c1 adding states/linux_acl unit test case
  • PR #23650: (jayeshka) adding states/kmod unit test case @ 2015-05-13T15:09:18Z

    • 4cba7ba Merge pull request #23650 from jayeshka/kmod_states-unit-test
    • 1987015 adding states/kmod unit test case
  • PR #23633: (jayeshka) made changes to test_interfaces function. @ 2015-05-13T06:51:07Z

    • bc8faf1 Merge pull request #23633 from jayeshka/win_network-2015.5-unit-test
    • 0936e1d made changes to test_interfaces function.
  • PR #23619: (jfindlay) fix kmod.present processing of module loading @ 2015-05-13T01:16:56Z

    • 7df3579 Merge pull request #23619 from jfindlay/fix_kmod_state
    • 73facbf fix kmod.present processing of module loading
  • PR #23598: (rahulhan) Adding states/win_dns_client.py unit tests @ 2015-05-12T21:47:36Z

    • d4f3095 Merge pull request #23598 from rahulhan/states_win_dns_client_unit_test
    • d08d885 Adding states/win_dns_client.py unit tests
  • PR #23597: (rahulhan) Adding states/vbox_guest.py unit tests @ 2015-05-12T21:46:30Z

    • 811c6a1 Merge pull request #23597 from rahulhan/states_vbox_guest_unit_test
    • 6a2909e Removed errors
    • 4cde78a Adding states/vbox_guest.py unit tests
  • PR #23615: (rallytime) Backport #23577 to 2015.5 @ 2015-05-12T21:19:11Z

    • PR #23577: (msciciel) Fix find and remove functions to pass database param | refs: #23615
    • 029ff11 Merge pull request #23615 from rallytime/bp-23577
    • 6f74477 Fix find and remove functions to pass database param
  • PR #23603: (rahulhan) Adding states/winrepo.py unit tests @ 2015-05-12T18:40:12Z

    • b858953 Merge pull request #23603 from rahulhan/states_winrepo_unit_test
    • a66e7e7 Adding states/winrepo.py unit tests
  • PR #23602: (rahulhan) Adding states/win_path.py unit tests @ 2015-05-12T18:39:37Z

    • 3cbbd6d Merge pull request #23602 from rahulhan/states_win_path_unit_test
    • 122c29f Adding states/win_path.py unit tests
  • PR #23600: (rahulhan) Adding states/win_network.py unit tests @ 2015-05-12T18:39:01Z

    • 3c904e8 Merge pull request #23600 from rahulhan/states_win_network_unit_test
    • b418404 removed lint error
    • 1be8023 Adding states/win_network.py unit tests
  • PR #23599: (rahulhan) Adding win_firewall.py unit tests @ 2015-05-12T18:37:49Z

    • 10243a7 Merge pull request #23599 from rahulhan/states_win_firewall_unit_test
    • 6cda890 Adding win_firewall.py unit tests
  • PR #23601: (basepi) Add versionadded for jboss module/state @ 2015-05-12T17:22:59Z

    • e73071d Merge pull request #23601 from basepi/jboss.version.added
    • 0174c8f Add versionadded for jboss module/state
  • PR #23469: (s0undt3ch) Call the windows specific function not the general one @ 2015-05-12T16:47:22Z

    • 9beb7bc Merge pull request #23469 from s0undt3ch/hotfix/call-the-win-func
    • 83e88a3 Call the windows specific function not the general one
  • PR #23583: (jayeshka) adding states/ipset unit test case @ 2015-05-12T16:31:55Z

    • d2f0975 Merge pull request #23583 from jayeshka/ipset_states-unit-test
    • 4330cf4 adding states/ipset unit test case
  • PR #23582: (jayeshka) adding states/keyboard unit test case @ 2015-05-12T16:31:17Z

    • 82a47e8 Merge pull request #23582 from jayeshka/keyboard_states-unit-test
    • fa94d7a adding states/keyboard unit test case
  • PR #23581: (jayeshka) adding states/layman unit test case @ 2015-05-12T16:30:36Z

    • 77e5b28 Merge pull request #23581 from jayeshka/layman_states-unit-test
    • 297b055 adding states/layman unit test case
  • PR #23580: (jayeshka) adding smf unit test case @ 2015-05-12T16:29:58Z

    • cbe3282 Merge pull request #23580 from jayeshka/smf-unit-test
    • 4f97191 adding smf unit test case
  • PR #23572: (The-Loeki) Fix regression of #21355 introduced by #21603 @ 2015-05-12T16:28:05Z

    • ISSUE #21603: (ipmb) ssh_auth.present fails on key without comment | refs: #23572 #23572
    • PR #21355: (The-Loeki) Fix for comments containing whitespaces
    • 16a3338 Merge pull request #23572 from The-Loeki/ssh_auth_fix
    • d8248dd Fix regression of #21355 introduced by #21603
  • PR #23565: (garethgreenaway) fix to aptpkg module @ 2015-05-12T16:25:46Z

    • ISSUE #23490: (lichtamberg) salt.modules.aptpkg.upgrade should have default "dist_upgrade=False" | refs: #23565
    • f843f89 Merge pull request #23565 from garethgreenaway/2015_2_aptpkg_upgrade_default_to_upgrade
    • 97ae514 aptpkg.upgrade should default to upgrade instead of dist_upgrade.
  • PR #23550: (jfindlay) additional mock for rh_ip_test test_build_bond @ 2015-05-12T15:17:16Z

    • ISSUE #23473: (terminalmage) unit.modules.rh_ip_test.RhipTestCase.test_build_bond is not properly mocked | refs: #23550
    • c1157cd Merge pull request #23550 from jfindlay/fix_rh_ip_test
    • e9b94d3 additional mock for rh_ip_test test_build_bond
  • PR #23552: (garethgreenaway) Fix for an issue caused by a previous pull request @ 2015-05-11T21:54:59Z

    • b593328 Merge pull request #23552 from garethgreenaway/2015_5_returner_fix_broken_previous_pr
    • 7d70e2b Passed argumentes in the call _fetch_profile_opts to were in the wrong order
  • PR #23547: (slinu3d) Added AWS v4 signature support for 2015.5 @ 2015-05-11T21:52:24Z

    • d0f9682 Merge pull request #23547 from slinu3d/2015.5
    • f3bfdb5 Fixed urlparse and urlencode calls
    • 802dbdb Added AWS v4 signature support for 2015.5
  • PR #23544: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-11T18:02:06Z

    • ISSUE #23159: (aneeshusa) Unused validator
    • ISSUE #20518: (ekle) module s3.get does not support eu-central-1 | refs: #23467
    • ISSUE #563: (chutz) pidfile support for minion and master daemons | refs: #23460 #23461
    • PR #23538: (cro) Update date in LICENSE file
    • PR #23505: (aneeshusa) Remove unused ssh config validator. Fixes #23159.
    • PR #23467: (slinu3d) Added AWS v4 signature support
    • PR #23460: (s0undt3ch) [2014.7] Update to latest stable bootstrap script v2015.05.07
    • PR #23444: (techhat) Add create_attach_volume to nova driver
    • PR #23439: (techhat) Add wait_for_passwd_maxtries variable
    • 06c6a1f Merge pull request #23544 from basepi/merge-forward-2015.5
    • f8a36bc Merge remote-tracking branch 'upstream/2014.7' into merge-forward-2015.5
      • b79fed3 Merge pull request #23538 from cro/licupdate
        • 345efe2 Update date in LICENSE file
      • a123a36 Merge pull request #23505 from aneeshusa/remove-unused-ssh-config-validator
        • 90af167 Remove unused ssh config validator. Fixes #23159.
      • ca2c21a Merge pull request #23467 from slinu3d/2014.7
        • 0b4081d Fixed pylint error at line 363
        • 5be5eb5 Fixed pylink errors
        • e64f374 Fixed lint errors
        • b9d1ac4 Added AWS v4 signature support
      • e6f9eec Merge pull request #23444 from techhat/novacreateattach
        • ebdb7ea Add create_attach_volume to nova driver
      • e331463 Merge pull request #23460 from s0undt3ch/hotfix/bootstrap-script-2014.7
        • edcd0c4 Update to latest stable bootstrap script v2015.05.07
      • 7a8ce1a Merge pull request #23439 from techhat/maxtries
        • 0ad3ff2 Add wait_for_passwd_maxtries variable
  • PR #23470: (twangboy) Fixed service.restart for salt-minion @ 2015-05-11T17:54:47Z

    • ISSUE #23426: (twangboy) Can't restart salt-minion on 64 bit windows (2015.5.0) | refs: #23470
    • aa5b896 Merge pull request #23470 from twangboy/fix_svc_restart
    • b3f284c Fixed tests
    • ad44d79 Fixed service.restart for salt-minion
  • PR #23539: (rahulhan) Adding states/virtualenv_mod.py unit tests @ 2015-05-11T17:02:31Z

    • 67988b2 Merge pull request #23539 from rahulhan/states_virtualenv_mod_unit_test
    • 750bb07 Adding states/virtualenv_mod.py unit tests
  • 6f0cf2e Merge remote-tracking branch 'upstream/2015.2' into 2015.5

    • ISSUE #23244: (freimer) Caller not available in reactors | refs: #23245
    • PR #23509: (keesbos) Catch the unset (empty/None) environment case
    • PR #23423: (cachedout) Remove jid_event from state.orch
    • PR #23245: (freimer) Add Caller functionality to reactors.
    • c966196 Merge pull request #23423 from cachedout/remove_jid_event_from_orch
      • f81aab7 Remove jid_event from state.orch
    • 2bb09b7 Merge pull request #23509 from keesbos/Catch_empty_environment
      • 6dedeac Catch the unset (empty/None) environment case
    • 6d42f30 Merge pull request #23245 from freimer/issue_23244
      • 24cf6eb Add Caller functionality to reactors.
  • PR #23513: (gladiatr72) short-circuit auto-failure of iptables.delete state @ 2015-05-11T15:18:33Z

    • c3f03d8 Merge pull request #23513 from gladiatr72/RFC_stop_iptables.check_from_short-circuiting_position-only_delete_rule
    • c71714c short-circuit auto-failure of iptables.delete state if position argument is set without the other accoutrements that check_rule requires.
  • PR #23534: (jayeshka) adding states/ini_manage unit test case @ 2015-05-11T14:32:06Z

    • 4e77f6f Merge pull request #23534 from jayeshka/ini_manage_states-unit-test
    • 831223c adding states/ini_manage unit test case
  • PR #23533: (jayeshka) adding states/hipchat unit test case @ 2015-05-11T14:30:22Z

    • 11ba9ed Merge pull request #23533 from jayeshka/hipchat-states-unit-test
    • 41d14b3 adding states/hipchat unit test case
  • PR #23532: (jayeshka) adding states/ipmi unit test case @ 2015-05-11T14:28:15Z

    • e542113 Merge pull request #23532 from jayeshka/ipmi-states-unit-test
    • fc3e64a adding states/ipmi unit test case
  • PR #23531: (jayeshka) adding service unit test case @ 2015-05-11T14:27:12Z

    • 9ba85fd Merge pull request #23531 from jayeshka/service-unit-test
    • 3ad5314 adding service unit test case
  • PR #23517: (garethgreenaway) fix to returners @ 2015-05-11T14:20:51Z

    • ISSUE #23512: (Code-Vortex) hipchat_returner / slack_returner not work correctly | refs: #23517
    • 32838cd Merge pull request #23517 from garethgreenaway/23512_2015_5_returners_with_profiles
    • 81e31e2 fix for returners that utilize profile attributes. code in the if else statement was backwards. #23512
  • PR #23502: (rahulhan) Adding states/win_servermanager.py unit tests @ 2015-05-08T19:47:18Z

    • 6be7d8d Merge pull request #23502 from rahulhan/states_win_servermanager_unit_test
    • 2490074 Adding states/win_servermanager.py unit tests
  • PR #23495: (jayeshka) adding seed unit test case @ 2015-05-08T17:30:38Z

    • 6048578 Merge pull request #23495 from jayeshka/seed-unit-test
    • 3f134bc adding seed unit test case
  • PR #23494: (jayeshka) adding sensors unit test case @ 2015-05-08T17:30:18Z

    • 70bc3c1 Merge pull request #23494 from jayeshka/sensors-unit-test
    • 1fb48a3 adding sensors unit test case
  • PR #23493: (jayeshka) adding states/incron unit test case @ 2015-05-08T17:29:59Z

    • b981b20 Merge pull request #23493 from jayeshka/incron-states-unit-test
    • cc7bc17 adding states/incron unit test case
  • PR #23492: (jayeshka) adding states/influxdb_database unit test case @ 2015-05-08T17:29:51Z

    • 4019c49 Merge pull request #23492 from jayeshka/influxdb_database-states-unit-test
    • e1fcac8 adding states/influxdb_database unit test case
  • PR #23491: (jayeshka) adding states/influxdb_user unit test case @ 2015-05-08T16:24:07Z

    • d317a77 Merge pull request #23491 from jayeshka/influxdb_user-states-unit-test
    • 9d4043f adding states/influxdb_user unit test case
  • PR #23477: (galet) LDAP auth: Escape filter value for group membership search @ 2015-05-07T22:04:48Z

    • e0b2a73 Merge pull request #23477 from galet/ldap-filter-escaping
    • 33038b9 LDAP auth: Escape filter value for group membership search
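
The point of this fix is that a user-supplied value has to be escaped before it is interpolated into an LDAP search filter, otherwise names containing ( ) * or \ can break or widen the group query. A minimal sketch of RFC 4515-style escaping with a hypothetical helper (illustrative only, not the code from PR #23477):

    # Illustrative only: escape LDAP filter metacharacters (RFC 4515).
    def escape_filter_value(value):
        # escape the backslash first so the escapes below are not re-escaped
        for char, escaped in (('\\', r'\5c'), ('*', r'\2a'), ('(', r'\28'),
                              (')', r'\29'), ('\0', r'\00')):
            value = value.replace(char, escaped)
        return value

    # a crafted group name can no longer widen or break the search
    print('(&(objectClass=group)(memberUid=%s))'
          % escape_filter_value('jdoe*)(uid=*'))
    # (&(objectClass=group)(memberUid=jdoe\2a\29\28uid=\2a))
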
  • PR #23476: (cachedout) Lint becaon @ 2015-05-07T19:55:36Z

    • e1719fe Merge pull request #23476 from cachedout/lint_23431
    • 8d1ff20 Lint becaon
  • PR #23431: (UtahDave) Beacon fixes | refs: #23476 @ 2015-05-07T19:53:47Z

    • 1e299ed Merge pull request #23431 from UtahDave/beacon_fixes
    • 152f223 remove unused import
    • 81198f9 fix interval logic and example
    • 5504778 update to proper examples
    • 6890439 fix list for mask
    • ee7b579 remove custom interval code.
  • PR #23468: (rahulhan) Adding states/win_system.py unit tests @ 2015-05-07T19:20:50Z

    • ea55c44 Merge pull request #23468 from rahulhan/states_win_system_unit_test
    • 33f8c12 Adding states/win_system.py unit tests
  • PR #23466: (UtahDave) minor spelling fix @ 2015-05-07T19:19:06Z

    • e6e1114 Merge pull request #23466 from UtahDave/2015.5local
    • b2c399a minor spelling fix
  • PR #23461: (s0undt3ch) [2015.5] Update to latest stable bootstrap script v2015.05.07 @ 2015-05-07T19:16:18Z

    • ISSUE #563: (chutz) pidfile support for minion and master daemons | refs: #23460 #23461
    • 4eeb1e6 Merge pull request #23461 from s0undt3ch/hotfix/bootstrap-script
    • 638c63d Update to latest stable bootstrap script v2015.05.07
  • PR #23450: (jayeshka) adding scsi unit test case @ 2015-05-07T19:00:28Z

    • 8651278 Merge pull request #23450 from jayeshka/scsi-unit-test
    • e7269ff adding scsi unit test case
  • PR #23449: (jayeshka) adding s3 unit test case @ 2015-05-07T18:59:45Z

    • 8b374ae Merge pull request #23449 from jayeshka/s3-unit-test
    • 85786bf adding s3 unit test case
  • PR #23448: (jayeshka) adding states/keystone unit test case @ 2015-05-07T18:58:59Z

    • 49b431c Merge pull request #23448 from jayeshka/keystone-states-unit-test
    • a3050eb adding states/keystone unit test case
  • PR #23447: (jayeshka) adding states/grafana unit test case @ 2015-05-07T18:58:20Z

    • 23d7e7e Merge pull request #23447 from jayeshka/grafana-states-unit-test
    • 7e90a4a adding states/grafana unit test case
  • PR #23438: (techhat) Gate requests import @ 2015-05-07T07:22:58Z

    • 1fd0bc2 Merge pull request #23438 from techhat/gaterequests
    • d5b15fc Gate requests import
  • PR #23429: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-07T05:35:13Z

    • ISSUE #17245: (tomashavlas) localemod does not generate locale for Arch | refs: #23307 #23397
    • PR #23425: (basepi) [2014.7] Fix typo in FunctionWrapper
    • PR #23422: (cro) $HOME should not be used, some shells don't set it.
    • PR #23414: (jfindlay) 2015.2 -> 2015.5
    • PR #23409: (terminalmage) Update Lithium docstrings in 2014.7 branch | refs: #23410
    • PR #23404: (hvnsweeting) saltapi cherrypy: initialize var when POST body is empty
    • PR #23397: (jfindlay) add more flexible whitespace to locale_gen search
    • PR #23385: (rallytime) Backport #23346 to 2014.7
    • PR #23346: (ericfode) Allow file_map in salt-cloud to handle folders. | refs: #23385
    • 3c4f734 Merge pull request #23429 from basepi/merge-forward-2015.5
    • 7729834 Merge remote-tracking branch 'upstream/2014.7' into merge-forward-2015.5
    • 644eb75 Merge pull request #23422 from cro/gce_sh_home
      • 4ef9e6b Don't use $HOME to find user's directory, some shells don't set it
    • ef17ab4 Merge pull request #23425 from basepi/functionwrapper_typo
      • c390737 Fix typo in FunctionWrapper
    • 1b13ec0 Merge pull request #23385 from rallytime/bp-23346
      • 9efc13c more linting fixes
      • cf131c9 cleaned up some pylint errors
      • f981699 added logic to sftp_file and file_map to allow folder uploads using file_map
    • f8c7a62 Merge pull request #23414 from jfindlay/update_branch
      • 8074d16 2015.2 -> 2015.5
    • 54b3bd4 Merge pull request #23404 from hvnsweeting/cherrypy-post-emptybody-fix
      • f85f8f9 initialize var when POST body is empty
    • 160f703 Merge pull request #23409 from terminalmage/update-lithium-docstrings-2014.7
      • bc97d01 Fix sphinx typo
      • 20006b0 Update Lithium docstrings in 2014.7 branch
    • aa5fb0a Merge pull request #23397 from jfindlay/fix_locale_gen
      • 0941fef add more flexible whitespace to locale_gen search
  • PR #23396: (basepi) [2015.2] Merge forward from 2014.7 to 2015.2 @ 2015-05-06T21:42:35Z

    • ISSUE #23294: (variia) file.replace fails to append if repl string partially available | refs: #23350
    • ISSUE #23026: (adelcast) Incorrect salt-syndic logfile and pidfile locations | refs: #23341
    • ISSUE #22742: (hvnsweeting) salt-master says: "This master address: 'salt' was previously resolvable but now fails to resolve!" | refs: #23344
    • ISSUE #19114: (pykler) salt-ssh and gpg pillar renderer | refs: #23272 #23347 #23188
    • ISSUE #17245: (tomashavlas) localemod does not generate locale for Arch | refs: #23307 #23397
    • ISSUE #580: (thatch45) recursive watch not being caught | refs: #23324
    • ISSUE #552: (jhutchins) Support require and watch under the same state dec | refs: #23324
    • PR #23368: (kaithar) Backport #23367 to 2014.7
    • PR #23367: (kaithar) Put the sed insert statement back in to the output. | refs: #23368
    • PR #23350: (lorengordon) Append/prepend: search for full line
    • PR #23347: (basepi) [2014.7] Salt-SSH Backport FunctionWrapper.__contains__
    • PR #23344: (cachedout) Explicitely set file_client on master
    • PR #23341: (cachedout) Fix syndic pid and logfile path
    • PR #23324: (s0undt3ch) [2014.7] Update to the latest stable release of the bootstrap script v2015.05.04
    • PR #23318: (cellscape) Honor seed argument in LXC container initializaton
    • PR #23311: (cellscape) Fix new container initialization in LXC runner | refs: #23318
    • PR #23307: (jfindlay) check for /etc/locale.gen
    • PR #23272: (basepi) [2014.7] Allow salt-ssh minion config overrides via master config and roster | refs: #23347
    • PR #23188: (basepi) [2014.7] Work around bug in salt-ssh in config.get for gpg renderer | refs: #23272
    • PR #18368: (basepi) Merge forward from 2014.7 to develop | refs: #23367 #23368
    • PR #589: (epoelke) add --quiet and --outfile options to saltkey | refs: #23324
    • PR #567: (bastichelaar) Added upstart module | refs: #23324
    • PR #560: (UtahDave) The runas feature that was added in 93423aa2e5e4b7de6452090b0039560d2b13... | refs: #23324
    • PR #504: (SEJeff) File state goodies | refs: #23324
    • 1fb8445 Merge pull request #23396 from basepi/merge-forward-2015.2
    • 2766c8c Fix typo in FunctionWrapper
    • fd09cda Merge remote-tracking branch 'upstream/2014.7' into merge-forward-2015.2
      • 0c76dd4 Merge pull request #23368 from kaithar/bp-23367
        • 577f419 Pylint fix
        • 8d9acd1 Put the sed insert statement back in to the output.
      • 3493cc1 Merge pull request #23350 from lorengordon/file.replace_assume_line
        • b60e224 Append/prepend: search for full line
      • 7be5c48 Merge pull request #23341 from cachedout/issue_23026
        • e98e65e Fix tests
        • 6011b43 Fix syndic pid and logfile path
      • ea61abf Merge pull request #23272 from basepi/salt-ssh.minion.config.19114
        • c223309 Add versionadded
        • be7407f Lint
        • c2c3375 Missing comma
        • 8e3e8e0 Pass the minion_opts through the FunctionWrapper
        • cb69cd0 Match the master config template in the master config reference
        • 87fc316 Add Salt-SSH section to master config template
        • 91dd9dc Add ssh_minion_opts to master config ref
        • c273ea1 Add minion config to salt-ssh doc
        • a0b6b76 Add minion_opts to roster docs
        • 5212c35 Accept minion_opts from the target information
        • e2099b6 Process ssh_minion_opts from master config
        • 3b64214 Revert "Work around bug in salt-ssh in config.get for gpg renderer"
        • 494953a Remove the strip (embracing multi-line YAML dump)
        • fe87f0f Dump multi-line yaml into the SHIM
        • b751a72 Inject local minion config into shim if available
      • 4f760dd Merge pull request #23347 from basepi/salt-ssh.functionwrapper.contains.19114
        • 30595e3 Backport FunctionWrapper.__contains__
      • 02658b1 Merge pull request #23344 from cachedout/issue_22742
        • 5adc96c Explicitely set file_client on master
      • ba7605d Merge pull request #23318 from cellscape/honor-seed-argument
        • 228b1be Honor seed argument in LXC container initializaton
      • 4ac4509 Merge pull request #23307 from jfindlay/fix_locale_gen
        • 101199a check for /etc/locale.gen
      • f790f42 Merge pull request #23324 from s0undt3ch/hotfix/bootstrap-script-2014.7
      • 6643e47 Update to the latest stable release of the bootstrap script v2015.05.04
  • 23d4feb Merge remote-tracking branch 'upstream/2015.2' into 2015.5
  • PR #23412: (rahulhan) Adding states/win_update.py unit tests @ 2015-05-06T18:31:09Z

    • b3c1672 Merge pull request #23412 from rahulhan/states_win_update_unit_test
    • 9bc1519 Removed unwanted imports
    • f12bfcf Adding states/win_update.py unit tests
  • PR #23413: (terminalmage) Update manpages for 2015.2 -> 2015.5 @ 2015-05-06T17:12:57Z

    • f2d7646 Merge pull request #23413 from terminalmage/update-manpages
    • 23fa440 Update manpages to reflect 2015.2 rename to 2015.5
    • 0fdaa73 Fix missed docstring updates from 2015.2 -> 2015.5
    • 4fea5ba Add missing RST file
  • PR #23410: (terminalmage) Update Lithium docstrings in 2015.2 branch @ 2015-05-06T15:53:52Z

    • PR #23409: (terminalmage) Update Lithium docstrings in 2014.7 branch | refs: #23410
    • bafbea7 Merge pull request #23410 from terminalmage/update-lithium-docstrings-2015.2
    • d395565 Update Lithium docstrings in 2015.2 branch
  • PR #23407: (jayeshka) adding rsync unit test case @ 2015-05-06T15:52:23Z

    • 02ef41a Merge pull request #23407 from jayeshka/rsync-unit-test
    • a4dd836 adding rsync unit test case
  • PR #23406: (jayeshka) adding states/lxc unit test case @ 2015-05-06T15:51:50Z

    • 58ec2a2 Merge pull request #23406 from jayeshka/lxc-states-unit-test
    • 32a0d03 adding states/lxc unit test case
  • PR #23395: (basepi) [2015.2] Add note to 2015.2.0 release notes about master opts in pillar @ 2015-05-05T22:15:20Z

    • 8837d00 Merge pull request #23395 from basepi/2015.2.0masteropts
    • b261c95 Add note to 2015.2.0 release notes about master opts in pillar
  • PR #23393: (basepi) [2015.2] Add warning about python_shell changes to 2015.2.0 release notes @ 2015-05-05T22:12:46Z

    • f79aed5 Merge pull request #23393 from basepi/2015.2.0python_shell
    • b2f033f Add CLI note
    • 48e7b3e Add warning about python_shell changes to 2015.2.0 release notes
  • PR #23380: (gladiatr72) Fix for double output with static salt cli/v2015.2 @ 2015-05-05T21:44:28Z

    • a977776 Merge pull request #23380 from gladiatr72/fix_for_double_output_with_static__salt_CLI/v2015.2
    • c47fdd7 Actually removed the static bits from below the else: fold this time.
    • 4ee3679 Fix for incorrect output with salt CLI --static option
  • PR #23379: (rahulhan) Adding states/rabbitmq_cluster.py @ 2015-05-05T21:44:06Z

    • 5c9543c Merge pull request #23379 from rahulhan/states_rabbitmq_cluster_test
    • 04c22d1 Adding states/rabbitmq_cluster.py
  • PR #23377: (rahulhan) Adding states/xmpp.py unit tests @ 2015-05-05T21:43:35Z

    • 430f080 Merge pull request #23377 from rahulhan/states_xmpp_test
    • 32923b5 Adding states/xmpp.py unit tests
  • PR #23335: (steverweber) 2015.2: include doc in master config for module_dirs @ 2015-05-05T21:28:58Z

    • 8c057e6 Merge pull request #23335 from steverweber/2015.2
    • 5e3bae9 help installing python pysphere lib
    • 97513b0 include module_dirs
    • 36b1c87 include module_dirs
  • PR #23362: (jayeshka) adding states/zk_concurrency unit test case @ 2015-05-05T15:50:06Z

    • 1648253 Merge pull request #23362 from jayeshka/zk_concurrency-states-unit-test
    • f60dda4 adding states/zk_concurrency unit test case
  • PR #23363: (jayeshka) adding riak unit test case @ 2015-05-05T14:23:05Z

    • 1cdaeed Merge pull request #23363 from jayeshka/riak-unit-test
    • f9da6db adding riak unit test case

Salt 2015.5.2 Release Notes

release: TBA

Version 2015.5.2 is a bugfix release for 2015.5.0.

Extended Changelog Courtesy of Todd Stansell (https://github.com/tjstansell/salt-changelogs):

  • PR #24346: (rallytime) Backport #24271 to 2015.5 @ 2015-06-03T18:44:31Z

    • PR #24271: (randybias) Fixed the setup instructions | refs: #24346
    • 76927c9 Merge pull request #24346 from rallytime/bp-24271
    • 04067b6 Fixed the setup instructions
  • PR #24345: (rallytime) Backport #24013 to 2015.5 @ 2015-06-03T18:39:41Z

    • ISSUE #24012: (jbq) Enabling a service does not create the appropriate rc.d symlinks on Ubuntu | refs: #24013
    • PR #24013: (jbq) Fix enabling a service on Ubuntu #24012 | refs: #24345
    • 4afa03d Merge pull request #24345 from rallytime/bp-24013
    • 16e0732 Fix enabling a service on Ubuntu #24012
  • PR #24365: (jacobhammons) Fixes for PDF build errors @ 2015-06-03T17:50:02Z

    • c3392c2 Merge pull request #24365 from jacobhammons/DocFixes
    • 0fc1902 Fixes for PDF build errors
  • PR #24313: (nicholascapo) Fix #22991 Correctly set result when test=True @ 2015-06-03T14:49:18Z

    • ISSUE #22991: (nicholascapo) npm.installed ignores test=True
    • ae681a4 Merge pull request #24313 from nicholascapo/fix-22991-npm.installed-test-true
    • ac9644c Fix #22991 npm.installed correctly set result on test=True
  • PR #24312: (nicholascapo) Fix #18966: file.serialize supports test=True @ 2015-06-03T14:49:06Z

    • ISSUE #18966: (bechtoldt) file.serialize ignores test=True
    • d57a9a2 Merge pull request #24312 from nicholascapo/fix-18966-file.serialize-test-true
    • e7328e7 Fix #18966 file.serialize correctly set result on test=True
  • PR #24302: (jfindlay) fix pkg hold/unhold integration test @ 2015-06-03T03:27:43Z

    • 6b694e3 Merge pull request #24302 from jfindlay/pkg_tests
    • c2db0b1 fix pkg hold/unhold integration test
  • PR #24349: (rallytime) Remove references to mount_points in ec2 docs @ 2015-06-03T01:54:09Z

    • ISSUE #14021: (mathrawka) EC2 doc mentions mount_point, but unable to use properly | refs: #24349
    • aca8447 Merge pull request #24349 from rallytime/fix-14021
    • a235b11 Remove references to mount_points in ec2 docs
  • PR #24328: (dr4Ke) Fix state grains silently fails 2015.5 @ 2015-06-02T15:18:46Z

    • ISSUE #24319: (dr4Ke) grains state shouldn't fail silently
    • 88a997e Merge pull request #24328 from dr4Ke/fix_state_grains_silently_fails_2015.5
    • 8a63d1e fix state grains silently fails #24319
    • ca1af20 grains state: add some tests
  • PR #24310: (techhat) Add warning about destroying maps @ 2015-06-02T03:01:28Z

    • ISSUE #24036: (arthurlogilab) [salt-cloud] Protect against passing command line arguments as names for the --destroy command in map files | refs: #24310
    • ISSUE #9772: (s0undt3ch) Delete VM's in a map does not delete them all | refs: #24310
    • 7dcd9bb Merge pull request #24310 from techhat/mapwarning
    • ca535a6 Add warning about destroying maps
  • PR #24281: (steverweber) Ipmi docfix @ 2015-06-01T17:45:36Z

    • 02bfb25 Merge pull request #24281 from steverweber/ipmi_docfix
    • dd36f2c yaml formating
    • f6deef3 include api_kg kwarg in ipmi state
    • a7d4e97 doc cleanup
    • 0ded2fd save more cleanup to doc
    • 08872f2 fix name api_key to api_kg
    • 165a387 doc fix add api_kg kwargs
    • 1ec7888 cleanup docs
  • PR #24287: (jfindlay) fix pkg test on ubuntu 12.04 for realz @ 2015-06-01T14:16:37Z

    • 73cd2cb Merge pull request #24287 from jfindlay/pkg_test
    • 98944d8 fix pkg test on ubuntu 12.04 for realz
  • PR #24279: (rallytime) Backport #24263 to 2015.5 @ 2015-06-01T04:29:34Z

    • PR #24263: (cdarwin) Correct usage of import_yaml in formula documentation | refs: #24279
    • 02017a0 Merge pull request #24279 from rallytime/bp-24263
    • beff7c7 Correct usage of import_yaml in formula documentation
  • PR #24277: (rallytime) Put a space between after_jump commands @ 2015-06-01T04:28:26Z

    • ISSUE #24226: (c4urself) iptables state needs to keep ordering of flags | refs: #24277
    • 2ba696d Merge pull request #24277 from rallytime/fix_iptables_jump
    • e2d1606 Move after_jump split out of loop
    • d14f130 Remove extra loop
    • 42ed532 Put a space between after_jump commands
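
The gist of the fix: options that must follow the jump target, such as --to-port after -j REDIRECT, have to keep their order and be joined with single spaces when the rule string is assembled. A minimal sketch of that assembly with hypothetical names (this is not the actual salt.modules.iptables code):

    # Illustrative only: build an iptables rule string while preserving
    # flag order and separating the post-jump arguments with spaces.
    def build_rule(chain, match_args, jump, after_jump_args):
        parts = ['-A', chain]
        parts.extend(match_args)        # e.g. ['-p', 'tcp', '--dport', '80']
        parts.extend(['-j', jump])      # e.g. 'REDIRECT'
        parts.extend(after_jump_args)   # e.g. ['--to-port', '8080'], order kept
        return ' '.join(parts)          # single spaces, no glued-together flags

    print(build_rule('PREROUTING', ['-p', 'tcp', '--dport', '80'],
                     'REDIRECT', ['--to-port', '8080']))
    # -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
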
  • PR #24262: (basepi) More dictupdate after #24142 @ 2015-05-31T04:09:37Z

    • PR #24142: (basepi) Optimize dictupdate.update and add #24097 functionality | refs: #24262
    • PR #24097: (kiorky) Optimize dictupdate
    • 113eba3 Merge pull request #24262 from basepi/dictupdatefix
    • 0c4832c Raise a typeerror if non-dict types
    • be21aaa Pylint
    • bb8a6c6 More optimization
    • c933249 py3 compat
    • ff6b2a7 Further optimize dictupdate.update()
    • c73f5ba Remove unused valtype
  • PR #24269: (kiorky) zfs: Fix spurious retcode hijacking in virtual @ 2015-05-30T17:47:49Z

    • 785d5a1 Merge pull request #24269 from makinacorpus/zfs
    • 0bf23ce zfs: Fix spurious retcode hijacking in virtual
  • PR #24257: (jfindlay) fix pkg mod integration test on ubuntu 12.04 @ 2015-05-29T23:09:00Z

    • 3d885c0 Merge pull request #24257 from jfindlay/pkg_tests
    • 9508924 fix pkg mod integration test on ubuntu 12.04
  • PR #24260: (basepi) Fix some typos from #24080 @ 2015-05-29T22:54:58Z

    • ISSUE #23657: (arthurlogilab) [salt-cloud lxc] NameError: global name '__salt__' is not defined
    • PR #24080: (kiorky) Lxc consistency2
    • PR #24066: (kiorky) Merge forward 2015.5 -> develop | refs: #23982
    • PR #24065: (kiorky) continue to fix #23883
    • PR #23982: (kiorky) lxc: path support | refs: #24080
    • 08a1075 Merge pull request #24260 from basepi/lxctypos24080
    • 0fa1ad3 Fix another lxc typo
    • 669938f s/you ll/you'll/
  • PR #24080: (kiorky) Lxc consistency2 @ 2015-05-29T22:51:54Z

    • ISSUE #23657: (arthurlogilab) [salt-cloud lxc] NameError: global name '__salt__' is not defined
    • PR #24066: (kiorky) Merge forward 2015.5 -> develop | refs: #23982
    • PR #24065: (kiorky) continue to fix #23883
    • PR #23982: (kiorky) lxc: path support | refs: #24080
    • 75590cf Merge pull request #24080 from makinacorpus/lxc_consistency2
    • 81f8067 lxc: fix old lxc test
    • 458f506 seed: lint
    • 96b8d55 Fix seed.mkconfig yamldump
    • 76ddb68 lxc/applynet: conservative
    • ce7096f variable collision
    • 8a8b28d lxc: lint
    • 458b18b more lxc docs
    • ef1f952 lxc docs: typos
    • d67a43d more lxc docs
    • 608da5e modules/lxc: merge resolution
    • 27c4689 modules/lxc: more consistent comparsion
    • 07c365a lxc: merge conflict spotted
    • 9993915 modules/lxc: rework settings for consistency
    • ce11d83 lxc: Global doc refresh
    • 61ed2f5 clouds/lxc: profile key is conflicting
  • PR #24247: (rallytime) Backport #24220 to 2015.5 @ 2015-05-29T21:40:01Z

    • ISSUE #24210: (damonnk) salt-cloud vsphere.py should allow key_filename param | refs: #24220
    • PR #24220: (djcrabhat) adding key_filename param to vsphere provider | refs: #24247
    • da14f3b Merge pull request #24247 from rallytime/bp-24220
    • 0b1041d adding key_filename param to vsphere provider
  • PR #24254: (rallytime) Add deprecation warning to Digital Ocean v1 Driver @ 2015-05-29T21:39:25Z

    • PR #22731: (dmyerscough) Decommission DigitalOcean APIv1 and have users use the new DigitalOcean APIv2 | refs: #24254
    • 21d6126 Merge pull request #24254 from rallytime/add_deprecation_warning_digitalocean
    • cafe37b Add note to docs about deprecation
    • ea0f1e0 Add deprecation warning to digital ocean driver to move to digital_ocean_v2
  • PR #24252: (aboe76) Updated suse spec to 2015.5.1 @ 2015-05-29T21:38:45Z

    • dac055d Merge pull request #24252 from aboe76/opensuse_package
    • 0ad617d Updated suse spec to 2015.5.1
  • PR #24251: (garethgreenaway) Returners broken in 2015.5 @ 2015-05-29T21:37:52Z

    • 49e7fe8 Merge pull request #24251 from garethgreenaway/2015_5_returner_brokenness
    • 5df6b52 The code calling cfg as a function vs treating it as a dictionary and using get is currently backwards causing returners to fail when used from the CLI and in scheduled jobs.
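
In plain terms, a returner's configuration can arrive either as a plain dictionary (the CLI and scheduled-job paths) or as a config.option-style callable, and the branch deciding which access pattern to use was inverted. A minimal sketch of the corrected dispatch, with hypothetical names (not the actual returner code):

    # Illustrative only: read an option from cfg whether it is a dict
    # or a callable config getter.
    def get_returner_option(cfg, key, default=None):
        if callable(cfg):
            return cfg(key, default)   # config.option-style getter: call it
        return cfg.get(key, default)   # plain dictionary: use .get()

    opts = {'slack.channel': '#ops'}
    print(get_returner_option(opts, 'slack.channel'))           # dict path
    print(get_returner_option(lambda k, d=None: opts.get(k, d),
                              'slack.channel'))                 # callable path
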
  • PR #24255: (rallytime) Clarify digital ocean documentation and mention v1 driver deprecation @ 2015-05-29T21:37:07Z

    • ISSUE #21498: (rallytime) Clarify Digital Ocean Documentation | refs: #24255
    • bfb9461 Merge pull request #24255 from rallytime/clarify_digital_ocean_driver_docs
    • 8d51f75 Clarify digital ocean documentation and mention v1 driver deprecation
  • PR #24232: (rallytime) Backport #23308 to 2015.5 @ 2015-05-29T21:36:46Z

    • PR #23308: (thusoy) Don't merge: Add missing jump arguments to iptables module | refs: #24232
    • 41f5756 Merge pull request #24232 from rallytime/bp-23308
    • 2733f66 Import string
    • 9097cca Add missing jump arguments to iptables module
  • PR #24245: (Sacro) Unset PYTHONHOME when starting the service @ 2015-05-29T20:00:31Z

    • a95982c Merge pull request #24245 from Sacro/patch-2
    • 6632d06 Unset PYTHONHOME when starting the service
  • PR #24121: (hvnsweeting) deprecate setting user permission in rabbitmq_vhost.present @ 2015-05-29T15:55:40Z

    • 1504c76 Merge pull request #24121 from hvnsweeting/rabbitmq-host-deprecate-set-permission
    • 2223158 deprecate setting user permission in rabbitmq_host.present
  • PR #24179: (merll) Changing user and group only possible for existing ids. @ 2015-05-29T15:52:43Z

    • PR #24169: (merll) Changing user and group only possible for existing ids. | refs: #24179
    • ba02f65 Merge pull request #24179 from Precis/fix-file-uid-gid-2015.0
    • ee4c9d5 Use ids if user or group is not present.
  • PR #24229: (msteed) Fix auth failure on syndic with external_auth @ 2015-05-29T15:04:06Z

    • ISSUE #24147: (paclat) Syndication issues when using authentication on master of masters. | refs: #24229
    • 9bfb066 Merge pull request #24229 from msteed/issue-24147
    • 482d1cf Fix auth failure on syndic with external_auth
  • PR #24234: (jayeshka) adding states/quota unit test case. @ 2015-05-29T14:14:27Z

    • 19fa43c Merge pull request #24234 from jayeshka/quota-states-unit-test
    • c233565 adding states/quota unit test case.
  • PR #24217: (jfindlay) disable intermittently failing tests @ 2015-05-29T03:08:39Z

    • ISSUE #40: (thatch45) Clean up timeouts | refs: #22857
    • PR #23623: (jfindlay) Fix /jobs endpoint's return | refs: #24217
    • PR #22857: (jacksontj) Fix /jobs endpoint's return | refs: #23623
    • e15142c Merge pull request #24217 from jfindlay/disable_bad_tests
    • 6b62804 disable intermittently failing tests
  • PR #24199: (ryan-lane) Various fixes for boto_route53 and boto_elb @ 2015-05-29T03:02:41Z

    • ce8e43b Merge pull request #24199 from lyft/route53-fix-elb
    • d8dc9a7 Better unit tests for boto_elb state
    • 62f214b Remove cnames_present test
    • 7b9ae82 Lint fix
    • b74b0d1 Various fixes for boto_route53 and boto_elb
  • PR #24142: (basepi) Optimize dictupdate.update and add #24097 functionality | refs: #24262 @ 2015-05-29T03:00:56Z

    • PR #24097: (kiorky) Optimize dictupdate
    • PR #21968: (ryanwohara) Verifying the key has a value before using it.
    • a43465d Merge pull request #24142 from basepi/dictupdate24097
    • 5c6e210 Deepcopy on merge_recurse
    • a13c84a Fix None check from #21968
    • 9ef2c64 Add docstring
    • 8579429 Add in recursive_update from #24097
    • 8599143 if key not in dest, don't recurse
    • d8a84b3 Rename klass to valtype
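
Conceptually, dictupdate.update() merges one dictionary into another, recursing only when both sides hold a dict for the same key, and (after this change) raising a TypeError for non-dict inputs. A simplified sketch of that behavior; the real salt.utils.dictupdate supports additional merge strategies:

    # Simplified sketch of a recursive dict merge.
    def update(dest, upd):
        if not isinstance(dest, dict) or not isinstance(upd, dict):
            raise TypeError('Cannot update using non-dict types')
        for key, val in upd.items():
            if isinstance(val, dict) and isinstance(dest.get(key), dict):
                update(dest[key], val)  # both sides are dicts: merge recursively
            else:
                dest[key] = val         # leaf value, or key not in dest: assign
        return dest

    print(update({'a': {'b': 1, 'c': 2}}, {'a': {'b': 9}, 'd': 4}))
    # {'a': {'b': 9, 'c': 2}, 'd': 4}
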
  • PR #24208: (jayeshka) adding states/ports unit test case. @ 2015-05-28T23:06:33Z

    • 526698b Merge pull request #24208 from jayeshka/ports-states-unit-test
    • 657b709 adding states/ports unit test case.
  • PR #24219: (jfindlay) find zfs without modinfo @ 2015-05-28T21:07:26Z

    • ISSUE #20635: (dennisjac) 2015.2.0rc1: zfs errors in log after update | refs: #24219
    • d00945f Merge pull request #24219 from jfindlay/zfs_check
    • 15d4019 use the salt loader in the zfs mod
    • 5599b67 try to search for zfs if modinfo is unavailable
  • PR #24190: (msteed) Fix issue 23815 @ 2015-05-28T20:10:34Z

    • ISSUE #23815: (Snergster) [beacons] inotify errors on subdir creation
    • 3dc4b85 Merge pull request #24190 from msteed/issue-23815
    • 086a1a9 lint
    • 65de62f fix #23815
    • d04e916 spelling
    • db9f682 add inotify beacon unit tests
  • PR #24211: (rallytime) Backport #24205 to 2015.5 @ 2015-05-28T18:28:15Z

    • PR #24205: (hazelesque) Docstring fix in salt.modules.yumpkg.hold | refs: #24211
    • 436634b Merge pull request #24211 from rallytime/bp-24205
    • 23284b5 Docstring fix in salt.modules.yumpkg.hold
  • PR #24212: (terminalmage) Clarify error in rendering template for top file @ 2015-05-28T18:26:20Z

    • cc58624 Merge pull request #24212 from terminalmage/clarify-error-msg
    • ca807fb Clarify error in rendering template for top file
  • PR #24213: (The-Loeki) ShouldFix _- troubles in debian_ip @ 2015-05-28T18:24:39Z

    • ISSUE #23904: (mbrgm) Network config bonding section cannot be parsed when attribute names use dashes | refs: #23917
    • ISSUE #23900: (hashi825) salt ubuntu network building issue 2015.5.0 | refs: #23922
    • PR #23922: (garethgreenaway) Fixes to debian_ip.py | refs: #24213
    • PR #23917: (corywright) Split debian bonding options on dash instead of underscore | refs: #24213
    • 9825160 Merge pull request #24213 from The-Loeki/patch-3
    • a68d515 ShouldFix _- troubles in debian_ip
  • PR #24214: (basepi) 2015.5.1release @ 2015-05-28T16:23:57Z

    • 071751d Merge pull request #24214 from basepi/2015.5.1release
    • e5ba31b 2015.5.1 release date
    • 768494c Update latest release in docs
  • PR #24202: (rallytime) Backport #24186 to 2015.5 @ 2015-05-28T05:16:48Z

    • PR #24186: (thcipriani) Update salt vagrant provisioner info | refs: #24202
    • c2f1fdb Merge pull request #24202 from rallytime/bp-24186
    • db793dd Update salt vagrant provisioner info
  • PR #24192: (rallytime) Backport #20474 to 2015.5 @ 2015-05-28T05:16:18Z

    • PR #20474: (djcrabhat) add sudo, sudo_password params to vsphere deploy to allow for non-root deploys | refs: #24192
    • 8a085a2 Merge pull request #24192 from rallytime/bp-20474
    • fd3c783 add sudo, sudo_password params to deploy to allow for non-root deploys
  • PR #24184: (rallytime) Backport #24129 to 2015.5 @ 2015-05-28T05:15:08Z

    • PR #24129: (pengyao) Wheel client doc | refs: #24184
    • 7cc535b Merge pull request #24184 from rallytime/bp-24129
    • 722a662 fixed a typo
    • 565eb46 Add cmd doc for WheelClient
  • PR #24183: (rallytime) Backport #19320 to 2015.5 @ 2015-05-28T05:14:36Z

    • PR #19320: (clan) add 'state_output_profile' option for profile output | refs: #24183
    • eb0af70 Merge pull request #24183 from rallytime/bp-19320
    • 55db1bf sate_output_profile default to True
    • 9919227 fix type: statei -> state
    • 0549ca6 add 'state_output_profile' option for profile output
  • PR #24201: (whiteinge) Add list of client libraries for the rest_cherrypy module to the top-level documentation @ 2015-05-28T02:12:09Z

    • 1b5bf23 Merge pull request #24201 from whiteinge/rest_cherrypy-client-libs
    • 5f71802 Add list of client libraries for the rest_cherrypy module
    • 28fc77f Fix rest_cherrypy config example indentation
  • PR #24195: (rallytime) Merge #24185 with a couple of fixes @ 2015-05-27T22:18:37Z

    • PR #24185: (jacobhammons) Fixes for doc build errors | refs: #24195
    • 3307ec2 Merge pull request #24195 from rallytime/merge-24185
    • d8daa9d Merge #24185 with a couple of fixes
    • 634d56b Fixed pylon error
    • 0689815 Fixes for doc build errors
  • PR #24166: (jayeshka) adding states/pkgng unit test case. @ 2015-05-27T20:27:49Z

    • 7e400bc Merge pull request #24166 from jayeshka/pkgng-states-unit-test
    • 2234bb0 adding states/pkgng unit test case.
  • PR #24189: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-27T20:26:31Z

    • PR #24178: (rallytime) Backport #24118 to 2014.7, too.
    • PR #24159: (rallytime) Fill out modules/keystone.py CLI Examples
    • PR #24158: (rallytime) Fix test_valid_docs test for tls module
    • PR #24118: (trevor-h) removed deprecated pymongo usage
    • 9fcda79 Merge pull request #24189 from basepi/merge-forward-2015.5
    • 8839e9c Merge remote-tracking branch 'upstream/2014.7' into merge-forward-2015.5
    • 9d7331c Merge pull request #24178 from rallytime/bp-24118
      • e2217a0 removed deprecated pymongo usage as no longer functional with pymongo > 3.x
    • 4e8c503 Merge pull request #24159 from rallytime/keystone_doc_examples
      • dadac8d Fill out modules/keystone.py CLI Examples
    • fc10ee8 Merge pull request #24158 from rallytime/fix_doc_error
      • 49a517e Fix test_valid_docs test for tls module
  • PR #24181: (jtand) Fixed error where file was evaluated as a symlink in test_absent @ 2015-05-27T18:26:28Z

    • 2303dec Merge pull request #24181 from jtand/file_test
    • 5f0e601 Fixed error where file was evaluated as a symlink in test_absent
  • PR #24180: (terminalmage) Skip libvirt tests if not running as root @ 2015-05-27T18:18:47Z

    • a162768 Merge pull request #24180 from terminalmage/fix-libvirt-test
    • 72e7416 Skip libvirt tests if not running as root
  • PR #24165: (jayeshka) adding states/portage_config unit test case. @ 2015-05-27T17:15:08Z

    • 1fbc5b2 Merge pull request #24165 from jayeshka/portage_config-states-unit-test
    • 8cf1505 adding states/portage_config unit test case.
  • PR #24164: (jayeshka) adding states/pecl unit test case. @ 2015-05-27T17:14:26Z

    • 4747856 Merge pull request #24164 from jayeshka/pecl-states-unit-test
    • 563a5b3 adding states/pecl unit test case.
  • PR #24160: (The-Loeki) small enhancement to data module; pop() @ 2015-05-27T17:03:10Z

    • cdaaa19 Merge pull request #24160 from The-Loeki/patch-1
    • 2175ff3 doc & merge fix
    • eba382c small enhancement to data module; pop()
  • PR #24153: (techhat) Batch mode sometimes improperly builds lists of minions to process @ 2015-05-27T16:21:53Z

    • 4a8dbc7 Merge pull request #24153 from techhat/batchlist
    • 467ba64 Make sure that minion IDs are strings
  • PR #24167: (jayeshka) adding states/pagerduty unit test case. @ 2015-05-27T16:14:01Z

    • ed8ccf5 Merge pull request #24167 from jayeshka/pagerduty-states-unit-test
    • 1af8c83 adding states/pagerduty unit test case.
  • PR #24156: (basepi) [2015.5] Merge forward from 2014.7 to 2015.5 @ 2015-05-27T15:05:01Z

    • ISSUE #23464: (tibold) cmd_iter_no_block() blocks | refs: #24093
    • PR #24125: (hvnsweeting) Fix rabbitmq test mode
    • PR #24093: (msteed) Make LocalClient.cmd_iter_no_block() not block
    • PR #24008: (davidjb) Correct reST formatting for states.cmd documentation
    • PR #23933: (jacobhammons) sphinx saltstack2 doc theme
    • b9507d1 Merge pull request #24156 from basepi/merge-forward-2015.5
    • e52b5ab Remove stray >>>>>
    • 7dfbd92 Merge remote-tracking branch 'upstream/2014.7' into merge-forward-2015.5
      • c0d32e0 Merge pull request #24125 from hvnsweeting/fix-rabbitmq-test-mode
        • 71862c6 enhance log
        • 28e2594 change according to new output of rabbitmq module functions
        • cd0212e processes and returns better output for rabbitmq module
      • 39a8f30 Merge pull request #24093 from msteed/issue-23464
        • fd35903 Fix failing test
        • 41b344c Make LocalClient.cmd_iter_no_block() not block
      • 5bffd30 Merge pull request #24008 from davidjb/2014.7
        • 8b8d029 Correct reST formatting for documentation
      • 1aa0420 Merge pull request #23933 from jacobhammons/2014.7
      • a3613e6 removed numbering from doc TOC
      • 78b737c removed 2015.* release from release notes, updated index page to remove PDF/epub links
      • e867f7d Changed build settings to use saltstack2 theme and update release versions.
      • 81ed9c9 sphinx saltstack2 doc theme
  • PR #24145: (jfindlay) attempt to decode win update package @ 2015-05-26T23:20:20Z

    • ISSUE #24102: (bormotov) win_update encondig problems | refs: #24145
    • 05745fa Merge pull request #24145 from jfindlay/win_update_encoding
    • cc5e17e attempt to decode win update package
  • PR #24123: (kiorky) fix service enable/disable change @ 2015-05-26T21:24:19Z

    • ISSUE #24122: (kiorky) service.dead is no more stateful: services does not handle correctly enable/disable change state | refs: #24123
    • 7024789 Merge pull request #24123 from makinacorpus/ss
    • 2e2e1d2 fix service enable/disable change
  • PR #24146: (rallytime) Fixes the boto_vpc_test failure on CentOS 5 tests @ 2015-05-26T20:15:19Z

    • 51c3cec Merge pull request #24146 from rallytime/fix_centos_boto_failure
    • ac0f97d Fixes the boto_vpc_test failure on CentOS 5 tests
  • PR #24144: (twangboy) Compare Keys ignores all newlines and carriage returns @ 2015-05-26T19:25:48Z

    • ISSUE #24052: (twangboy) v2015.5.1 Changes the way it interprets the minion_master.pub file
    • ISSUE #23566: (rks2286) Salt-cp corrupting the file after transfer to minion
    • PR #23740: (jfindlay) Binary write | refs: #24144
    • 1c91a21 Merge pull request #24144 from twangboy/fix_24052
    • c197b41 Compare Keys removing all newlines and carriage returns
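
The fix makes the key comparison insensitive to line endings, so a minion_master.pub written with CRLF on Windows still matches the key the master sent. A minimal sketch of the idea, with hypothetical helper names (not the actual salt code):

    # Illustrative only: normalize away line endings before comparing keys.
    def _normalize(key):
        return key.replace('\r', '').replace('\n', '')

    def keys_match(stored, received):
        return _normalize(stored) == _normalize(received)

    print(keys_match('ssh-rsa AAAB3Nza\r\n', 'ssh-rsa AAAB3Nza\n'))  # True
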
  • PR #24139: (rallytime) Backport #24118 to 2015.5 @ 2015-05-26T18:24:27Z

    • PR #24118: (trevor-h) removed deprecated pymongo usage
    • 0841667 Merge pull request #24139 from rallytime/bp-24118
    • 4bb519b removed deprecated pymongo usage as no longer functional with pymongo > 3.x
  • PR #24138: (rallytime) Backport #24116 to 2015.5 @ 2015-05-26T18:23:51Z

    • PR #24116: (awdrius) Fixed typo in chown username (ending dot) that fails the command. | refs: #24138
    • 742eca2 Merge pull request #24138 from rallytime/bp-24116
    • 7f08641 Fixed typo in chown username (ending dot) that fails the command.
  • PR #24137: (rallytime) Backport #24105 to 2015.5 @ 2015-05-26T18:23:40Z

    • PR #24105: (cedwards) Updated some beacon-specific documentation formatting | refs: #24137
    • e01536d Merge pull request #24137 from rallytime/bp-24105
    • f0778a0 Updated some beacon-specific documentation formatting
  • PR #24136: (rallytime) Backport #24104 to 2015.5 @ 2015-05-26T15:58:47Z

    • ISSUE #23364: (pruiz) Unable to destroy host using proxmox cloud: There was an error destroying machines: 501 Server Error: Method 'DELETE /nodes/pmx1/openvz/openvz/100' not implemented
    • PR #24104: (pruiz) Only try to stop a VM if it's not already stopped. (fixes #23364) | refs: #24136
    • 89cdf97 Merge pull request #24136 from rallytime/bp-24104
    • c538884 Only try to stop a VM if it's not already stopped. (fixes #23364)
  • PR #24135: (rallytime) Backport #24083 to 2015.5 @ 2015-05-26T15:58:27Z

    • PR #24083: (swdream) fix code block syntax | refs: #24135
    • 67c4373 Merge pull request #24135 from rallytime/bp-24083
    • e1d06f9 fix code block syntax
  • PR #24131: (jayeshka) adding states/mysql_user unit test case @ 2015-05-26T15:58:10Z

    • a83371e Merge pull request #24131 from jayeshka/mysql_user-states-unit-test
    • ed1ef69 adding states/mysql_user unit test case
  • PR #24130: (jayeshka) adding states/ntp unit test case @ 2015-05-26T15:57:29Z

    • 1dc1d2a Merge pull request #24130 from jayeshka/ntp-states-unit-test
    • ede4a9f adding states/ntp unit test case
  • PR #24128: (jayeshka) adding states/openstack_config unit test case @ 2015-05-26T15:56:08Z

    • 3943417 Merge pull request #24128 from jayeshka/openstack_config-states-unit-test
    • ca09e0f adding states/openstack_config unit test case
  • PR #24127: (jayeshka) adding states/npm unit test case @ 2015-05-26T15:55:18Z

    • 23f25c4 Merge pull request #24127 from jayeshka/npm-states-unit-test
    • c3ecabb adding states/npm unit test case
  • PR #24077: (anlutro) Change how state_verbose output is filtered @ 2015-05-26T15:41:11Z

    • ISSUE #24009: (hvnsweeting) state_verbose False summary is wrong | refs: #24077
    • 07488a4 Merge pull request #24077 from alprs/fix-outputter_highstate_nonverbose_count
    • 7790408 Change how state_verbose output is filtered
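
The intent of the change, as suggested by issue #24009: with state_verbose: False the outputter should hide clean (successful, unchanged) states from the display while still counting them in the summary, which the previous filtering presumably broke by dropping them before the totals were computed. A simplified, hypothetical sketch of that flow (the real logic lives in the highstate outputter):

    # Illustrative only: count every state, display only the interesting ones.
    def filter_and_count(states, state_verbose=False):
        shown, succeeded, failed = [], 0, 0
        for tag, ret in states.items():
            if ret['result']:
                succeeded += 1      # counted even when hidden from display
            else:
                failed += 1
            if state_verbose or not ret['result'] or ret['changes']:
                shown.append(tag)   # failures and changes are always shown
        return shown, succeeded, failed
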
  • PR #24119: (jfindlay) Update contrib docs @ 2015-05-26T15:37:01Z

    • 224820f Merge pull request #24119 from jfindlay/update_contrib_docs
    • fa2d411 update example release branch in contrib docs
    • a0b76b5 clarify git rebase instructions
    • 3517e00 fix contribution docs link typos
    • 651629c backport dev contrib doc updates to 2015.5
  • PR #23928: (joejulian) Add the ability to replace existing certificates @ 2015-05-25T19:47:26Z

    • 5488c4a Merge pull request #23928 from joejulian/2015.5_tls_module_replace_existing
    • 4a4cbdd Add the ability to replace existing certificates
  • PR #24078: (jfindlay) if a charmap is not supplied, set it to the codeset @ 2015-05-25T19:39:19Z

    • ISSUE #23221: (Reiner030) Debian Jessie: locale.present not working again | refs: #24078
    • dd90ef0 Merge pull request #24078 from jfindlay/locale_charmap
    • 5eb97f0 if a charmap is not supplied, set it to the codeset
  • PR #24088: (jfindlay) pkg module integration tests @ 2015-05-25T19:39:02Z

    • 9cec5d3 Merge pull request #24088 from jfindlay/pkg_tests
    • f1bd5ec adding pkg module integration tests
    • 739b2ef rework yumpkg refresh_db so args are not mandatory
  • PR #24089: (jfindlay) allow override of binary file mode on windows @ 2015-05-25T19:38:44Z

    • ISSUE #24052: (twangboy) v2015.5.1 Changes the way it interprets the minion_master.pub file
    • 517552c Merge pull request #24089 from jfindlay/binary_write
    • b2259a6 allow override of binary file mode on windows
  • PR #24092: (jfindlay) collect scattered contents edits, ensure it's a str @ 2015-05-25T19:38:10Z

    • ISSUE #23973: (mschiff) state file.managed: setting contents_pillar to a pillar which is a list throws exception instead giving descriptive error message | refs: #24092
    • 121ab9f Merge pull request #24092 from jfindlay/file_state
    • cfa0f13 collect scattered contents edits, ensure it's a str
  • PR #24112: (The-Loeki) thin_gen breaks when thinver doesn't exist @ 2015-05-25T19:37:47Z

    • 84e65de Merge pull request #24112 from The-Loeki/patch-1
    • 34646ea thin_gen breaks when thinver doesn't exist
  • PR #24108: (jayeshka) adding states/mysql_query unit test case @ 2015-05-25T12:30:48Z

    • ec509ed Merge pull request #24108 from jayeshka/mysql_query-states-unit-test
    • ec50450 adding states/mysql_query unit test case
  • PR #24110: (jayeshka) adding varnish unit test case @ 2015-05-25T12:30:21Z

    • f2e5d6c Merge pull request #24110 from jayeshka/varnish-unit-test
    • e119889 adding varnish unit test case
  • PR #24109: (jayeshka) adding states/mysql_grants unit test case @ 2015-05-25T12:29:53Z

    • 4fca2b4 Merge pull request #24109 from jayeshka/mysql_grants-states-unit-test
    • 11a93cb adding states/mysql_grants unit test case
  • PR #24028: (nleib) send a disable message to disable puppet @ 2015-05-25T04:02:11Z

    • 6b43c9a Merge pull request #24028 from nleib/2015.5
    • 15f24b4 update format of string in disabled msg
    • 7690e5b remove trailing whitespaces
    • 56a9720 Update puppet.py
    • 9686391 Update puppet.py
    • 33f3d68 send a disable message to disable puppet
  • PR #24100: (jfindlay) adding states/file unit test case @ 2015-05-24T05:17:54Z

    • PR #23963: (jayeshka) adding states/file unit test case | refs: #24100
    • 52c9aca Merge pull request #24100 from jfindlay/merge_23963
    • 7d59deb adding states/file unit test case
  • PR #24098: (galet) Systemd not recognized properly on Oracle Linux 7 @ 2015-05-24T04:07:31Z

    • ISSUE #21446: (dpheasant) check for systemd on Oracle Linux | refs: #24098
    • 0eb9f15 Merge pull request #24098 from galet/2015.5
    • 4d6ab21 Systemd not recognized properly on Oracle Linux 7
  • PR #24090: (jfindlay) adding states/mount unit test case @ 2015-05-22T23:02:57Z

    • PR #24062: (jayeshka) adding states/mount unit test case | refs: #24090
    • 8e04db7 Merge pull request #24090 from jfindlay/merge_24062
    • a81a922 adding states/mount unit test case
  • PR #24086: (rallytime) Backport #22806 to 2015.5

@ 2015-05-22T21:18:20Z

ISSUE #22574: (unicolet) error when which is not available
refs: #22806
PR #22806: (jfindlay) use cmd.run_all instead of cmd.run_stdout
refs: #24086
  • c0079f5 Merge pull request #24086 from rallytime/bp-22806
  • f728f55 use cmd.run_all instead of cmd.run_stdout
PR #24024: (jayeshka) adding states/mongodb_user unit test case

@ 2015-05-22T20:53:19Z

  • 09de253 Merge pull request #24024 from jayeshka/mongodb_user-states-unit-test
  • f31dc92 resolved errors
  • d038b1f adding states/mongodb_user unit test case
PR #24065: (kiorky) continue to fix #23883

@ 2015-05-22T18:59:21Z

ISSUE #23883: (kaithar) max_event_size seems broken
  • bfd812c Merge pull request #24065 from makinacorpus/real23883
  • 028282e continue to fix #23883

PR #24029: (kiorky) Fix providers handling

@ 2015-05-22T16:56:06Z

ISSUE #24017: (arthurlogilab) [salt-cloud openstack] TypeError: unhashable type: 'dict' on map creation
refs: #24029
  • 429adfe Merge pull request #24029 from makinacorpus/fixproviders
  • 412b39b Fix providers handling
PR #23936: (jfindlay) remove unreachable returns in file state

@ 2015-05-22T16:26:49Z

  • a42cccc Merge pull request #23936 from jfindlay/file_state
  • ac29c0c also validate file.recurse source parameter
  • 57f7388 remove unreachable returns in file state
PR #24063: (jayeshka) removed tuple index error

@ 2015-05-22T14:58:20Z

  • 8b69b41 Merge pull request #24063 from jayeshka/mount-states-module
  • b9745d5 removed tuple index error
PR #24057: (rallytime) Backport #22572 to 2015.5

@ 2015-05-22T05:36:25Z

PR #22572: (The-Loeki) Small docfix for GitPillar
refs: #24057
  • 02ac4aa Merge pull request #24057 from rallytime/bp-22572
  • 49aad84 Small docfix for GitPillar
PR #24040: (rallytime) Backport #24027 to 2015.5

@ 2015-05-21T23:43:54Z

ISSUE #23088: (wfhg) Segfault when adding a Zypper repo on SLES 11.3
refs: #24027
PR #24027: (wfhg) Add baseurl to salt.modules.zypper.mod_repo
refs: #24040
  • 82de059 Merge pull request #24040 from rallytime/bp-24027
  • 37d25d8 Added baseurl as alias for url and mirrorlist in salt.modules.zypper.mod_repo.
PR #24039: (rallytime) Backport #24015 to 2015.5

@ 2015-05-21T23:43:25Z

PR #24015: (YanChii) minor improvement of solarisips docs & fix typos
refs: #24039
  • d909781 Merge pull request #24039 from rallytime/bp-24015
  • 6bfaa94 minor improovement of solarisips docs & fix typos
PR #24038: (rallytime) Backport #19599 to 2015.5

@ 2015-05-21T23:43:10Z

ISSUE #19598: (fayetted) ssh_auth.present test=true incorectly reports changes will be made
refs: #19599
PR #19599: (fayetted) Fix ssh_auth test mode, compare lines not just key
refs: #24038
  • 4a0f254 Merge pull request #24038 from rallytime/bp-19599
  • ea00d3e Fix ssh_auth test mode, compare lines not just key
PR #24046: (rallytime) Remove key management test from digital ocean cloud tests

@ 2015-05-21T22:32:04Z

  • 42b87f1 Merge pull request #24046 from rallytime/remove_key_test
  • 1d031ca Remove key management test from digital ocean cloud tests
PR #24044: (cro) Remove spurious log message, fix typo in doc

@ 2015-05-21T22:31:49Z

  • eff54b1 Merge pull request #24044 from cro/pgjsonb
  • de06633 Remove spurious log message, fix typo in doc
PR #24001: (msteed) issue #23883

@ 2015-05-21T20:32:30Z

ISSUE #23883: (kaithar) max_event_size seems broken
  • ac32000 Merge pull request #24001 from msteed/issue-23883
  • bea97a8 issue #23883

PR #23995: (kiorky) Lxc path pre

@ 2015-05-21T17:26:03Z

  • f7fae26 Merge pull request #23995 from makinacorpus/lxc_path_pre
  • 319282a lint
  • 1dc67e5 lxc: versionadded
  • fcad7cb lxc: states improvments
  • 644bd72 lxc: more consistence for profiles
  • 139372c lxc: remove merge cruft
  • 725b046 lxc: Repair merge
PR #24032: (kartiksubbarao) Update augeas_cfg.py

@ 2015-05-21T17:03:42Z

ISSUE #16383: (interjection) salt.states.augeas.change example from docs fails with exception
refs: #24032
  • 26d6851 Merge pull request #24032 from kartiksubbarao/augeas_insert_16383
  • 3686dcd Update augeas_cfg.py
PR #24025: (jayeshka) adding timezone unit test case

@ 2015-05-21T16:50:53Z

  • 55c9245 Merge pull request #24025 from jayeshka/timezone-unit-test
  • 1ec33e2 removed assertion error
  • 16ecb28 adding timezone unit test case
PR #24023: (jayeshka) adding states/mongodb_database unit test case

@ 2015-05-21T16:49:17Z

  • e243617 Merge pull request #24023 from jayeshka/mongodb_database-states-unit-test
  • 5a9ac7e adding states/mongodb_database unit test case
PR #24022: (jayeshka) adding states/modjk_worker unit test case

@ 2015-05-21T16:48:29Z

  • b377bd9 Merge pull request #24022 from jayeshka/modjk_worker-states-unit-test
  • 05c0a98 adding states/modjk_worker unit test case
PR #24005: (msteed) issue #23776

@ 2015-05-21T01:55:34Z

ISSUE #23776: (enblde) Presence change events constantly reporting all minions as new in 2015.5
  • 701c51b Merge pull request #24005 from msteed/issue-23776
  • 62e67d8 issue #23776

PR #23996: (neogenix) iptables state generates a 0 position which is invalid in iptables cli #23950

@ 2015-05-20T22:44:27Z

ISSUE #23950: (neogenix) iptables state generates a 0 position which is invalid in iptables cli
refs: #23996
  • 17b7c0b Merge pull request #23996 from neogenix/2015.5-23950
  • ad417a5 fix for #23950
PR #23994: (rallytime) Skip the gpodder pkgrepo test for Ubuntu 15 - they don't have vivid ppa up yet

@ 2015-05-20T21:18:21Z

  • 4cb8773 Merge pull request #23994 from rallytime/skip_test_ubuntu_15
  • 9e0ec07 Skip the gpodder pkgrepo test - they don't have vivid ppa up yet

Salt 2014.7.0 Release Notes - Codename Helium

This release is the largest Salt release ever, with more features and commits than any previous release of Salt. It includes everything from the new RAET transport to major updates in Salt Cloud and the merging of Salt API into the main project.

Important

The Fedora/RHEL/CentOS salt-master package has been modified for this release. The following components of Salt have been broken out and placed into their own packages:

  • salt-syndic
  • salt-cloud
  • salt-ssh

When the salt-master package is upgraded, these components will be removed, and they will need to be manually installed.

Important

Compound matching and pillar matching have been temporarily disabled for the mine and publish modules in this release, due to the possibility of inferring pillar data using pillar glob matching. A proper fix is now in the 2014.7 branch and scheduled for the 2014.7.1 release, at which point compound matching and non-globbing pillar matching will be re-enabled.

Compound and pillar matching for normal salt commands are unaffected.

New Transport!
RAET Transport Option

This has been a HUGE amount of work, but the beta release of Salt with RAET is ready to go. RAET is a reliable queuing transport system that has been developed in partnership with a number of large enterprises to give Salt an alternative to ZeroMQ and a way to get Salt to scale well beyond tens of thousands of servers. Unlike ZeroMQ, RAET is completely asynchronous in every aspect of its operation and has been developed using the flow programming paradigm. This allows for many new capabilities to be added to Salt in the upcoming releases.

Please keep in mind that this is a beta release of RAET; we expect bugs to be worked out, performance to improve, and more in the 2015.5.0 release.

Simply stated, users running Salt with RAET should expect some hiccups as we hammer out the update. This is a BETA release of Salt RAET.

For information about how to use Salt with RAET please see the tutorial.

Salt SSH Enhancements

Salt SSH has just entered a new league, with substantial updates and improvements to make salt-ssh more reliable and easier than ever! From new features like the ansible roster and fileserver backends, to the new pypi salt-ssh installer, to lowered deps and a swath of bugfixes, salt-ssh is basically reborn!

Install salt-ssh Using pip

Salt-ssh is now pip-installable!

https://pypi.python.org/pypi/salt-ssh/

Pip will bring in all of the required deps, and while some deps are compiled, they all include pure python implementations, meaning that any compile errors which may be seen can be safely ignored.

pip install salt-ssh
Fileserver Backends

Salt-ssh can now use the salt fileserver backend system. This allows for the gitfs, hgfs, s3, and many more ways to centrally store states to be easily used with salt-ssh. This also allows for a distributed team to easily use a centralized source.
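
For example, the same master-side configuration that enables gitfs for a regular master now applies to salt-ssh; a minimal sketch (the remote URL is hypothetical):

# Master config: serve states from git alongside the default roots backend
fileserver_backend:
  - git
  - roots

gitfs_remotes:
  - https://github.com/example/salt-states.git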

Saltfile Support

The new Saltfile system makes it easy to maintain a user-specific, custom extended configuration.
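
For instance, a Saltfile can carry per-user salt-ssh defaults; a sketch (the option values are hypothetical):

# Saltfile: options under the command name are applied as CLI defaults
salt-ssh:
  config_dir: ~/.salt
  max_procs: 30
  wipe_ssh: True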

Ext Pillar

Salt-ssh can now use the external pillar system, making it easier than ever to use salt-ssh with teams.
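
A sketch using the git external pillar syntax of this era (the branch and URL are hypothetical):

# Master config: pull pillar data from a git repository
ext_pillar:
  - git: master https://github.com/example/pillar.git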

No More sshpass

Thanks to the enhancements in the salt vt system, salt-ssh no longer requires sshpass to send passwords to ssh. This also makes the manipulation of ssh calls substantially more flexible, allowing for intercepting ssh calls in a much more fluid way.

Pure Python Shim

The salt-ssh call originally used a shell script to discover what version of python to execute with and determine the state of the ssh code deployment. This shell script has been replaced with a pure python version making it easy to increase the capability of the code deployment without causing platform inconsistency issues with different shell interpreters.

Custom Module Delivery

Custom modules are now seamlessly delivered. This makes the deployment of custom grains, states, execution modules and returners a seamless process.

CP Module Support

Salt-ssh now makes simple file transfers easier than ever! The cp module allows files to be conveniently sent from the salt fileserver down to target systems.

More Thin Directory Options

Salt ssh functions by copying a subset of the salt code, or salt thin, down to the target system. In the past this was always transferred to /tmp/.salt and cached there for subsequent commands.

Now, salt thin can be sent to a random directory and removed when the call is complete with the -W option. The new -W option still uses a static location but will clean up that location when finished.

The default salt thin location is now user defined, allowing multiple users to cleanly access the same systems.

State System Enhancements
New Imperative State Keyword "Listen"

The new listen and listen_in keywords allow for completely imperative states by calling the mod_watch() routine after all states have run instead of re-ordering the states.
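
A minimal sketch of listen (the state IDs, paths, and service name are hypothetical); the service is only restarted, via mod_watch() at the end of the run, if the config file changed:

apache-config:
  file.managed:
    - name: /etc/apache2/apache2.conf
    - source: salt://apache/apache2.conf

apache-service:
  service.running:
    - name: apache2
    - listen:
      - file: apache-config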

Mod Aggregate Runtime Manipulator

The new mod_aggregate system allows for the state system to rewrite the state data during execution. This allows for state definitions to be aggregated dynamically at runtime.

The best example is found in the pkg state. If mod_aggregate is turned on, then when the first pkg state is reached, the state system will scan all of the other running states for pkg states and take all other packages set for install and install them all at once in the first pkg state.
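
A sketch, assuming aggregation is enabled in the minion config; these two pkg states would be installed together in a single transaction when the first one runs:

# Minion config (assumption): state_aggregate: True
vim:
  pkg.installed: []

tmux:
  pkg.installed: []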

These runtime modifications make it easy to run groups of states together. In future versions, we hope to fill out the mod_aggregate system to build in more and more optimizations.

For more documentation on mod_aggregate, see the documentation.

New Requisites: onchanges and onfail

The new onchanges and onchanges_in requisites make a state apply only if there are changes in the required state. This is useful to execute post hooks after changes occur on a system.

The other new requisites, onfail, and onfail_in, allow for a state to run in reaction to the failure of another state.
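
A minimal sketch of both requisites (the state IDs, paths, and commands are hypothetical); the reload runs only when the config file actually changes, and the alert runs only if managing the file fails:

myapp-config:
  file.managed:
    - name: /etc/myapp/config.conf
    - source: salt://myapp/config.conf

reload-myapp:
  cmd.run:
    - name: service myapp reload
    - onchanges:
      - file: myapp-config

alert-failure:
  cmd.run:
    - name: /usr/local/bin/notify-admins
    - onfail:
      - file: myapp-config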

For more information about these new requisites, see the requisites documentation.

Global onlyif and unless

The onlyif and unless options can now be used for any state declaration.
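
For example (a sketch; the path and guard command are hypothetical), a non-cmd state can now be gated directly:

/etc/motd:
  file.managed:
    - source: salt://motd
    - unless: grep -q 'do-not-manage' /etc/motd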

Use names to expand and override values

The names declaration in Salt's state system can now override or add values to the expanded data structure. For example:

my_users:
  user.present:
    - names:
      - larry
      - curly
      - moe:
        - shell: /bin/zsh
        - groups:
          - wheel
    - shell: /bin/bash
Major Features
Scheduler Additions

The Salt scheduler system has received MAJOR enhancements, allowing for cron-like scheduling and much more granular timing routines. See here for more info.
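
A minion-config sketch of the cron-like syntax (the job name and schedule are hypothetical; cron expressions require the croniter library):

schedule:
  nightly-highstate:
    function: state.highstate
    cron: '0 2 * * *'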

Red Hat 7 Family Support

All the needed additions have been made to run Salt on RHEL 7 and derived OSes like CentOS and Scientific.

Fileserver Backends in salt-call

Fileserver backends like gitfs can now be used without a salt master! Just add the fileserver backend configuration to the minion config and execute salt-call. This has been a much-requested feature and we are happy to finally bring it to our users.
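
A minion-config sketch (the remote URL is hypothetical); with this in place, salt-call --local can apply states served straight from git:

fileserver_backend:
  - git

gitfs_remotes:
  - https://github.com/example/salt-states.git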

Amazon Execution Modules

This release adds an entire family of execution modules further enhancing Salt's Amazon Cloud support.

LXC Runner Enhancements

BETA The Salt LXC management system has received a number of enhancements which make running an LXC cloud entirely from Salt an easy proposition.

Next Gen Docker Management

The Docker support in Salt has been increased at least tenfold. The Docker API is now completely exposed, and Salt ships with Docker data tracking systems which make automating Docker deployments very easy.

Peer System Performance Improvements

The peer system communication routines have been refined to make the peer system substantially faster.

SDB

Encryption at rest for configs

GPG Renderer

Encrypted pillar at rest

OpenStack Expansion

Lots of new OpenStack stuff

Queues System

Runners can now turn external queue systems into Salt events

Multi Master Failover Additions

Connecting to multiple masters is more dynamic than ever

Chef Execution Module

Managing Chef with Salt just got even easier!

salt-api Project Merge

The salt-api project has been merged into Salt core and is now available as part of the regular salt-master package install. No API changes were made; the salt-api script and init scripts remain intact.

salt-api has always provided Yet Another Pluggable Interface to Salt (TM) in the form of "netapi" modules. These are modules that bind to a port and start a service. Like many of Salt's other module types, netapi modules often have library and configuration dependencies. See the documentation for each module for instructions.

Synchronous and Asynchronous Execution of Runner and Wheel Modules

salt.runner.RunnerClient and salt.wheel.WheelClient have both gained complementary cmd_sync and cmd_async methods, allowing for synchronous and asynchronous execution of any Runner or Wheel module function, all protected using Salt's external authentication system. salt-api benefits from this addition as well.

rest_cherrypy Additions

The rest_cherrypy netapi module provides the main REST API for Salt.
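
A master-config sketch for enabling it (the port and certificate paths are hypothetical; the module itself requires CherryPy):

rest_cherrypy:
  port: 8000
  ssl_crt: /etc/pki/tls/certs/localhost.crt
  ssl_key: /etc/pki/tls/certs/localhost.key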

Web Hooks

This release of course includes the Web Hook additions from the most recent salt-api release, which allow external services to signal actions within a Salt infrastructure. External services such as Amazon SNS, Travis-CI, or GitHub, as well as internal services that cannot or should not run a Salt minion daemon, can be used as first-class components in Salt's rich orchestration capabilities.

The raw HTTP request body is now available in the event data. This is sometimes required for checking an HMAC signature in order to verify an HTTP request. As an example, Amazon or GitHub requests are signed this way.

Generating and Accepting Minion Keys

The /keys convenience URL generates a public and private key for a minion, automatically pre-accepts the public key on the Salt Master, and returns both keys as a tarball for download.

This allows for easily bootstrapping the key on a new minion with a single HTTP call, such as with a Kickstart script, all using regular shell tools.

curl -sS http://salt-api.example.com:8000/keys \
        -d mid=jerry \
        -d username=kickstart \
        -d password=kickstart \
        -d eauth=pam \
        -o jerry-salt-keys.tar
Fileserver Backend Enhancements

All of the fileserver backends have been overhauled to be faster, lighter, and more reliable. The VCS backends (gitfs, hgfs, and svnfs) have also received a lot of new features.

Additionally, most config parameters for the VCS backends can now be configured on a per-remote basis, allowing for global config parameters to be overridden for a specific gitfs/hgfs/svnfs remote.

New gitfs Features
Pygit2 and Dulwich

In addition to supporting GitPython, support for pygit2 (0.20.3 and newer) and dulwich have been added. Provided a compatible version of pygit2 is installed, it will now be the default provider. The config parameter gitfs_provider has been added to allow one to choose a specific provider for gitfs.
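
A one-line master-config sketch pinning the provider explicitly:

gitfs_provider: pygit2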

Mountpoints

Prior to this release, to serve a file from gitfs at a salt fileserver URL of salt://foo/bar/baz.txt, it was necessary to ensure that the parent directories existed in the repository. A new config parameter gitfs_mountpoint allows gitfs remotes to be exposed starting at a user-defined salt:// URL.
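
A sketch matching the example above (the remote URL is hypothetical); a baz.txt at the root of this repository would be served as salt://foo/bar/baz.txt:

gitfs_remotes:
  - https://github.com/example/repo.git:
    - mountpoint: salt://foo/bar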

Environment Whitelisting/Blacklisting

By default, gitfs will expose all branches and tags as Salt fileserver environments. Two new config parameters, gitfs_env_whitelist and gitfs_env_blacklist, allow more control over which branches and tags are exposed. More detailed information on how these two options work can be found in the Gitfs Walkthrough.
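
A whitelist sketch (the branch names are hypothetical); entries may be exact names, globs, or regular expressions:

gitfs_env_whitelist:
  - base
  - v1.*
  - 'mybranch\d+'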

Expanded Authentication Support

As of pygit2 0.20.3, both http(s) and SSH key authentication are supported, and Salt now also supports both authentication methods when using pygit2. Keep in mind that pygit2 0.20.3 is not yet available on many platforms, so those who had been using authenticated git repositories with a passphraseless key should stick to GitPython if a new enough pygit2 is not yet available for the platform on which the master is running.
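
A per-remote sketch of both authentication methods under pygit2 (the URLs, usernames, passwords, and key paths are hypothetical):

gitfs_remotes:
  - git@github.com:example/private-states.git:
    - pubkey: /root/.ssh/gitfs_id_rsa.pub
    - privkey: /root/.ssh/gitfs_id_rsa
  - https://example.com/private-states.git:
    - user: gitfsuser
    - password: mypassword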

A full explanation of how to use authentication can be found in the Gitfs Walkthrough.

New hgfs Features
Mountpoints

This feature works exactly like its gitfs counterpart. The new config parameter is called hgfs_mountpoint.

Environment Whitelisting/Blacklisting

This feature works exactly like its gitfs counterpart. The new config parameters are called hgfs_env_whitelist and hgfs_env_blacklist.

New svnfs Features
Mountpoints

This feature works exactly like its gitfs counterpart. The new config parameter is called svnfs_mountpoint.

Environment Whitelisting/Blacklisting

This feature works exactly like its gitfs counterpart. The new config parameters are called svnfs_env_whitelist and svnfs_env_blacklist.

Configurable Trunk/Branches/Tags Paths

Prior to this release, the paths where trunk, branches, and tags were located could only be in directories named "trunk", "branches", and "tags" directly under the root of the repository. Three new config parameters (svnfs_trunk, svnfs_branches, and svnfs_tags) allow SVN repositories which are laid out differently to be used with svnfs.

New minionfs Features
Mountpoint

This feature works exactly like its gitfs counterpart. The new config parameter is called minionfs_mountpoint. The one major difference is that, as minionfs doesn't use multiple remotes (it just serves up files pushed to the master using cp.push), there is no such thing as a per-remote configuration for minionfs_mountpoint.

Changing the Saltenv from Which Files are Served

A new config parameter (minionfs_env) allows minionfs files to be served from a Salt fileserver environment other than base.

Minion Whitelisting/Blacklisting

By default, minionfs will expose the pushed files from all minions. Two new config parameters, minionfs_whitelist and minionfs_blacklist, allow minionfs to be restricted to serve files from only the desired minions.
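
A master-config sketch combining the new minionfs options (the minion IDs are hypothetical; file_recv must be enabled for minions to push files with cp.push):

file_recv: True

minionfs_mountpoint: salt://minionfs
minionfs_env: dev
minionfs_whitelist:
  - web*
minionfs_blacklist:
  - web-decommissioned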

Pyobjects Renderer

Salt now ships with the Pyobjects Renderer, which allows States to be constructed using pure Python with an idiomatic object interface.
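
A minimal pyobjects SLS sketch (the package and service names are hypothetical); the with block expresses a require relationship between the two states:

#!pyobjects

with Pkg.installed("nginx"):
    Service.running("nginx", enable=True)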

New Modules

In addition to the Amazon modules mentioned above, several other new execution modules are included in this release.

New External Pillars
Salt Call Change

When used with a returner, salt-call now contacts a master if --local is not specified.

Deprecations
salt.modules.virtualenv_mod
  • Removed deprecated memoize function from salt/utils/__init__.py (deprecated)
  • Removed deprecated no_site_packages argument from create function (deprecated)
  • Removed deprecated check_dns argument from minion_config and apply_minion_config functions (deprecated)
  • Removed deprecated OutputOptionsWithTextMixIn class from salt/utils/parsers.py (deprecated)
  • Removed the following deprecated functions from salt/modules/ps.py: physical_memory_usage, virtual_memory_usage, cached_physical_memory, and physical_memory_buffers
  • Removed the deprecated cloud arguments vm_config and vm_config_path from the cloud_config function in salt/config.py
  • Removed deprecated libcloud_version function from salt/cloud/libcloudfuncs.py (deprecated)
  • Removed deprecated CloudConfigMixIn class from salt/utils/parsers.py (deprecated)

Salt 2014.7.1 Release Notes

release:2015-01-12

Version 2014.7.1 is a bugfix release for 2014.7.0. The changes include:

  • Fixed gitfs serving symlinks in file.recurse states (issue 17700)
  • Fixed holding of multiple packages (YUM) when combined with version pinning (issue 18468)
  • Fixed use of Jinja templates in masterless mode with non-roots fileserver backend (issue 17963)
  • Re-enabled pillar and compound matching for mine and publish calls. Note that pillar globbing is still disabled for those modes, for security reasons. (issue 17194)
  • Fix for tty: True in salt-ssh (issue 16847)
  • Fix for supervisord states when supervisor not installed to system python (issue 18044)
  • Fix for logging when log_level='quiet' for cmd.run (issue 19479)

Salt 2014.7.2 Release Notes

release:2015-02-09

Version 2014.7.2 is a bugfix release for 2014.7.0. The changes include:

  • Fix erroneous warnings for systemd service enabled check (issue 19606)
  • Fix FreeBSD kernel module loading, listing, and persistence in kmod (issue 19715, issue 19682)
  • Allow case-sensitive npm package names in the npm state. This may break behavior for people expecting the state to lowercase their npm package names for them. The npm module was never affected by mandatory lowercasing. (issue 20329)
  • Deprecate the activate parameter for pip.install for both the module and the state. If bin_env is given and points to a virtualenv, there is no need to activate that virtualenv in a shell for pip to install to the virtualenv.
  • Fix a file-locking bug in gitfs (issue 18839)
  • Deprecated archive_user in favor of standardized user parameter in state and added group parameter.

Salt 2014.7.3 Release Notes

release:TBA

Version 2014.7.3 is a bugfix release for 2014.7.0.

Changes:

  • Multi-master minion mode no longer routes fileclient operations asymmetrically. This fixes the source of many multi-master bugs where the minion would become unresponsive from one or more masters.
  • Fix bug wherein network.iface could produce stack traces.
  • net.arp will no longer be made available unless arp is installed on the system.
  • Major performance improvements to Saltnado
  • Allow KVM module to operate under KVM itself or VMWare Fusion
  • Various fixes to the Windows installation scripts
  • Fix issue where the syndic would not correctly propagate loads to the master job cache.
  • Improve error handling on invalid /etc/network/interfaces file in salt networking modules
  • Fix bug where a response status was not checked for in fileclient.get_url
  • Enable eauth when running salt in batch mode
  • Increase timeout in Boto Route53 module
  • Fix bugs with Salt's 'tar' module option parsing
  • Fix parsing of NTP servers on Windows
  • Fix issue with blockdev tuning not reporting changes correctly
  • Update to the latest Salt bootstrap script
  • Update Linode salt-cloud driver to use either linode-python or apache-libcloud
  • Fix for s3.query function to return correct headers
  • Fix for s3.head returning None for files that exist
  • Fix the disable function in win_service module so that the service is disabled correctly
  • Fix race condition between master and minion when making a directory when both daemons are on the same host
  • Fix an issue where file.recurse would fail at the root of an svn repo when the repo has a mountpoint
  • Fix an issue where file.recurse would fail at the root of an hgfs repo when the repo has a mountpoint
  • Fix an issue where file.recurse would fail at the root of a gitfs repo when the repo has a mountpoint
  • Add status.master capability for Windows.
  • Various fixes to ssh_known_hosts
  • Various fixes to states.network bonding for Debian
  • The debian_ip.get_interfaces module no longer removes nameservers.
  • Better integration between grains.virtual and systemd-detect-virt and virt-what
  • Fix traceback in sysctl.present state output
  • Fix for issue where mount.mounted would fail when superopts were not a part of mount.active (extended=True). Also includes various mount.mounted fixes for Solaris and FreeBSD.
  • Fix error where datetimes were not correctly safeguarded before being passed into msgpack.
  • Fix file.replace regressions. If the pattern is not found, and if dry run is False, and if backup is False, and if a pre-existing file exists with extension .bak, then that backup file will be overwritten. This backup behavior is a result of how fileinput works. Fixing it requires either passing through the file twice (the first time only to search for content and set a flag), or rewriting file.replace so it doesn't use fileinput
  • VCS fileserver fixes/optimizations
  • Catch fileserver configuration errors on master start
  • Raise errors on invalid gitfs configurations
  • set_locale when locale file does not exist (Redhat family)
  • Fix to correctly count active devices when creating an mdadm array with spares
  • Fix to correctly target minions in batch mode
  • Support ssh:// urls using the gitfs dulwich backend
  • New fileserver runner
  • Fix various bugs with argument parsing to the publish module.
  • Fix disk.usage for Synology OS
  • Fix issue with tags occurring twice with docker.pulled
  • Fix incorrect key error in SMTP returner
  • Fix condition which would remount loopback filesystems on every state run
  • Remove requisites from listens after they are called in the state system
  • Make system implementation of service.running aware of legacy service calls
  • Fix issue where publish.publish would not handle duplicate responses gracefully.
  • Accept Kali Linux for aptpkg salt execution module
  • Fix bug where cmd.which could not handle a dirname as an argument
  • Fix issue in ps.pgrep where exceptions were thrown on Windows.

Known issues:

  • In multimaster mode, a minion may become temporarily unresponsive if modules or pillars are refreshed at the same time that one or more masters are down. This can be worked around by setting 'auth_timeout' and 'auth_tries' down to shorter periods.

Salt 2014.7.4 Release Notes

release:2015-03-30

Version 2014.7.4 is a bugfix release for 2014.7.0.

This is a security release. The security issues fixed have only been present since 2014.7.0, and only users of the two listed modules are vulnerable. The following CVEs have been resolved:

  • CVE-2015-1838 SaltStack: insecure /tmp file handling in salt/modules/serverdensity_device.py
  • CVE-2015-1839 SaltStack: insecure /tmp file handling in salt/modules/chef.py

Changes:

  • Multi-master minion mode no longer routes fileclient operations asymmetrically. This fixes the source of many multi-master bugs where the minion would become unresponsive from one or more masters.
  • Fix bug wherein network.iface could produce stack traces.
  • net.arp will no longer be made available unless arp is installed on the system.
  • Major performance improvements to Saltnado
  • Allow KVM module to operate under KVM itself or VMWare Fusion
  • Various fixes to the Windows installation scripts
  • Fix issue where the syndic would not correctly propagate loads to the master job cache.
  • Improve error handling on invalid /etc/network/interfaces file in salt networking modules
  • Fix bug where a response status was not checked for in fileclient.get_url
  • Enable eauth when running salt in batch mode
  • Increase timeout in Boto Route53 module
  • Fix bugs with Salt's 'tar' module option parsing
  • Fix parsing of NTP servers on Windows
  • Fix issue with blockdev tuning not reporting changes correctly
  • Update to the latest Salt bootstrap script
  • Update Linode salt-cloud driver to use either linode-python or apache-libcloud
  • Fix for s3.query function to return correct headers
  • Fix for s3.head returning None for files that exist
  • Fix the disable function in win_service module so that the service is disabled correctly
  • Fix race condition between master and minion when making a directory when both daemons are on the same host
  • Fix an issue where file.recurse would fail at the root of an svn repo when the repo has a mountpoint
  • Fix an issue where file.recurse would fail at the root of an hgfs repo when the repo has a mountpoint
  • Fix an issue where file.recurse would fail at the root of a gitfs repo when the repo has a mountpoint
  • Add status.master capability for Windows.
  • Various fixes to ssh_known_hosts
  • Various fixes to states.network bonding for Debian
  • The debian_ip.get_interfaces module no longer removes nameservers.
  • Better integration between grains.virtual and systemd-detect-virt and virt-what
  • Fix traceback in sysctl.present state output
  • Fix for issue where mount.mounted would fail when superopts were not a part of mount.active (extended=True). Also includes various mount.mounted fixes for Solaris and FreeBSD.
  • Fix error where datetimes were not correctly safeguarded before being passed into msgpack.
  • Fix file.replace regressions. If the pattern is not found, and if dry run is False, and if backup is False, and if a pre-existing file exists with extension .bak, then that backup file will be overwritten. This backup behavior is a result of how fileinput works. Fixing it requires either passing through the file twice (the first time only to search for content and set a flag), or rewriting file.replace so it doesn't use fileinput
  • VCS fileserver fixes/optimizations
  • Catch fileserver configuration errors on master start
  • Raise errors on invalid gitfs configurations
  • set_locale when locale file does not exist (Redhat family)
  • Fix to correctly count active devices when creating an mdadm array with spares
  • Fix to correctly target minions in batch mode
  • Support ssh:// urls using the gitfs dulwich backend
  • New fileserver runner
  • Fix various bugs with argument parsing to the publish module.
  • Fix disk.usage for Synology OS
  • Fix issue with tags occurring twice with docker.pulled
  • Fix incorrect key error in SMTP returner
  • Fix condition which would remount loopback filesystems on every state run
  • Remove requisites from listens after they are called in the state system
  • Make system implementation of service.running aware of legacy service calls
  • Fix issue where publish.publish would not handle duplicate responses gracefully.
  • Accept Kali Linux for aptpkg salt execution module
  • Fix bug where cmd.which could not handle a dirname as an argument
  • Fix issue in ps.pgrep where exceptions were thrown on Windows.

Known issues:

  • In multimaster mode, a minion may become temporarily unresponsive if modules or pillars are refreshed at the same time that one or more masters are down. This can be worked around by setting 'auth_timeout' and 'auth_tries' down to shorter periods.
  • There are known issues with batch mode operating on the incorrect number of minions. This bug can be patched with the change in Pull Request #22464.
  • The fun, state, and unless keywords are missing from the state internals, which can cause problems running some states. This bug can be patched with the change in Pull Request #22365.

Salt 2014.7.5 Release Notes

release:2015-04-16

Version 2014.7.5 is a bugfix release for 2014.7.0.

Changes:

  • Fixed a key error bug in salt-cloud
  • Updated man pages to better match documentation
  • Fixed bug concerning high CPU usage with salt-ssh
  • Fixed bugs with remounting cvfs and fuse filesystems
  • Fixed bug with allowing requisite tracking of entire sls files
  • Fixed bug with aptpkg.mod_repo returning OK even if apt-add-repository fails
  • Increased frequency of ssh terminal output checking
  • Fixed malformed locale string in localemod module
  • Fixed checking of available version of package when accept_keywords were changed
  • Fixed bug to make git.latest work with empty repositories
  • Added **kwargs to service.mod_watch which removes warnings about enable and __reqs__ not being supported by the function
  • Improved state comments to not grow so quickly on failed requisites
  • Added force argument to service to trigger force_reload
  • Fixed bug to handle pkgrepo keyids that have been converted to int
  • Fixed module.portage_config bug with appending accept_keywords
  • Fixed bug to correctly report disk usage on windows minion
  • Added the ability to specify key prefix for S3 ext_pillar
  • Fixed issues with batch mode operating on the incorrect number of minions
  • Fixed a bug with the proxmox cloud provider stacktracing on disk definition
  • Fixed a bug with the changes dictionary in the file state
  • Fixed the TCP keep alive settings to work better with SREQ caching
  • Fixed many bugs within the iptables state and module
  • Fixed bug with states by adding fun, state, and unless to the state runtime internal keywords listing
  • Added ability to eAuth against Active Directory
  • Fixed some salt-ssh issues when running on Fedora 21
  • Fixed grains.get_or_set_hash to work with multiple entries under same key
  • Added better explanations and more examples of how the Reactor calls functions to docs
  • Fixed bug to not pass ex_config_drive to libcloud unless it's explicitly enabled
  • Fixed bug with pip.install on windows
  • Fixed bug where puppet.run always returns a 0 retcode
  • Fixed race condition bug with minion scheduling via pillar
  • Made efficiency improvements and bug fixes to the windows installer
  • Updated environment variables to fix bug with pygit2 when running salt as non-root user
  • Fixed cas behavior on data module -- data.cas was not saving changes
  • Fixed GPG rendering error
  • Fixed strace error in virt.query
  • Fixed stacktrace when running chef-solo command
  • Fixed possible bug wherein uncaught exceptions seem to make zmq3 tip over when threading is involved
  • Fixed argument passing to the reactor
  • Fixed glibc caching to prevent bug where salt-minion getaddrinfo in dns_check() never got updated nameservers

Known issues:

  • In multimaster mode, a minion may become temporarily unresponsive if modules or pillars are refreshed at the same time that one or more masters are down. This can be worked around by setting 'auth_timeout' and 'auth_tries' down to shorter periods.

Salt 2014.7.6 Release Notes

release:2015-05-18

Version 2014.7.6 is a bugfix release for 2014.7.0.

This release is a security release. A minor issue was found, as cited below:

  • CVE-2015-4017 -- Certificates are not verified when connecting to server in the Aliyun and Proxmox modules

Only users of the Aliyun or Proxmox cloud modules are at risk. The vulnerability does not exist in the latest 2015.5.0 release of Salt.

Changes:

  • salt.runners.cloud.action() has changed the fun keyword argument to func. Please update any calls to this function in the cloud runner.

Extended Changelog Courtesy of Todd Stansell (https://github.com/tjstansell/salt-changelogs):

  • PR #23810: (rallytime) Backport #23757 to 2014.7 @ 2015-05-18T15:30:21Z

    • PR #23757: (clan) use abspath, do not eliminating symlinks | refs: #23810
    • aee00c8 Merge pull request #23810 from rallytime/bp-23757
    • fb32c32 use abspath, do not eliminating symlinks
  • PR #23809: (rallytime) Fix virtualport section of virt.get_nics loop @ 2015-05-18T15:30:09Z

    • ISSUE #20198: (jcftang) virt.get_graphics, virt.get_nics are broken, in turn breaking other things | refs: #23809
    • PR #21487: (rallytime) Backport #21469 to 2014.7 | refs: #23809
    • PR #21469: (vdesjardins) fixes #20198: virt.get_graphics and virt.get_nics calls in module virt | refs: #21487
    • 6b3352b Merge pull request #23809 from rallytime/virt_get_nics_fix
    • 0616fb7 Fix virtualport section of virt.get_nics loop
  • PR #23823: (gtmanfred) add link local for ipv6 @ 2015-05-17T12:48:25Z

    • 188f03f Merge pull request #23823 from gtmanfred/2014.7
    • 5ef006d add link local for ipv6
  • PR #23802: (gtmanfred) if it is ipv6 ip_to_int will fail @ 2015-05-16T04:06:59Z

    • PR #23573: (techhat) Scan all available networks for public and private IPs | refs: #23802
    • f3ca682 Merge pull request #23802 from gtmanfred/2014.7
    • 2da98b5 if it is ipv6 ip_to_int will fail
  • PR #23488: (cellscape) LXC cloud fixes @ 2015-05-15T18:09:35Z

    • ISSUE #16424: (stanvit) salt-run cloud.create fails with saltify
    • d9af0c3 Merge pull request #23488 from cellscape/lxc-cloud-fixes
    • 64250a6 Remove profile from opts after creating LXC container
    • c4047d2 Set destroy=True in opts when destroying cloud instance
    • 9e1311a Store instance names in opts when performing cloud action
    • 934bc57 Correctly pass custom env to lxc-attach
    • 7fb85f7 Preserve test=True option in cloud states
    • 9771b5a Fix detection of absent LXC container in cloud state
    • fb24f0c Report failure when failed to create/clone LXC container
    • 2d9aa2b Avoid shadowing variables in lxc module
    • 792e102 Allow to override profile options in lxc.cloud_init_interface
    • 42bd64b Return changes on successful lxc.create from salt-cloud
    • 4409eab Return correct result when creating cloud LXC container
    • 377015c Issue #16424: List all providers when creating salt-cloud instance without profile
  • PR #23748: (basepi) [2014.7] Log salt-ssh roster render errors more assertively and verbosely @ 2015-05-14T22:38:10Z

    • ISSUE #22332: (rallytime) [salt-ssh] Add a check for host in /etc/salt/roster | refs: #23748
    • 808bbe1 Merge pull request #23748 from basepi/salt-ssh.roster.host.check
    • bc53e04 Log entire exception for render errors in roster
    • 753de6a Log render errors in roster to error level
    • e01a7a9 Always let the real YAML error through
  • PR #23731: (twangboy) Fixes #22959: Trying to add a directory to an unmapped drive in windows @ 2015-05-14T21:59:14Z

    • ISSUE #22959: (highlyunavailable) Windows Salt hangs if file.directory is trying to write to a drive that doesn't exist
    • 72cf360 Merge pull request #23731 from twangboy/fix_22959
    • 88e5495 Fixes #22959: Trying to add a directory to an unmapped drive in windows
  • PR #23730: (rallytime) Backport #23729 to 2014.7 @ 2015-05-14T21:58:34Z

    • 2610195 Merge pull request #23730 from rallytime/bp-23729
    • 1877cae adding support for nested grains to grains.item
  • PR #23688: (twangboy) Added inet_pton to utils/validate/net.py for ip.set_static_ip in windows @ 2015-05-14T16:15:56Z

    • 3e9df88 Merge pull request #23688 from twangboy/fix_23415
    • 6a91169 Fixed unused-import pylint error
    • 5e25b3f fixed pylint errors
    • 1a96766 Added inet_pton to utils/validate/net.py for ip.set_static_ip in windows
  • PR #23680: (cachedout) Rename kwarg in cloud runner @ 2015-05-13T19:44:02Z

    • ISSUE #23403: (iamfil) salt.runners.cloud.action fun parameter is replaced | refs: #23680
    • 1b86460 Merge pull request #23680 from cachedout/issue_23403
    • d5986c2 Rename kwarg in cloud runner
  • PR #23674: (cachedout) Handle lists correctly in grains.list_prsesent @ 2015-05-13T18:34:58Z

    • ISSUE #23548: (kkaig) grains.list_present produces incorrect (?) output | refs: #23674
    • cd64af0 Merge pull request #23674 from cachedout/issue_23548
    • da8a2f5 Handle lists correctly in grains.list_prsesent
  • PR #23672: (twangboy) Fix user present @ 2015-05-13T18:30:09Z

    • d322a19 Merge pull request #23672 from twangboy/fix_user_present
    • 731e7af Merge branch '2014.7' of https://github.com/saltstack/salt into fix_user_present
    • d6f70a4 Fixed user.present to create password in windows
  • PR #23670: (rallytime) Backport #23607 to 2014.7 @ 2015-05-13T18:27:17Z

    • ISSUE #23604: (Azidburn) service.dead on systemd Minion create an Error Message | refs: #23607
    • PR #23607: (Azidburn) Fix for #23604. No error reporting. Exitcode !=0 are ok | refs: #23670
    • 43f7025 Merge pull request #23670 from rallytime/bp-23607
    • ed30dc4 Fix for #23604. No error reporting. Exitcode !=0 are ok
  • PR #23661: (rallytime) Merge #23640 with whitespace fix @ 2015-05-13T15:47:30Z

    • ISSUE #22141: (Deshke) grains.get_or_set_hash render error if hash begins with "%" | refs: #23640
    • PR #23640: (cachedout) Add warning to get_or_set_hash about reserved chars | refs: #23661
    • 0f006ac Merge pull request #23661 from rallytime/merge-23640
    • 4427f42 Whitespace fix
    • dd91154 Add warning to get_or_set_hash about reserved chars
  • PR #23639: (cachedout) Handle exceptions raised by __virtual__ @ 2015-05-13T15:11:12Z

    • ISSUE #23452: (michaelforge) minion crashed with empty grain | refs: #23639
    • 84e2ef8 Merge pull request #23639 from cachedout/issue_23452
    • d418b49 Syntax error!
    • 45b4015 Handle exceptions raised by __virtual__
  • PR #23637: (cachedout) Convert str master to list @ 2015-05-13T15:08:19Z

    • ISSUE #23611: (hubez) master_type set to 'failover' but 'master' is not of type list but of type <type 'str'> | refs: #23637
    • bd9b94b Merge pull request #23637 from cachedout/issue_23611
    • 56cb1f5 Fix typo
    • f6fcf19 Convert str master to list
  • PR #23595: (rallytime) Backport #23549 to 2014.7 @ 2015-05-12T21:19:40Z

    • f20c0e4 Merge pull request #23595 from rallytime/bp-23549
    • 6efcac0 Update __init__.py
  • PR #23594: (rallytime) Backport #23496 to 2014.7 @ 2015-05-12T21:19:34Z

    • ISSUE #23110: (martinhoefling) Copying files from gitfs in file.recurse state fails
    • PR #23496: (martinhoefling) Fix for issue #23110 | refs: #23594
    • 1acaf86 Merge pull request #23594 from rallytime/bp-23496
    • d5ae1d2 Fix for issue #23110 This resolves issues when the freshly created directory is removed by fileserver.update.
  • PR #23593: (rallytime) Backport #23442 to 2014.7 @ 2015-05-12T21:19:26Z

    • PR #23442: (clan) add directory itself to keep list | refs: #23593
    • 2c221c7 Merge pull request #23593 from rallytime/bp-23442
    • 39869a1 check w/ low['name'] only
    • 304cc49 another fix for file defined w/ id, but require name
    • 8814d41 add directory itself to keep list
  • PR #23606: (twangboy) Fixed checkbox for starting service and actually starting it @ 2015-05-12T21:18:50Z

    • fadd1ef Merge pull request #23606 from twangboy/fix_installer
    • 038331e Fixed checkbox for starting service and actually starting it
  • PR #23592: (rallytime) Backport #23389 to 2014.7 @ 2015-05-12T16:44:42Z

    • ISSUE #22908: (karanjad) Add failhard option to salt orchestration | refs: #23389
    • PR #23389: (cachedout) Correct fail_hard typo | refs: #23592
    • 10b3f0f Merge pull request #23592 from rallytime/bp-23389
    • 734cc43 Correct fail_hard typo
  • PR #23573: (techhat) Scan all available networks for public and private IPs | refs: #23802 @ 2015-05-12T15:22:22Z

    • cd34b9b Merge pull request #23573 from techhat/novaquery
    • f92db5e Linting
    • 26e00d3 Scan all available networks for public and private IPs
  • PR #23558: (jfindlay) reorder emerge command line @ 2015-05-12T15:17:46Z

    • ISSUE #23479: (danielmorlock) Typo in pkg.removed for Gentoo? | refs: #23558
    • 2a72cd7 Merge pull request #23558 from jfindlay/fix_ebuild
    • 45404fb reorder emerge command line
  • PR #23530: (dr4Ke) salt-ssh state: fix including all salt:// references @ 2015-05-12T15:13:43Z

    • ISSUE #23355: (dr4Ke) salt-ssh: 'sources: salt://' files from 'pkg' state are not included in salt_state.tgz | refs: #23530
    • a664a3c Merge pull request #23530 from dr4Ke/fix_salt-ssh_to_include_pkg_sources
    • 5df6a80 fix pylint warning
    • d0549e5 salt-ssh state: fix including all salt:// references
  • PR #23433: (twangboy) Obtain all software from the registry @ 2015-05-11T22:47:52Z

    • ISSUE #23004: (b18) 2014.7.5 - Windows - pkg.list_pkgs - "nxlog" never shows up in output. | refs: #23433
    • 55c3869 Merge pull request #23433 from twangboy/list_pkgs_fix
    • 8ab5b1b Fix pylint error
    • 2d11d65 Obtain all software from the registry
  • PR #23554: (jleroy) Debian: Hostname always updated @ 2015-05-11T21:57:00Z

    • 755bed0 Merge pull request #23554 from jleroy/debian-hostname-fix
    • 5ff749e Debian: Hostname always updated
  • PR #23551: (dr4Ke) grains.append unit tests, related to #23474 @ 2015-05-11T21:54:25Z

    • 6ec87ce Merge pull request #23551 from dr4Ke/grains.append_unit_tests
    • ebff9df fix pylint errors
    • c495404 unit tests for grains.append module function
    • 0c9a323 use MagickMock
    • c838a22 unit tests for grains.append module function
  • PR #23474: (dr4Ke) Fix grains.append in nested dictionnary grains #23411 @ 2015-05-11T18:00:21Z

    • ISSUE #23411: (dr4Ke) grains.append should work at any level of a grain | refs: #23440
    • PR #23440: (dr4Ke) fix grains.append in nested dictionnary grains #23411 | refs: #23474
    • e96c5c5 Merge pull request #23474 from dr4Ke/fix_grains.append_nested
    • a01a5bb grains.get, parameter delimititer, versionadded: 2014.7.6
    • b39f504 remove debugging output
    • b6e15e2 fix grains.append in nested dictionnary grains #23411
  • PR #23537: (t0rrant) Update changelog @ 2015-05-11T17:02:16Z

    • ab7e1ae Merge pull request #23537 from t0rrant/patch-1
    • 8e03cc9 Update changelog
  • PR #23538: (cro) Update date in LICENSE file @ 2015-05-11T15:19:25Z

    • b79fed3 Merge pull request #23538 from cro/licupdate
    • 345efe2 Update date in LICENSE file
  • PR #23505: (aneeshusa) Remove unused ssh config validator. Fixes #23159. @ 2015-05-09T13:24:15Z

    • ISSUE #23159: (aneeshusa) Unused validator
    • a123a36 Merge pull request #23505 from aneeshusa/remove-unused-ssh-config-validator
    • 90af167 Remove unused ssh config validator. Fixes #23159.
  • PR #23467: (slinu3d) Added AWS v4 signature support @ 2015-05-08T14:36:19Z

    • ISSUE #20518: (ekle) module s3.get does not support eu-central-1 | refs: #23467
    • ca2c21a Merge pull request #23467 from slinu3d/2014.7
    • 0b4081d Fixed pylint error at line 363
    • 5be5eb5 Fixed pylink errors
    • e64f374 Fixed lint errors
    • b9d1ac4 Added AWS v4 signature support
  • PR #23444: (techhat) Add create_attach_volume to nova driver @ 2015-05-07T19:51:32Z

    • e6f9eec Merge pull request #23444 from techhat/novacreateattach
    • ebdb7ea Add create_attach_volume to nova driver
  • PR #23460: (s0undt3ch) [2014.7] Update to latest stable bootstrap script v2015.05.07 @ 2015-05-07T19:10:54Z

    • ISSUE #563: (chutz) pidfile support for minion and master daemons | refs: #23460
    • e331463 Merge pull request #23460 from s0undt3ch/hotfix/bootstrap-script-2014.7
    • edcd0c4 Update to latest stable bootstrap script v2015.05.07
  • PR #23439: (techhat) Add wait_for_passwd_maxtries variable @ 2015-05-07T07:28:56Z

    • 7a8ce1a Merge pull request #23439 from techhat/maxtries
    • 0ad3ff2 Add wait_for_passwd_maxtries variable
  • PR #23422: (cro) $HOME should not be used, some shells don't set it. @ 2015-05-06T21:02:36Z

    • 644eb75 Merge pull request #23422 from cro/gce_sh_home
    • 4ef9e6b Don't use $HOME to find user's directory, some shells don't set it
  • PR #23425: (basepi) [2014.7] Fix typo in FunctionWrapper @ 2015-05-06T20:38:03Z

    • ef17ab4 Merge pull request #23425 from basepi/functionwrapper_typo
    • c390737 Fix typo in FunctionWrapper
  • PR #23385: (rallytime) Backport #23346 to 2014.7 @ 2015-05-06T20:12:29Z

    • PR #23346: (ericfode) Allow file_map in salt-cloud to handle folders. | refs: #23385
    • 1b13ec0 Merge pull request #23385 from rallytime/bp-23346
    • 9efc13c more linting fixes
    • cf131c9 cleaned up some pylint errors
    • f981699 added logic to sftp_file and file_map to allow folder uploads using file_map
  • PR #23414: (jfindlay) 2015.2 -> 2015.5 @ 2015-05-06T20:04:02Z

    • f8c7a62 Merge pull request #23414 from jfindlay/update_branch
    • 8074d16 2015.2 -> 2015.5
  • PR #23404: (hvnsweeting) saltapi cherrypy: initialize var when POST body is empty @ 2015-05-06T17:35:56Z

    • 54b3bd4 Merge pull request #23404 from hvnsweeting/cherrypy-post-emptybody-fix
    • f85f8f9 initialize var when POST body is empty
  • PR #23409: (terminalmage) Update Lithium docstrings in 2014.7 branch @ 2015-05-06T16:20:46Z

    • 160f703 Merge pull request #23409 from terminalmage/update-lithium-docstrings-2014.7
    • bc97d01 Fix sphinx typo
    • 20006b0 Update Lithium docstrings in 2014.7 branch
  • PR #23397: (jfindlay) add more flexible whitespace to locale_gen search @ 2015-05-06T03:44:11Z

    • ISSUE #17245: (tomashavlas) localemod does not generate locale for Arch | refs: #23307 #23397
    • aa5fb0a Merge pull request #23397 from jfindlay/fix_locale_gen
    • 0941fef add more flexible whitespace to locale_gen search
  • PR #23368: (kaithar) Backport #23367 to 2014.7 @ 2015-05-05T21:42:26Z

    • PR #23367: (kaithar) Put the sed insert statement back in to the output. | refs: #23368
    • PR #18368: (basepi) Merge forward from 2014.7 to develop | refs: #23367 #23368
    • 0c76dd4 Merge pull request #23368 from kaithar/bp-23367
    • 577f419 Pylint fix
    • 8d9acd1 Put the sed insert statement back in to the output.
  • PR #23350: (lorengordon) Append/prepend: search for full line @ 2015-05-05T21:42:11Z

    • ISSUE #23294: (variia) file.replace fails to append if repl string partially available | refs: #23350
    • 3493cc1 Merge pull request #23350 from lorengordon/file.replace_assume_line
    • b60e224 Append/prepend: search for full line
  • PR #23341: (cachedout) Fix syndic pid and logfile path @ 2015-05-05T21:29:10Z

    • ISSUE #23026: (adelcast) Incorrect salt-syndic logfile and pidfile locations | refs: #23341
    • 7be5c48 Merge pull request #23341 from cachedout/issue_23026
    • e98e65e Fix tests
    • 6011b43 Fix syndic pid and logfile path
  • PR #23272: (basepi) [2014.7] Allow salt-ssh minion config overrides via master config and roster | refs: #23347 @ **

    • ISSUE #19114: (pykler) salt-ssh and gpg pillar renderer | refs: #23188 #23272 #23347
    • PR #23188: (basepi) [2014.7] Work around bug in salt-ssh in config.get for gpg renderer | refs: #23272
    • ea61abf Merge pull request #23272 from basepi/salt-ssh.minion.config.19114
    • c223309 Add versionadded
    • be7407f Lint
    • c2c3375 Missing comma
    • 8e3e8e0 Pass the minion_opts through the FunctionWrapper
    • cb69cd0 Match the master config template in the master config reference
    • 87fc316 Add Salt-SSH section to master config template
    • 91dd9dc Add ssh_minion_opts to master config ref
    • c273ea1 Add minion config to salt-ssh doc
    • a0b6b76 Add minion_opts to roster docs
    • 5212c35 Accept minion_opts from the target information
    • e2099b6 Process ssh_minion_opts from master config
    • 3b64214 Revert "Work around bug in salt-ssh in config.get for gpg renderer"
    • 494953a Remove the strip (embracing multi-line YAML dump)
    • fe87f0f Dump multi-line yaml into the SHIM
    • b751a72 Inject local minion config into shim if available
  • PR #23347: (basepi) [2014.7] Salt-SSH Backport FunctionWrapper.__contains__ @ 2015-05-05T14:13:21Z

    • ISSUE #19114: (pykler) salt-ssh and gpg pillar renderer | refs: #23188 #23272 #23347
    • PR #23272: (basepi) [2014.7] Allow salt-ssh minion config overrides via master config and roster | refs: #23347
    • PR #23188: (basepi) [2014.7] Work around bug in salt-ssh in config.get for gpg renderer | refs: #23272
    • 4f760dd Merge pull request #23347 from basepi/salt-ssh.functionwrapper.contains.19114
    • 30595e3 Backport FunctionWrapper.__contains__
  • PR #23344: (cachedout) Explicitely set file_client on master @ 2015-05-04T23:21:48Z

    • ISSUE #22742: (hvnsweeting) salt-master says: "This master address: 'salt' was previously resolvable but now fails to resolve!" | refs: #23344
    • 02658b1 Merge pull request #23344 from cachedout/issue_22742
    • 5adc96c Explicitely set file_client on master
  • PR #23318: (cellscape) Honor seed argument in LXC container initializaton @ 2015-05-04T20:58:12Z

    • PR #23311: (cellscape) Fix new container initialization in LXC runner | refs: #23318
    • ba7605d Merge pull request #23318 from cellscape/honor-seed-argument
    • 228b1be Honor seed argument in LXC container initializaton
  • PR #23307: (jfindlay) check for /etc/locale.gen @ 2015-05-04T20:56:32Z

    • ISSUE #17245: (tomashavlas) localemod does not generate locale for Arch | refs: #23307 #23397
    • 4ac4509 Merge pull request #23307 from jfindlay/fix_locale_gen
    • 101199a check for /etc/locale.gen
  • PR #23324: (s0undt3ch) [2014.7] Update to the latest stable release of the bootstrap script v2015.05.04 @ 2015-05-04T16:28:30Z

    • ISSUE #580: (thatch45) recursive watch not being caught | refs: #23324
    • ISSUE #552: (jhutchins) Support require and watch under the same state dec | refs: #23324
    • PR #589: (epoelke) add --quiet and --outfile options to saltkey | refs: #23324
    • PR #567: (bastichelaar) Added upstart module | refs: #23324
    • PR #560: (UtahDave) The runas feature that was added in 93423aa2e5e4b7de6452090b0039560d2b13... | refs: #23324
    • PR #504: (SEJeff) File state goodies | refs: #23324
    • f790f42 Merge pull request #23324 from s0undt3ch/hotfix/bootstrap-script-2014.7
    • 6643e47 Update to the latest stable release of the bootstrap script v2015.05.04
  • PR #23329: (cro) Require requests to verify cert when talking to aliyun and proxmox cloud providers @ 2015-05-04T16:18:17Z

    • 5487367 Merge pull request #23329 from cro/cloud_verify_cert
    • 860d4b7 Turn on ssl verify for requests.
  • PR #23311: (cellscape) Fix new container initialization in LXC runner | refs: #23318 @ 2015-05-04T09:55:29Z

    • ea20176 Merge pull request #23311 from cellscape/fix-salt-cloud-lxc-init
    • 76fbb34 Fix new container initialization in LXC runner
  • PR #23298: (chris-prince) Fixed issue #18880 in 2014.7 branch @ 2015-05-03T15:49:41Z

    • ISSUE #18880: (johtso) npm installed breaks when a module is missing
    • c399b8f Merge pull request #23298 from chris-prince/2014.7
    • 0fa25db Fixed issue #18880 in 2014.7 branch
  • PR #23292: (rallytime) Merge #23151 with pylint fixes @ 2015-05-02T03:54:12Z

    • ISSUE #23148: (cr1st1p) virt - error handling bogus if machine image location is wrong
    • PR #23151: (cr1st1p) Fixes #23148 | refs: #23292
    • 16ecefd Merge pull request #23292 from rallytime/merge-23151
    • 8ff852a Merge #23151 with pylint fixes
    • 8ffa12e Fixes #23148
  • PR #23274: (basepi) [2014.7] Reduce salt-ssh debug log verbosity @ 2015-05-01T20:19:23Z

    • ce24315 Merge pull request #23274 from basepi/salt-ssh.debug.verbosity
    • ecee6c6 Log stdout and stderr to trace
    • 08f54d7 Log stdout and stderr to trace as well
    • 9b9c30f Reduce salt-ssh debug log verbosity
  • PR #23261: (rallytime) Fix tornado websocket event handler registration @ 2015-05-01T18:20:31Z

    • ISSUE #22605: (mavenAtHouzz) Tornado websockets event Handlers registration are incorrect | refs: #23261
    • 7b55e43 Merge pull request #23261 from rallytime/fix-22605
    • 4950fbf Fix tornado websocket event handler registration
  • PR #23258: (teizz) TCP keepalives on the ret side, Revisited. @ 2015-05-01T16:13:49Z

    • 83ef7cb Merge pull request #23258 from teizz/ret_keepalive_2014_7_5
    • 0b9fb6f The fixes by cachedout which were backported into 2015_2 were missing a single parameter thus not setting up the TCP keepalive for the ZeroMQ Channel by default.
  • PR #23241: (techhat) Move iptables log options after the jump @ 2015-05-01T01:31:59Z

    • ISSUE #23224: (twellspring) iptables.append --log parameters must be after --jump LOG | refs: #23241
    • 8de3c83 Merge pull request #23241 from techhat/issue23224
    • 87f7948 Move iptables log options after the jump
  • PR #23228: (rallytime) Backport #23171 to 2014.7 @ 2015-04-30T21:09:45Z

    • PR #23171: (skizunov) Bugfix: 'clean_proc_dir' is broken | refs: #23228
    • f20210e Merge pull request #23228 from rallytime/bp-23171
    • e670e99 Bugfix: 'clean_proc_dir' is broken
  • PR #23227: (rallytime) Backport #22808 to 2014.7 @ 2015-04-30T21:09:14Z

    • ISSUE #22703: (Xiol) salt-ssh does not work with list matcher | refs: #22808
    • PR #22808: (basepi) [2015.2] Add list targeting to salt-ssh flat roster | refs: #23227
    • 721cc28 Merge pull request #23227 from rallytime/bp-22808
    • d208a00 Dict, not list
    • a3f529e It's already been converted to a list
    • dd57f2d Add list targeting to salt-ssh flat roster
  • PR #22823: (hvnsweeting) 22822 file directory clean @ 2015-04-30T15:25:51Z

    • 82c22af Merge pull request #22823 from hvnsweeting/22822-file-directory-clean
    • c749c27 fix lint - remove unnecessary parenthesis
    • cb3dfee refactor
    • 8924b5a refactor: use relpath instead of do it manually
    • d3060a5 refactor
    • 5759a0e bugfix: fix file.directory clean=True when it require parent dir
  • PR #22977: (bersace) Fix fileserver backends __opts__ overwritten by _pillar @ 2015-04-30T15:24:56Z

    • ISSUE #22941: (bersace) _pillar func breaks fileserver globals | refs: #22977 #22942
    • PR #22942: (bersace) Fix fileserver backends global overwritten by _pillar | refs: #22977
    • f6c0728 Merge pull request #22977 from bersace/fix-fileserver-backends-pillar-side-effect
    • 5f451f6 Fix fileserver backends __opts__ overwritten by _pillar
  • PR #23180: (jfindlay) fix typos from 36841bdd in masterapi.py @ 2015-04-30T15:22:41Z

    • ISSUE #23166: (claudiupopescu) "Error in function _minion_event" resulting in modules not loaded | refs: #23180
    • 34206f7 Merge pull request #23180 from jfindlay/remote_event
    • 72066e1 fix typos from 36841bdd in masterapi.py
  • PR #23176: (jfindlay) copy standard cmd.run* kwargs into cmd.run_chroot @ 2015-04-30T15:22:12Z

    • ISSUE #23153: (cr1st1p) cmdmod : run_chroot - broken in 2014.7.5 - missing kwargs | refs: #23176
    • b6b8216 Merge pull request #23176 from jfindlay/run_chroot
    • 7dc3417 copy standard cmd.run* kwargs into cmd.run_chroot
  • PR #23193: (joejulian) supervisord.mod_watch should accept sfun @ 2015-04-30T04:34:21Z

    • ISSUE #23192: (joejulian) supervisord mod_watch does not accept sfun | refs: #23193
    • effacbe Merge pull request #23193 from joejulian/2014.7_supervisord_accept_sfun
    • efb59f9 supervisord.mod_watch should accept sfun
  • PR #23188: (basepi) [2014.7] Work around bug in salt-ssh in config.get for gpg renderer | refs: #23272 @ 2015-04-30T04:34:10Z

    • 72fe88e Merge pull request #23188 from basepi/salt-ssh.function.wrapper.gpg.19114
    • d73979e Work around bug in salt-ssh in config.get for gpg renderer
  • PR #23154: (cachedout) Re-establish channel on interruption in fileclient @ 2015-04-29T16:18:59Z

    • ISSUE #21480: (msciciel) TypeError: string indices must be integers, not str | refs: #23154
    • 168508e Merge pull request #23154 from cachedout/refresh_channel
    • 9f8dd80 Re-establish channel on interruption in fileclient
  • PR #23146: (rallytime) Backport #20779 to 2014.7 @ 2015-04-28T20:45:06Z

    • ISSUE #20647: (ryan-lane) file.serialize fails to serialize due to ordered dicts | refs: #20779
    • PR #20779: (cachedout) Use declared yaml options | refs: #23146
    • 3b53e04 Merge pull request #23146 from rallytime/bp-20779
    • ffd1849 compare OrderedDicts in serializer unit test
    • a221706 Just change serialize
    • a111798 Use declared yaml options
  • PR #23145: (rallytime) Backport #23089 to 2014.7 @ 2015-04-28T20:44:56Z

    • PR #23089: (cachedout) Stringify version number before lstrip | refs: #23145
    • 8bb4664 Merge pull request #23145 from rallytime/bp-23089
    • 93c41af Stringify version number before lstrip
  • PR #23144: (rallytime) Backport #23124 to 2014.7 @ 2015-04-28T20:44:46Z

    • ISSUE #16188: (drawks) salt.modules.parted has various functions with bogus input validation. | refs: #23124
    • PR #23124: (ether42) fix parsing the output of parted in parted.list_() | refs: #23144
    • c85d36f Merge pull request #23144 from rallytime/bp-23124-2014-7
    • 6b64da7 fix parsing the output of parted
  • PR #23120: (terminalmage) Don't run os.path.relpath() if repo doesn't have a "root" param set @ 2015-04-28T15:46:54Z

    • a27b158 Merge pull request #23120 from terminalmage/fix-gitfs-relpath
    • 1860fff Don't run os.path.relpath() if repo doesn't have a "root" param set
  • PR #23132: (clinta) Backport b27c176 @ 2015-04-28T15:00:30Z

    • fcba607 Merge pull request #23132 from clinta/patch-2
    • a824d72 Backport b27c176
  • PR #23114: (rallytime) Adjust ZeroMQ 4 docs to reflect changes to Ubuntu 12 packages @ 2015-04-28T03:59:24Z

    • ISSUE #18476: (Auha) Upgrading salt on my master caused dependency issues | refs: #23114 #18610
    • PR #18610: (rallytime) Make ZMQ 4 installation docs for ubuntu more clear | refs: #23114
    • b0f4b28 Merge pull request #23114 from rallytime/remove_ubuntu_zmq4_docs
    • f6cc7c8 Adjust ZeroMQ 4 docs to reflect changes to Ubuntu 12 packages
  • PR #23108: (rallytime) Backport #23097 to 2014.7 @ 2015-04-28T03:58:05Z

    • ISSUE #23085: (xenophonf) Use "s3fs" (not "s3") in fileserver_roots | refs: #23097
    • PR #23097: (rallytime) Change s3 to s3fs in fileserver_roots docs example | refs: #23108
    • 399857f Merge pull request #23108 from rallytime/bp-23097
    • fa88984 Change s3 to s3fs in fileserver_roots docs example
  • PR #23112: (basepi) [2014.7] Backport #22199 to fix mysql returner save_load errors @ 2015-04-28T03:55:44Z

    • ISSUE #22171: (basepi) We should only call returner.save_load once per jid | refs: #22199
    • PR #22199: (basepi) [2015.2] Put a bandaid on the save_load duplicate issue (mysql returner) | refs: #23112
    • 5541537 Merge pull request #23112 from basepi/mysql_returner_save_load
    • 0127012 Put a bandaid on the save_load duplicate issue
  • PR #23113: (rallytime) Revert "Backport #22895 to 2014.7" @ 2015-04-28T03:27:29Z

    • PR #22925: (rallytime) Backport #22895 to 2014.7 | refs: #23113
    • PR #22895: (aletourneau) pam_tally counter was not reset to 0 after a succesfull login | refs: #22925
    • dfe2066 Merge pull request #23113 from saltstack/revert-22925-bp-22895
    • b957ea8 Revert "Backport #22895 to 2014.7"
  • PR #23094: (terminalmage) pygit2: disable cleaning of stale refs for authenticated remotes @ 2015-04-27T20:51:28Z

    • ISSUE #23013: (markusr815) gitfs regression with authenticated repos | refs: #23094
    • 21515f3 Merge pull request #23094 from terminalmage/issue23013
    • aaf7b04 pygit2: disable cleaning of stale refs for authenticated remotes
  • PR #23048: (jfindlay) py-2.6 compat for utils/boto.py ElementTree exception @ 2015-04-25T16:56:45Z

    • d45aa21 Merge pull request #23048 from jfindlay/ET_error
    • 64c42cc py-2.6 compat for utils/boto.py ElementTree exception
  • PR #23025: (jfindlay) catch exceptions on bad system locales/encodings @ 2015-04-25T16:56:30Z

    • ISSUE #22981: (syphernl) Locale state throwing traceback when generating not (yet) existing locale | refs: #23025
    • d25a5c1 Merge pull request #23025 from jfindlay/fix_sys_locale
    • 9c4d62b catch exceptions on bad system locales/encodings
  • PR #22932: (hvnsweeting) bugfix: also manipulate dir_mode when source not defined @ 2015-04-25T16:54:58Z

    • 5e44b59 Merge pull request #22932 from hvnsweeting/file-append-bugfix
    • 3f368de do not use assert in execution module
    • 9d4fd4a bugfix: also manipulate dir_mode when source not defined
  • PR #23055: (jfindlay) prevent ps module errors on accessing dead procs @ 2015-04-24T22:39:49Z

    • ISSUE #23021: (ether42) ps.pgrep raises NoSuchProcess | refs: #23055
    • c2416a4 Merge pull request #23055 from jfindlay/fix_ps
    • c2dc7ad prevent ps module errors on accessing dead procs
  • PR #23031: (jfindlay) convert exception e.message to just e @ 2015-04-24T18:38:13Z

    • bfd9158 Merge pull request #23031 from jfindlay/exception
    • 856bad1 convert exception e.message to just e
  • PR #23015: (hvnsweeting) if status of service is stop, there is not an error with it @ 2015-04-24T14:35:10Z

    • 7747f33 Merge pull request #23015 from hvnsweeting/set-non-error-lvl-for-service-status-log
    • 92ea163 if status of service is stop, there is not an error with it
  • PR #23000: (jfindlay) set systemd service killMode to process for minion @ 2015-04-24T03:42:39Z

    • ISSUE #22993: (jetpak) salt-minion restart causes all spawned daemons to die on centos7 (systemd) | refs: #23000
    • 2e09789 Merge pull request #23000 from jfindlay/systemd_kill
    • 3d575e2 set systemd service killMode to process for minion
  • PR #22999: (jtand) Added retry_dns to minion doc. @ 2015-04-24T03:30:24Z

    • ISSUE #22707: (arthurlogilab) retry_dns of master configuration is missing from the documentation | refs: #22999
    • b5c059a Merge pull request #22999 from jtand/fix_22707
    • 8486e17 Added retry_dns to minion doc.
  • PR #22990: (techhat) Use the proper cloud conf variable @ 2015-04-23T17:48:07Z

    • 27dc877 Merge pull request #22990 from techhat/2014.7
    • d33bcbc Use the proper cloud conf variable
  • PR #22976: (multani) Improve state_output documentation @ 2015-04-23T12:24:22Z

    • 13dff65 Merge pull request #22976 from multani/fix/state-output-doc
    • 19efd41 Improve state_output documentation
  • PR #22955: (terminalmage) Fix regression introduced yesterday in dockerio module @ 2015-04-22T18:56:39Z

    • 89fa185 Merge pull request #22955 from terminalmage/dockerio-run-fix
    • b4472ad Fix regression introduced yesterday in dockerio module
  • PR #22954: (rallytime) Backport #22909 to 2014.7 @ 2015-04-22T18:56:20Z

    • PR #22909: (mguegan) Fix compatibility with pkgin > 0.7 | refs: #22954
    • 46ef227 Merge pull request #22954 from rallytime/bp-22909
    • 70c1cd3 Fix compatibility with pkgin > 0.7
  • PR #22856: (jfindlay) increase timeout and decrease tries for route53 records @ 2015-04-22T16:47:01Z

    • ISSUE #18720: (Reiner030) timeouts when setting Route53 records | refs: #22856
    • c9ae593 Merge pull request #22856 from jfindlay/route53_timeout
    • ba4a786 add route53 record sync wait, default=False
    • ea2fd50 increase timeout and tries for route53 records
  • PR #22946: (s0undt3ch) Test with a more recent pip version to avoid a traceback @ 2015-04-22T16:25:17Z

    • a178d44 Merge pull request #22946 from s0undt3ch/2014.7
    • bc87749 Test with a more recent pip version to avoid a traceback
  • PR #22945: (garethgreenaway) Fixes to scheduler @ 2015-04-22T16:25:00Z

    • de339be Merge pull request #22945 from garethgreenaway/22571_2014_7_schedule_pillar_refresh_seconds_exceptions
    • bfa6d25 Fixing a reported issue when using a scheduled job from pillar with splay. _seconds element that acted as a backup of the actual seconds was being removed when pillar was refreshed and causing exceptions. This fix moves some splay related code out of the if else condition so it's checked whether the job is in the job queue or not.
  • PR #22887: (hvnsweeting) fix #18843 @ 2015-04-22T15:47:05Z

    • ISSUE #18843: (calvinhp) State user.present will fail to create home if user exists and homedir doesn't
    • 12d2b91 Merge pull request #22887 from hvnsweeting/18843-fix-user-present-home
    • 7fe7b08 run user.chhome once to avoid any side-effect when run it twice
    • 19de995 clarify the usage of home arg
    • d6dc09a enhance doc, as usermod on ubuntu 12.04 will not CREATE home
    • 0ce4d7f refactor: force to use boolean
    • 849d19e log debug the creating dir process
    • c4e95b9 fix #18843: usermod won't create a dir if old home does not exist
  • PR #22930: (jfindlay) localemod.gen_locale now always returns a boolean @ 2015-04-22T15:37:39Z

    • ISSUE #21140: (holms) locale.present state executed successfully, although originally fails | refs: #22930 #22829
    • ISSUE #2417: (ffa) Module standards | refs: #22829
    • PR #22829: (F30) Always return a boolean in gen_locale() | refs: #22930
    • b7de7bd Merge pull request #22930 from jfindlay/localegen_bool
    • 399399f localemod.gen_locale now always returns a boolean
  • PR #22933: (hvnsweeting) add test for #18843 @ 2015-04-22T15:27:18Z

    • ISSUE #18843: (calvinhp) State user.present will fail to create home if user exists and homedir doesn't
    • 11bcf14 Merge pull request #22933 from hvnsweeting/18843-test
    • b13db32 add test for #18843
  • PR #22925: (rallytime) Backport #22895 to 2014.7 | refs: #23113 @ 2015-04-22T02:30:26Z

    • PR #22895: (aletourneau) pam_tally counter was not reset to 0 after a succesfull login | refs: #22925
    • 6890752 Merge pull request #22925 from rallytime/bp-22895
    • 3852d96 Pylint fix
    • 90f7829 Fixed pylint issues
    • 5ebf159 Cleaned up pull request
    • a08ac47 pam_tally counter was not reset to 0 after a succesfull login
  • PR #22914: (cachedout) Call proper returner function in jobs.list_jobs @ 2015-04-22T00:49:01Z

    • ISSUE #22790: (whiteinge) jobs.list_jobs runner tracebacks on 'missing' argument | refs: #22914
    • eca37eb Merge pull request #22914 from cachedout/issue_22790
    • d828d6f Call proper returner function in jobs.list_jobs
  • PR #22918: (JaseFace) Add a note to the git_pillar docs stating that GitPython is the only currently supported provider @ 2015-04-22T00:48:26Z

    • 44f3409 Merge pull request #22918 from JaseFace/git-pillar-provider-doc-note
    • 0aee5c2 Add a note to the git_pillar docs stating that GitPython is the only currently supported provider
  • PR #22907: (techhat) Properly merge cloud configs to create profiles @ 2015-04-21T22:02:44Z

    • 31c461f Merge pull request #22907 from techhat/cloudconfig
    • 3bf4e66 Properly merge cloud configs to create profiles
  • PR #22894: (0xf10e) Fix issue #22782 @ 2015-04-21T18:55:18Z

    • f093975 Merge pull request #22894 from 0xf10e/2014.7
    • 58fa24c Clarify doc on kwarg 'roles' for user_present().
    • f0ae2eb Improve readability by renaming tenant_role
  • PR #22902: (rallytime) Change state example to use proper kwarg @ 2015-04-21T18:50:47Z

    • ISSUE #12003: (MarkusMuellerAU) [state.dockerio] docker.run TypeError: run() argument after ** must be a mapping, not str | refs: #22902
    • c802ba7 Merge pull request #22902 from rallytime/docker_doc_fix
    • 8f70346 Change state example to use proper kwarg
  • PR #22898: (terminalmage) dockerio: better error message for native exec driver @ 2015-04-21T18:02:58Z

    • 81771a7 Merge pull request #22898 from terminalmage/issue12003
    • c375309 dockerio: better error message for native exec driver
  • PR #22897: (rallytime) Add param documentation for file.replace state @ 2015-04-21T17:31:04Z

    • ISSUE #22825: (paolodina) Issue using file.replace in state file | refs: #22897
    • e2ec4ec Merge pull request #22897 from rallytime/fix-22825
    • 9c51630 Add param documentation for file.replace state
  • PR #22850: (bersace) Fix pillar and salt fileserver mixed @ 2015-04-21T17:04:33Z

    • ISSUE #22844: (bersace) LocalClient file cache confuse pillar and state files | refs: #22850
    • fd53889 Merge pull request #22850 from bersace/fix-pillar-salt-mixed
    • 31b98e7 Initialize state file client after pillar loading
    • f6bebb7 Use saltenv
  • PR #22818: (twangboy) Added documentation regarding pip in windows @ 2015-04-21T03:58:59Z

    • 1380fec Merge pull request #22818 from twangboy/upd_pip_docs
    • cb999c7 Update pip.py
    • 3cc5c97 Added documentation regarding pip in windows
  • PR #22872: (rallytime) Prevent stacktrace on os.path.exists in hosts module @ 2015-04-21T02:54:40Z

    • b2bf17f Merge pull request #22872 from rallytime/fix_hosts_stacktrace
    • c88a1ea Prevent stacktrace on os.path.exists in hosts module
  • PR #22853: (s0undt3ch) Don't assume package installation order. @ 2015-04-21T02:42:41Z

    • 03af523 Merge pull request #22853 from s0undt3ch/2014.7
    • b62df62 Don't assume package installation order.
  • PR #22877: (s0undt3ch) Don't fail on make clean just because the directory does not exist @ 2015-04-21T02:40:47Z

    • 9211e36 Merge pull request #22877 from s0undt3ch/hotfix/clean-docs-fix
    • 95d6887 Don't fail on make clean just because the directory does not exist
  • PR #22873: (thatch45) Type check the version since it will often be numeric @ 2015-04-21T02:38:11Z

    • 5bdbd08 Merge pull request #22873 from thatch45/type_check
    • 53b8376 Type check the version since it will often be numeric
  • PR #22870: (twangboy) Added ability to send a version with a space in it @ 2015-04-20T23:18:28Z

    • c965b0a Merge pull request #22870 from twangboy/fix_installer_again
    • 3f180cf Added ability to send a version with a space in it
  • PR #22863: (rallytime) Backport #20974 to 2014.7 @ 2015-04-20T19:29:37Z

    • PR #20974: (JohannesEbke) Fix expr_match usage in salt.utils.check_whitelist_blacklist | refs: #22863
    • 2973eb1 Merge pull request #22863 from rallytime/bp-20974
    • 14913a4 Fix expr_match usage in salt.utils.check_whitelist_blacklist
  • PR #22578: (hvnsweeting) gracefully handle when salt-minion cannot decrypt key @ 2015-04-20T15:24:45Z

    • c45b92b Merge pull request #22578 from hvnsweeting/2014-7-fix-compile-pillar
    • f75b24a gracefully handle when salt-minion cannot decrypt key
  • PR #22800: (terminalmage) Improve error logging for pygit2 SSH-based remotes @ 2015-04-18T17:18:55Z

    • ISSUE #21979: (yrdevops) gitfs: error message not descriptive enough when libgit2 was compiled without libssh2 | refs: #22800
    • 900c7a5 Merge pull request #22800 from terminalmage/issue21979
    • 8f1c008 Clarify that for pygit2, receiving 0 objects means repo is up-to-date
    • 98885f7 Add information about libssh2 requirement for pygit2 ssh auth
    • 09468d2 Fix incorrect log message
    • 2093bf8 Adjust loglevels for gitfs errors
    • 9d394df Improve error logging for pygit2 SSH-based remotes
  • PR #22813: (twangboy) Updated instructions for building salt @ 2015-04-18T04:10:07Z

    • e99f2fd Merge pull request #22813 from twangboy/win_doc_fix
    • adc421a Fixed some formatting issues
    • 8901b3b Updated instructions for building salt
  • PR #22810: (basepi) [2014.7] More msgpack gating for salt-ssh @ 2015-04-17T22:28:24Z

    • ISSUE #22708: (Bilge) salt-ssh file.accumulated error: NameError: global name 'msgpack' is not defined | refs: #22810
    • fe1de89 Merge pull request #22810 from basepi/salt-ssh.more.msgpack.gating
    • d4da8e6 Gate msgpack in salt/modules/saltutil.py
    • 02303b2 Gate msgpack in salt/modules/data.py
    • d7e8741 Gate salt.states.file.py msgpack
  • PR #22803: (rallytime) Allow map file to work with softlayer @ 2015-04-17T20:34:42Z

    • ISSUE #17144: (xpender) salt-cloud -m fails with softlayer | refs: #22803
    • 11df71e Merge pull request #22803 from rallytime/fix-17144
    • ce88b6a Allow map file to work with softlayer
  • PR #22807: (rallytime) Add 2014.7.5 links to windows installation docs @ 2015-04-17T20:32:13Z

    • cd43a95 Merge pull request #22807 from rallytime/windows_docs_update
    • 5931a58 Replace all 4s with 5s
    • eadaead Add 2014.7.5 links to windows installation docs
  • PR #22795: (rallytime) Added release note for 2014.7.5 release @ 2015-04-17T18:05:36Z

    • 0b295e2 Merge pull request #22795 from rallytime/release_notes
    • fde1fee Remove extra line
    • b19b95d Added release note for 2014.7.5 release
  • PR #22759: (twangboy) Final edits to the batch files for running salt @ 2015-04-17T04:31:15Z

    • ISSUE #22740: (lorengordon) New Windows installer assumes salt is installed to the current directory | refs: #22759
    • PR #22754: (twangboy) Removed redundant \ and " | refs: #22759
    • 3c91459 Merge pull request #22759 from twangboy/fix_bat_one_last_time
    • 075f82e Final edits to the batch files for running salt
  • PR #22760: (thatch45) Fix issues with the syndic @ 2015-04-17T04:30:48Z

    • 20d3f2b Merge pull request #22760 from thatch45/syndic_fix
    • e2db624 Fix issues with the syndic not resolving the master when the interface is set
  • PR #22762: (twangboy) Fixed version not showing in Add/Remove Programs @ 2015-04-17T04:29:46Z

    • 54c4584 Merge pull request #22762 from twangboy/fix_installer
    • 4d25af8 Fixed version not showing in Add/Remove Programs

Salt 2014.1.0 Release Notes - Codename Hydrogen

Note

Due to a change in master to minion communication, 2014.1.0 minions are not compatible with older-version masters. Please upgrade masters first. More info on backwards-compatibility policy here, under the "Upgrading Salt" subheading.

Note

A change in the grammar in the state compiler makes module.run in requisites illegal syntax. Its use is replaced simply with the word module. In other words you will need to change requisites like this:

require:
    module.run: some_module_name

to:

require:
    module: some_module_name

This is a breaking change. We apologize for the inconvenience, we needed to do this to remove some ambiguity in parsing requisites.

release:2014-02-24

The 2014.1.0 release of Salt is a major release which not only increases stability but also brings new capabilities in virtualization, cloud integration, and more. This release puts a strong focus on the expansion of testing, roughly doubling the coverage of the Salt test suite, and comes with many new features.

2014.1.0 is the first release to follow the new date-based release naming system. See the version numbers page for more details.

Major Features
Salt Cloud Merged into Salt

Salt Cloud is a tool for provisioning salted minions across various cloud providers. Prior to this release, Salt Cloud was a separate project, but this release marks its full integration with the Salt distribution. A Getting Started guide and additional documentation can be found in the Salt Cloud section of the documentation.

Google Compute Engine

Alongside Salt Cloud comes new support for the Google Compute Engine. Salt Stack can now deploy and control GCE virtual machines and the application stacks that they run.

For more information on Salt Stack and GCE, please see this blog post.

Documentation for Salt and GCE can be found here.

Salt Virt

Salt Virt is a cloud controller that supports virtual machine deployment, inspection, migration, and integration with many aspects of Salt.

Salt Virt has undergone a major overhaul with this release and now supports many more features and includes a number of critical improvements.

Docker Integration

Salt now ships with states and an execution module to manage Docker containers.

Substantial Testing Expansion

Salt continues to increase its unit/regression test coverage. This release includes over 300 new tests.

BSD Package Management

BSD package management has been entirely rewritten. FreeBSD 9 and older now default to using pkg_add, while FreeBSD 10 and newer will use pkgng. FreeBSD 9 can be forced to use pkgng, however, by specifying the following option in the minion config file:

providers:
  pkg: pkgng

In addition, support for installing software from the ports tree has been added. See the documentation for the ports state and execution module for more information.

Network Management for Debian/Ubuntu

Initial support for management of network interfaces on Debian-based distros has been added. See the documentation for the network state and the debian_ip module for more information.

IPv6 Support for iptables State/Module

The iptables state and module now have IPv6 support. A new parameter, family, has been added to the states and execution functions to distinguish between IPv4 and IPv6. The default value for this parameter is ipv4; specifying ipv6 will use ip6tables to manage firewall rules.
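
For example, a minimal sketch of a rule managed through ip6tables (the rule values here are illustrative, not from the release notes):

Allow SSH over IPv6:
  iptables.append:
    - table: filter
    - chain: INPUT
    - jump: ACCEPT
    - proto: tcp
    - dport: 22
    - family: ipv6    # illustrative; family selects ip6tables instead of iptables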

GitFS Improvements

Several performance improvements have been made to the Git fileserver backend. Additionally, file states can now use any SHA1 commit hash as a fileserver environment:

/etc/httpd/httpd.conf:
  file.managed:
    - source: salt://webserver/files/httpd.conf
    - saltenv: 45af879

This applies to the functions in the cp module as well:

salt '*' cp.get_file salt://readme.txt /tmp/readme.txt saltenv=45af879
MinionFS

This new fileserver backend allows files which have been pushed from the minion to the master (using cp.push) to be served up from the salt fileserver. The path for these files takes the following format:

salt://minion-id/path/to/file

minion-id is the id of the "source" minion, the one from which the files were pushed to the master. /path/to/file is the full path of the file.
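
For illustration (a sketch assuming a minion with id web1, file_recv: True set on the master, and the minion fileserver backend enabled), a pushed file then becomes reachable via the fileserver path:

salt 'web1' cp.push /etc/fstab
salt 'web1' cp.get_file salt://web1/etc/fstab /tmp/fstab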

The MinionFS Walkthrough contains a more thorough example of how to use this backend.

saltenv

To distinguish between fileserver environments and execution functions which deal with environment variables, fileserver environments are now specified using the saltenv parameter. env will continue to work, but is deprecated and will be removed in a future release.

Grains Caching

A caching layer has been added to the Grains system, which can help speed up minion startup. Disabled by default, it can be enabled by setting the minion config option grains_cache:

grains_cache: True

# Seconds before grains cache is considered to be stale.
grains_cache_expiration: 300

If set to True, the grains loader will read from/write to a msgpack-serialized file containing the grains data.

Additional command-line parameters have been added to salt-call, mainly for testing purposes:

  • --skip-grains will completely bypass the grains loader when salt-call is invoked.
  • --refresh-grains-cache will force the grains loader to bypass the grains cache and refresh the grains, writing a new grains cache file.
Improved Command Logging Control

When using the cmd module, either on the CLI or when developing Salt execution modules, a new keyword argument output_loglevel allows for greater control over how (or even if) the command and its output are logged. For example:

salt '*' cmd.run 'tail /var/log/messages' output_loglevel=debug

The package management modules (apt, yumpkg, etc.) have been updated to log the copious output generated from these commands at loglevel debug.

Note

To keep a command from being logged, output_loglevel=quiet can be used.

Prior to this release, this could be done using quiet=True. This argument is still supported, but will be removed in a future Salt release.

PagerDuty Support

Initial support for firing events via PagerDuty has been added. See the documentation for the pagerduty module.

Virtual Terminal

Sometimes the subprocess module is not good enough, and, in fact, not even askpass is. This virtual terminal is still in its infancy, needs quite some love, and was originally created to replace askpass, but while developing it, it quickly proved that it could do so much more. It is currently used by salt-cloud when bootstrapping salt on clouds which require the use of a password.

Proxy Minions

Initial basic support for Proxy Minions is in this release. Documentation can be found here.

Proxy minions are a developing feature in Salt that enables control of devices that cannot run a minion. Examples include network gear like switches and routers that run a proprietary OS but offer an API, or "dumb" devices that just don't have the horsepower or ability to handle a Python VM.

Proxy minions can be difficult to write, so a simple REST-based example proxy is included. A Python bottle-based webserver can be found at https://github.com/cro/salt-proxy-rest as an endpoint for this proxy.

This is an ALPHA-quality feature. There are a number of issues with it currently, mostly centering around process control, logging, and inability to work in a masterless configuration.

Additional Bugfixes (Release Candidate Period)

Below are many of the fixes that were implemented in salt during the release candidate phase.

  • Fix mount.mounted leaving conflicting entries in fstab (issue 7079)
  • Fix mysql returner serialization to use json (issue 9590)
  • Fix ZMQError: Operation cannot be accomplished in current state errors (issue 6306)
  • Rbenv and ruby improvements
  • Fix quoting issues with mysql port (issue 9568)
  • Update mount module/state to support multiple swap partitions (issue 9520)
  • Fix archive state to work with bsdtar
  • Clarify logs for minion ID caching
  • Add numeric revision support to git state (issue 9718)
  • Update master_uri with master_ip (issue 9694)
  • Add comment to Debian mod_repo (issue 9923)
  • Fix potential undefined loop variable in rabbitmq state (issue 8703)
  • Fix for salt-virt runner to delete key on VM deletion
  • Fix for salt-run -d to limit results to specific runner or function (issue 9975)
  • Add tracebacks to jinja renderer when applicable (issue 10010)
  • Fix parsing in monit module (issue 10041)
  • Fix highstate output from syndic minions (issue 9732)
  • Quiet logging when dealing with passwords/hashes (issue 10000)
  • Fix for multiple remotes in git_pillar (issue 9932)
  • Fix npm installed command (issue 10109)
  • Add safeguards for utf8 errors in zcbuildout module
  • Fix compound commands (issue 9746)
  • Add systemd notification when master is started
  • Many doc improvements

Salt 2014.1.1 Release Notes

release:2014-03-18

Version 2014.1.1 is a bugfix release for 2014.1.0. The changes include:

Salt 2014.1.10 Release Notes

release:2014-08-01

Note

Version 2014.1.9 contained a regression which caused inaccurate Salt version detection, and thus was never packaged for general release. This version contains the version detection fix, but is otherwise identical to 2014.1.9.

Version 2014.1.10 is another bugfix release for 2014.1.0. Changes include:

  • Ensure salt-ssh will not continue if permissions on a temporary directory are not correct.
  • Use the bootstrap script distributed with Salt instead of relying on an external resource
  • Remove unused testing code
  • Ensure salt states are placed into the .salt directory in salt-ssh
  • Use a randomized path for temporary files in a salt-cloud deployment
  • Clean any stale directories to ensure a fresh copy of salt-ssh during a deployment

Salt 2014.1.10 fixes security issues documented by CVE-2014-3563: "Insecure tmp-file creation in seed.py, salt-ssh, and salt-cloud." Upgrading is recommended.

Salt 2014.1.11 Release Notes

release:2014-08-29

Version 2014.1.11 is another bugfix release for 2014.1.0. Changes include:

  • Fix for minion_id with byte-order mark (BOM) (issue 12296)
  • Fix runas deprecation in at module
  • Fix trailing slash behavior for file.makedirs_ (issue 14019)
  • Fix chocolatey path (issue 13870)
  • Fix git_pillar infinite loop issues (issue 14671)
  • Fix json outputter null case
  • Fix for minion error if one of multiple masters are down (issue 14099)

Salt 2014.1.12 Release Notes

release:2014-10-08

Version 2014.1.12 is another bugfix release for 2014.1.0. Changes include:

Salt 2014.1.13 Release Notes

release:2014-10-14

Version 2014.1.13 is another bugfix release for 2014.1.0. Changes include:

  • Fix sftp_file by checking the exit status code of scp (which broke salt-cloud) (issue 16599)

Salt 2014.1.2 Release Notes

release:2014-04-15

Version 2014.1.2 is another bugfix release for 2014.1.0. The changes include:

  • Fix username detection when su'ed to root on FreeBSD (issue 11628)
  • Fix minionfs backend for file.recurse states
  • Fix 32-bit packages of different arches than the CPU arch, on 32-bit RHEL/CentOS (issue 11822)
  • Fix bug with specifying alternate home dir on user creation (FreeBSD) (issue 11790)
  • Don’t reload site module on module refresh for MacOS
  • Fix regression with running execution functions in Pillar SLS (issue 11453)
  • Fix some modules missing from Windows installer
  • Don’t log an error for yum commands that return nonzero exit status on non-failure (issue 11645)
  • Fix bug in rabbitmq state (issue 8703)
  • Fix missing ssh config options (issue 10604)
  • Fix top.sls ordering (issue 10810 and issue 11691)
  • Fix salt-key --list all (issue 10982)
  • Fix win_servermanager install/remove function (issue 11038)
  • Fix interaction with tokens when running commands as root (issue 11223)
  • Fix overstate bug with find_job and **kwargs (issue 10503)
  • Fix saltenv for aptpkg.mod_repo from pkgrepo state
  • Fix environment issue causing file caching problems (issue 11189)
  • Fix bug in __parse_key in registry state (issue 11408)
  • Add minion auth retry on rejection (issue 10763)
  • Fix publish_session updating the encryption key (issue 11493)
  • Fix for bad AssertionError raised by GitPython (issue 11473)
  • Fix debian_ip to allow disabling and enabling networking on Ubuntu (issue 11164)
  • Fix potential memory leak caused by saved (and unused) events (issue 11582)
  • Fix exception handling in the MySQL module (issue 11616)
  • Fix environment-related error (issue 11534)
  • Include psutil on Windows
  • Add file.replace and file.search to Windows (issue 11471)
  • Add additional file module helpers to Windows (issue 11235)
  • Add pid to netstat output on Windows (issue 10782)
  • Fix Windows not caching new versions of installers in winrepo (issue 10597)
  • Fix hardcoded md5 hashing
  • Fix kwargs in salt-ssh (issue 11609)
  • Fix file backup timestamps (issue 11745)
  • Fix stacktrace on sys.doc with invalid eauth (issue 11293)
  • Fix git.latest with test=True (issue 11595)
  • Fix file.check_perms hardcoded follow_symlinks (issue 11387)
  • Fix certain pkg states for RHEL5/Cent5 machines (issue 11719)

Salt 2014.1.3 Release Notes

release:2014-04-15

Version 2014.1.3 is another bugfix release for 2014.1.0. It was created as a hotfix for a regression found in 2014.1.2, which was not distributed. The only change made was as follows:

  • Fix regression that caused saltutil.find_job to fail, causing premature terminations of salt CLI commands.

Changes in the not-distributed 2014.1.2, also included in 2014.1.3:

  • Fix username detection when su'ed to root on FreeBSD (issue 11628)
  • Fix minionfs backend for file.recurse states
  • Fix 32-bit packages of different arches than the CPU arch, on 32-bit RHEL/CentOS (issue 11822)
  • Fix bug with specifying alternate home dir on user creation (FreeBSD) (issue 11790)
  • Don’t reload site module on module refresh for MacOS
  • Fix regression with running execution functions in Pillar SLS (issue 11453)
  • Fix some modules missing from Windows installer
  • Don’t log an error for yum commands that return nonzero exit status on non-failure (issue 11645)
  • Fix bug in rabbitmq state (issue 8703)
  • Fix missing ssh config options (issue 10604)
  • Fix top.sls ordering (issue 10810 and issue 11691)
  • Fix salt-key --list all (issue 10982)
  • Fix win_servermanager install/remove function (issue 11038)
  • Fix interaction with tokens when running commands as root (issue 11223)
  • Fix overstate bug with find_job and **kwargs (issue 10503)
  • Fix saltenv for aptpkg.mod_repo from pkgrepo state
  • Fix environment issue causing file caching problems (issue 11189)
  • Fix bug in __parse_key in registry state (issue 11408)
  • Add minion auth retry on rejection (issue 10763)
  • Fix publish_session updating the encryption key (issue 11493)
  • Fix for bad AssertionError raised by GitPython (issue 11473)
  • Fix debian_ip to allow disabling and enabling networking on Ubuntu (issue 11164)
  • Fix potential memory leak caused by saved (and unused) events (issue 11582)
  • Fix exception handling in the MySQL module (issue 11616)
  • Fix environment-related error (issue 11534)
  • Include psutil on Windows
  • Add file.replace and file.search to Windows (issue 11471)
  • Add additional file module helpers to Windows (issue 11235)
  • Add pid to netstat output on Windows (issue 10782)
  • Fix Windows not caching new versions of installers in winrepo (issue 10597)
  • Fix hardcoded md5 hashing
  • Fix kwargs in salt-ssh (issue 11609)
  • Fix file backup timestamps (issue 11745)
  • Fix stacktrace on sys.doc with invalid eauth (issue 11293)
  • Fix git.latest with test=True (issue 11595)
  • Fix file.check_perms hardcoded follow_symlinks (issue 11387)
  • Fix certain pkg states for RHEL5/Cent5 machines (issue 11719)

Salt 2014.1.4 Release Notes

release:2014-05-05

Version 2014.1.4 is another bugfix release for 2014.1.0. Changes include:

Salt 2014.1.5 Release Notes

release:2014-06-11

Version 2014.1.5 is another bugfix release for 2014.1.0. Changes include:

  • Add function for finding cached job on the minion
  • Fix iptables save file location for Debian (issue 11730)
  • Fix for minion caching jobs when master is down
  • Bump default syndic_wait to 5 to fix syndic-related problems (issue 12262)
  • Add OpenBSD, FreeBSD, and NetBSD support for network.netstat (issue 12121)
  • Fix false positive error in logs for makeconf state (issue 9762)
  • Fix for yum fromrepo package installs when repo is disabled by default (issue 12466)
  • Fix for extra blank lines in file.blockreplace (issue 12422)
  • Fix grain detection for OpenVZ guests (issue 11877)
  • Fix get_dns_servers function for Windows win_dns_client
  • Use system locale for ports package installations
  • Use correct stop/restart procedure for Debian networking in debian_ip (issue 12614)
  • Fix for cmd_iter/cmd_iter_no_block blocking issues (issue 12617)
  • Fix traceback when syncing custom types (issue 12883)
  • Fix cleaning directory symlinks in file.directory
  • Add performance optimizations for saltutil.sync_all and state.highstate
  • Fix possible error in saltutil.running
  • Fix for kmod modules with dashes (issue 13239)
  • Fix possible race condition for Windows minions in state module reloading (issue 12370)
  • Fix bug with roster for passwd option that is loaded as a non-string object (issue 13249)
  • Keep duplicate version numbers from showing up in pkg.list_pkgs output
  • Fixes for Jinja renderer, timezone module/state (issue 12724)
  • Fix timedatectl parsing for systemd>=210 (issue 12728)
  • Fix saltenv being written to YUM repo config files (issue 12887)
  • Removed the deprecated external nodes classifier (originally accessible by setting a value for external_nodes in the master configuration file). Note that this functionality has been marked deprecated for some time and was replaced by the more general master tops system.
  • More robust escaping of ldap filter strings.
  • Fix trailing slash in gitfs_root causing files not to be available (issue 13185)

Salt 2014.1.6 Release Notes

release:2014-07-08

Version 2014.1.6 is another bugfix release for 2014.1.0. Changes include:

  • Fix extra iptables --help output (Sorry!) (issue 13648, issue 13507, issue 13527, issue 13607)
  • Fix mount.active for Solaris
  • Fix support for allow-hotplug statement in debian_ip network module
  • Add sqlite3 to esky builds
  • Fix jobs.active output (issue 9526)
  • Fix the virtual grain for Xen (issue 13534)
  • Fix eauth for batch mode (issue 9605)
  • Fix force-related issues with tomcat support (issue 12889)
  • Fix KeyError when cloud mapping
  • Fix salt-minion restart loop in Windows (issue 12086)
  • Fix detection of service virtual module on Fedora minions
  • Fix traceback with missing ipv4 grain (issue 13838)
  • Fix issue in roots backend with invalid data in mtime_map (issue 13836)
  • Fix traceback in jobs.active (issue 11151)
  • Fix master_tops and _ext_nodes issue (issue 13535, issue 13673)

Salt 2014.1.7 Release Notes

release:2014-07-09

Version 2014.1.7 is another bugfix release for 2014.1.0. Changes include:

This release was a hotfix release for the regression listed above which was present in the 2014.1.6 release. The changes included in 2014.1.6 are listed below:

  • Fix extra iptables --help output (Sorry!) (issue 13648, issue 13507, issue 13527, issue 13607)
  • Fix mount.active for Solaris
  • Fix support for allow-hotplug statement in debian_ip network module
  • Add sqlite3 to esky builds
  • Fix jobs.active output (issue 9526)
  • Fix the virtual grain for Xen (issue 13534)
  • Fix eauth for batch mode (issue 9605)
  • Fix force-related issues with tomcat support (issue 12889)
  • Fix KeyError when cloud mapping
  • Fix salt-minion restart loop in Windows (issue 12086)
  • Fix detection of service virtual module on Fedora minions
  • Fix traceback with missing ipv4 grain (issue 13838)
  • Fix issue in roots backend with invalid data in mtime_map (issue 13836)
  • Fix traceback in jobs.active (issue 11151)
  • Fix master_tops and _ext_nodes issue (issue 13535, issue 13673)

Salt 2014.1.8 Release Notes

release:2014-07-30

Note

This release contained a regression which caused inaccurate Salt version detection, and thus was never packaged for general release. Please use version 2014.1.10 instead.

Version 2014.1.8 is another bugfix release for 2014.1.0. Changes include:

  • Ensure salt-ssh will not continue if permissions on a temporary directory are not correct.
  • Use the bootstrap script distributed with Salt instead of relying on an external resource
  • Remove unused testing code
  • Ensure salt states are placed into the .salt directory in salt-ssh
  • Use a randomized path for temporary files in a salt-cloud deployment
  • Clean any stale directories to ensure a fresh copy of salt-ssh during a deployment

Salt 2014.1.9 Release Notes

release:2014-07-31

Note

This release contained a regression which caused inaccurate Salt version detection, and thus was never packaged for general release. Please use version 2014.1.10 instead.

Note

Version 2014.1.8 contained a regression which caused inaccurate Salt version detection, and thus was never packaged for general release. This version contains the version detection fix, but is otherwise identical to 2014.1.8.

Version 2014.1.9 is another bugfix release for 2014.1.0. Changes include:

  • Ensure salt-ssh will not continue if permissions on a temporary directory are not correct.
  • Use the bootstrap script distributed with Salt instead of relying on an external resource
  • Remove unused testing code
  • Ensure salt states are placed into the .salt directory in salt-ssh
  • Use a randomized path for temporary files in a salt-cloud deployment
  • Clean any stale directories to ensure a fresh copy of salt-ssh during a deployment

Salt 0.10.0 Release Notes

release:2012-06-16

0.10.0 has arrived! This release comes with MANY bug fixes, and new capabilities which greatly enhance performance and reliability. This release is primarily a bug fix release with many new tests and many repaired bugs. This release also introduces a few new key features which were brought in primarily to repair bugs and some limitations found in some of the components of the original architecture.

Major Features
Event System

The Salt Master now comes equipped with a new event system. This event system has replaced some of the back end of the Salt client and offers the beginning of a system which will make it possible to plug external applications into Salt. The event system relies on a local ZeroMQ publish socket; other processes can connect to this socket and listen for events. The new events can be easily managed via Salt's event library.
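
A minimal sketch of listening for events via the event library (the socket directory below assumes a default install; adjust it to your master's sock_dir):

# a minimal sketch, assuming the default sock_dir
import salt.utils.event

event = salt.utils.event.MasterEvent('/var/run/salt')
data = event.get_event()  # waits (default timeout) for the next event; None on timeout
print(data)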

Unprivileged User Updates

Some enhancements have been added to Salt for running as a user other than root. These new additions should make switching the user that the Salt Master runs as painless: simply change the user option in the master configuration and restart the master, and Salt will take care of all of the particulars for you.

Peer Runner Execution

Salt has long had the peer communication system, used to allow minions to send commands via the salt master. 0.10.0 adds a new capability here: the master can now be configured to allow minions to execute Salt runners via the peer_run option in the salt master configuration.
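
The option takes the same shape as the peer configuration; a minimal sketch (the minion id and runner shown are illustrative):

peer_run:
  foo.example.com:
    - manage.up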

YAML Parsing Updates

In the past the YAML parser for sls files would return incorrect numbers when the file mode was set with a preceding 0. The YAML parser used in Salt has been modified to no longer convert these numbers into octal but to keep them as the correct value, so that sls files can be a little cleaner to write.
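
For example, a mode written with a leading zero now survives parsing intact (a sketch; the path and mode are illustrative):

/etc/myapp.conf:
  file.managed:
    - mode: 0644    # with the updated parser the mode keeps its intended value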

State Call Data Files

It was requested that the minion keep a local cache of the most recently executed state run. This has been added, and the data from each state run is now stored in a msgpack file in the minion's cachedir.

Turning Off the Job Cache

A new option has been added to the master configuration file. In previous releases the Salt client would look over the Salt job cache to read in the minion return data. With the addition of the event system the Salt client can now watch for events directly from the master worker processes.

This means that the job cache is no longer a hard requirement. Keep in mind though, that turning off the job cache means that historic job execution data cannot be retrieved.

Test Updates
Minion Swarms Are Faster

To continue our efforts with testing Salt's ability to scale, the minionswarm script has been updated. The minionswarm can now start up minions much faster than it could before and comes with a new feature allowing modules to be disabled, thus lowering the minion's footprint when making a swarm. These updates have allowed us to test swarms like the following:

# python minionswarm.py -m 20 --master salt-master
Many Fixes

To get a good idea of the number of bugfixes this release offers, take a look at the closed tickets for 0.10.0; this is a very substantial update:

https://github.com/saltstack/salt/issues?milestone=12&state=closed

Master and Minion Stability Fixes

As Salt deployments grow new ways to break Salt are discovered. 0.10.0 comes with a number of fixes for the minions and master greatly improving Salt stability.

Salt 0.10.1 Release Notes

release:2012-06-19

Salt 0.10.2 Release Notes

release:2012-07-30

0.10.2 is out! This release comes with enhancements to the pillar interface, cleaner ways to access the salt-call capabilities in the API, minion data caching, and the addition of the event system to salt minions.

There have also been updates to the ZeroMQ functions, many more tests (thanks to sponsors, the code sprint and many contributors) and a swath of bug fixes.

Major Features
Ext Pillar Modules

The ranks of available Salt module directories see a new member in 0.10.2. With the popularity of pillar, a higher demand has arisen for ext_pillar interfaces to be added like regular Salt modules. Now ext_pillar interfaces can be added in the same way as other modules: just drop them into the pillar directory in the salt source.

Minion Events

In 0.10.0 an event system was added to the Salt master. 0.10.2 adds the event system to the minions as well. Now events can be published on a local minion as well.

The minions can also send events back up to the master. This means that Salt is able to communicate individual events from the minions back up to the Master which are not associated with a command.

Minion Data Caching

When pillar was introduced, the landscape of available data was greatly enhanced. The minions began sending grain data back to the master on a regular basis.

The new config option on the master called minion_data_cache instructs the Salt master to maintain a cache of the minion's grains and pillar data in the cachedir. This option is turned off by default to avoid hitting the disk more, but when enabled the cache is used to make grain matching from the salt command more powerful, since the minions that will match can be predetermined.
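
Enabling it is a single entry in the master configuration:

minion_data_cache: True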

Backup Files

By default, all files replaced by the file.managed and file.recurse states were simply deleted. 0.10.2 adds a new option: by setting the backup option to minion, the files are backed up before they are replaced.

The backed up files are located in the cachedir under the file_backup directory. On a default system this will be at: /var/cache/salt/file_backup
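
For example (a sketch; the paths are illustrative):

/etc/httpd/httpd.conf:
  file.managed:
    - source: salt://webserver/files/httpd.conf
    - backup: minion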

Configuration files

salt-master and salt-minion automatically load additional configuration files from master.d/*.conf and minion.d/*.conf respectively, where master.d/minion.d is a directory in the same directory as the main configuration file.

Salt Key Verification

A number of users complained that they had inadvertently deleted the wrong salt authentication keys. 0.10.2 now displays what keys are going to be deleted and verifies that they are the keys that are intended for deletion.

Key auto-signing

If autosign_file is specified in the configuration file, incoming keys will be compared to the list of keynames in autosign_file. Both regular expressions and globbing are supported.

The file must only be writable by the user; otherwise it will be ignored. To relax the permissions and allow group write access, set the permissive_pki_access option.
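
A sketch of such a file (the entries are illustrative key names, globs, and regular expressions):

web*
db-[0-9]*
^(intern|dev)-.*$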

Module changes
Improved OpenBSD support

New modules for managing services and packages were provided by Joshua Elsasser to further improve the support for OpenBSD.

Existing modules like the disk module were also improved to support OpenBSD.

SQL Modules

The MySQL and PostgreSQL modules have both received a number of additions thanks to the work of Avi Marcus and Roman Imankulov.

ZFS Support on FreeBSD

A new ZFS module has been added by Kurtis Velarde for FreeBSD supporting various ZFS operations like creating, extending or removing zpools.

Augeas

A new Augeas module by Ulrich Dangel for editing and verifying config files.

Native Debian Service module

Support for Debian was further improved with a new native service module by Ahmad Khayyat, supporting disable and enable.

Cassandra

Cassandra support has been added by Adam Garside. Currently only status and diagnostic information are supported.

Networking

The networking support for RHEL has been improved and now supports bonding as well as zeroconf configuration.

Monit

Basic monit support by Kurtis Velarde to control services via monit.

nzbget

Basic support for controlling nzbget, by Joseph Hall.

Bluetooth

Basic bluez support for managing and controlling Bluetooth devices. Supports scanning as well as pairing/unpairing, by Joseph Hall.

Test Updates
Consistency Testing

Another testing script has been added. A bug was found in pillar when many minions generated pillar data at the same time. The new consist.py script in the tests directory was created to reproduce bugs where data should always be consistent.

Many Fixes

To get a good idea of the number of bugfixes this release offers, take a look at the closed tickets for 0.10.2; this is a very substantial update:

https://github.com/saltstack/salt/issues?milestone=24&page=1&state=closed

Master and Minion Stability Fixes

As Salt deployments grow new ways to break Salt are discovered. 0.10.2 comes with a number of fixes for the minions and master greatly improving Salt stability.

Salt 0.10.3 Release Notes

release:2012-09-30

The latest taste of Salt has come: this release has many fixes and feature additions. Modifications have been made to make ZeroMQ connections more reliable, the beginning of the ACL system is in place, a new command line parsing system has been added, dynamic module distribution has become more environment aware, the new master_finger option has been added, and much more!

Major Features
ACL System

The new ACL system has been introduced. The ACL system allows for system users other than root to execute salt commands. Users can be allowed to execute specific commands in the same way that minions are opened up to the peer system.

The configuration value to open up the ACL system is called client_acl and is configured like so:

client_acl:
  fred:
    - test..*
    - pkg.list_pkgs

Where fred is allowed access to functions in the test module and to the pkg.list_pkgs function.

Master Finger Option

The master_finger option has been added to improve the security of minion provisioning. The master_finger option allows the fingerprint of the master public key to be set in the configuration file to double verify that the master is valid. This option was added to make it possible to pre-authenticate the master when provisioning new minions, helping to prevent man-in-the-middle attacks in some situations.
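
In the minion configuration (the fingerprint value is illustrative; the real one can be printed with salt-key -F master, described below):

master_finger: 'ba:30:65:2a:d6:9e:20:4f:d8:b2:f3:a7:d4:65:11:13'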

Salt Key Fingerprint Generation

The ability to generate fingerprints of keys used by Salt has been added to salt-key. The new option finger accepts the name of the key to generate and display a fingerprint for.

salt-key -F master

Will display the fingerprints for the master public and private keys.

Parsing System

Pedro Algavio, aka s0undt3ch, has added a substantial update to the command line parsing system that makes the help message output much cleaner and easier to search through. Salt parsers now provide --versions-report in addition to the usual --version information, which you can include when reporting any issues found.

Key Generation

We have reduced the requirements needed for salt-key to generate minion keys. You're no longer required to have salt configured and its common directories created just to generate keys. This might prove useful if you're batch-creating keys to pre-load on minions.

Startup States

A few configuration options have been added which allow for states to be run when the minion daemon starts. This can be a great advantage when deploying with Salt because the minion can apply states right when it first runs. To use startup states set the startup_states configuration option on the minion to highstate.
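
In the minion configuration file:

startup_states: highstate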

New Exclude Declaration

Some users have asked about adding the ability to ensure that other sls files or ids are excluded from a state run. The exclude statement will delete all of the data loaded from the specified sls file or will delete the specified id:

exclude:
  - sls: http
  - id: /etc/vimrc

Max Open Files

While we're currently unable to properly handle ZeroMQ's abort signals when the maximum number of open files is reached, due to the way this is handled inside ZeroMQ, we have minimized the chances of this happening without at least warning the user.

More State Output Options

Some major changes have been made to the state output system. In the past state return data was printed in a very verbose fashion and only states that failed or made changes were printed by default. Now two options can be passed to the master and minion configuration files to change the behavior of the state output. State output can be set to verbose (default) or non-verbose with the state_verbose option:

state_verbose: False

It is noteworthy that the state_verbose option used to be set to False by default but has been changed to True by default in 0.10.3 due to many requests for the change.

The next option to be aware of is new and is called state_output. This option allows for the state output to be set to full (default) or terse.

The full output is the standard state output, but the new terse output will print only one line per state, making the output much easier to follow when executing a large state system.

state_output: terse

state.file.append Improvements

The salt state file.append() tries not to append text which is already present. Previously the matching check was made line by line. While this kind of check is enough for most cases, if the text being appended was multi-line the check would not work properly. This is now handled correctly: the match is done against the text as a whole, ignoring any whitespace addition or removal except inside commas. This does not mean that salt loads the whole file into memory in order to match over multiple lines; a careless read of, say, a 4GB file could make salt consume that much memory, but salt uses a buffered file reader which keeps a maximum of 256KB in memory and iterates over the file in chunks of 32KB to test for the match. This should be more than enough; if it is not, please explain your usage in a ticket. With this change, salt.modules.file.contains(), salt.modules.file.contains_regex(), salt.modules.file.contains_glob() and salt.utils.find now also search and match using the buffered-chunks approach described above.

Two new keyword arguments were also added: makedirs and source. The first, makedirs, will create the necessary directories in order to append to the specified file; of course, it only applies when appending to a file in a directory which does not yet exist:

/tmp/salttest/file-append-makedirs:
  file.append:
    - text: foo
    - makedirs: True

The second, source, allows one to append the contents of a file instead of specifying the text directly:

/tmp/salttest/file-append-source:
  file.append:
    - source: salt://testfile

Security Fix

A timing vulnerability was uncovered in the code which decrypts the AES messages sent over the network. This has been fixed and upgrading is strongly recommended.

Salt 0.10.4 Release Notes

release:2012-10-23

Salt 0.10.4 is a monumental release for the Salt team, with two new module systems, many additions to allow granular access to Salt, improved platform support and much more.

This release is also exciting because we have been able to shorten the release cycle back to under a month. We are working hard to keep up the aggressive pace and look forward to having releases happen more frequently!

This release also includes a serious security fix and all users are very strongly recommended to upgrade. As usual, upgrade the master first, and then the minion to ensure that the process is smooth.

Major Features
External Authentication System

The new external authentication system allows Salt to pass authentication through to any external authentication system to determine if a user has permission to execute a Salt command. The Unix PAM system is the first supported system, with more to come!

The external authentication system allows for specific users to be granted access to execute specific functions on specific minions. Access is configured in the master configuration file, and uses the new access control system:

external_auth:
  pam:
    thatch:
      - 'web*':
        - test.*
        - network.*

The configuration above allows the user thatch to execute functions in the test and network modules on minions that match the web* target.

Access Control System

All Salt systems can now be configured to grant access to non-administrative users in a granular way. The old configuration continues to work. Specific functions can be opened up to specific minions from specific users in the case of external auth and client ACLs, and for specific minions in the case of the peer system.

Access controls are configured like this:

client_acl:
  fred:
    - web\*:
      - pkg.list_pkgs
      - test.*
      - apache.*

Target by Network

A new matcher has been added to the system which allows for minions to be targeted by network. This new matcher can be called with the -S flag on the command line and is available in all places that the matcher system is available. Using it is simple:

$ salt -S '192.168.1.0/24' test.ping
$ salt -S '192.168.1.100' test.ping

Nodegroup Nesting

Previously a nodegroup could not include another nodegroup. This restriction has been lifted, and nodegroups are now expanded within other nodegroups using the N@ classifier.
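
As an illustrative sketch (the group names and targets are hypothetical), nested nodegroups in the master config might look like this:

nodegroups:
  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com'
  group2: 'G@os:Debian and N@group1'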

Salt Key Delete by Glob

The ability to delete minion keys by glob has been added to salt-key. To delete all minion keys whose minion name starts with 'web':

$ salt-key -d 'web*'

Master Tops System

The external_nodes system has been upgraded to allow for modular subsystems to be used to generate the top file data for a highstate run.

The external_nodes option still works but will be deprecated in the future in favor of the new master_tops option.

Example of using master_tops:

master_tops:
  ext_nodes: cobbler-external-nodes

Next Level Solaris Support

A lot of work has been put into improved Solaris support by Romeo Theriault. Packaging modules (pkgadd/pkgrm and pkgutil) and states, cron support, and user and group management have all been added and improved upon. These additions, along with SMF (Service Management Facility) service support and improved Solaris grain detection in 0.10.3, add up to Salt becoming a great tool for managing Solaris servers.

Security

A vulnerability in the security handshake was found and has been repaired. Old minions are still able to connect to a new master, so as usual the master should be updated first, followed by the minions.

Pillar Updates

The pillar communication has been updated to add some extra levels of verification so that the intended minion is the only one allowed to gather the data. Once all minions and the master are updated to Salt 0.10.4, please activate pillar 2 by changing the pillar_version option in the master config to 2. This will be set to 2 by default in a future release.
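
The corresponding master config entry would simply be:

pillar_version: 2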

Salt 0.10.5 Release Notes

release:2012-11-15

Salt 0.10.5 is ready, and comes with some great new features. A few more interfaces have been modularized, like the outputter system. The job cache system has been made more powerful and can now store and retrieve jobs archived in external databases. The returner system has been extended to allow minions to easily retrieve data from a returner interface.

As usual, this is an exciting release, with many noteworthy additions!

Major Features
External Job Cache

The external job cache is a system which allows for a returner interface to also act as a job cache. This system is intended to allow users to store job information in a central location for longer periods of time and to make the act of looking up information from jobs executed on other minions easier.

Currently the external job cache is supported via the mongo and redis returners:

ext_job_cache: redis
redis.host: salt

Once the external job cache is turned on, the new ret module can be used on the minions to retrieve return information from the job cache. This can be a great way for minions to respond and react to other minions.
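
As a sketch of what this looks like in practice (the job id shown is a placeholder), a minion could look up the return data for a previous job with:

salt-call ret.get_jid redis 20121115123456789012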

OpenStack Additions

OpenStack integration with Salt has been moving forward at a blistering pace. The new nova, glance, and keystone modules represent the beginning of ongoing OpenStack integration.

The Salt team has had many conversations with core OpenStack developers and is working on linking to OpenStack in powerful new ways.

Wheel System

A new API was added to the Salt Master which allows the master to be managed via an external API. This new system allows Salt API to easily hook into the Salt Master and manage configs, modify the state tree, manage the pillar and more. The main motivation for the wheel system is to enable features needed in the upcoming web UI so users can manage the master just as easily as they manage minions.

The wheel system has also been hooked into the external auth system. This allows specific users to have granular access to manage components of the Salt Master.

Render Pipes

Jack Kuan has added a substantial new feature. The render pipes system allows Salt to treat the render system like Unix pipes. This new system enables sls files to be passed through specific render engines. While the default renderer is still recommended, different engines can now be more easily combined. For example, to pipe the output of Mako through the YAML renderer, use this shebang line:

#!mako|yaml

Salt Key Overhaul

The Salt Key system was originally developed as only a CLI interface, but as time went on it was pressed into service as a clumsy API. This release marks a complete overhaul of Salt Key. It has been rewritten to function purely from an API and to use the outputter system. The benefit is that the outputter system now works much more cleanly with Salt Key, and the internals of Salt Key can be reused much more cleanly.

Modular Outputters

The outputter system is now loaded in a modular way. This means that output systems can be more easily added by dropping a Python file containing a function named output onto the master.

Gzip from Fileserver

Gzip compression has been added as an option to the cp.get_file and cp.get_dir commands. This will make file transfers more efficient and faster, especially over slower network links.
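
For example, the compression level (1-9) can be passed along with the call; the paths here are illustrative:

salt '*' cp.get_file salt://path/to/file /minion/dest gzip=5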

Unified Module Configuration

In past releases of Salt, the minions needed to be configured for certain modules to function. This was difficult because it required pre-configuring the minions. 0.10.5 changes this by making all module configs on minions search the master config file for values.

Now if a single database server is needed, then it can be defined in the master config and all minions will become aware of the configuration value.
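
As a hypothetical sketch, the connection settings for the mysql module could live in the master config like this (the values are illustrative):

mysql.host: 'db.example.com'
mysql.user: 'salt'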

Salt Call Enhancements

The salt-call command has been updated in a few ways. Now salt-call can take the --return option to send the data to a returner. Also, salt-call now reports executions in the minion proc system, which allows the master to be aware of the operations salt-call is running.
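
For example, assuming a redis returner is configured, a minion-side execution could ship its return data off like this:

salt-call test.ping --return redis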

Death to pub_refresh and sub_timeout

The old configuration values pub_refresh and sub_timeout have been removed. These options were in place to alleviate problems found in earlier versions of ZeroMQ which have since been fixed. Their continued use proved to cause problems with message passing, so they have been completely removed.

Git Revision Versions

When running Salt directly from git (for testing or development, of course) it has been difficult to know exactly what code is being executed. The new versioning system detects the git revision when building, as well as how many commits have been made since the last release. A release from git will look like this:

0.10.4-736-gec74d69

Svn Module Addition

Anthony Cornehl (twinshadow) contributed a module that adds Subversion support to Salt. This great addition helps round out Salt's VCS support.

Noteworthy Changes
Arch Linux Defaults to Systemd

Arch Linux recently changed to use systemd by default and discontinued support for init scripts. Salt has followed suit and defaults to systemd now for managing services in Arch.

Salt, Salt Cloud and Openstack

With the releases of Salt 0.10.5 and Salt Cloud 0.8.2, OpenStack becomes the first (non-OS) piece of software to include support both on the user level (with Salt Cloud) and the admin level (with Salt). We are excited to continue to extend support of other platforms at this level.

Salt 0.11.0 Release Notes

release:2012-12-14

Salt 0.11.0 is here, with some highly sought-after and exciting features. These features include the new overstate system, the reactor system, a new state run scope component called __context__, the beginning of the search system (which still needs a great deal of work), multiple package states, the MySQL returner, and a better system to arbitrarily reference outputters.

It is also noteworthy that we are changing how we mark release numbers. For the life of the project we have been pushing every release with features and fixes as point releases. We will now be releasing point releases for only bug fixes on a more regular basis and major feature releases on a slightly less regular basis. This means that the next release will be a bugfix only release with a version number of 0.11.1. The next feature release will be named 0.12.0 and will mark the end of life for the 0.11 series.

Major Features
OverState

The overstate system is a simple way to manage rolling state executions across many minions. The overstate allows for a state to depend on the successful completion of another state.
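
A minimal sketch of an overstate file (the stage names, matches, and sls names are illustrative); here the webservers stage will only run once the mysql stage completes successfully:

mysql:
  match: 'db*'
  sls:
    - mysql.server

webservers:
  match: 'web*'
  require:
    - mysql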

Reactor System

The new reactor system allows for a reactive logic engine to be created which can respond to events within a salted environment. The reactor system uses sls files to match events fired on the master with actions, enabling Salt to react to problems in an infrastructure.

Your load-balanced group of webservers is under extra load? Spin up a new VM and add it to the group. Your fileserver is filling up? Send a notification to your sysadmin on call. The possibilities are endless!
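
A hypothetical master config sketch (the event tag and sls path are illustrative); the reactor option maps event tags to the sls files that react to them:

reactor:
  - 'minion_start':
    - /srv/reactor/start.sls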

Module Context

A new component has been added to the module loader system. The module context is a data structure that can hold objects for a given scope within the module.

This allows for components that are initialized to be stored in a persistent context which can greatly speed up ongoing connections. Right now the best example can be found in the cp execution module.

Multiple Package Management

A long-desired feature has been added to package management. Until now, Salt states have always installed packages one at a time. On most platforms this is not the fastest way to install packages. Erik Johnson, aka terminalmage, has modified the package modules for many providers and added new capabilities to install groups of packages. These package groups can be defined as a list of packages available in repository servers:

python_pkgs:
  pkg.installed:
    - pkgs:
      - python-mako
      - whoosh
      - python-git

or defined by the location of specific packages:

python_pkgs:
  pkg.installed:
    - sources:
      - python-mako: http://some-rpms.org/python-mako.rpm
      - whoosh: salt://whoosh/whoosh.rpm
      - python-git: ftp://companyserver.net/python-git.rpm

Search System

The bones of the search system have been added. This is a very basic interface that allows for search backends to be added as search modules. The first supported search module is the whoosh search backend. Right now only the basic paths for the search system are in place, making this very experimental. Further development will involve improving the search routines and index routines for whoosh and other search backends.

The search system has been made to allow for searching through all of the state and pillar files, configuration files and all return data from minion executions.

Notable Changes

All previous versions of Salt have shared many directories between the master and minion. The default locations for keys, cached data, and sockets have been shared by master and minion. This has created serious problems when running a master and a minion on the same system. 0.11.0 changes the defaults to be separate directories. Salt will also attempt to migrate all of the old key data into the correct new directories, but if it is not successful it may need to be done manually. If your keys exhibit issues after updating, make sure that they have been moved from /etc/salt/pki to /etc/salt/pki/{master,minion}.

The old setup will look like this:

/etc/salt/pki
|-- master.pem
|-- master.pub
|-- minion.pem
|-- minion.pub
|-- minion_master.pub
|-- minions
|   `-- ragnarok.saltstack.net
|-- minions_pre
`-- minions_rejected

Where the old setup placed the accepted minion keys in /etc/salt/pki/minions, the new setup places the accepted minion keys in /etc/salt/pki/master/minions.

/etc/salt/pki
|-- master
|   |-- master.pem
|   |-- master.pub
|   |-- minions
|   |   `-- ragnarok.saltstack.net
|   |-- minions_pre
|   `-- minions_rejected
|-- minion
|   |-- minion.pem
|   |-- minion.pub
|   `-- minion_master.pub

Salt 0.11.1 Release Notes

release:2012-12-19

Salt 0.12.0 Release Notes

release:2013-01-15

Another feature release of Salt is here! Some exciting additions are included, with more ways to make Salt modular and even easier management of the Salt file server.

Major Features
Modular Fileserver Backend

The new modular fileserver backend allows for any external system to be used as a salt file server. The main benefit here is that it is now possible to tell the master to directly use a git remote location, or many git remote locations, automatically mapping git branches and tags to salt environments.
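
A minimal master config sketch (the repository URL is illustrative), enabling the git backend and pointing it at a remote:

fileserver_backend:
  - git

gitfs_remotes:
  - git://github.com/example/salt-states.git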

Windows is First Class!

A new Salt Windows installer is now available! Much work has been put in to improve Windows support. With this much easier method of getting Salt on your Windows machines, we hope even more development and progress will occur. Please file bug reports on the Salt GitHub repo issue tracker so we can continue improving.

One thing that is missing on Windows that Salt uses extensively is a software package manager and a software package repository. The Salt pkg state allows sys admins to install software across their infrastructure and across operating systems. Software on Windows can now be managed in the same way. The SaltStack team built a package manager that interfaces with the standard Salt pkg module to allow for installing and removing software on Windows. In addition, a software package repository has been built on top of the Salt fileserver. A small YAML file provides the information necessary for the package manager to install and remove software.

An interesting feature of the new Salt Windows software package repository is that one or more remote git repositories can supplement the master's local repository. The repository can point to software on the master's fileserver or on an HTTP, HTTPS, or ftp server.

New Default Outputter

Salt displays data to the terminal via the outputter system. For a long time the default outputter for Salt has been the python pretty print library. While this has been a generally reasonable outputter, it did have many failings. The new default outputter is called "nested"; it recursively scans return data structures and prints them out cleanly.

If the result of the new nested outputter is not desired, any other outputter can be used via the --out option, or the output option can be set in the master and minion configs to change the default outputter.

Internal Scheduler

The internal Salt scheduler is a new capability which allows for functions to be executed at given intervals on the minion, and for runners to be executed at given intervals on the master. The scheduler allows for sequences such as executing state runs (locally on the minion or remotely via an overstate) or continually gathering system data to be run at given intervals.

The configuration is simple: add the schedule option to the master or minion config and specify the jobs to run. This example in the master config will execute the state.over runner every 60 minutes:

schedule:
  overstate:
    function: state.over
    minutes: 60

This example for the minion configuration will execute a highstate every 30 minutes:

schedule:
  highstate:
    function: state.highstate
    minutes: 30

Optional DSL for SLS Formulas

Jack Kuan, our renderer expert, has created something that is astonishing. Salt now comes with an optional Python-based DSL; this is a very powerful interface that makes writing SLS files in pure Python easier than it was with the raw py renderer. As usual this can be selected with the renderer shebang line, so a single sls can be written with the DSL when pure Python power is needed, while keeping other sls files simple with YAML.
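
A minimal sketch of an sls written with the DSL (the package and service names are illustrative):

#!pydsl

apache = state('apache')
apache.pkg.installed()
apache.service.running()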

Set Grains Remotely

A new execution function and state module have been added that allow grains to be set on the minion. Now grains can be set via remote execution or via states. Use the grains.present state or the grains.setval execution function.
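
For example (the grain name and value are illustrative), a grain could be set across all minions via remote execution:

salt '*' grains.setval deployment datacenter4

or enforced via a state, where the state id is the grain name:

deployment:
  grains.present:
    - value: datacenter4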

Gentoo Additions

Major additions have been made to Gentoo-specific components. These encompass execution modules and states ranging from support for the make.conf file to tools like layman.

Salt 0.12.1 Release Notes

release:2013-01-21

Salt 0.13.0 Release Notes

release:2013-02-12

The lucky number 13 has turned the corner! From CLI notifications when quitting a salt command, to substantial improvements on Windows, Salt 0.13.0 has arrived!

Major Features
Improved file.recurse Performance

The file.recurse system has been deployed and used in a vast array of situations. Fixes to the file state and module have opened up new ways of running file.recurse to make it faster. Now the file.recurse state will download fewer files and will run substantially faster.

Windows Improvements

Minion stability on Windows has improved. Many file operations, including file.recurse, have been fixed and improved. The network module works better, including network.interfaces. Both 32-bit and 64-bit installers are now available.

Nodegroup Targeting in Peer System

In the past, nodegroups were not available for targeting via the peer system. This has been fixed, allowing the new nodegroup expr_form argument for the publish.publish function:

salt-call publish.publish group1 test.ping expr_form=nodegroup

Blacklist Additions

Additions allowing more granular blacklisting are available in 0.13.0. The ability to blacklist users and functions in client_acl has been added, as well as the ability to exclude state formulas from the command line.

Command Line Pillar Embedding

Pillar data can now be embedded on the command line when calling state.sls and state.highstate. This allows for on-the-fly changes or additions to pillar and makes parameterizing state formulas even easier. This is done via the pillar keyword argument:

salt '*' state.highstate pillar='{"cheese": "spam"}'

The above example will extend the existing pillar to hold the cheese key with a value of spam. If the cheese key is already specified in the minion's pillar then it will be overwritten.

CLI Notifications

In the past, hitting ctrl-C and quitting from the salt command would just drop to a shell prompt; this caused confusion for users who expected the remote executions to also quit. Now a message is displayed showing what command can be used to track the execution and what the job id for the execution is.

Version Specification in Multiple-Package States

Versions can now be specified within multiple-package pkg.installed states. An example can be found below:

mypkgs:
  pkg.installed:
    - pkgs:
      - foo
      - bar: 1.2.3-4
      - baz

Noteworthy Changes

The configuration subsystem in Salt has been overhauled to make the opts dict used by Salt applications more portable. The problem is that this change is incompatible with salt-cloud, and salt-cloud will need to be updated to the latest git to work with Salt 0.13.0. Salt Cloud 0.8.5 will also require Salt 0.13.0 or later to function.

The SaltStack team is sorry for the inconvenience here, we work hard to make sure these sorts of things do not happen, but sometimes hard changes get in.

Salt 0.13.1 Release Notes

release:2013-02-15

Salt 0.13.2 Release Notes

release:2013-03-13

Salt 0.13.3 Release Notes

release:2013-03-18

Salt 0.14.0 Release Notes

release:2013-03-23

Salt 0.14.0 is here! This release was held up primarily by PyCon, Scale, and illness, but has arrived! 0.14.0 comes with many new features and is breaking ground for Salt in the area of cloud management with the introduction of Salt providing basic cloud controller functionality.

Major Features
Salt - As a Cloud Controller

The first primitive inroad to using Salt as a cloud controller is available in 0.14.0. Be advised that this is alpha software, tested in only a few very small environments.

The cloud controller is built using kvm and libvirt for the hypervisors. Hypervisors are autodetected as minions and only need to have libvirt running and kvm installed to function. The features of the Salt cloud controller are as follows:

  • Basic VM discovery and reporting
  • Creation of new virtual machines
  • Seeding virtual machines with Salt via qemu-nbd or libguestfs
  • Live migration (shared and non-shared storage)
  • Deletion of existing VMs

It is noteworthy that this feature is still alpha, meaning that all rights are reserved to change the interface if need be in future releases!

Libvirt State

One of the problems with libvirt is management of certificates needed for live migration and cross communication between hypervisors. The new libvirt state makes the Salt Master hold a CA and manage the signing and distribution of keys onto hypervisors, just add a call to the libvirt state in the sls formulas used to set up a hypervisor:

libvirt_keys:
  libvirt.keys

New get Functions

An easier way to manage data has been introduced. The pillar, grains, and config execution modules have been extended with the new get function. This function works much like the get method of a Python dict, but with an enhancement: nested dict components can be extracted using a : delimiter.

If a structure like this is in pillar:

foo:
  bar:
    baz: quo

Extracting it from the raw pillar in an sls formula or file template is done this way:

{{ pillar['foo']['bar']['baz'] }}

Now with the new get function the data can be safely gathered and a default can be set allowing the template to fall back if the value is not available:

{{ salt['pillar.get']('foo:bar:baz', 'qux') }}

This makes handling nested structures much easier, and defaults can be cleanly set. This new function is being used extensively in the new formulae repository of salt sls formulas.

Salt 0.14.1 Release Notes

release:2013-04-13

Salt 0.15.0 Release Notes

release:2013-05-03

The many new features of Salt 0.15.0 have arrived! Salt 0.15.0 comes with many smaller features and a few larger ones.

These features range from better debugging tools to the new Salt Mine system.

Major Features
The Salt Mine

First there was the peer system, allowing commands to be executed from a minion to other minions to gather data live. Then there was the external job cache for storing and accessing long-term data. Now the middle ground is being filled in with the Salt Mine. The Salt Mine is a system used to execute functions on a regular basis on minions, store only the most recent data from those functions on the master, and then look the data up via targets.

The mine caches data that is public to all minions, so when a minion posts data to the mine all other minions can see it.
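
A minimal minion config sketch (the function choice and interval are illustrative); mine_functions defines what to execute, and mine_interval how often, in minutes, to refresh the data on the master:

mine_functions:
  network.interfaces: []

mine_interval: 60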

IPV6 Support

0.13.0 saw the addition of initial IPv6 support, but errors were encountered and it needed to be stripped out. This time the code covers more cases and must be explicitly enabled, but the support is much more extensive than before.

Copy Files From Minions to the Master

Minions have long been able to copy files down from the master file server, but until now files could not be easily copied from the minion up to the master.

A new function called cp.push can push files from the minions up to the master server. The uploaded files are then cached on the master in the master cachedir for each minion.
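
Usage is a single call (the path is illustrative); note that the master must have file_recv enabled in its config for pushed files to be accepted:

salt '*' cp.push /etc/fstab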

Better Template Debugging

Template errors have long been a burden when writing states and pillar. 0.15.0 now sends the compiled template data to the debug log, which makes tracking down errors in intermediate-stage templates much easier. Running state.sls or state.highstate with -l debug will now print out the rendered templates in the debug information.

State Event Firing

The state system is now more closely tied to the master's event bus. Now when a state fails the failure will be fired on the master event bus so that the reactor can respond to it.

Major Syndic Updates

The Syndic system has basically been rewritten. Now it runs in a completely asynchronous way and functions primarily as an event broker. This means that events fired on the syndic are now pushed up to the higher-level master, instead of the old method which waited for the client libraries to return.

This makes the syndic much more accurate and powerful; it also means that all events fired on the syndic master make it up the pipe as well, making a reactor on the higher-level master able to react to minions further downstream.

Peer System Updates

The Peer System has been updated to run using the client libraries instead of firing directly over the publish bus. This makes the peer system much more consistent and reliable.

Minion Key Revocation

In the past, when a minion was decommissioned the key needed to be manually deleted on the master, but now a function on the minion can be used to revoke the calling minion's key:

$ salt-call saltutil.revoke_auth

Function Return Codes

Functions can now be assigned numeric return codes to determine if the function executed successfully. While not all functions have been given return codes, many have and it is an ongoing effort to fill out all functions that might return a non-zero return code.

Functions in Overstate

The overstate system was originally created to just manage the execution of states, but with the addition of return codes to functions, requisite logic can now be used with respect to the overstate. This means that an overstate stage can now run single functions instead of just state executions.

Pillar Error Reporting

Previously if errors surfaced in pillar, then the pillar would consist of only an empty dict. Now all data that was successfully rendered stays in pillar and the render error is also made available. If errors are found in the pillar, states will refuse to run.

Using Cached State Data

Sometimes states are executed purely to maintain a specific state rather than to update states with new configs. This is the motivation for the new cached state system. By adding cache=True to a state call, the state will not be generated fresh from the master; instead the last state data to be generated will be used. If no previous state data is available then fresh data will be generated.
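
For example:

salt '*' state.highstate cache=True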

Monitoring States

The new monitoring states system has been started. This is very young but allows for states to be used to configure monitoring routines. So far only one monitoring state is available, the disk.status state. As more capabilities are added to Salt UI the monitoring capabilities of Salt will continue to be expanded.

Salt 0.15.1 Release Notes

release:2013-05-08

The 0.15.1 release has been posted. This release includes fixes for a number of bugs in 0.15.0 and three security patches.

Security Updates

A number of security issues have been resolved via the 0.15.1 release.

Path Injection in Minion IDs

Salt masters did not properly validate the id of a connecting minion. This can lead to an attacker uploading files to the master in arbitrary locations. In particular this can be used to bypass the manual validation of new unknown minions. Exploiting this vulnerability does not require authentication.

This issue affects all known versions of Salt.

This issue was reported by Ronald Volgers.

RSA Key Generation Fault

RSA key generation was done incorrectly, leading to very insecure keys. It is recommended to regenerate all RSA keys.

This issue can be used to impersonate Salt masters or minions, or decrypt any transferred data.

This issue can only be exploited by attackers who are able to observe or modify traffic between Salt minions and the legitimate Salt master.

A tool was included in 0.15.1 to assist in mass key regeneration, the manage.regen_keys runner.

This issue affects all known versions of Salt.

This issue was reported by Ronald Volgers.

Patch

The issue is fixed in Salt 0.15.1. Updated packages are available in the usual locations.

Specific commits:

https://github.com/saltstack/salt/commit/5dd304276ba5745ec21fc1e6686a0b28da29e6fc

Command Injection Via ext_pillar

Arbitrary shell commands could be executed on the master by an authenticated minion through options passed when requesting a pillar.

Ext pillar options have been restricted to only allow safe external pillars to be called when prompted by the minion.

This issue affects Salt versions from 0.14.0 to 0.15.0.

This issue was reported by Ronald Volgers.

Patch

The issue is fixed in Salt 0.15.1. Updated packages are available in the usual locations.

Specific commits:

https://github.com/saltstack/salt/commit/43d8c16bd26159d827d1a945c83ac28159ec5865

Salt 0.15.2 Release Notes

release:2013-05-29

Salt 0.15.3 Release Notes

release:2013-06-01

Salt 0.16.0 Release Notes

release:2013-07-01

The 0.16.0 release is an exciting one, with new features in master redundancy, and a new, powerful requisite.

Major Features
Multi-Master

This new capability allows for a minion to be actively connected to multiple salt masters at the same time. This allows for multiple masters to send out commands to minions and for minions to automatically reconnect to masters that have gone down. A tutorial is available to help get started here:

Multi Master Tutorial
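
A minimal minion config sketch (the hostnames are illustrative); the master option now accepts a list:

master:
  - master1.example.com
  - master2.example.com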

Prereq, the New Requisite

The new prereq requisite is very powerful! It allows for states to execute based on a state that is expected to make changes in the future, allowing a change on the system to be preempted by another execution. A good example is needing to shut down a service before modifying files associated with it: a webserver can be shut down, allowing a load balancer to stop sending requests while server-side code is updated. In this case, the prereq will only run if changes are expected to happen in the prerequired state, and the prerequired state will always run after the prereq state and only if the prereq state succeeds.
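
A sketch of the webserver scenario above (the names and paths are illustrative); graceful-down runs only if site-code is expected to make changes, and runs before it:

graceful-down:
  cmd.run:
    - name: service apache graceful
    - prereq:
      - file: site-code

site-code:
  file.recurse:
    - name: /opt/site_code
    - source: salt://site/code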

Peer System Improvements

The peer system has been revamped to make it more reliable, faster, and, like the rest of Salt, async. Peer calls will be much faster when an updated minion and master are used together!

Relative Includes

The ability to include an sls relative to the defined sls has been added; the new syntax is documented here:

Includes
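
For example, an sls file inside a foo directory might pull in foo.virt by prefixing the include with a dot (the directory and sls names here are illustrative):

include:
  - .virt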

More State Output Options

The state_output option in the past only supported full and terse. 0.16.0 adds the mixed and changes modes, further refining how states are presented to users' eyes.

Improved Windows Support

Support for Salt on Windows continues to improve. Software management on Windows has become more seamless with Linux/UNIX/BSD software management. Installed software is now recognized by the short names defined in the repository SLS. This makes it possible to run salt '*' pkg.version firefox and get back results from Windows and non-Windows minions alike.

When templating files on Windows, Salt will now correctly use Windows appropriate line endings. This makes it much easier to edit and consume files on Windows.

When using the cmd state, the shell option now allows for specifying Windows PowerShell as an alternate shell to execute cmd.run and cmd.script. This opens up Salt to all the power of Windows PowerShell and its advanced Windows management capabilities.

Several fixes and optimizations were added for the Windows networking modules, especially when working with IPv6.

A system module was added that makes it easy to restart and shutdown Windows minions.

The Salt Minion will now look for its config file in c:\salt\conf by default. This means that it's no longer necessary to specify the -c option to specify the location of the config file when starting the Salt Minion on Windows in a terminal.

Multiple Targets for pkg.removed, pkg.purged States

Both pkg.removed and pkg.purged now support the pkgs argument, which allows multiple packages to be targeted in a single state. This, as in pkg.installed, helps speed up these states by reducing the number of times that the package management tools (apt, yum, etc.) need to be run.

Random Times in Cron States

The temporal parameters in cron.present states (minute, hour, etc.) can now be randomized by using random instead of a specific value. For example, by using the random keyword in the minute parameter of a cron state, the same cron job can be pushed to hundreds or thousands of hosts, and they would each use a randomly-generated minute. This can be helpful when the cron job accesses a network resource, and it is not desirable for all hosts to run the job concurrently.

/path/to/cron/script:
  cron.present:
    - user: root
    - minute: random
    - hour: 2

Since Salt assumes a value of * for unspecified temporal parameters, adding a parameter to the state and setting it to random will change that value from * to a randomized numeric value. However, if that field in the cron entry on the minion already contains a numeric value, then using the random keyword will not modify it.

Confirmation Prompt on Key Acceptance

When accepting new keys with salt-key -a minion-id or salt-key -A, there is now a prompt that will show the affected keys and ask for confirmation before proceeding. This prompt can be bypassed using the -y or --yes command line argument, as with other salt-key commands.

Support for Setting Password Hashes on BSD Minions

FreeBSD, NetBSD, and OpenBSD all now support setting passwords in user.present states.

Salt 0.16.1 Release Notes

release:2013-07-29

Salt 0.16.2 Release Notes

release:2013-08-01

Version 0.16.2 is a bugfix release for 0.16.0, and contains a number of fixes.

Windows
  • Only allow Administrator's group and SYSTEM user access to C:\salt. This eliminates a race condition where a non-admin user could modify a template or managed file before it is executed by the minion (which is running as an elevated user), thus avoiding a potential escalation of privileges. (issue 6361)
Grains
  • Fixed detection of virtual grain on OpenVZ hardware nodes
  • Gracefully handle lsb_release data when it is enclosed in quotes
  • LSB grains are now prefixed with lsb_distrib_ instead of simply lsb_. The old naming is not preserved, so SLS may be affected.
  • Improved grains detection on MacOS
Pillar
Peer Publishing
Minion
  • Fixed salt-key usage in minionswarm script
  • Quieted warning about SALT_MINION_CONFIG environment variable on minion startup and for CLI commands run via salt-call (issue 5956)
  • Added minion config parameter random_reauth_delay to stagger re-auth attempts when the minion is waiting for the master to approve its public key. This helps prevent SYN flooding in larger environments.
User/Group Management
  • Implement previously-ignored unique option for user.present states in FreeBSD
  • Report in state output when a group.present state attempts to use a gid in use by another group
  • Fixed regression preventing a user.present state from setting the password hash to the system default (i.e. an unset password)
  • Fixed multiple group.present states with the same group (issue 6439)
File Management
  • Fixed file.mkdir setting incorrect permissions (issue 6033)
  • Fixed cleanup of source files for templates when /tmp is in file_roots (issue 6118)
  • Fixed caching of zero-byte files when a non-empty file was previously cached at the same path
  • Added HTTP authentication support to the cp module (issue 5641)
  • Diffs are now suppressed when binary files are changed
Package/Repository Management
  • Fixed traceback when there is only one target for pkg.latest states
  • Fixed regression in detection of virtual packages (apt)
  • Limit number of pkg database refreshes to once per state.sls/state.highstate
  • YUM: Allow 32-bit packages with arches other than i686 to be managed on 64-bit systems (issue 6299)
  • Fixed incorrect reporting in pkgrepo.managed states (issue 5517)
  • Fixed 32-bit binary package installs on 64-bit RHEL-based distros, and added proper support for 32-bit packages on 64-bit Debian-based distros (issue 6303)
  • Fixed issue where requisites were inadvertently being put into YUM repo files (issue 6471)
Service Management
  • Fixed inaccurate reporting of results in service.running states when the service fails to start (issue 5894)
  • Fixed handling of custom initscripts in RHEL-based distros so that they are immediately available, negating the need for a second state run to manage the service that the initscript controls
Networking
SSH
pip
  • Properly handle -f lines in pip freeze output
  • Fixed regression in pip.installed states with specifying a requirements file (issue 6003)
  • Fixed use of editable argument in pip.installed states (issue 6025)
  • Deprecated runas parameter in execution function calls, in favor of user
MySQL
PostgreSQL
Miscellaneous

Salt 0.16.3 Release Notes

release:2013-08-09

Version 0.16.3 is another bugfix release for 0.16.0. The changes include:

  • Various documentation fixes
  • Fix proc directory regression (issue 6502)
  • Properly detect Linaro Linux (issue 6496)
  • Fix regressions in mount.mounted (issue 6522, issue 6545)
  • Skip malformed state requisites (issue 6521)
  • Fix regression in gitfs from bad import
  • Fix for watching prereq states (including recursive requisite error) (issue 6057)
  • Fix mod_watch not overriding prereq (issue 6520)
  • Don't allow functions which compile states to be called within states (issue 5623)
  • Return error for malformed top.sls (issue 6544)
  • Fix traceback in mysql.query
  • Fix regression in binary package installation for 64-bit packages on Debian-based Linux distros (issue 6563)
  • Fix traceback caused by running cp.push without having set file_recv in the master config file
  • Fix scheduler configuration in pillar (issue 6201)

Salt 0.16.4 Release Notes

release:2013-09-07

Version 0.16.4 is another bugfix release for 0.16.0, likely to be the last before 0.17.0 is released. The changes include:

Salt 0.17.0 Release Notes

release:2013-09-26

The 0.17.0 release is a very exciting release of Salt; it brings some very powerful new features and advances. The advances range from the state system to the test suite, covering new transport capabilities, making states easier and more powerful, extending Salt Virt, and much more!

The 0.17.0 release will also be the last release of Salt to follow the old 0.XX.X numbering system. The next release of Salt will change the numbering to be date based, following this format:

<Year>.<Month>.<Minor>

So if the release happens in November of 2013 the number will be 13.11.0, the first bugfix release will be 13.11.1 and so forth.

Major Features
Halite

The new Halite web GUI is now available on PyPI. A great deal of work has been put into Halite to make it fully event driven and amazingly fast. The Halite UI can be started from within the Salt Master (after being installed from PyPI), or standalone, and does not require an external database to run. It is very lightweight!

This initial release of Halite is primarily the framework for the UI and the communication systems, making it easy to extend and build the UI up. It presently supports watching the event bus and firing commands over Salt.

At this time, Halite is not yet available as a system package, but installation documentation is available at: http://docs.saltstack.com/topics/tutorials/halite.html

Halite is, like the rest of Salt, Open Source!

Much more will be coming in the future of Halite!

Salt SSH

The new salt-ssh command has been added to Salt. This system allows for remote execution and states to be run over ssh. The benefit here is that salt can run relying only on the ssh agent, rather than requiring a minion to be deployed.

The salt-ssh system runs states in a way compatible with standard Salt, and states created and run with salt-ssh can be moved over to a standard salt deployment without modification.

Since this is the initial release of salt-ssh, there is plenty of room for improvement, but it is fully operational, not just a bootstrap tool.
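
Usage mirrors the standard salt command; for example, against the targets defined in a roster:

salt-ssh '*' test.ping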

Rosters

Salt is designed to have the minions be aware of the master and the master does not need to be aware of the location of the minions. The new salt roster system was created and designed to facilitate listing the targets for salt-ssh.

The roster system, like most of Salt, is a plugin system, allowing the list of systems to target to be derived from any pluggable backend. The rosters shipping with 0.17.0 are flat and scan. Flat is a file which is read in via the salt render system, while the scan roster does simple network scanning to discover ssh servers.
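
A minimal flat roster sketch (the target id, host, and credentials are illustrative):

web1:
  host: 192.168.42.1
  user: fred
  passwd: foobarbaz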

State Auto Order

This is a major change in how states are evaluated in Salt. State Auto Order is a new feature that makes states get evaluated and executed in the order in which they are defined in the sls file. This feature makes it very easy to see the exact order in which things will be executed, making Salt now fully imperative AND fully declarative.

The requisite system still takes precedence over the order in which states are defined, so no existing states should break with this change. But this new feature can be turned off by setting state_auto_order: False in the master config, thus reverting to the old lexicographical order.

state.sls Runner

The state.sls runner has been created to allow for a more powerful system for orchestrating state runs and function calls across the salt minions. This new system uses the state system for organizing executions.

This allows for states to be defined that are executed on the master to call states on minions via salt-run state.sls.
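
A sketch of such a master-side state (the target and sls names are illustrative, and this assumes the salt.state state from the saltmod states), which could then be executed with salt-run state.sls:

deploy_webservers:
  salt.state:
    - tgt: 'web*'
    - sls:
      - apache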

Salt Thin

Salt Thin is an exciting new component of Salt: the ability to execute Salt routines without any transport mechanisms installed. It is a pure Python subset of Salt.

Salt Thin does not have any networking capability, but it can be dropped into any system with Python installed and then salt-call can be called directly. The Salt Thin system is used by the salt-ssh command, but can also be used to just drop salt somewhere for easy use.

Event Namespacing

Events have been updated to be much more flexible. The tags in events have all been namespaced allowing easier tracking of event names.

Mercurial Fileserver Backend

The popular git fileserver backend has been joined by the mercurial fileserver backend, allowing the state tree to be managed entirely via mercurial.

External Logging Handlers

The external logging handler system allows for Salt to directly hook into any external logging system. Currently supported are sentry and logstash.

Jenkins Testing

The testing systems in Salt have been greatly enhanced; tests for Salt are now executed, via jenkins.saltstack.com, across many supported platforms. Jenkins calls out to salt-cloud to create virtual machines on Rackspace, then the minion on the virtual machine checks into the master running on Jenkins, where a state run is executed that sets up the minion to run tests and executes the test suite.

This now automates the sequence of running platform tests and allows for continuous destructive tests to be run.

Salt Testing Project

The testing libraries for salt have been moved out of the main salt code base and into a standalone codebase. This has been done to ease the use of the testing systems in Salt-based projects other than Salt itself.

StormPath External Authentication

The external auth system now supports the fantastic Stormpath cloud-based authentication system.

LXC Support

Extensive additions have been made to Salt for LXC support. This includes the backend libs for managing LXC containers. Addition into the salt-virt system is still in the works.

Mac OS X User/Group Support

Salt is now able to manage users and groups on Minions running Mac OS X. However, at this time user passwords cannot be managed.

Django ORM External Pillar

Pillar data can now be derived from Django managed databases.

Fixes from RC to release
  • Multiple documentation fixes
  • Add multiple source files + templating for file.append (issue 6905)
  • Support sysctl configuration files in systemd>=207 (issue 7351)
  • Add file.search and file.replace
  • Fix cross-calling execution functions in provider overrides
  • Fix locale override for postgres (issue 4543)
  • Fix Raspbian identification for service/pkg support (issue 7371)
  • Fix cp.push file corruption (issue 6495)
  • Fix ALT Linux password hash specification (issue 3474)
  • Multiple salt-ssh-related fixes and improvements

Salt 0.17.1 Release Notes

release:2013-10-17

Note

THIS RELEASE IS NOT COMPATIBLE WITH PREVIOUS VERSIONS. If you update your master to 0.17.1, you must update your minions as well. Sorry for the inconvenience -- this is a result of one of the security fixes listed below.

The 0.17.1 release comes with a number of improvements to salt-ssh, many bugfixes, and a number of security updates.

Salt SSH has been improved to be faster, more featureful and more secure. Since the original release of Salt SSH was primarily a proof of concept, it has been very exciting to see its rapid adoption. We appreciate the willingness of security experts to review Salt SSH and help discover oversights and ensure that security issues only exist for such a tiny window of time.

SSH Enhancements
Shell Improvements

Improvements to Salt SSH's communication have been added that improve routine execution regardless of the target system's login shell.

Performance

Deployment of routines is now faster and takes fewer commands to execute.

Security Updates

Be advised that these security issues all apply to a small subset of Salt users and mostly apply to Salt SSH.

Insufficient Argument Validation

This issue allowed for a user with limited privileges to embed executions inside of routines to execute routines that should be restricted. This applies to users using external auth or client ACL and opening up specific routines.

Be advised that these patches address the direct issue. Additional commits have been applied to help mitigate this issue from resurfacing.

CVE

CVE-2013-4435

MITM SSH attack in salt-ssh

SSH host keys were being accepted by default and not enforced on future SSH connections. These patches set SSH host key checking by default and can be overridden by passing the -i flag to salt-ssh.

CVE

CVE-2013-4436

Affected Versions

0.17.0

Found By

Michael Scherer, Red Hat

YAML Calling Unsafe Loading Routine

It has been argued that this is not a valid security issue, as the YAML loading that was happening was only called after an initial gateway filter in Salt had already safely loaded the YAML, and would fail if non-safe routines were embedded. Nonetheless, the CVE was filed and patches applied.

CVE

CVE-2013-4438

Failure to Drop Supplementary Group on Salt Master

If a salt master was started as a non-root user by the root user, root's groups would still be applied to the running process. This fix changes the process to have only the groups of the running user.

CVE

CVE not considered necessary by submitter.

Affected Versions

0.11.0 - 0.17.0

Found By

Michael Scherer, Red Hat

Failure to Validate Minions Posting Data

This issue allowed a minion to pose as another authorized minion when posting data such as the mine data. All minions now pass through the id challenge before posting such data.

CVE

CVE-2013-4439

Affected Versions

0.15.0 - 0.17.0

Fix Reference

Version 0.17.1 is the first bugfix release for 0.17.0. The changes include:

  • Fix symbolic links in thin.tgz (issue 7482)
  • Pass env through to file.patch state (issue 7452)
  • Service provider fixes and reporting improvements (issue 7361)
  • Add --priv option for specifying salt-ssh private key
  • Fix salt-thin's salt-call on setuptools installations (issue 7516)
  • Fix salt-ssh to support passwords with spaces (issue 7480)
  • Fix regression in wildcard includes (issue 7455)
  • Fix salt-call outputter regression (issue 7456)
  • Fix custom returner support for startup states (issue 7540)
  • Fix value handling in augeas (issue 7605)
  • Fix regression in apt (issue 7624)
  • Fix minion ID guessing to use socket.getfqdn() first (issue 7558)
  • Add minion ID caching (issue 7558)
  • Fix salt-key race condition (issue 7304)
  • Add --include-all flag to salt-key (issue 7399)
  • Fix custom grains in pillar (part of issue 5716, issue 6083)
  • Allow trailing slash in file.directory state
  • Fix reporting of file_roots in pillar return (issue 5449 and issue 5951)
  • Remove pillar matching for mine.get (issue 7197)
  • Sanitize args for multiple execution modules
  • Fix yumpkg mod_repo functions to filter hidden args (issue 7656)
  • Fix conflicting IDs in state includes (issue 7526)
  • Fix mysql_grants.absent string formatting issue (issue 7827)
  • Fix postgres.version so it won't return None (issue 7695)
  • Fix for trailing slashes in mount.mounted state
  • Fix rogue AttributeErrors in the outputter system (issue 7845)
  • Fix for incorrect ssh key encodings resulting in incorrect keys being added (issue 7718)
  • Fix for pillar/grains naming regression in python renderer (issue 7693)
  • Fix args/kwargs handling in the scheduler (issue 7422)
  • Fix logfile handling for file://, tcp://, and udp:// (issue 7754)
  • Fix error handling in config file parsing (issue 6714)
  • Fix RVM using sudo when running as non-root user (issue 2193)
  • Fix client ACL and underlying logging bugs (issue 7706)
  • Fix scheduler bug with returner (issue 7367)
  • Fix user management bug related to default groups (issue 7690)
  • Fix various salt-ssh bugs (issue 7528)
  • Many various documentation fixes

Salt 0.17.2 Release Notes

release:2013-11-14

Version 0.17.2 is another bugfix release for 0.17.0. The changes include:

  • Add ability to delete key with grains.delval (issue 7872)
  • Fix possible state compiler stack trace (issue 5767)
  • Fix architecture regression in yumpkg (issue 7813)
  • Use correct ps on Debian to prevent truncating (issue 5646)
  • Fix grains targeting for new grains (issue 5737)
  • Fix bug with merging in git_pillar (issue 6992)
  • Fix print_jobs duplicate results
  • Fix apt version specification for pkg.install
  • Fix possible KeyError from ext_job_cache missing option
  • Fix auto_order for - names states (issue 7649)
  • Fix regression in new gitfs installs (directory not found error)
  • Fix escape pipe issue on Windows for file.recurse (issue 7967)
  • Fix fileclient in case of master restart (issue 7987)
  • Try to output warning if CLI command malformed (issue 6538)
  • Fix --out=quiet to actually be quiet (issue 8000)
  • Fix for state.sls in salt-ssh (issue 7991)
  • Fix for MySQL grants ordering issue (issue 5817)
  • Fix traceback for certain missing CLI args (issue 8016)
  • Add ability to disable lspci queries on master (issue 4906)
  • Fail if sls defined in topfile does not exist (issue 5998)
  • Add ability to downgrade MySQL grants (issue 6606)
  • Fix ssh_auth.absent traceback (issue 8043)
  • Add upstart detection for Debian/Raspbian (issue 8039)
  • Fix ID-related issues (issue 8052, issue 8050, and others)
  • Fix for jinja rendering issues (issue 8066 and issue 8079)
  • Fix argument parsing in salt-ssh (issue 7928)
  • Fix some GPU detection instances (issue 6945)
  • Fix bug preventing includes from other environments in SLS files
  • Fix for kwargs with dashes (issue 8102)
  • Fix salt.utils.which for windows '.exe' (issue 7904)
  • Fix apache.adduser without apachectl (issue 8123)
  • Fix issue with evaluating test kwarg in states (issue 7788)
  • Fix regression in salt.client.Caller() (issue 8078)
  • Fix apt-key silent failure
  • Fix bug where cmd.script would try to run even if caching failed (issue 7601)
  • Fix apt pkg.latest regression (issue 8067)
  • Fix for mine data not being updated (issue 8144)
  • Fix for noarch packages in yum
  • Fix a Xen detection edge case (issue 7839)
  • Fix windows __opts__ dictionary persistence (issue 7714)
  • Fix version generation for when it's part of another git repo (issue 8090)
  • Fix _handle_iorder stacktrace so that the real syntax error is shown (issue 8114 and issue 7905)
  • Fix git.latest state when a commit SHA is used (issue 8163)
  • Fix various small bugs in yumpkg.py (issue 8201)
  • Fix for specifying identity file in git.latest (issue 8094)
  • Fix for --output-file CLI arg (issue 8205)
  • Add ability to specify shutdown time for system.shutdown (issue 7833)
  • Fix for salt version using non-salt git repo info (issue 8266)
  • Add additional hints at impact of pkgrepo states when test=True (issue 8247)
  • Fix for salt-ssh files not being owned by root (issue 8216)
  • Fix retry logic and error handling in fileserver (related to issue 7755)
  • Fix file.replace with test=True (issue 8279)
  • Add flag for limiting file traversal in fileserver (issue 6928)
  • Fix for extra mine processes (issue 5729)
  • Fix for unloading custom modules (issue 7691)
  • Fix for salt-ssh opts (issue 8005 and issue 8271)
  • Fix compound matcher for grains (issue 7944)
  • Improve error reporting in ebuild module (related to issue 5393)
  • Add dir_mode to file.managed (issue 7860)
  • Improve traceroute support for FreeBSD and OS X (issue 4927)
  • Fix for matching minions under syndics (issue 7671)
  • Improve exception handling for missing ID (issue 8259)
  • Fix grain mismatch for ScientificLinux (issue 8338)
  • Add configuration option for minion_id_caching
  • Fix open mode auth errors (issue 8402)

Salt 0.17.3 Release Notes

release:2013-12-08

Note

0.17.3 had some regressions which were promptly fixed in the 0.17.4 release. Please use 0.17.4 instead.

Version 0.17.3 is another bugfix release for 0.17.0. The changes include:

  • Fix some jinja render errors (issue 8418)
  • Fix file.replace state changing file ownership (issue 8399)
  • Fix state ordering with the PyDSL renderer (issue 8446)
  • Fix for new npm version (issue 8517)
  • Fix for pip state requiring name even with requirements file (issue 8519)
  • Fix yum logging to open terminals (issue 3855)
  • Add sane maxrunning defaults for scheduler (issue 8563)
  • Fix states duplicate key detection (issue 8053)
  • Fix SUSE patch level reporting (issue 8428)
  • Fix managed file creation umask (issue 8590)
  • Fix logstash exception (issue 8635)
  • Improve argument exception handling for salt command (issue 8016)
  • Fix pecl success reporting (issue 8750)
  • Fix launchctl module exceptions (issue 8759)
  • Fix argument order in pw_user module
  • Add warnings for failing grains (issue 8690)
  • Fix hgfs problems caused by connections left open (issue 8811 and issue 8810)
  • Add Debian iptables default for iptables-persistent package (issue 8889)
  • Fix installation of packages with dots in pkg name (issue 8614)
  • Fix noarch package installation on CentOS 6 (issue 8945)
  • Fix portage_config.enforce_nice_config (issue 8252)
  • Fix salt.util.copyfile umask usage (issue 8590)
  • Fix rescheduling of failed jobs (issue 8941)
  • Fix pkg on Amazon Linux (uses yumpkg5 now) (issue 8226)
  • Fix conflicting options in postgres module (issue 8717)
  • Fix ps modules for psutil >= 0.3.0 (issue 7432)
  • Fix postgres module to return False on failure (issue 8778)
  • Fix argument passing for args with pound signs (issue 8585)
  • Fix pid of salt CLI command showing in status.pid output (issue 8720)
  • Fix rvm to run gem as the correct user (issue 8951)
  • Fix namespace issue in win_file module (issue 9060)
  • Fix masterless state paths on Windows (issue 9021)
  • Fix timeout option in master config (issue 9040)

Salt 0.17.4 Release Notes

release:2013-12-10

Version 0.17.4 is another bugfix release for 0.17.0. The changes include:

  • Fix file.replace bug when replacement str is numeric (issue 9101)
  • Fix regression in file.managed (issue 9131)
  • Prevent traceback when job is None. (issue 9145)

Salt 0.17.5 Release Notes

release:2014-01-27

Version 0.17.5 is another bugfix release for 0.17.0. The changes include:

  • Fix user.present states with non-string fullname (issue 9085)
  • Fix virt.init return value on failure (issue 6870)
  • Fix reporting of file.blockreplace state when test=True
  • Fix network.interfaces when used in cron (issue 7990)
  • Fix bug in pkgrepo when switching to/from mirrorlist-based repo def (issue 9121)
  • Fix infinite recursion when cache file is corrupted
  • Add checking for rev and mirror/bare args in git.latest (issue 9107)
  • Add cmd.watch alias (points to cmd.wait) (issue 8612)
  • Fix stacktrace when prereq is not formed as a list (issue 8235)
  • Fix stdin issue with lvdisplay command (issue 9128)
  • Add pre-check function for range matcher (issue 9236)
  • Add exception handling for psutil for processes that go missing (issue 9274)
  • Allow _in requisites to match both on ID and name (issue 9061)
  • Fix multiple client timeout issues (issue 7157 and issue 9302, probably others)
  • Fix ZMQError: Operation cannot be accomplished in current state errors (issue 6306)
  • Multiple optimizations in minion auth routines
  • Clarify logs for minion ID caching

Salt 0.6.0 release notes

The Salt remote execution manager has reached initial functionality! Salt is a management application which can be used to execute commands on remote sets of servers.

The whole idea behind Salt is to create a system where a group of servers can be remotely controlled from a single master. Not only can commands be executed on remote systems, but Salt can also be used to gather information about your server environment.

Unlike similar systems, such as Func and MCollective, Salt is extremely simple to set up and use: the entire application is contained in a single package, and the master and minion daemons require no running dependencies in the way that Func requires Certmaster and MCollective requires ActiveMQ.

Salt also manages authentication and encryption. Rather than using SSL for encryption, salt manages encryption on a payload level, so the data sent across the network is encrypted with fast AES encryption, and authentication uses RSA keys. This means that Salt is fast, secure, and very efficient.

Messaging in Salt is executed with ZeroMQ, so the message passing interface is built into salt and does not require an external ZeroMQ server. This also adds speed to Salt since there is no additional bloat on the networking layer, and ZeroMQ has already proven itself as a very fast networking system.

The remote execution in Salt is "Lazy Execution": once the command is sent, the requesting network connection is closed. This makes it easier to detach the execution from the calling process on the master. It also means that replies are cached, so that information gathered from historic commands can be queried in the future.

Salt also allows users to make execution modules in Python. Writers of these modules should also be pleased to know that they have access to the impressive information gathered from PuppetLabs' Facter application, making Salt modules more flexible. In the future I hope to also allow Salt to group servers based on Facter information as well.

All in all Salt is fast, efficient, and clean, can be used from a simple command line client or through an API, uses message queue technology to make network execution extremely fast, and encryption is handled in a very fast and efficient manner. Salt is also VERY easy to use and VERY easy to extend.

You can find the source code for Salt on my GitHub page. I have also set up a few wiki pages explaining how to use and set up Salt. If you are using Arch Linux there is a package available in the Arch Linux AUR.

Salt 0.6.0 Source: https://cloud.github.com/downloads/saltstack/salt/salt-0.6.0.tar.gz

GitHub page: https://github.com/saltstack/salt

Wiki: https://github.com/saltstack/salt/wiki

Arch Linux Package: https://aur.archlinux.org/packages/salt-git/

I am very open to contributions; for instance, I need packages for more Linux distributions, as well as BSD packages and testers.

Give Salt a try! This is the initial release and is not a 1.0 quality release, but it has been working well for me. I am eager to get your feedback!

Salt 0.7.0 release notes

I am pleased to announce the release of Salt 0.7.0!

This release marks the first stable release of Salt; 0.7.0 should be suitable for general use.

0.7.0 brings the following new features to Salt:

  • Integration with Facter data from puppet labs
  • Allow for matching minions from the salt client via Facter information
  • Minion job threading, many jobs can be executed from the master at once
  • Preview of master clustering support - Still experimental
  • Introduce new minion modules for stats, virtualization, service management and more
  • Add extensive logging to the master and minion daemons
  • Add sys.reload_functions for dynamic function reloading
  • Greatly improve authentication
  • Introduce the saltkey command for managing public keys
  • Begin backend development preparatory to introducing butter
  • Addition of man pages for the core commands
  • Extended and cleaned configuration

0.7.0 fixes the following major bugs:

  • Fix crash in minions when matching failed
  • Fix configuration file lookups for the local client
  • Repair communication bugs in encryption
  • Numerous fixes in the minion modules

The next release of Salt should see the following features:

  • Stabilize the cluster support
  • Introduce a remote client for salt command tiers
  • salt-ftp system for distributed file copies
  • Initial support for "butter"

Coming up next is a higher level management framework for salt called Butter. I want salt to stay as a simple and effective communication framework, and allow for more complicated executions to be managed via Butter.

Right now Butter is being developed to act as a cloud controller using salt as the communication layer, but features like system monitoring and advanced configuration control (a puppet manager) are also in the pipeline.

Special thanks to Joseph Hall for the status and network modules, and thanks to Matthias Teege for tracking down some configuration bugs!

Salt can be downloaded from the following locations:

Source Tarball:

https://cloud.github.com/downloads/saltstack/salt/salt-0.7.0.tar.gz

Arch Linux Package:

https://aur.archlinux.org/packages/salt-git/

Please enjoy the latest Salt release!

Salt 0.8.0 release notes

Salt 0.8.0 is ready for general consumption! The source tarball is available on GitHub for download:

https://cloud.github.com/downloads/saltstack/salt/salt-0.8.0.tar.gz

A lot of work has gone into salt since the last release just two weeks ago, and salt has improved a great deal. A swath of new features is here, along with performance and threading improvements!

The main new features of salt 0.8.0 are:

Salt-cp

Cython minion modules

Dynamic returners

Faster return handling

Lowered required Python version to 2.6

Advanced minion threading

Configurable minion modules

Salt-cp

The salt-cp command introduces the ability to copy simple files via salt to targeted servers. Using salt-cp is very simple, just call salt-cp with a target specification, the source file(s) and where to copy the files on the minions. For instance:

# salt-cp '*' /etc/hosts /etc/hosts

This will copy the local /etc/hosts file to all of the minions.

Salt-cp is very young; in the future more advanced features will be added, and the functionality will much more closely resemble the cp command.

Cython minion modules

Cython is an amazing tool used to compile Python modules down to C. This is arguably the fastest way to run Python code, and since pyzmq requires cython, adding cython support to salt adds no new dependencies.

Cython minion modules allow minion modules to be written in cython and therefore executed as compiled C. Simply write the salt module in cython and use the file extension ".pyx", and the minion module will be compiled when the minion is started. An example cython module is included in the main distribution called cytest.pyx:

https://github.com/saltstack/salt/blob/develop/salt/modules/cytest.pyx
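
To give a feel for the format, here is a minimal hypothetical example (not the actual contents of cytest.pyx); any valid Python is also valid cython:

def cyping():
    '''
    Trivial example function; compiled to C by cython when the
    minion starts, because the module uses the .pyx extension
    '''
    return True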

Dynamic Returners

By default salt returns command data back to the salt master, but now salt can return command data to any system. This is enabled via the new returner modules feature for salt. A returner module takes the return data and sends it to a specific destination. Returner modules work like minion modules, so any returner can be added to the minions.

This means that a custom data returner can be added to communicate the return data to anything from MySQL to Redis to MongoDB, and more!

There are 2 simple stock returners in the returners directory:

https://github.com/saltstack/salt/blob/develop/salt/returners

The documentation on writing returners will be added to the wiki shortly, and returners can be written in pure Python, or in cython.
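
To give a sense of the interface, a returner module simply exposes a returner function that receives the return data; a minimal hypothetical sketch (the stock returners linked above show the real details):

import pprint

def returner(ret):
    '''
    Append the return data (a dict describing the job return)
    to a local file on the minion
    '''
    with open('/tmp/salt_returns.log', 'a') as fp_:
        fp_.write(pprint.pformat(ret) + '\n')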

Configurable Minion Modules

Minion modules may need to be configured; now the options passed to the minion configuration file can be accessed inside of the minion modules via the __opts__ dict.

Information on how to use this simple addition has been added to the wiki: Writing modules

The test module has an example of using the __opts__ dict, and how to set default options:

https://github.com/saltstack/salt/blob/develop/salt/modules/test.py
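
In practice this means a minion module can read an option with a fallback default, along these lines (a minimal sketch; the option name is hypothetical, and __opts__ is injected into the module by the minion):

def show_timeout():
    '''
    Return the hypothetical example.timeout option from the minion
    config, falling back to a default when it is not set
    '''
    return __opts__.get('example.timeout', 60)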

Advanced Minion Threading

In 0.7.0 the minion would block after receiving a command from the master; now the minion will spawn a thread or multiprocess. By default Python threads are used, because for general use they have proved to be faster, but the minion can now be configured to use the Python multiprocessing module instead. Using multiprocessing will cause executions that are CPU bound, or that would otherwise exploit the negative aspects of the Python GIL, to run faster and more reliably, but simple calls will still be faster with Python threading. The configuration option can be found in the minion configuration file:

https://github.com/saltstack/salt/blob/develop/conf/minion
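
For example, switching the minion over to multiprocessing is a single configuration value; the option name shown here is an assumption based on later releases, so check the linked configuration file for the exact key:

# in the minion configuration file (option name assumed)
multiprocessing: True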

Lowered Supported Python to 2.6

The requirement for Python 2.7 has been removed to support Python 2.6. I have received requests to take the minimum Python version back to 2.4, but unfortunately this will not be possible, since the ZeroMQ Python bindings do not support Python 2.4.

Salt 0.8.0 is a very major update; it also changes the network protocol slightly, which makes communication with older salt daemons impossible. Your master and minions need to be upgraded together!

I could use some help bringing salt to the people! Right now I only have packages for Arch Linux, Fedora 14 and Gentoo. We need packages for Debian and people willing to help test on more platforms. We also need help writing more minion modules and returner modules. If you want to contribute to salt please hop on the mailing list and send in patches, make a fork on GitHub and send in pull requests! If you want to help but are not sure where you can, please email me directly or post to the mailing list!

I hope you enjoy salt! While it is not yet 1.0, salt is completely viable and usable!

-Thomas S. Hatch

Salt 0.8.7 release notes

It has been a month since salt 0.8.0, and it has been a long month! But Salt is still coming along strong. 0.8.7 has a lot of changes and a lot of updates. This update makes Salt’s ZeroMQ back end better, strips Facter from the dependencies, and introduces interfaces to handle more capabilities.

Many of the major updates are in the background, but the changes should shine through to the surface. A number of the new features are still a little thin, but the back end to support expansion is in place.

I also recently gave a presentation to the Utah Python users group in Salt Lake City, the slides from this presentation are available here: https://cloud.github.com/downloads/saltstack/salt/Salt.pdf

The video from this presentation will be available shortly.

The major new features and changes in Salt 0.8.7 are:

  • Revamp ZeroMQ topology on the master for better scalability
  • State enforcement
  • Dynamic state enforcement managers
  • Extract the module loader into salt.loader
  • Make Job ids more granular
  • Replace Facter functionality with the new salt grains interface
  • Support for “virtual” salt modules
  • Introduce the salt-call command
  • Better debugging for minion modules

The new ZeroMQ topology allows for better scalability; this will be required for state management and for executing massive file transfers to multiple machines in parallel. The new ZeroMQ topology is detailed in the aforementioned presentation.

0.8.7 introduces the capability to declare states, similar to the capabilities of Puppet. States in salt are declared via state data structures. This system is very young, but the core feature set is available. Salt states work by rendering files which represent Salt high data. More on the Salt state system will be documented in the near future.

The system for loading salt modules has been pulled out of the minion class to be a standalone module; this has enabled more dynamic loading of Salt modules and enables many of the updates in 0.8.7:

https://github.com/saltstack/salt/blob/develop/salt/loader.py

Salt Job ids are now microsecond precise; this was needed to repair a race condition unveiled by the speed improvements in the new ZeroMQ topology.

The new grains interface replaces the functionality of Facter. The idea behind grains differs from Facter in that grains are only used for static system data; dynamic data needs to be derived from a call to a salt module. This makes grains much faster to use, since the grains data is generated when the minion starts.

Virtual salt modules allow a salt module to be presented as something other than its module name. The idea here is that decisions about which module should be presented can be made based on information from the minion. The best example is the pacman module: the pacman module will only load on Arch Linux minions, and will be called pkg. Similarly, the yum module will be presented as pkg when the minion starts on a Fedora/RedHat system.
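
Mechanically this is done with a __virtual__ function in the module; a simplified sketch of the pacman case (__grains__ is injected into the module by the minion):

def __virtual__():
    '''
    Present this module as pkg, but only on Arch Linux minions
    '''
    if __grains__.get('os') == 'Arch':
        return 'pkg'
    return False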

The new salt-call command allows for minion modules to be executed from the minion itself, making it a great tool for testing Salt modules. The salt-call command can also be used to view the grains data.
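
For example, running a test function and dumping the grains data directly on a minion (the grains.items call follows later module naming and is an assumption for this release):

# salt-call test.ping
# salt-call grains.items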

In previous releases, when a minion module threw an exception very little data was returned to the master. Now the stack trace from the failure is returned, making debugging of minion modules MUCH easier.

Salt is nearing the goal of 1.0, where the core feature set and capability is complete!

Salt 0.8.7 can be downloaded from GitHub here: https://cloud.github.com/downloads/saltstack/salt/salt-0.8.7.tar.gz

-Thomas S Hatch

Salt 0.8.8 release notes

Salt 0.8.8 is here! This release adds a great deal of code and some serious new features. The latest release can be downloaded here: https://cloud.github.com/downloads/saltstack/salt/salt-0.8.8.tar.gz

Improved documentation has been set up for salt using Sphinx thanks to the efforts of Seth House. This new documentation system will act as the back end to the salt website, which is still under heavy development. The new Sphinx documentation system has also been used to greatly clean up the salt manpages. The salt(7) manpage in particular now contains extensive information which was previously only in the wiki. The new documentation can be found at http://docs.saltstack.com/. We still have a lot to add, and when the domain is set up I will post another announcement.

More additions have been made to the ZeroMQ setup, particularly in the realm of file transfers. Salt 0.8.8 introduces a built in, stateless, encrypted file server which allows salt minions to download files from the salt master using the same encryption system used for all other salt communications. The main motivation for the salt file server has been to facilitate the new salt state system.

Much of the salt code has been cleaned up and a new cleaner logging system has been introduced thanks to the efforts of Pedro Algarvio. These additions will allow for much more flexible logging to be executed by salt, and fixed a great deal of my poor spelling in the salt docstrings! Pedro Algarvio has also cleaned up the API, making it easier to embed salt into another application.

The biggest addition to salt found in 0.8.8 is the new state system. The salt module system has received a new front end which allows salt to be used as a configuration management system, with system configuration defined in data structures. The configuration management system, or as it is called in salt, the "salt state system", supports many of the features found in other configuration managers, but allows for system states to be written in a far simpler format, executes at blazing speeds, and operates via the salt minion matching system. The state system also operates within the normal scope of salt, and requires no additional configuration to use.

The salt state system can enforce the following states, with many more to come: packages, files, services, command execution, and hosts.

The system used to define the salt states is based on a data structure which has been made as easy to use as possible. The data structure is defined by default in a YAML file rendered via a Jinja template. This means that the state definition language supports all of the data structures that YAML supports, and all of the programming constructs and logic that Jinja supports. If the user does not like YAML or Jinja, the states can instead be defined in yaml-mako, json-jinja, or json-mako. The system used to render the states is completely dynamic, and any rendering system can be added to the capabilities of Salt; this means that a rendering system that renders XML data in a cheetah template, or whatever you can imagine, can be easily added to the capabilities of salt.

The salt state system also supports isolated environments, as well as matching code from several environments to a single salt minion.

The feature base for Salt has grown quite a bit since my last serious documentation push. As we approach 0.9.0 the goals are becoming very clear, and the documentation needs a lot of work. The main goals for 0.9.0 are to further refine the state system, fix any bugs we find, get Salt running on as many platforms as we can, and get the documentation filled out. There is a lot more to come as Salt moves forward to encapsulate a much larger scope, while maintaining supreme usability and simplicity.

If you would like a more complete overview of Salt please watch the Salt presentation: Slides: https://cloud.github.com/downloads/saltstack/salt/Salt.pdf

-Thomas S Hatch

Salt 0.8.9 Release Notes

Salt 0.8.9 has finally arrived! Unfortunately this is much later than I had hoped to release 0.8.9; life has been very crazy over the last month. But despite challenges, Salt has moved forward!

This release, as expected, adds a few new features and many refinements. One of the most exciting aspects of this release is that the development community for salt has grown a great deal and much of the code is from contributors.

Also, I have filled out the documentation a great deal. So information on States is properly documented, and much of the documentation that was out of date has been filled in.

Download!

The Salt source can be downloaded from the salt GitHub site:

https://cloud.github.com/downloads/saltstack/salt/salt-0.8.9.tar.gz

Or from PyPI:

https://pypi.python.org/packages/source/s/salt/salt-0.8.9.tar.gz

Here is the md5sum:

7d5aca4633bc22f59045f59e82f43b56

For instructions on how to set up Salt please see the Installation instructions.

New Features
Salt Run

A big feature is the addition of Salt run: the salt-run command allows for master side execution modules to be made that gather specific information or execute custom routines from the master.

Documentation for salt-run can be found here
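
For example, using the new manage runner described below, the master can report which minions are up:

# salt-run manage.up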

Refined Outputters

One problem often complained about in salt was the fact that the output was so messy. Thanks to help from Jeff Schroeder, a cleaner interface for the command output of the Salt CLI has been made. This new interface makes adding new printout formats easy, and additions to the minion modules make it possible to set the printout mode, or outputter, for individual functions in minion modules.

Cross Calling Salt Modules

Salt modules can now call each other: the __salt__ dict has been added to the predefined references in minion modules. This new feature is documented in the modules documentation.
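
A quick sketch of what cross calling looks like inside a module (__salt__ is injected by the module loader; the function itself is hypothetical):

def uptime():
    '''
    Return the output of the uptime command by cross calling
    the cmd module
    '''
    return __salt__['cmd.run']('uptime')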

Watch Option Added to Salt State System

Now in Salt states you can set the watch option; this will allow watch-enabled states to run based on changes in the other defined states. This is similar to subscribe and notify statements in puppet.
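
A typical use is restarting a service when its configuration file changes; a hedged sketch in the state syntax of this era (the service name and paths are illustrative):

apache:
  service:
    - running
    - watch:
      - file: /etc/httpd/conf/httpd.conf

/etc/httpd/conf/httpd.conf:
  file:
    - managed
    - source: salt://apache/httpd.conf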

Root Dir Option

Travis Cline has added the ability to define the option root_dir, which allows the salt minion to operate in a subdir. This is a strong move in supporting the minion running as an unprivileged user.
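
In the minion configuration file this is a single line (the path is illustrative):

root_dir: /home/salt/minion-root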

Config Files Defined in Variables

Thanks again to Travis Cline, the master and minion configuration file locations can be defined in environment variables now.

New Modules

Quite a few new modules, states, returners, and runners have been made.

New Minion Modules
apt

Support for apt-get has been added; this adds greatly improved Debian and Ubuntu support to Salt!

useradd and groupadd

Support for manipulating users and groups on Unix-like systems.

moosefs

Initial support for reporting on aspects of the distributed file system, MooseFS. For more information on MooseFS please see: http://www.moosefs.org

Thanks to Joseph Hall for his work on MooseFS support.

mount

Manage mounts and the fstab.

puppet

Execute puppet on remote systems.

shadow

Manipulate and manage the user password file.

ssh

Interact with ssh keys.

New States
user and group

Support for managing users and groups in Salt States.

mount

Enforce mounts and the fstab.

New Returners
mongo_return

Send the return information to a MongoDB server.

New Runners
manage

Display minions that are up or down.

Salt 0.9.0 Release Notes

release:2011-08-27

Salt 0.9.0 is here. This is an exciting release, 0.9.0 includes the new network topology features allowing peer salt commands and masters of masters via the syndic interface.

0.9.0 also introduces many more modules, improvements to the API and improvements to the ZeroMQ systems.

Download!

The Salt source can be downloaded from the salt GitHub site:

https://cloud.github.com/downloads/saltstack/salt/salt-0.9.0.tar.gz

Or from PyPI:

https://pypi.python.org/packages/source/s/salt/salt-0.9.0.tar.gz

Here is the md5sum:

9a925da04981e65a0f237f2e77ddab37

For instructions on how to set up Salt please see the Installation instructions.

New Features
Salt Syndic

The new Syndic interface allows a master to be commanded via another, higher level salt master. This is a powerful solution allowing a master control structure to exist, allowing salt to scale to much larger levels than before.

Peer Communication

0.9.0 introduces the capability for a minion to call a publication on the master and receive the return from another set of minions. This allows salt to act as a communication channel between minions and as a general infrastructure message bus.

Peer communication is turned off by default but can be enabled via the peer option in the master configuration file. Documentation on the new Peer interface.
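
In the master configuration file, the peer option maps minion IDs (as regular expressions) to the functions they are allowed to publish; a minimal sketch (the minion ID and function are illustrative):

peer:
  web1.example.com:
    - network.interfaces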

Easily Extensible API

The minion and master classes have been redesigned to allow for specialized minion and master servers to be easily created. An example on how this is done for the master can be found in the master.py salt module:

https://github.com/saltstack/salt/blob/develop/salt/master.py

The Master class extends the SMaster class and sets up the main master server.

The minion functions can now also be easily added to another application via the SMinion class; this class can be found in the minion.py module:

https://github.com/saltstack/salt/blob/develop/salt/minion.py

Cleaner Key Management

This release changes some of the key naming to allow for multiple master keys to be held based on the type of minion gathering the master key.

The -d option has also been added to the salt-key command allowing for easy removal of accepted public keys.

The --gen-keys option is now available as well for salt-key; this allows for a salt specific RSA key pair to be easily generated from the command line.
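
For example, deleting an accepted key and generating a fresh key pair (the names are illustrative):

# salt-key -d web1
# salt-key --gen-keys=web1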

Improved 0MQ Master Workers

The 0MQ worker system has been further refined to be faster and more robust. This new system has been able to handle a much larger load than the previous setup. The new system uses the IPC protocol in 0MQ instead of TCP.

New Modules

Quite a few new modules have been made.

New Minion Modules
apache

Work directly with apache servers, great for managing balanced web servers

cron

Read out the contents of a systems crontabs

mdadm

Module to manage raid devices in Linux, appears as the raid module

mysql

Gather simple data from MySQL databases

ps

Extensive utilities for managing processes

publish

Used by the peer interface to allow minions to make publications

Salt 0.9.1 Release Notes

release:2011-08-29

Salt 0.9.2 Release Notes

release:2011-09-17

Salt 0.9.2 has arrived! 0.9.2 is primarily a bugfix release; the exciting component in 0.9.2 is greatly improved support for salt states. All of the salt states interfaces have been more thoroughly tested, and the new salt-states git repo is growing with examples of how to use states.

This release introduces salt states for early developers and testers to start helping us clean up the states interface and make it ready for the world!

0.9.2 also fixes a number of bugs found on Python 2.6.

Download!

The Salt source can be downloaded from the salt GitHub site:

https://cloud.github.com/downloads/saltstack/salt/salt-0.9.2.tar.gz

Or from PyPI:

https://pypi.python.org/packages/source/s/salt/salt-0.9.2.tar.gz

For instructions on how to set up Salt please see the Installation instructions.

New Features
Salt-Call Additions

The salt-call command has received an overhaul; it now hooks into the outputter system so command output looks clean, and the logging system has been hooked into salt-call, so the -l option allows the logging output from salt minion functions to be displayed.

The end result is that the salt-call command can execute the state system and return clean output:

# salt-call state.highstate
State System Fixes

The state system has been tested and better refined. As of this release the state system is ready for early testers to start playing with. If you are interested in working with the state system please check out the (still very small) salt-states GitHub repo:

https://github.com/saltstack/salt-states

This git repo is the active development branch for determining how a clean salt-state database should look and act. Since the salt state system is still very young a lot of help is still needed here. Please fork the salt-states repo and help us develop a truly large and scalable system for configuration management!

Notable Bug Fixes
Python 2.6 String Formatting

Python 2.6 does not support format strings without an index identifier; all of them have been repaired.
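
In other words, every replacement field must carry an explicit index; a quick illustration:

# works on Python 2.6 and 2.7: explicit index
print('Hello, {0}!'.format('Salt'))

# works only on Python 2.7 and later: implicit index
print('Hello, {}!'.format('Salt'))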

Cython Loading Disabled by Default

Cython loading requires a development tool chain to be installed on the minion; requiring this by default can cause problems for most Salt deployments. If Cython auto loading is desired it will need to be turned on in the minion config.

Salt 0.9.3 Release Notes

release:2011-11-05

Salt 0.9.3 has finally arrived. This is another big step forward for Salt; new features range from proper FreeBSD support to fixing issues seen when attaching a minion to a master over the Internet.

The biggest improvements in 0.9.3, though, can be found in the state system; it has progressed from something ready for early testers to a system ready to compete with platforms such as Puppet and Chef. The backbone of the state system has been greatly refined and many new features are available.

Download!

The Salt source can be downloaded from the salt GitHub site:

https://cloud.github.com/downloads/saltstack/salt/salt-0.9.3.tar.gz

Or from PyPI:

https://pypi.python.org/packages/source/s/salt/salt-0.9.3.tar.gz

For instructions on how to set up Salt please see the Installation instructions.

New Features
WAN Support

Recently more people have been testing Salt minions connecting to Salt masters over the Internet. It was found that minions would commonly lose their connection to the master when working over the Internet. The minions can now detect if the connection has been lost and reconnect to the master, making WAN connections much more reliable.

State System Fixes

Substantial testing has gone into the state system and it is ready for real world usage. A great deal has been added to the documentation for states and the modules and functions available to states have been cleanly documented.

A number of State System bugs have also been found and repaired, and the output from the state system has also been refined to be extremely clear and concise.

Error reporting has also been introduced, issues found in sls files will now be clearly reported when executing Salt States.

Extend Declaration

The Salt States have also gained the extend declaration. This declaration allows for states to be cleanly modified in a post environment. Simply said, if there is an apache.sls file that declares the apache service, then another sls can include apache and then extend it:

include:
  - apache

extend:
  apache:
    service:
      - require:
        - pkg: mod_python

mod_python:
  pkg:
    - installed

The notable behavior with the extend functionality is that it literally extends or overwrites a declaration set up in another sls module. This means that Salt will behave as though the modifications were made directly to the apache sls. This ensures that the apache service in this example is directly tied to all requirements.

Highstate Structure Specification

This release comes with a clear specification of the Highstate data structure that is used to declare Salt States. This specification explains everything that can be declared in the Salt SLS modules.

The specification is extremely simple, and illustrates how Salt has been able to fulfill the requirements of a central configuration manager within a simple and easy to understand format and specification.

SheBang Renderer Switch

It came to our attention that having many renderers means that there may be a situation where more than one State Renderer should be available within a single State Tree.

The method chosen to accomplish this was something already familiar to developers and systems administrators: a SheBang. The Python State Renderer demonstrates this new capability.

Python State Renderer

Until now Salt States could only be declared in yaml or json using Jinja or Mako. A new, very powerful, renderer has been added, making it possible to write Salt States in pure Python:

#!py

def run():
    '''
    Install the python-mako package
    '''
    return {'include': ['python'],
            'python-mako': {'pkg': ['installed']}}

This renderer is used by making a run function that returns the Highstate data structure. Any capabilities of Python can be used in pure Python sls modules.

This example of a pure Python sls module is the same as this example in yaml:

include:
  - python

python-mako:
  pkg:
    - installed
FreeBSD Support

Additional support has been added for FreeBSD; this is Salt's first branch out of the Linux world and proves the viability of Salt on non-Linux platforms.

Salt remote execution already worked on FreeBSD, and should work without issue on any Unix-like platform. But this support comes in the form of package management and user support, so Salt States also work on FreeBSD now.

The new freebsdpkg module provides package management support for FreeBSD and the new pw_user and pw_group provide user and group management.

Module and State Additions
Cron Support

Support for managing the system crontab has been added; declaring a cron state can be done easily:

date > /tmp/datestamp:
  cron:
    - present
    - user: fred
    - minute: 5
    - hour: 3
File State Additions

The file state has been given a number of new features, primarily the directory, recurse, symlink, and absent functions.

file.directory

Make sure that a directory exists and has the right permissions.

/srv/foo:
  file:
    - directory
    - user: root
    - group: root
    - mode: 1755
file.symlink

Make a symlink.

/var/lib/www:
  file:
    - symlink
    - target: /srv/www
    - force: True
file.recurse

The recurse state function will recursively download a directory on the master file server and place it on the minion. Any change in the files on the master will be pushed to the minion. The recurse function is very powerful and has been tested by pushing out the full Linux kernel source.

/opt/code:
  file:
    - recurse
    - source: salt://linux
file.absent

Make sure that the file is not on the system; recursively deletes directories, files, and symlinks.

/etc/httpd/conf.d/somebogusfile.conf:
  file:
    - absent
Sysctl Module and State

The sysctl module and state allow for sysctl components in the kernel to be managed easily. The sysctl module contains the following functions:

sysctl.show
Return a list of sysctl parameters for this minion
sysctl.get
Return a single sysctl parameter for this minion
sysctl.assign
Assign a single sysctl parameter for this minion
sysctl.persist
Assign and persist a simple sysctl parameter for this minion

The sysctl state allows for sysctl parameters to be assigned:

vm.swappiness:
  sysctl:
    - present
    - value: 20
Kernel Module Management

A module for managing Linux kernel modules has been added. The new functions are as follows:

kmod.available
Return a list of all available kernel modules
kmod.check_available
Check to see if the specified kernel module is available
kmod.lsmod
Return a dict containing information about currently loaded modules
kmod.load
Load the specified kernel module
kmod.remove
Unload the specified kernel module

The kmod state can enforce modules be either present or absent:

kvm_intel:
  kmod:
    - present
Ssh Authorized Keys

The ssh_auth state can distribute ssh authorized keys out to minions. Ssh authorized keys can be present or absent.

AAAAB3NzaC1kc3MAAACBAL0sQ9fJ5bYTEyYvlRBsJdDOo49CNfhlWHWXQRqul6rwL4KIuPrhY7hBw0tV7UNC7J9IZRNO4iGod9C+OYutuWGJ2x5YNf7P4uGhH9AhBQGQ4LKOLxhDyT1OrDKXVFw3wgY3rHiJYAbd1PXNuclJHOKL27QZCRFjWSEaSrUOoczvAAAAFQD9d4jp2dCJSIseSkk4Lez3LqFcqQAAAIAmovHIVSrbLbXAXQE8eyPoL9x5C+x2GRpEcA7AeMH6bGx/xw6NtnQZVMcmZIre5Elrw3OKgxcDNomjYFNHuOYaQLBBMosyO++tJe1KTAr3A2zGj2xbWO9JhEzu8xvSdF8jRu0N5SRXPpzSyU4o1WGIPLVZSeSq1VFTHRT4lXB7PQAAAIBXUz6ZO0bregF5xtJRuxUN583HlfQkXvxLqHAGY8WSEVlTnuG/x75wolBDbVzeTlxWxgxhafj7P6Ncdv25Wz9wvc6ko/puww0b3rcLNqK+XCNJlsM/7lB8Q26iK5mRZzNsGeGwGTyzNIMBekGYQ5MRdIcPv5dBIP/1M6fQDEsAXQ==:
  ssh_auth:
    - present
    - user: frank
    - enc: dsa
    - comment: "Frank's key"

Salt 0.9.4 Release Notes

release:2011-11-27

Salt 0.9.4 has arrived. This is a critical update that repairs a number of key bugs found in 0.9.3. But this update is not without feature additions as well! 0.9.4 adds support for Gentoo portage to the pkg module and state system. Also there are two major new state additions: the failhard option and the ability to set up finite state ordering with the order option.

This release also sees our largest increase in community contributions. These contributors have been, and continue to be, the lifeblood of the Salt project, and the team continues to grow. I want to put out a big thanks to our new and existing contributors.

Download!

The Salt source can be downloaded from the salt GitHub site:

https://cloud.github.com/downloads/saltstack/salt/salt-0.9.4.tar.gz

Or from PyPI:

https://pypi.python.org/packages/source/s/salt/salt-0.9.4.tar.gz

For instructions on how to set up Salt please see the Installation instructions.

New Features
Failhard State Option

Normally, when a state fails Salt continues to execute the remainder of the defined states and will only refuse to execute states that require the failed state.

But the situation may exist where you would want all state execution to stop if a single state execution fails. The capability to do this is called failing hard.

State Level Failhard

A single state can have a failhard set; this means that if this individual state fails, all state execution will immediately stop. This is a great thing to do if there is a state that sets up a critical config file and setting a require for each state that reads the config would be cumbersome. A good example of this would be setting up a package manager early on:

/etc/yum.repos.d/company.repo:
  file:
    - managed
    - source: salt://company/yumrepo.conf
    - user: root
    - group: root
    - mode: 644
    - order: 1
    - failhard: True

In this situation, the yum repo is going to be configured before other states, and if it fails to lay down the config file, then no other states will be executed.

Global Failhard

It may be desired to have failhard applied to every state that is executed. If this is the case, then failhard can be set in the master configuration file. Setting failhard in the master configuration file will result in failing hard when any minion gathering states from the master has a state fail.

This is NOT the default behavior; normally Salt will only fail states that require a failed state.

Using the global failhard is generally not recommended, since it can result in states not being executed or even checked. It can also be confusing to see states failhard if an admin is not actively aware that the failhard has been set.

To use the global failhard, set failhard: True in the master configuration file.

Finite Ordering of State Execution

When creating salt sls files, it is often important to ensure that they run in a specific order. While states will always execute in the same order, that order is not necessarily defined the way you want it.

A few tools exist in Salt to set up the correct state ordering; these tools consist of requisite declarations and order options.

The Order Option

Before using the order option, remember that the majority of state ordering should be done with requisite statements, and that a requisite statement will override an order option.

The order option is used by adding an order number to a state declaration with the option order:

vim:
  pkg:
    - installed
    - order: 1

By setting the order option to 1, this ensures that the vim package will be installed in tandem with any other state declaration set to order 1.

Any state declared without an order option will be executed after all states with order options are executed.

But this construct can only handle ordering states from the beginning. Sometimes you may want to send a state to the end of the line; to do this, set the order to last:

vim:
  pkg:
    - installed
    - order: last

Gentoo Support

Additional experimental support has been added for Gentoo. This is found in the contribution from Doug Renn, aka nestegg.

Salt 0.9.5 Release Notes

release:2012-01-15

Salt 0.9.5 is one of the largest steps forward in the development of Salt.

0.9.5 comes with many milestones; this release has seen the community of developers grow into an international team of 46 code contributors, and has many feature additions, feature enhancements, bug fixes and speed improvements.

Warning

Be sure to read the upgrade instructions about the switch to msgpack before upgrading!

Community

Nothing has proven to have more value to the development of Salt than the outstanding community that has been growing at such a great pace around Salt. This has proven not only that Salt has great value, but also that the expandability of Salt is as exponential as I originally intended.

0.9.5 has received over 600 additional commits since 0.9.4 with a swath of new committers. The following individuals have contributed to the development of 0.9.5:

  • Aaron Bull Schaefer
  • Antti Kaihola
  • Bas Tichelaar
  • Brad Barden
  • Brian Wagner
  • Byron Clark
  • Chris Scheller
  • Christer Edwards
  • Clint Savage
  • Corey Quinn
  • David Boucha
  • Eivind Uggedal
  • Eric Poelke
  • Evan Borgstrom
  • Jed Glazner
  • Jeff Schroeder
  • Jeffrey C. Ollie
  • Jonas Buckner
  • Kent Tenney
  • Martin Schnabel
  • Maxim Burgerhout
  • Mitch Anderson
  • Nathaniel Whiteinge
  • Seth House
  • Thomas S Hatch
  • Thomas Schreiber
  • Tor Hveem
  • lzyeval
  • syphernl

This makes 21 new developers since 0.9.4 was released!

To keep up with the growing community follow Salt on Ohloh (http://www.ohloh.net/p/salt), to join the Salt development community, fork Salt on GitHub, and get coding (https://github.com/saltstack/salt)!

Major Features
SPEED! Pickle to msgpack

For a few months now we have been talking about moving away from Python pickles for network serialization, but a preferred serialization format had not yet been found. After an extensive performance testing period involving everything from JSON to protocol buffers, a clear winner emerged. Message Pack (http://msgpack.org/) proved to not only be the fastest and most compact, but also the most "salt like". Message Pack is simple, and the code involved is very small. The msgpack library for Python has been added directly to Salt.

This move introduces a few changes to Salt. First off, Salt is no longer a "noarch" package, since the msgpack lib is written in C. Salt 0.9.5 will also have compatibility issues with 0.9.4 with the default configuration.

We have gone to great lengths to avoid backwards compatibility issues with Salt, but changing the serialization medium was going to create issues regardless. Salt 0.9.5 is somewhat backwards compatible with earlier minions. A 0.9.5 master can command older minions, but only if the serial config value in the master is set to pickle. This will tell the master to publish messages in pickle format and will allow the master to receive messages in both msgpack and pickle formats.

Therefore the suggested methods for upgrading are either to just upgrade everything at once, or:

  1. Upgrade the master to 0.9.5
  2. Set serial to pickle in the master config
  3. Upgrade the minions
  4. Remove the serial option from the master config
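
Step 2 amounts to a single line in the master configuration file:

serial: pickle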

Since pickles can be used as a security exploit, the ability for a master to accept pickles from minions at all will be removed in a future release.

C Bindings for YAML

All of the YAML rendering is now done with the YAML C bindings. This speeds up all of the sls files when running states.

Experimental Windows Support

David Boucha has worked tirelessly to bring initial support to Salt for Microsoft Windows operating systems. Right now the Salt Minion can run as a native Windows service and accept commands.

In the weeks and months to come Windows will receive the full treatment and will have support for Salt States and more robust support for managing Windows systems. This is a big step forward for Salt to move entirely outside of the Unix world, and proves Salt is a viable cross platform solution. Big Thanks to Dave for his contribution here!

Dynamic Module Distribution

Many Salt users have expressed the desire to have Salt distribute in-house modules, states, renderers, returners, and grains. This support has been added in a number of ways:

Modules via States

Now when salt modules are deployed to a minion via the state system as a file, the modules will be automatically loaded into the active running minion - no restart required - and into the active running state. So custom state modules can be deployed and used in the same state run.

Modules via Module Environment Directories

Under the file_roots each environment can now have directories that are used to deploy large groups of modules. These directories sync modules at the beginning of a state run on the minion, or can be manually synced via the Salt module salt.modules.saltutil.sync_all.

The directories are named:

  • _modules
  • _states
  • _grains
  • _renderers
  • _returners

The modules are pushed to their respective scopes on the minions.
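
A manual sync of all module types is a single command:

salt '*' saltutil.sync_all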

Module Reloading

Modules can now be reloaded without restarting the minion; this is done by calling the salt.modules.sys.reload_modules function.

But wait, there's more! Now when a salt module of any type is added via states the modules will be automatically reloaded, allowing for modules to be laid down with states and then immediately used.

Finally, all modules are reloaded when modules are dynamically distributed from the salt master.

Enable / Disable Added to Service

A great deal of demand has existed for adding the capability to set services to be started at boot in the service module. This feature also comes with an overhaul of the service modules and initial systemd support.

This means that the service state can now accept - enable: True to make sure a service is enabled at boot, and - enable: False to make sure it is disabled.
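
In a state declaration this looks like the following (the service name is illustrative):

sshd:
  service:
    - running
    - enable: True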

Compound Target

A new target type has been added to the lineup, the compound target. In previous versions the desired minions could only be targeted via a single specific target type, but now many target specifications can be declared.

These targets can also be separated by and/or operators, so certain properties can be used to omit a node:

salt -C 'webserv* and G@os:Debian or E@db.*' test.ping

will match all minions whose IDs start with webserv (via a glob) and that match the os:Debian grain, or any minions that match the db.* regular expression.

Node Groups

Often the convenience of having a predefined group of minions to execute targets on is desired. This can be accomplished with the new nodegroups feature. Nodegroups allow for predefined compound targets to be declared in the master configuration file:

nodegroups:
  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
  group2: 'G@os:Debian and foo.domain.com'

And then used via the -N option:

salt -N group1 test.ping
Minion Side Data Store

The data module introduces the initial approach to storing persistent data on the minions, specific to each minion. This allows for data to be stored on minions that can be accessed from the master or from the minion.

The Minion datastore is young, and will eventually provide an interface similar to a more mature key/value pair server.
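
To give a feel for the interface, a value can be stored on and read back from the minions; a hedged sketch using the data module's update and getval functions (the key and value are illustrative):

salt '*' data.update mykey myvalue
salt '*' data.getval mykey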

Major Grains Improvement

The Salt grains have been overhauled to include a massive amount of extra data. This includes hardware data, OS data and salt specific data.

Salt -Q is Useful Now

In the past the salt query system, which displays the data from recent executions, would display it in pure Python, and it was unreadable.

0.9.5 has added the outputter system to the -Q option, thus enabling the salt query system to return readable output.

Packaging Updates

Huge strides have been made in packaging Salt for distributions. These additions are thanks to our wonderful community where the work to set up packages has proceeded tirelessly.

FreeBSD

Salt on FreeBSD? There's a port for that:

http://svnweb.freebsd.org/ports/head/sysutils/py-salt/

This port was developed and added by Christer Edwards. This also marks the first time Salt has been included in an upstream packaging system!

Fedora and Red Hat Enterprise

Salt packages have been prepared for inclusion in the Fedora Project and in EPEL for Red Hat Enterprise 5 and 6. These packages are the result of the efforts made by Clint Savage (herlo).

Debian/Ubuntu

A team of many contributors have assisted in developing packages for Debian and Ubuntu. Salt is still actively seeking inclusion in upstream Debian and Ubuntu and the package data that has been prepared is being pushed through the needed channels for inclusion.

These packages have been prepared with the help of:

  • Corey
  • Aaron Toponce
  • and others
More to Come

We are actively seeking inclusion in more distributions. Primarily getting Salt into Gentoo, SUSE, OpenBSD, and preparing Solaris support are all turning into higher priorities.

Refinement

Salt continues to be refined into a faster, more stable and more usable application. 0.9.5 comes with more debug logging, more bug fixes and more complete support.

More Testing, More BugFixes

0.9.5 comes with more bugfixes due to more testing than any previous release. The growing community and the introduction of a dedicated QA environment have unearthed many issues that were hiding under the covers. This has further refined and cleaned the state interface, taking care of things from minor visual issues to repairing misleading data.

Custom Exceptions

A custom exception module has been added to throw salt specific exceptions. This allows Salt to give much more granular error information.

New Modules
data

The new data module manages a persistent datastore on the minion. Big thanks to bastichelaar for his help refining this module.

freebsdkmod

FreeBSD kernel modules can now be managed in the same way Salt handles Linux kernel modules.

This module was contributed thanks to the efforts of Christer Edwards

gentoo_service

Support has been added for managing services in Gentoo. Now Gentoo services can be started, stopped, restarted, enabled, disabled, and viewed.

pip

The pip module introduces management for pip installed applications. Thanks goes to whitinge for the addition of the pip module

rh_service

The rh_service module enables Red Hat and Fedora specific service management. Now Red Hat like systems come with extensive management of the classic init system used by Red Hat

saltutil

The saltutil module has been added as a place to hold functions used in the maintenance and management of salt itself. Saltutil is used to salt the salt minion. The saltutil module is presently used only to sync extension modules from the master server.

systemd

Systemd support has been added to Salt; systems using this next generation init system are now supported.

virtualenv

The virtualenv module has been added to allow salt to create virtual Python environments. Thanks goes to whitinge for the addition of the virtualenv module

win_disk

Support for gathering disk information on Microsoft Windows minions. The windows modules come courtesy of Utah_Dave

win_service

The win_service module adds service support to Salt for Microsoft Windows services

win_useradd

Salt can now manage local users on Microsoft Windows Systems

yumpkg5

The yumpkg module introduced in 0.9.4 uses the yum API to interact with the yum package manager. Unfortunately, on Red Hat 5 systems salt does not have access to the yum API, because the yum API is running under Python 2.4 and Salt needs to run under Python 2.6.

The yumpkg5 module bypasses this issue by shelling out to yum on systems where the yum API is not available.

New States
mysql_database

The new mysql_database state adds the ability to systems running a mysql server to manage the existence of mysql databases.

The mysql states are thanks to syphernl

mysql_user

The mysql_user state enables mysql user management.

virtualenv

The virtualenv state can manage the state of Python virtual environments. Thanks to Whitinge for the virtualenv state

New Returners
cassandra_returner

A returner allowing Salt to send data to a Cassandra server. Thanks go to Byron Clark for contributing this returner.

Salt 0.9.6 Release Notes

release:2012-01-21

Salt 0.9.6 is a release targeting a few bugs and changes. This is primarily targeting an issue found in the names declaration in the state system. But a few other bugs were also repaired, like missing support for grains in extmods.

Due to a conflict in distribution packaging msgpack will no longer be bundled with Salt, and is required as a dependency.

New Features
HTTP and FTP support in file.managed

Now, under the source option in the file.managed state, an HTTP or FTP address can be used instead of a file located on the salt master.
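
A minimal sketch of what this looks like in an sls file (the URL here is a placeholder; see the 0.9.7 notes below for passing a checksum alongside web sources):

/etc/motd:
  file:
    - managed
    - source: http://example.com/motd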

Allow Multiple Returners

Now the returner interface can define multiple returners; data will also be returned to the master, making the process less ambiguous.

Minion Memory Improvements

A number of modules are now excluded from the minion when the underlying systems required by those modules are not present on the minion system. More modules will be stripped out in this same way, which should continue to make the minion more efficient.

Minions Can Locally Cache Return Data

A new option, cache_jobs, has been added to the minion to allow for all of the historically run jobs to cache on the minion, allowing for looking up historic returns. By default cache_jobs is set to False.
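
To enable this behavior, set the option in the minion configuration file:

cache_jobs: True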

Pure Python Template Support For file.managed

Templates in the file.managed state can now be defined in a Python script. This script needs to have a run function that returns the string that needs to be in the named file.
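
A minimal sketch of such a template script (the contents are hypothetical; the only requirement stated is the run function):

def run():
    '''
    Return the full contents of the managed file as a string.
    '''
    return 'Welcome to this Salt-managed system\n'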

Salt 0.9.7 Release Notes

release:2012-02-15

Salt 0.9.7 is here! The latest iteration of Salt brings more features and many fixes. This release is a great refinement over 0.9.6, adding many conveniences under the hood, as well as some features that make working with Salt much better.

A few highlights include the new Job system, refinements to the requisite system in states, the mod_init interface for states, external node classification, search path to managed files in the file state, and refinements and additions to dynamic module loading.

0.9.7 also introduces the long developed (and oft changed) unit test framework and the initial unit tests.

Major Features
Salt Jobs Interface

The new jobs interface makes the management of running executions much cleaner and more transparent. Building on the existing execution framework, the jobs system allows clear introspection into the running state of active Salt executions.

The Jobs interface is centered in the new minion side proc system. The minions now store msgpack serialized files under /var/cache/salt/proc. These files keep track of the active state of processes on the minion.

Functions in the saltutil Module

A number of functions have been added to the saltutil module to manage and view the jobs:

running - Returns the data of all running jobs that are found in the proc directory.

find_job - Returns specific data about a certain job based on job id.

signal_job - Allows for a given jid to be sent a signal.

term_job - Sends a termination signal (SIGTERM, 15) to the process controlling the specified job.

kill_job - Sends a kill signal (SIGKILL, 9) to the process controlling the specified job.
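
These functions are called like any other execution function; the job id below is a placeholder:

salt '*' saltutil.running
salt '*' saltutil.find_job 20120217205900123456
salt '*' saltutil.term_job 20120217205900123456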

The jobs Runner

A convenience runner front end and reporting system has been added as well. The jobs runner contains functions to make viewing data easier and cleaner.

The jobs runner contains a number of functions...

active

The active function runs saltutil.running on all minions and formats the return data about all running jobs in a much more usable and compact format. The active function will also compare jobs that have returned and jobs that are still running, making it easier to see what systems have completed a job and what systems are still being waited on.

lookup_jid

When jobs are executed the return data is sent back to the master and cached. By default it is cached for 24 hours, but this can be configured via the keep_jobs option in the master configuration.

Using the lookup_jid runner will display the same return data that the initial job invocation with the salt command would display.
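
For example, to view the cached return data of a job (the job id is a placeholder):

salt-run jobs.lookup_jid 20120217205900123456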

list_jobs

Before finding a historic job, it may be required to find the job id. list_jobs will parse the cached execution data and display all of the job data for jobs that have already returned, in full or in part.

External Node Classification

Salt can now use external node classifiers like Cobbler's cobbler-ext-nodes.

Salt uses specific data from the external node classifier. In particular, the classes value denotes which sls modules to run, and the environment value sets the environment to use.

An external node classification can be set in the master configuration file via the external_nodes option: http://salt.readthedocs.org/en/latest/ref/configuration/master.html#external-nodes

External nodes are loaded in addition to the top files. If it is intended to only use external nodes, do not deploy any top files.
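
A minimal master configuration sketch using Cobbler's classifier:

external_nodes: cobbler-ext-nodes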

State Mod Init System

An issue arose with the pkg state. Every time a pkg state was run, Salt would refresh the package database. This made systems with slower package metadata refresh speeds much slower to work with. To alleviate this issue, the mod_init interface has been added to Salt states.

The mod_init interface is a function that can be added to a state module. This function is called with the first state of that module that runs. In the case of the pkg state, the mod_init function sets up a tag which makes the package database refresh only on the first attempt to install a package.

In a nutshell, the mod_init interface allows a state to run any command that only needs to be run once, or can be used to set up an environment for working with the state.
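
Schematically, a mod_init function in a state module might look like the following sketch (the low argument and return value are assumptions for illustration, not the actual pkg implementation):

def mod_init(low):
    '''
    Called once, with the data for the first state of this module
    that runs. Perform any one-time setup here.
    '''
    # A real implementation might refresh a package database or set
    # a tag; this sketch just reports that initialization succeeded.
    return True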

Source File Search Path

The file state continues to be refined, adding speed and capabilities. This release adds the ability to pass a list to the source option. This list is then iterated over until the source file is found, and the first found file is used.

The new syntax looks like this:

/etc/httpd/conf/httpd.conf:
  file:
    - managed
    - source:
      - salt://httpd/httpd.conf
      - http://myserver/httpd.conf: md5=8c1fe119e6f1fd96bc06614473509bf1

The source option can take sources in the list from the salt file server as well as an arbitrary web source. If using an arbitrary web source the checksum needs to be passed as well for file verification.

Refinements to the Requisite System

A few discrepancies were still lingering in the requisite system, in particular, it was not possible to have a require and a watch requisite declared in the same state declaration.

This issue has been resolved, and the requisite system has also been made to run more quickly.

Initial Unit Testing Framework

Because of the module system, and the need to test real scenarios, the development of a viable unit testing system has been difficult, but unit testing has finally arrived. Only a small amount of unit test coverage has been developed so far; much more coverage will be in place soon.

A huge thanks goes out to those who have helped with unit testing, and the contributions that have been made to get us where we are. Without these contributions unit tests would still be in the dark.

Compound Targets Expanded

Originally, only the and and or operators were available in compound targets. 0.9.7 adds the capability to negate compound targets with not.
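
For example, the following sketch targets all minions whose ids start with web but which are not running Debian:

salt -C 'web* and not G@os:Debian' test.ping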

Nodegroups in the Top File

Previously the nodegroups defined in the master configuration file could not be used to match nodes for states. The nodegroups support has been expanded and the nodegroups defined in the master configuration can now be used to match minions in the top file.
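
For example, assuming a nodegroup named group1 is defined in the master configuration, the top file can match it like this (the sls name webserver is a placeholder):

base:
  group1:
    - match: nodegroup
    - webserver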

Salt 0.9.8 Release Notes

release:2012-03-21

Salt 0.9.8 is a big step forward, with many additions and enhancements, as well as a number of precursors to advanced future developments.

This version of Salt adds much more power to the command line, making the old hard timeout issues a thing of the past and adds keyword argument support. These additions are also available in the salt client API, making the available API tools much more powerful.

The new pillar system allows for data to be stored on the master and assigned to minions in a granular way similar to the state system. It also allows flexibility for users who want to keep data out of their state tree similar to 'external lookup' functionality in other tools.

A new way to extend requisites was added: the "requisite in" statement. This makes adding require or watch statements to external state declarations much easier.

Requisites have gained additions making them much more powerful, as well as improved error checking for sls files in the state system. A new provider system has been added to allow for redirecting which modules run in the background for individual states.

Support for OpenSUSE has been added and support for Solaris has begun serious development. Windows support has been significantly enhanced as well.

The matcher and target systems have received a great deal of attention. The default behavior of grain matching has changed slightly to reflect the rest of salt and the compound matcher system has been refined.

A number of impressive features with keyword arguments have been added to both the CLI and to the state system. This makes states much more powerful and flexible while maintaining the simple configuration everyone loves.

The new batch size capability allows for executions to be rolled through a group of targeted minions a percentage or specific number at a time. This was added to prevent the "thundering herd" problem when targeting large numbers of minions for things like service restarts or file downloads.

Upgrade Considerations
Upgrade Issues

An oversight could cause a newer minion to crash an older master. That oversight has been resolved, so the version incompatibility issue will no longer occur. When upgrading to 0.9.8, make sure to upgrade the master first, followed by the minions.

Debian/Ubuntu Packages

The original Debian/Ubuntu packages were called salt and included all salt applications. New packages in the ppa are split by function. If an old salt package is installed then it should be manually removed and the new split packages need to be freshly installed.

On the master:

# apt-get purge salt
# apt-get install salt-{master,minion}

On the minions:

# apt-get purge salt
# apt-get install salt-minion

And on any Syndics:

# apt-get install salt-syndic

The official Salt PPA for Ubuntu is located at: https://launchpad.net/~saltstack/+archive/salt

Major Features
Pillar

Pillar offers an interface to declare variable data on the master that is then assigned to the minions. The pillar data is made available to all modules, states, sls files etc. It is compiled on the master and is declared using the existing renderer system. This means that learning pillar should be fairly trivial to those already familiar with salt states.
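
A minimal sketch of pillar data, laid out like a state tree (the paths, key, and value here are hypothetical):

# /srv/pillar/top.sls
base:
  '*':
    - data

# /srv/pillar/data.sls
app_user: deploy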

CLI Additions

The salt command has received a serious overhaul and is more powerful than ever. Data is returned to the terminal as it is received, and the salt command will now wait for all running minions to return data before stopping. This makes adding very large --timeout arguments completely unnecessary and gets rid of long running operations returning empty {} when the timeout is exceeded.

When calling salt via sudo, the user originally running salt is saved to the log for auditing purposes. This makes it easy to see who ran what by just looking through the minion logs.

The salt-key command gained the -D and --delete-all arguments for removing all keys. Be careful with this one!

Running States Without a Master

Support for running states without a salt-master has been added in 0.9.8. This feature allows for the unmodified salt state tree to be read locally from a minion. The result is that the UNMODIFIED state tree has just become portable, allowing minions to have a local copy of states or to manage states without a master entirely.

This is accomplished via the new file client interface in Salt that allows for the salt:// URI to be redirected to custom interfaces. This means that there are now two interfaces for the salt file server, calling the master or looking in a local, minion defined file_roots.

This new feature can be used by modifying the minion config to point to a local file_roots and setting the file_client option to local.
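
For example, in the minion configuration file (the path is a placeholder):

file_client: local
file_roots:
  base:
    - /srv/salt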

Keyword Arguments and States

State modules now accept the **kwargs argument. This results in all data in a sls file assigned to a state being made available to the state function.

This passes data in a transparent way back to the modules executing the logic. In particular, this allows adding arguments to the pkg.install module that enable more advanced and granular controls with respect to what the state is capable of.

An example of this along with the new debconf module for installing ldap client packages on Debian:

ldap-client-packages:
  pkg:
    - debconf: salt://debconf/ldap-client.ans
    - installed
    - names:
      - nslcd
      - libpam-ldapd
      - libnss-ldapd
Keyword Arguments and the CLI

In the past it was required that all arguments be passed in the proper order to the salt and salt-call commands. As of 0.9.8, keyword arguments can be passed in the form of kwarg=argument.

# salt -G 'type:dev' git.clone \
    repository=https://github.com/saltstack/salt.git cwd=/tmp/salt user=jeff
Matcher Refinements and Changes

A number of fixes and changes have been applied to the matcher system. The most noteworthy is the change in the grain matcher. The grain matcher used to use a regular expression to match the passed data to a grain, but now defaults to a shell glob like the majority of match interfaces in Salt. A new option, grain-pcre, is available that still uses the old-style regex matching against grain data. To use regex matching in compound matches, use the letter P.

For example, this would match any ArchLinux or Fedora minions:

# salt --grain-pcre 'os:(Arch|Fed).*' test.ping

And the associated compound matcher prefix, suitable for top.sls, is P:

P@os:(Arch|Fed).*

NOTE: Changing the grains matcher from pcre to glob is backwards incompatible.

Support has been added for matching minions with Yahoo's range library. This is handled by passing range syntax with -R or --range arguments to salt.

More information at: https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec

Requisite "in"

A new means of updating requisite statements has been added to make adding watch and require statements to external states easier. Before 0.9.8 the only way to extend the states that were watched by a state outside of the sls was to use an extend statement:

include:
  - http
extend:
  apache:
    service:
      - watch:
        - pkg: tomcat

tomcat:
  pkg:
    - installed

But the new Requisite in statement allows for easier extends for requisites:

include:
  - http

tomcat:
  pkg:
    - installed
    - watch_in:
      - service: apache

Requisite in is part of the extend system, so still remember to always include the sls that is being extended!

Providers

Salt predetermines what modules should be mapped to what uses based on the properties of a system. These determinations are generally made for modules that provide things like package and service management. The apt module maps to pkg on Debian and the yum module maps to pkg on Fedora for instance.

Sometimes in states, it may be necessary for a non-default module to be used for the desired functionality. For instance, an Arch Linux system may have been set up with systemd support. Instead of using the default service module detected for Arch Linux, the systemd module can be used:

http:
  service:
    - running
    - enable: True
    - provider: systemd

Default providers can also be defined in the minion config file:

providers:
  service: systemd

When default providers are set in the minion config, those providers are applied to all functionality in Salt; this means that the functions called by the minion, as well as states, will use these modules.

Requisite Glob Matching

Requisites can now be defined with glob expansion. This means that if there are many requisites, they can be defined on a single line.

To watch all files in a directory:

http:
  service:
    - running
    - enable: True
    - watch:
      - file: /etc/http/conf.d/*

This example will watch all defined files that match the glob /etc/http/conf.d/*.

Batch Size

The new batch size option allows commands to be executed while maintaining that only so many hosts are executing the command at one time. This option can take a percentage or a finite number:

salt '*' -b 10 test.ping

salt -G 'os:RedHat' --batch-size 25% apache.signal restart

This will only run test.ping on 10 of the targeted minions at a time and then restart apache on 25% of the minions matching os:RedHat at a time and work through them all until the task is complete. This makes jobs like rolling web server restarts behind a load balancer or doing maintenance on BSD firewalls using carp much easier with salt.

Module Updates

This is a notable, but non-exhaustive, list of updates to new and existing modules.

Windows support has seen a flurry of activity this release cycle. We've gained all-new file, network, and shadow modules. Please note that these are still a work in progress.

For our Ruby users, new rvm and gem modules have been added, along with the associated states.

The virt module gained basic Xen support.

The yum module gained Scientific Linux support.

The pkg module on Debian, Ubuntu, and derivatives forces apt to run in a non-interactive mode. This prevents issues when package installation waits for confirmation.

A pkg module for OpenSUSE's zypper was added.

The service module on Ubuntu now natively supports Upstart.

A new debconf module was contributed by our community for more advanced control over deb package deployments on Debian based distributions.

The mysql.user state and mysql module gained a password_hash argument.

The cmd module and state gained a shell keyword argument for specifying a shell other than /bin/sh on Linux / Unix systems.

New git and mercurial modules have been added for fans of distributed version control.

In Progress Development
Master Side State Compiling

While we feel strongly that the advantages gained with minion-side state compiling are very critical, it does prevent certain features that may be desired. 0.9.8 has support for initial master-side state compiling, but many more components still need to be developed; it is hoped that these can be finished for 0.9.9.

The goal is that states can be compiled on both the master and the minion allowing for compilation to be split between master and minion. Why will this be great? It will allow storing sensitive data on the master and sending it to some minions without all minions having access to it. This will be good for handling ssl certificates on front-end web servers for instance.

Solaris Support

Salt 0.9.8 sees the introduction of basic Solaris support. The daemon runs well, but grains and more of the modules need updating and testing.

Windows Support

Salt states on Windows are now much more viable thanks to contributions from our community! States for file, service, local user, and local group management are more fully fleshed out, along with network and disk modules. Windows users can also now manage registry entries using the new "reg" module.

Salt 0.9.9 Release Notes

release:2012-04-27

0.9.9 is out and comes with some serious bug fixes and even more serious features. This release is the last major feature release before 1.0.0 and could be considered the 1.0.0 release candidate.

A few updates include more advanced kwargs support, the ability for salt states to more safely configure a running salt minion, better job directory management and the new state test interface.

Many new tests have been added as well, including the new minion swarm test that allows for easier testing of Salt working with large groups of minions. This means that if you have experienced stability issues with Salt before, particularly in larger deployments, these bugs have been tested for, found, and killed.

Major Features
State Test Interface

Until 0.9.9 the only option when running states to see what was going to be changed was to print out the highstate with state.show_highstate and manually look it over. But now states can be run to discover what is going to be changed.

Passing the option test=True to many of the state functions will now cause the salt state system to only check for what is going to be changed and report on those changes.

salt '*' state.highstate test=True

Now states that would have made changes report them back in yellow.

State Syntax Update

A shorthand syntax has been added to sls files, and it will be the default syntax in documentation going forward. The old syntax is still fully supported and will not be deprecated, but it is recommended to move to the new syntax in the future. This change moves the state function up into the state name using dot notation. This is in line with how state functions are generally referred to as well:

The new way:

/etc/sudoers:
  file.managed:
    - source: salt://sudo/sudoers
    - user: root
    - mode: 400
Use and Use_in Requisites

Two new requisite statements are available in 0.9.9: the use requisite and the use_in requisite-in. They allow for the transparent duplication of data between states. When a state "uses" another state, it copies the other state's arguments as defaults. This was created in direct response to the new network state, and allows for many network interfaces to be configured in the same way easily. A simple example:

root_file:
  file.absent:
    - name: /tmp/nothing
    - user: root
    - mode: 644
    - group: root
    - use_in:
      - file: /etc/vimrc

fred_file:
  file.absent:
    - name: /tmp/nothing
    - user: fred
    - group: marketing
    - mode: 660

/files/marketing/district7.rst:
  file.managed:
    - source: salt://marketing/district7.rst
    - template: jinja
    - use:
      - file: fred_file

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc

This makes the two lower state declarations inherit the options from their respective "used" state declarations.

Network State

The new network state allows for the configuration of network devices via salt states and the ip salt module. This addition has been given to the project by Jeff Hutchins and Bret Palsson from Jive Communications.

Currently the only network configuration backend available is for Red Hat based systems, like Red Hat Enterprise, CentOS, and Fedora.

Exponential Jobs

Originally, executed jobs were stored on the master in the format:

<cachedir>/jobs/jid/{minion ids}

But this format restricted the number of jobs in the cache to the number of subdirectories allowed on the filesystem. Ext3, for instance, limits subdirectories to 32000. To combat this, the new format for 0.9.9 is:

<cachedir>/jobs/jid_hash[:2]/jid_hash[2:]/{minion ids}

Now the maximum number of jobs that can be run before the cleanup cycle hits the job directory is substantially higher.

ssh_auth Additions

The original ssh_auth state was limited to accepting only arguments to apply to a public key, and the key itself. This was restrictive given the ways we learned many people were using the state, so the key section has been expanded to accept options and arguments for the key that override arguments passed in the state. This gives substantial power to using ssh_auth with names:

sshkeys:
  ssh_auth:
    - present
    - user: backup
    - enc: ssh-dss
    - options:
      - option1="value1"
      - option2="value2 flag2"
    - comment: backup
    - names:
      - AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0111==
      - AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0222== override
      - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0333== override
      - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0444==
      - option3="value3",option4="value4 flag4" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0555== override
      - option3="value3" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0666==
LocalClient Additions

To follow up on the kwargs support added in 0.9.8, 0.9.9 also adds the capability to send kwargs into commands via a dict. This addition to the LocalClient API can be used like so:

import salt.client

client = salt.client.LocalClient('/etc/salt/master')
ret = client.cmd('*', 'cmd.run', ['ls -l'], kwarg={'cwd': '/etc'})

This update has been added to all cmd methods in the LocalClient class.

Better Self Salting

One problem with running Salt states has been the difficulty of managing the Salt minion via states: if the minion was told to restart while a state run was happening, the state run would be killed. 0.9.9 slightly changes the process scope of state runs, so Salt can now safely restart the salt-minion daemon while executing states.

In addition to daemonizing the state run, the apt module also daemonizes. This update makes it possible to cleanly update the salt-minion package on Debian/Ubuntu systems without leaving apt in an inconsistent state or killing the active minion process mid-execution.

Wildcards for SLS Modules

Now, when including sls modules in include statements or in the top file, shell globs can be used. This can greatly simplify listing matched sls modules in the top file and include statements:

base:
  '*':
    - files*
    - core*

include:
  - users.dev.*
  - apache.ser*
External Pillar

Since the pillar data is just data, it does not need to come expressly from the pillar interface. The external pillar system allows hooks to be added, making it possible to extract pillar data from any arbitrary external interface. The external pillar interface is configured via the ext_pillar option. Currently, interfaces exist to gather external pillar data via Hiera or via a shell command that sends yaml data to the terminal:

ext_pillar:
  - cmd_yaml: cat /etc/salt/ext.yaml
  - hiera: /etc/hiera.yaml

The initial external pillar interfaces, and any extra interfaces, can be added to the file salt/pillar.py; more external pillar interfaces are planned. If the need arises, a new module loader interface will be created in the future to manage external pillar interfaces.

Single State Executions

The new state.single function allows for single states to be cleanly executed. This is a great tool for setting up a small group of states on a system or for testing out the behavior of single states:

salt '*' state.single user.present name=wade uid=2000

The test interface functions here as well, so changes can also be tested against as:

salt '*' state.single user.present name=wade uid=2000 test=True
New Tests

A few exciting new test interfaces have been added. The minion swarm not only allows testing of larger loads, but also lets users see how Salt behaves with large groups of minions without having to create a large deployment.

Minion Swarm

The minion swarm test system allows for large groups of minions to be tested against easily without requiring large numbers of servers or virtual machines. The minion swarm creates as many minions as a system can handle and roots them in the /tmp directory and connects them to a master.

The benefit here is that we were able to replicate issues that happen only when there are large numbers of minions. A number of elusive bugs which were causing stability issues in masters and minions have since been hunted down. Bugs that used to take careful watch by users over several days can now be reliably replicated in minutes, and fixed in minutes.

Using the swarm is easy: make sure a master is up for the swarm to connect to, and then use the minionswarm.py script in the tests directory to spin up as many minions as you want. Remember, this is a fork bomb; don't spin up more than your hardware can handle!

python minionswarm.py -m 20 --master salt-master
Shell Tests

The new Shell testing system allows us to test the behavior of commands executed from a high level. This allows for the high level testing of salt runners and commands like salt-key.

Client Tests

Tests have been added to exercise the client APIs and ensure that the client calls work and that they manage passed data in a desirable way.

Salt Based Projects

A number of unofficial open source projects based on Salt, or written to enhance Salt, have been created.

Salt Sandbox

Created by Aaron Bull Schaefer, aka "elasticdog".

https://github.com/elasticdog/salt-sandbox

Salt Sandbox is a multi-VM Vagrant-based Salt development environment used for creating and testing new Salt state modules outside of your production environment. It's also a great way to learn firsthand about Salt and its remote execution capabilities.

Salt Sandbox will set up three separate virtual machines:

  • salt.example.com - the Salt master server
  • minion1.example.com - the first Salt minion machine
  • minion2.example.com - the second Salt minion machine

These VMs can be used in conjunction to segregate and test your modules based on node groups, top file environments, grain values, etc. You can even test modules on different Linux distributions or release versions to better match your production infrastructure.

Security disclosure policy

email:security@saltstack.com
gpg key ID:4EA0793D
gpg key fingerprint:
 8ABE 4EFC F0F4 B24B FF2A  AF90 D570 F2D3 4EA0 793D

gpg public key:

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

mQINBFO15mMBEADa3CfQwk5ED9wAQ8fFDku277CegG3U1hVGdcxqKNvucblwoKCb
hRK6u9ihgaO9V9duV2glwgjytiBI/z6lyWqdaD37YXG/gTL+9Md+qdSDeaOa/9eg
7y+g4P+FvU9HWUlujRVlofUn5Dj/IZgUywbxwEybutuzvvFVTzsn+DFVwTH34Qoh
QIuNzQCSEz3Lhh8zq9LqkNy91ZZQO1ZIUrypafspH6GBHHcE8msBFgYiNBnVcUFH
u0r4j1Rav+621EtD5GZsOt05+NJI8pkaC/dDKjURcuiV6bhmeSpNzLaXUhwx6f29
Vhag5JhVGGNQxlRTxNEM86HEFp+4zJQ8m/wRDrGX5IAHsdESdhP+ljDVlAAX/ttP
/Ucl2fgpTnDKVHOA00E515Q87ZHv6awJ3GL1veqi8zfsLaag7rw1TuuHyGLOPkDt
t5PAjsS9R3KI7pGnhqI6bTOi591odUdgzUhZChWUUX1VStiIDi2jCvyoOOLMOGS5
AEYXuWYP7KgujZCDRaTNqRDdgPd93Mh9JI8UmkzXDUgijdzVpzPjYgFaWtyK8lsc
Fizqe3/Yzf9RCVX/lmRbiEH+ql/zSxcWlBQd17PKaL+TisQFXcmQzccYgAxFbj2r
QHp5ABEu9YjFme2Jzun7Mv9V4qo3JF5dmnUk31yupZeAOGZkirIsaWC3hwARAQAB
tDBTYWx0U3RhY2sgU2VjdXJpdHkgVGVhbSA8c2VjdXJpdHlAc2FsdHN0YWNrLmNv
bT6JAj4EEwECACgFAlO15mMCGwMFCQeGH4AGCwkIBwMCBhUIAgkKCwQWAgMBAh4B
AheAAAoJENVw8tNOoHk9z/MP/2vzY27fmVxU5X8joiiturjlgEqQw41IYEmWv1Bw
4WVXYCHP1yu/1MC1uuvOmOd5BlI8YO2C2oyW7d1B0NorguPtz55b7jabCElekVCh
h/H4ZVThiwqgPpthRv/2npXjIm7SLSs/kuaXo6Qy2JpszwDVFw+xCRVL0tH9KJxz
HuNBeVq7abWD5fzIWkmGM9hicG/R2D0RIlco1Q0VNKy8klG+pOFOW886KnwkSPc7
JUYp1oUlHsSlhTmkLEG54cyVzrTP/XuZuyMTdtyTc3mfgW0adneAL6MARtC5UB/h
q+v9dqMf4iD3wY6ctu8KWE8Vo5MUEsNNO9EA2dUR88LwFZ3ZnnXdQkizgR/Aa515
dm17vlNkSoomYCo84eN7GOTfxWcq+iXYSWcKWT4X+h/ra+LmNndQWQBRebVUtbKE
ZDwKmiQz/5LY5EhlWcuU4lVmMSFpWXt5FR/PtzgTdZAo9QKkBjcv97LYbXvsPI69
El1BLAg+m+1UpE1L7zJT1il6PqVyEFAWBxW46wXCCkGssFsvz2yRp0PDX8A6u4yq
rTkt09uYht1is61joLDJ/kq3+6k8gJWkDOW+2NMrmf+/qcdYCMYXmrtOpg/wF27W
GMNAkbdyzgeX/MbUBCGCMdzhevRuivOI5bu4vT5s3KdshG+yhzV45bapKRd5VN+1
mZRquQINBFO15mMBEAC5UuLii9ZLz6qHfIJp35IOW9U8SOf7QFhzXR7NZ3DmJsd3
f6Nb/habQFIHjm3K9wbpj+FvaW2oWRlFVvYdzjUq6c82GUUjW1dnqgUvFwdmM835
1n0YQ2TonmyaF882RvsRZrbJ65uvy7SQxlouXaAYOdqwLsPxBEOyOnMPSktW5V2U
IWyxsNP3sADchWIGq9p5D3Y/loyIMsS1dj+TjoQZOKSj7CuRT98+8yhGAY8YBEXu
9r3I9o6mDkuPpAljuMc8r09Im6az2egtK/szKt4Hy1bpSSBZU4W/XR7XwQNywmb3
wxjmYT6Od3Mwj0jtzc3gQiH8hcEy3+BO+NNmyzFVyIwOLziwjmEcw62S57wYKUVn
HD2nglMsQa8Ve0e6ABBMEY7zGEGStva59rfgeh0jUMJiccGiUDTMs0tdkC6knYKb
u/fdRqNYFoNuDcSeLEw4DdCuP01l2W4yY+fiK6hAcL25amjzc+yYo9eaaqTn6RAT
bzdhHQZdpAMxY+vNT0+NhP1Zo5gYBMR65Zp/VhFsf67ijb03FUtdw9N8dHwiR2m8
vVA8kO/gCD6wS2p9RdXqrJ9JhnHYWjiVuXR+f755ZAndyQfRtowMdQIoiXuJEXYw
6XN+/BX81gJaynJYc0uw0MnxWQX+A5m8HqEsbIFUXBYXPgbwXTm7c4IHGgXXdwAR
AQABiQIlBBgBAgAPBQJTteZjAhsMBQkHhh+AAAoJENVw8tNOoHk91rcQAIhxLv4g
duF/J1Cyf6Wixz4rqslBQ7DgNztdIUMjCThg3eB6pvIzY5d3DNROmwU5JvGP1rEw
hNiJhgBDFaB0J/y28uSci+orhKDTHb/cn30IxfuAuqrv9dujvmlgM7JUswOtLZhs
5FYGa6v1RORRWhUx2PQsF6ORg22QAaagc7OlaO3BXBoiE/FWsnEQCUsc7GnnPqi7
um45OJl/pJntsBUKvivEU20fj7j1UpjmeWz56NcjXoKtEvGh99gM5W2nSMLE3aPw
vcKhS4yRyLjOe19NfYbtID8m8oshUDji0XjQ1z5NdGcf2V1YNGHU5xyK6zwyGxgV
xZqaWnbhDTu1UnYBna8BiUobkuqclb4T9k2WjbrUSmTwKixokCOirFDZvqISkgmN
r6/g3w2TRi11/LtbUciF0FN2pd7rj5mWrOBPEFYJmrB6SQeswWNhr5RIsXrQd/Ho
zvNm0HnUNEe6w5YBfA6sXQy8B0Zs6pcgLogkFB15TuHIIIpxIsVRv5z8SlEnB7HQ
Io9hZT58yjhekJuzVQB9loU0C/W0lzci/pXTt6fd9puYQe1DG37pSifRG6kfHxrR
if6nRyrfdTlawqbqdkoqFDmEybAM9/hv3BqriGahGGH/hgplNQbYoXfNwYMYaHuB
aSkJvrOQW8bpuAzgVyd7TyNFv+t1kLlfaRYJ
=wBTJ
-----END PGP PUBLIC KEY BLOCK-----

The SaltStack Security Team is available at security@saltstack.com for security-related bug reports or questions.

We request the disclosure of any security-related bugs or issues be reported non-publicly until such time as the issue can be resolved and a security-fix release can be prepared. At that time we will release the fix and make a public announcement with upgrade instructions and download locations.

Security response procedure

SaltStack takes security and the trust of our customers and users very seriously. Our disclosure policy is intended to resolve security issues as quickly and safely as is possible.

  1. A security report sent to security@saltstack.com is assigned to a team member. This person is the primary contact for questions and will coordinate the fix, release, and announcement.
  2. The reported issue is reproduced and confirmed. A list of affected projects and releases is made.
  3. Fixes are implemented for all affected projects and releases that are actively supported. Back-ports of the fix are made to any old releases that are actively supported.
  4. Packagers are notified via the salt-packagers mailing list that an issue was reported and resolved, and that an announcement is incoming.
  5. A new release is created and pushed to all affected repositories. The release documentation provides a full description of the issue, plus any upgrade instructions or other relevant details.
  6. An announcement is made to the salt-users and salt-announce mailing lists. The announcement contains a description of the issue and a link to the full release documentation and download locations.

Receiving security announcements

The fastest place to receive security announcements is via the salt-announce mailing list. This list is low-traffic.

Frequently Asked Questions

Is Salt open-core?

No. Salt is 100% committed to being open-source, including all of our APIs. It is developed under the Apache 2.0 license, allowing it to be used in both open and proprietary projects.

I think I found a bug! What should I do?

The salt-users mailing list as well as the salt IRC channel can both be helpful resources to confirm if others are seeing the issue and to assist with immediate debugging.

To report a bug to the Salt project, please follow the instructions in reporting a bug.

What ports should I open on my firewall?

Minions need to be able to connect to the Master on TCP ports 4505 and 4506. Minions do not need any inbound ports open. More detailed information on firewall settings can be found here.

I'm seeing weird behavior (including but not limited to packages not installing their users properly)

This is often caused by SELinux. Try disabling SELinux or putting it in permissive mode and see if the weird behavior goes away.

My script runs every time I run a state.highstate. Why?

You are probably using cmd.run rather than cmd.wait. A cmd.wait state will only run when there has been a change in a state that it is watching.

A cmd.run state will run the corresponding command every time (unless it is prevented from running by the unless or onlyif arguments).
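
For example, the following sketch (with hypothetical names) only restarts the service when the watched file state reports a change:

deploy-config:
  file.managed:
    - name: /etc/myapp/myapp.conf
    - source: salt://myapp/myapp.conf

restart-myapp:
  cmd.wait:
    - name: service myapp restart
    - watch:
      - file: deploy-config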

More details can be found in the documentation for the cmd states.

When I run test.ping, why don't the Minions that aren't responding return anything? Returning False would be helpful.

When you run test.ping the Master tells Minions to run commands/functions, and listens for the return data, printing it to the screen when it is received. If it doesn't receive anything back, it doesn't have anything to display for that Minion.

There are a couple options for getting information on Minions that are not responding. One is to use the verbose (-v) option when you run salt commands, as it will display "Minion did not return" for any Minions which time out.

salt -v '*' pkg.install zsh

Another option is to use the manage.down runner:

salt-run manage.down

Also, if the Master is under heavy load, it is possible that the CLI will exit without displaying return data for all targeted Minions. However, this doesn't mean that the Minions did not return; this only means that the Salt CLI timed out waiting for a response. Minions will still send their return data back to the Master once the job completes. If any expected Minions are missing from the CLI output, the jobs.list_jobs runner can be used to show the job IDs of the jobs that have been run, and the jobs.lookup_jid runner can be used to get the return data for that job.

salt-run jobs.list_jobs
salt-run jobs.lookup_jid 20130916125524463507

If you find that you are often missing Minion return data on the CLI, only to find it with the jobs runners, then this may be a sign that the worker_threads value may need to be increased in the master config file. Additionally, running your Salt CLI commands with the -t option will make Salt wait longer for the return data before the CLI command exits. For instance, the below command will wait up to 60 seconds for the Minions to return:

salt -t 60 '*' test.ping

How does Salt determine the Minion's id?

If the Minion id is not configured explicitly (using the id parameter), Salt will determine the id based on the hostname. Exactly how this is determined varies a little between operating systems and is described in detail here.

I'm trying to manage packages/services but I get an error saying that the state is not available. Why?

Salt detects the Minion's operating system and assigns the correct package or service management module based on what is detected. However, for certain custom spins and OS derivatives this detection fails. In cases like this, an issue should be opened on our tracker, with the following information:

  1. The output of the following command:

    salt <minion_id> grains.items | grep os
    
  2. The contents of /etc/lsb-release, if present on the Minion.

I'm using gitfs and my custom modules/states/etc are not syncing. Why?

In versions of Salt 0.16.3 or older, there is a bug in gitfs which can affect the syncing of custom types. Upgrading to 0.16.4 or newer will fix this.

Why aren't my custom modules/states/etc. available on my Minions?

Custom modules are only synced to Minions when state.highstate, saltutil.sync_modules, or saltutil.sync_all is run. Similarly, custom states are only synced to Minions when state.highstate, saltutil.sync_states, or saltutil.sync_all is run.

Other custom types (renderers, outputters, etc.) have similar behavior, see the documentation for the saltutil module for more information.
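
For example, to push all custom types to every minion at once:

salt '*' saltutil.sync_all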

Module X isn't available, even though the shell command it uses is installed. Why?

This is most likely a PATH issue. Did you custom-compile the software which the module requires? RHEL/CentOS/etc. in particular override the root user's path in /etc/init.d/functions, setting it to /sbin:/usr/sbin:/bin:/usr/bin, making software installed into /usr/local/bin unavailable to Salt when the Minion is started using the initscript. In version 2014.1.0, Salt will have a better solution for these sorts of PATH-related issues, but in the meantime, recompiling the software to install it into a location within the PATH should resolve the issue. Alternatively, you can create a symbolic link within the PATH using a file.symlink state.

/usr/bin/foo:
  file.symlink:
    - target: /usr/local/bin/foo

Can I run different versions of Salt on my Master and Minion?

This depends on the versions. In general, it is recommended that Master and Minion versions match.

When upgrading Salt, the master(s) should always be upgraded first. Backwards compatibility for minions running newer versions of salt than their masters is not guaranteed.

Whenever possible, backwards compatibility between new masters and old minions will be preserved. Generally, the only exception to this policy is in case of a security vulnerability.

Recent examples of backwards compatibility breakage include the 0.17.1 release (where all backwards compatibility was broken due to a security fix), and the 2014.1.0 release (which retained compatibility between 2014.1.0 masters and 0.17 minions, but broke compatibility for 2014.1.0 minions and older masters).

Does Salt support backing up managed files?

Yes. Salt provides an easy-to-use addition to your file.managed states that allows you to back up files via backup_mode. backup_mode can be configured on a per-state basis or in the minion config (note that if set in the minion config, this is simply the default method to use; you still need to specify that the file should be backed up!).
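
A minimal sketch of a per-state backup (the paths are placeholders, and the backup argument shown here is an assumption; check the file state documentation for your version):

/etc/ssh/sshd_config:
  file.managed:
    - source: salt://ssh/sshd_config
    - backup: minion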

What is the best way to restart a Salt daemon using Salt?

Updating the salt-minion package requires a restart of the salt-minion service. But restarting the service while in the middle of a state run interrupts the process of the minion running states and sending results back to the master. It's a tricky problem to solve, and we're working on it, but in the meantime one way of handling this (on Linux and UNIX-based operating systems) is to use at (a job scheduler which predates cron) to schedule a restart of the service. at is not installed by default on most distros, and requires a service to be running (usually called atd) in order to schedule jobs. Here's an example of how to upgrade the salt-minion package at the end of a Salt run, and schedule a service restart for one minute after the package update completes.

Linux/Unix

salt-minion:
  pkg.installed:
    - name: salt-minion
    - version: 2014.1.7-3.el6
    - order: last
  service.running:
    - name: salt-minion
    - require:
      - pkg: salt-minion
  cmd.wait:
    - name: echo service salt-minion restart | at now + 1 minute
    - watch:
      - pkg: salt-minion

To ensure that at is installed and atd is running, the following states can be used (be sure to double-check the package name and service name for the distro the minion is running, in case they differ from the example below).

at:
  pkg.installed:
    - name: at
  service.running:
    - name: atd
    - enable: True

An alternative to using the atd daemon is to fork and disown the process.

restart_minion:
  cmd.run:
    - name: |
        exec 0>&- # close stdin
        exec 1>&- # close stdout
        exec 2>&- # close stderr
        nohup /bin/sh -c 'sleep 10 && salt-call --local service.restart salt-minion' &
    - python_shell: True
    - order: last

Windows

For Windows machines, restarting the minion can be accomplished by adding the following state:

schedule-start:
  cmd.run:
    - name: 'start powershell "Restart-Service -Name salt-minion"'
    - order: last

or running immediately from the command line:

salt -G kernel:Windows cmd.run 'start powershell "Restart-Service -Name salt-minion"'

Salting the Salt Master

In order to configure a master server via states, the Salt master can also be "salted" in order to enforce state on the Salt master as well as the Salt minions. Salting the Salt master requires a Salt minion to be installed on the same machine as the Salt master. Once the Salt minion is installed, the minion configuration file must be pointed to the local Salt master:

master: 127.0.0.1

Once the Salt master has been "salted" with a Salt minion, it can be targeted just like any other minion. If the minion on the salted master is running, the minion can be targeted via any usual salt command. Additionally, the salt-call command can execute operations to enforce state on the salted master without requiring the minion to be running.
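
For example, to apply the highstate on the salted master directly:

salt-call state.highstate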

More information about salting the Salt master can be found in the salt-formula for salt itself:

https://github.com/saltstack-formulas/salt-formula

Glossary

Auto-Order
The evaluation of states in the order that they are defined in a SLS file. See also: ordering.
Bootstrap
A stand-alone Salt project which can download and install a Salt master and/or a Salt minion onto a host. See also: salt-bootstrap.
Compound Matcher
A combination of many target definitions that can be combined with boolean operators. See also: targeting.
EAuth
Shorthand for 'external authentication'. A system for calling out to a system outside of Salt in order to authenticate users and determine if they are allowed to issue particular commands to Salt. See also: external auth.
Environment
A directory tree containing state files which can be applied to minions. See also: top file.
Execution Function
A Python function inside an Execution Module that may take arguments and perform specific system-management tasks. See also: the list of execution modules.
External Job Cache
An external data-store that can archive information about jobs that have been run. A default returner. See also: ext_job_cache, the list of returners.
Execution Module
A Python module that contains execution functions which directly perform various system-management tasks on a server. Salt ships with a number of execution modules but users can also write their own execution modules to perform specialized tasks. See also: the list of execution modules.
External Pillar
A module that accepts arbitrary arguments and returns a dictionary. The dictionary is automatically added to a pillar for a minion.
Event
A notice emitted onto an event bus. Events are often driven by requests for actions to occur on a minion or master and the results of those actions. See also: Salt Reactor.
File Server
A local or remote location for storing both Salt-specific files such as top files or SLS files as well as files that can be distributed to minions, such as system configuration files. See also: Salt's file server.
Grain
A key-value pair which contains a fact about a system, such as its hostname or network addresses. See also: targeting with grains.
Highdata
The data structure in a SLS file that represents a set of state declarations. See also: state layers.
Highstate
The collection of states to be applied to a system. See also: state layers.
Jinja
A templating language which allows variables and simple logic to be dynamically inserted into static text files when they are rendered. See also: Salt's Jinja documentation.
Job
The complete set of tasks to be performed by the execution of a Salt command is a single job. See also: jobs runner.
Job ID
A unique identifier to represent a given job.
Low State
The collection of processed states after requisites and order are evaluated. See also: state layers.
Master
A central Salt daemon from which commands can be issued to listening minions.
Masterless
A minion which does not require a Salt master to operate. All configuration is local. See also: file_client.
Master Tops
A system for the master that allows hooks into external systems to generate top file data.
Mine
A facility to collect arbitrary data from minions and store that data on the master. This data is then available to all other minions. [Sometimes referred to as Salt Mine.] See also: Salt Mine.
Minion
A server running a Salt minion daemon which can listen to commands from a master and perform the requested tasks. Generally, minions are servers which are to be controlled using Salt.
Minion ID
A globally unique identifier for a minion. See also: id.
Multi-Master
The ability for a minion to be actively connected to multiple Salt masters at the same time in high-availability environments.
Node Group
A pre-defined group of minions declared in the master configuration file. See also: targeting.
Outputter
A formatter for defining the characteristics of output data from a Salt command. See also: list of outputters.
Overstate
A system by which a Master can issue function calls to minions in a deterministic order. See also: overstate.
Peer Communication
The ability for minions to communicate directly with other minions instead of brokering commands through the Salt master. See also: peer communication.
Pillar
A simple key-value store for user-defined data to be made available to a minion. Often used to store and distribute sensitive data to minions. See also: Pillar, list of Pillar modules.
Proxy Minion
A minion which can control devices that are unable to run a Salt minion locally, such as routers and switches.
PyDSL
A Pythonic domain-specific-language used as a Salt renderer. PyDSL can be used in cases where adding pure Python into SLS files is beneficial. See also: PyDSL.
Reactor
An interface for listening to events and defining actions that Salt should take upon receipt of given events. See also: Reactor.
Render Pipe
Allows SLS files to be rendered by multiple renderers, with each renderer receiving the output of the previous. See also: composing renderers.
Renderer
Responsible for translating a given data serialization format such as YAML or JSON into a Python data structure that can be consumed by Salt. See also: list of renderers.
Returner
Allows for the results of a Salt command to be sent to a given data-store such as a database or log file for archival. See also: list of returners.
Roster
A flat-file list of target hosts. (Currently only used by salt-ssh.)
Runner Module
A module containing a set of runner functions. See also: list of runner modules.
Runner Function
A function which is called by the salt-run command and executes on the master instead of on a minion. See also: Runner Module.
Salt Cloud
A suite of tools used to create and deploy systems on many hosted cloud providers. See also: salt-cloud.
Salt SSH
A configuration management and remote orchestration system that does not require that any software besides SSH be installed on systems to be controlled.
Salt Thin
A subset of the normal Salt distribution that does not include any transport routines. A Salt Thin bundle can be dropped onto a host and used directly without any requirement that the host be connected to a network. Used by Salt SSH. See also: thin runner.
Salt Virt
Used to manage the creation and deployment of virtual machines onto a set of host machines. Often used to create and deploy private clouds. See also: virt runner.
SLS Module
Contains a set of state declarations.
State Compiler
Translates highdata into lowdata.
State Declaration
A data structure which contains a unique ID and describes one or more states of a system such as ensuring that a package is installed or a user is defined. See also: highstate structure.
State Function
A function contained inside a state module which manages the application of a particular state to a system. State functions frequently call out to one or more execution modules to perform a given task.
State Module
A module which contains a set of state functions. See also: list of state modules.
State Run
The application of a set of states on a set of systems.
Syndic
A forwarder which can relay messages between tiered masters. See also: Syndic.
Target
Minion(s) to which a given salt command will apply. See also: targeting.
Top File
Determines which SLS files should be applied to various systems and organizes those groups of systems into environments. See also: top file, list of master top modules.
__virtual__
A function in a module that is called on module load to determine whether or not the module should be available to a minion. This function commonly contains logic to determine if all requirements for a module are available, such as external libraries.
Worker
A master process which can send notices and receive replies from minions. See also: worker_threads.