Consul Storage

Yorc relies heavily on Consul to synchronize Yorc instances and to store configuration, runtime data and TOSCA data models. This puts a significant load on Consul: under specific circumstances (thousands of deployments, each with a high number of TOSCA types and templates, combined with poor network latency between Yorc and the Consul server) you may experience performance issues, especially during the initialization phase of a deployment, as this is when most of the data is stored into Consul. To deal with this, we recommend carefully reading the Consul documentation on performance and updating the default configuration if needed.

Below are some configuration options that can be adapted to fit your specific use case:

  • Yorc stores keys into Consul in a highly parallel way. To avoid consuming too many connections, and especially to avoid hitting the maximum open file descriptors limit, Yorc uses a mechanism that bounds the number of open connections to Consul. The number of open connections can be set using option_pub_routines_cmd. The default value of 500 was determined empirically to fit the default limit of 1024 open file descriptors. Increasing this value could improve performance, but you should then also raise the maximum number of open file descriptors accordingly.
  • Yorc 3.2 brings an experimental feature that allows packing several key-storage operations into Consul transactions. This mode is not enabled by default; it can be activated by setting the environment variable YORC_CONSUL_STORE_TXN_TIMEOUT to a valid Go duration. A Consul transaction can contain up to 64 operations; YORC_CONSUL_STORE_TXN_TIMEOUT defines a timeout after which Yorc stops waiting for new operations to pack into a single transaction and submits it as is to Consul. We had pretty good results by setting this property to 10ms. This feature may become the default in the future, after being tested against different use cases and after getting feedback from production deployments.
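The connection-limiting mechanism described above can be sketched with a counting semaphore built on a buffered channel, a common Go idiom. This is an illustrative simplification, not Yorc's actual implementation: the `storeKey` body is a placeholder for the real Consul KV write, and only the default of 500 concurrent writes comes from the documentation.

```go
package main

import (
	"fmt"
	"sync"
)

// maxConsulConnections caps the number of in-flight Consul writes.
// The value 500 is the documented default, sized to fit the default
// limit of 1024 open file descriptors.
const maxConsulConnections = 500

// storeKeys stores all keys in parallel while never allowing more
// than maxConsulConnections writes to be in flight at once.
func storeKeys(keys []string) {
	sem := make(chan struct{}, maxConsulConnections) // counting semaphore
	var wg sync.WaitGroup
	for _, k := range keys {
		wg.Add(1)
		sem <- struct{}{} // blocks once maxConsulConnections writes are in flight
		go func(key string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			storeKey(key)
		}(k)
	}
	wg.Wait()
}

// storeKey is a placeholder for the actual Consul KV PUT.
func storeKey(key string) {
	fmt.Println("stored", key)
}

func main() {
	storeKeys([]string{"a", "b", "c"})
}
```

Raising the channel capacity is what increasing the option would do; as noted above, the process's file descriptor limit must grow with it.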
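The transaction-packing behavior can be sketched as a batching loop: operations accumulate until either the batch reaches Consul's 64-operation transaction limit or the timeout elapses, whichever comes first. This is a hypothetical sketch of the idea, not Yorc's code; the `op` type and `submit` callback stand in for the real Consul transaction API.

```go
package main

import (
	"fmt"
	"time"
)

// op stands in for a single Consul KV operation to pack into a transaction.
type op struct{ key, value string }

// maxTxnOps is Consul's limit on operations per transaction.
const maxTxnOps = 64

// txBatcher reads operations from ops and submits them in batches.
// A batch is flushed when it reaches maxTxnOps, or when txnTimeout
// elapses after its first operation arrived (mimicking the role of
// YORC_CONSUL_STORE_TXN_TIMEOUT), or when the channel is closed.
func txBatcher(ops <-chan op, txnTimeout time.Duration, submit func([]op)) {
	batch := make([]op, 0, maxTxnOps)
	var timer <-chan time.Time
	flush := func() {
		if len(batch) > 0 {
			submit(batch)
			batch = make([]op, 0, maxTxnOps)
		}
		timer = nil
	}
	for {
		select {
		case o, ok := <-ops:
			if !ok { // channel closed: flush remaining ops and stop
				flush()
				return
			}
			batch = append(batch, o)
			if len(batch) == maxTxnOps {
				flush()
			} else if timer == nil {
				timer = time.After(txnTimeout) // start waiting for more ops
			}
		case <-timer: // stop waiting, submit what we have
			flush()
		}
	}
}

func main() {
	ops := make(chan op)
	go func() {
		for i := 0; i < 70; i++ {
			ops <- op{key: fmt.Sprintf("k%d", i)}
		}
		close(ops)
	}()
	txBatcher(ops, 10*time.Millisecond, func(b []op) {
		fmt.Printf("submitting transaction with %d ops\n", len(b))
	})
}
```

A short timeout such as 10ms keeps latency low for sparse writes while still letting bursts of keys fill whole 64-operation transactions.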

TOSCA Operations

As described in TOSCA Supported Operations implementations, Yorc provides the following built-in implementations for operations executed on remote hosts:

  • Bash scripts
  • Python scripts
  • Ansible Playbooks

It is recommended to implement operations as Ansible Playbooks to get the best execution performance.

When operations are not implemented as Ansible playbooks, the following Yorc server Ansible configuration settings help improve the performance of script execution on remote hosts:

  • use_openssh: Prefer OpenSSH over Paramiko, a Python implementation of SSH used by default, to provision remote hosts. OpenSSH has several optimizations, such as connection reuse, that should improve performance, but it may lead to issues on older systems. See the Ansible documentation on Remote connection information.
  • cache_facts: Caches Ansible facts (values fetched on remote hosts about network/hardware/OS/virtualization configuration) so that these facts are not recomputed each time a new operation is run for a given deployment.
  • archive_artifacts: Archives operation Bash/Python scripts locally, copies this archive and unarchives it on remote hosts (requires tar to be installed on remote hosts), to avoid multiple time-consuming remote copies of individual scripts.
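Assuming a YAML-format Yorc server configuration file, the three settings above might be enabled together as sketched below; the exact section and key names should be checked against your Yorc version's configuration reference.

```yaml
# Hypothetical excerpt of a Yorc server configuration file;
# key names mirror the options described above.
ansible:
  use_openssh: true       # use the OpenSSH client instead of Paramiko
  cache_facts: true       # reuse Ansible facts across operations of a deployment
  archive_artifacts: true # copy scripts as a single tar archive
```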