TOSCA support in Yorc

TOSCA stands for Topology and Orchestration Specification for Cloud Applications. It is an OASIS standard specification. Currently Yorc implements the TOSCA Simple Profile in YAML Version 1.2 of this specification.

Yorc is a TOSCA based orchestrator, meaning that it consumes TOSCA definitions and service templates to perform application deployments and lifecycle management.

Yorc is also workflow driven, meaning that it will execute workflows defined in a TOSCA service template to perform a deployment. The easiest way to generate valid TOSCA definitions for Yorc is to use a TOSCA web composer called Alien4Cloud.

Yorc provides an Alien4Cloud (A4C) plugin that allows A4C to interact with Yorc.

Alien4Cloud provides great documentation on writing TOSCA components.

Below are the specifics of the Yorc implementation.

TOSCA Operations

Supported TOSCA functions

Yorc supports the following functions that can be used in value assignments (generally for attributes and operation inputs).

  • get_input: [input_name]: Will retrieve the value of a Topology’s input
  • get_property: [<entity_name>, <optional_cap_name>, <property_name>, <nested_property_name_or_index_1>, ..., <nested_property_name_or_index_n> ]: Will retrieve the value of a property in a given entity. <entity_name> can be the name of a given node or relationship template, SELF for the entity holding this function, HOST for one of the hosts (in the hosted-on hierarchy) of the entity holding this function, or SOURCE or TARGET respectively for the source or the target entity in case of a relationship. <optional_cap_name> is optional and allows specifying that we target a property in a capability rather than directly on the node.
  • get_attribute: [<entity_name>, <optional_cap_name>, <property_name>, <nested_property_name_or_index_1>, ..., <nested_property_name_or_index_n> ]: see get_property above
  • concat: [<string_value_expressions_*>]: concatenates the result of each nested expression. Ex: concat: [ "http://", get_attribute: [ SELF, public_address ], ":", get_attribute: [ SELF, port ] ]
  • get_operation_output: [<modelable_entity_name>, <interface_name>, <operation_name>, <output_variable_name>]: Retrieves the output of an operation
  • get_secret: [<secret_path>, <optional_implementation_specific_options>]: instructs Yorc to look for the value within a connected vault instead of within the Topology. The resulting value is considered a secret by Yorc.
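A minimal sketch of how these functions can combine in a topology template (the node type, property names and secret path below are purely illustrative, not part of any Yorc catalog):

    topology_template:
      inputs:
        port:
          type: integer
      node_templates:
        MyApp:
          type: acme.nodes.WebApp  # hypothetical node type
          properties:
            # value taken from the Topology's "port" input
            port: { get_input: port }
            # value fetched from a connected vault, not stored in the Topology
            db_password: { get_secret: [secret/app/db_password] }
          attributes:
            # concatenation of literals, an attribute and a property of SELF
            url: { concat: ["http://", get_attribute: [SELF, public_address], ":", get_property: [SELF, port]] }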

Supported Operations implementations

Currently Yorc supports the following builtin implementations for operations:

  • Bash scripts
  • Python scripts
  • Ansible Playbooks

New implementations can be plugged into Yorc using its plugin mechanism.
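For illustration, an operation implementation is referenced as an artifact in an interface definition. A sketch assuming a hypothetical node type and illustrative artifact paths:

    node_types:
      acme.nodes.MyService:  # hypothetical type name
        derived_from: tosca.nodes.SoftwareComponent
        interfaces:
          Standard:
            create:
              implementation: scripts/create.sh         # Bash script artifact
            configure:
              implementation: playbooks/configure.yaml  # Ansible playbook artifact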

Execution Context

Python and Bash scripts are executed by a wrapper script used to retrieve operation outputs. This script is itself executed using a bash -l command, meaning that the login profile of the user used to connect to the host will be loaded.


Defining operation inputs with the same name as Bash reserved variables like USER, HOME, HOSTNAME and so on may lead to unexpected results. Avoid using them.

Injected Environment Variables

When operation scripts are called, some environment variables are injected by Yorc.

  • For Python and Bash scripts those variables are injected as environment variables.
  • For Python scripts they are also injected as global variables of the script and can be used directly.
  • For Ansible playbooks they are injected as Playbook variables.

Operation outputs

TOSCA defines a function called get_operation_output that instructs Yorc to retrieve a value at the end of an operation. To allow Yorc to retrieve those values you should, depending on your operation implementation:

  • in Bash scripts, export a variable named after the output variable (the name is case-sensitive)
  • in Python scripts, define a variable named after the output variable at the script’s root scope, not locally to a class or function (the name is case-sensitive)
  • in Ansible playbooks, set a fact named after the output variable (the name is case-sensitive)
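For instance, pairing an attribute mapping such as get_operation_output: [SELF, Standard, configure, URL] with an Ansible implementation could look like the sketch below. The URL output name and the computed value are purely illustrative; in a Bash implementation the equivalent would simply be an export URL=... statement in the script:

    # Illustrative Ansible playbook exposing an operation output named URL
    # by setting a fact with exactly that (case-sensitive) name.
    - hosts: all
      tasks:
        - name: Compute and expose the URL operation output
          set_fact:
            URL: "http://{{ ansible_host }}:8080"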

Node operation

For a node operation script, the following variables are available:

  • NODE: the node name.
  • INSTANCE: the unique instance ID.
  • INSTANCES: A comma separated list of all available instance IDs.
  • HOST: the node name of the node that hosts the current one.
  • DEPLOYMENT_ID: the unique deployment identifier.

In addition, any input parameters defined on the operation definition are also injected.
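Since these variables are injected as playbook variables for Ansible implementations, a task can reference them directly. A minimal illustrative sketch:

    # Illustrative task using the variables injected by Yorc for a node operation.
    - hosts: all
      tasks:
        - name: Log the deployment context injected by Yorc
          debug:
            msg: "Configuring {{ NODE }} instance {{ INSTANCE }} (deployment {{ DEPLOYMENT_ID }}) hosted on {{ HOST }}"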

Relationship operation

For a relationship operation script, the following variables are available:

  • TARGET_NODE: The name of the node that is targeted by the relationship.
  • TARGET_INSTANCE: The instance ID that is targeted by the relationship.
  • TARGET_INSTANCES: Comma separated list of all available instance IDs for the target node.
  • TARGET_HOST: The node name of the node that hosts the node that is targeted by the relationship.
  • SOURCE_NODE: The name of the node that is the source of the relationship.
  • SOURCE_INSTANCE: The instance ID of the source of the relationship.
  • SOURCE_INSTANCES: Comma separated list of all available source instance IDs.
  • SOURCE_HOST: The node name of the node that hosts the node that is the source of the relationship.
  • DEPLOYMENT_ID: the unique deployment identifier.

In addition, properties and attributes of the target capability of the relationship are injected automatically. We do this because they could otherwise only be retrieved by knowing in advance the actual name of the capability, which is not very practical when designing a generic operation. As a target component may have several capabilities that match the relationship capability type, we inject the following variables:

  • TARGET_CAPABILITY_NAMES: comma-separated list of the matching capability names. It can be used to loop over the injected variables
  • TARGET_CAPABILITY_<capabilityName>_TYPE: actual type of the capability
  • TARGET_CAPABILITY_TYPE: actual type of the capability of the first matching capability
  • TARGET_CAPABILITY_<capabilityName>_PROPERTY_<propertyName>: value of a property
  • TARGET_CAPABILITY_PROPERTY_<propertyName>: value of a property for the first matching capability
  • TARGET_CAPABILITY_<capabilityName>_<instanceName>_ATTRIBUTE_<attributeName>: value of an attribute of a given instance
  • TARGET_CAPABILITY_<instanceName>_ATTRIBUTE_<attributeName>: value of an attribute of a given instance for the first matching capability

Finally, any input parameters defined on the operation definition are also injected.
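As a sketch, an Ansible playbook could loop over the matching capabilities and rebuild the per-capability variable names dynamically (the debug task is purely illustrative):

    # TARGET_CAPABILITY_NAMES is a comma-separated string; the per-capability
    # variable names are reconstructed with the "vars" lookup.
    - hosts: all
      tasks:
        - name: Show the type of each matching target capability
          debug:
            msg: "{{ item }} has type {{ lookup('vars', 'TARGET_CAPABILITY_' ~ item ~ '_TYPE') }}"
          loop: "{{ TARGET_CAPABILITY_NAMES.split(',') }}"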

Attribute and multiple instances

When an operation defines an input, its value is available by fetching an environment variable. If you have multiple instances,
you can fetch the input value for each instance by prefixing the input name with the instance ID.

Let’s imagine you have a relationship’s configure interface operation defined like this:

    inputs:
      TARGET_IP: { get_attribute: [TARGET, ip_address] }
    implementation: scripts/

Let’s imagine we have a node named MyNodeS with 2 instances: MyNodeS_1 and MyNodeS_2. The node MyNodeS is connected to the target node MyNodeT, which also has 2 instances, MyNodeT_1 and MyNodeT_2.

When the script is executed for the relationship instance that connects MyNodeS_1 to MyNodeT_1, the following variables will be available:

  • TARGET_IP: the value of the ip_address attribute of MyNodeT_1.
  • MyNodeT_1_TARGET_IP: the value of the ip_address attribute of MyNodeT_1.
  • MyNodeT_2_TARGET_IP: the value of the ip_address attribute of MyNodeT_2.

Orchestrator-hosted Operations

In the general case an operation is an implementation of a step within a node’s lifecycle (installing a software package, for instance). Those operations should be executed on the Compute that hosts the node. Yorc handles this case seamlessly and executes your implementation artifacts on the required host.

But sometimes you may want to model in TOSCA an interaction with something (generally a service) that is not hosted on a compute of your application. For those use cases the TOSCA specification supports a tag called operation_host. This tag can be set either on an operation implementation or on a workflow step. If set to the keyword ORCHESTRATOR, this tag indicates that the operation should be executed on the host of the orchestrator.

For executing those kinds of operations Yorc supports two different behaviors. The first one is to execute implementation artifacts directly on the orchestrator’s host. But running user-defined Bash or Python scripts directly on the orchestrator’s host may be dangerous. So Yorc offers an alternative that allows running those scripts in a sandboxed environment implemented by a Docker container. This is the recommended solution.

Choosing one or the other solution is done by configuration; see the Ansible hosted operations options in the configuration section. If a default_sandbox option is provided, it will be used to start a Docker sandbox. Otherwise, if unsandboxed_operations_allowed is set to true (defaults to false), operations are executed on the orchestrator’s host. Otherwise Yorc will raise an error when an orchestrator-hosted operation should be executed.
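As an illustration, such a configuration could look like the excerpt below. The two option names come from the text above, but the exact nesting and the sandbox image are assumptions to be checked against the configuration section:

    ansible:
      hosted_operations:
        # run orchestrator-hosted operations in a Docker sandbox (recommended)
        unsandboxed_operations_allowed: false
        default_sandbox:
          image: "my-registry/ansible-sandbox:latest"  # illustrative image name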

In order to let Yorc interact with Docker to manage sandboxes, some requirements should be met on Yorc’s host:

  • Docker service should be installed and running
  • Docker CLI should be installed
  • the pip package docker_py should be installed

Yorc uses the standard Docker APIs, so the DOCKER_HOST and DOCKER_CERT_PATH environment variables can be used to configure the way Yorc interacts with Docker.

In order to execute operations in containers, either the following requirements should be met on the Docker images used as sandboxes:

  • the /usr/bin/env command should be present
  • a python 2 interpreter compatible with ansible 2.7.9 should be available as the python command

or the Yorc configuration should provide a Python interpreter path through the Ansible behavioral inventory parameter ansible_python_interpreter, as in the following YAML Yorc configuration excerpt specifying the use of python3:

    - ansible_python_interpreter=/usr/bin/python3

See Ansible Inventory Configuration section for more details.

Apart from the requirements described above, you can install whatever you want in your Docker image as prerequisites of your operations artifacts.

Yorc will automatically pull the required Docker image and start a separate Docker sandbox before each orchestrator-hosted operation, and automatically destroy it after the operation execution.


Currently, setting operation_host on an operation implementation is supported in Yorc but not in Alien4Cloud. That said, when using Alien4Cloud, workflows will automatically be generated with operation_host=ORCHESTRATOR for nodes that are not hosted on a Compute.