Yorc Supported infrastructures
==============================

This section describes the state of our integration with supported infrastructures and their specificities.

.. _yorc_infras_hostspool_section:

Hosts Pool
----------

.. only:: html

   |prod|

The Hosts Pool is a very special kind of infrastructure. It consists in registering existing Compute nodes into a pool managed by Yorc.
Those compute nodes can be physical or virtual machines, containers or anything else, as long as Yorc can SSH into them. Yorc is then
responsible for allocating and releasing hosts for deployments. It is safe to use a Hosts Pool concurrently in a Yorc cluster: Yorc
instances synchronize among themselves to ensure the consistency of the pool.

To sum up, this infrastructure type is really useful when you want to target an infrastructure that is not yet supported by Yorc. Just
take care that you are responsible for handling the compatibility of, or conflicts between, what is already installed on your hosts and
what Yorc will install on them. The best practice is to use container isolation. This is especially true if a host can be shared by
several applications, which is specified in TOSCA with the Compute **shareable** property.

Hosts management
~~~~~~~~~~~~~~~~

Yorc comes with a REST API that allows managing hosts in the pool and easily integrating it with other systems.
The Yorc CLI leverages this REST API to make it user friendly; please refer to :ref:`yorc_cli_hostspool_section` for more information.

Hosts Pool labels & filters
~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is strongly recommended to associate labels with your hosts. Labels allow filtering hosts based on criteria.
Labels are simply key/value pairs.

.. _yorc_infras_hostspool_filters_section:

Filters Grammar
^^^^^^^^^^^^^^^

There are four kinds of filters supported by Yorc:

* Filters based on the presence of a label:
  ``label_identifier`` will match if a label with the given name is associated with a host, whatever its value is.
* Filters based on equality to a value:
  ``label_identifier (=|==|!=) value`` will match if the value associated with the given label is equal (``=`` and ``==``) or
  different (``!=``) to the given value.
* Filters based on sets:
  ``label_identifier (in | not in) (value [, other_value])`` will match if the value associated with the given label is one (``in``)
  or is not one (``not in``) of the given values.
* Filters based on comparisons:
  ``label_identifier (< | <= | > | >=) number[unit]`` will match if the value associated with the given label is a number and matches
  the comparison sign. A unit can be associated with the number; currently supported units are golang durations ("ns", "us", "ms",
  "s", "m" or "h"), bytes units ("B", "KiB", "KB", "MiB", "MB", "GiB", "GB", "TiB", "TB", "PiB", "PB", "EiB", "EB") and
  `International System of Units (SI) <https://en.wikipedia.org/wiki/International_System_of_Units>`_ units. The case of the unit
  does not matter.

Here are some examples:

* ``gpu``
* ``os.distribution != windows``
* ``os.architecture == x86_64``
* ``environment = "Q&A"``
* ``environment in ( "Q&A", dev, edge)``
* ``gpu.type not in (k20, m60)``
* ``gpu_nb > 1``
* ``os.mem_size >= 4 GB``
* ``os.disk_size < 1tb``
* ``max_allocation_time <= 120h``
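To make the grammar more concrete, here is a minimal Go sketch showing how each of the four filter kinds could be matched against a
host's labels. It is purely illustrative: the ``labels`` map, the helper functions and the simplified unit handling are assumptions
made for this example, not Yorc's actual implementation.

.. code-block:: go

    // Illustrative only: matches the four filter kinds described above against
    // a host's labels. Names and the simplified unit handling are assumptions
    // made for this sketch, not Yorc's implementation.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // Labels of a single host in the pool, as key/value pairs.
    var labels = map[string]string{
        "gpu":             "nvidia-k40",
        "os.distribution": "ubuntu",
        "environment":     "Q&A",
        "host.mem_size":   "8 GB",
    }

    // hasLabel implements a presence filter such as ``gpu``.
    func hasLabel(name string) bool {
        _, ok := labels[name]
        return ok
    }

    // equals implements an equality filter such as ``os.distribution != windows``.
    func equals(name, value string) bool {
        return labels[name] == value
    }

    // inSet implements a set filter such as ``environment in ("Q&A", dev)``.
    func inSet(name string, values ...string) bool {
        for _, v := range values {
            if labels[name] == v {
                return true
            }
        }
        return false
    }

    // toBytes converts a quantity like "8 GB" into bytes. Only decimal byte
    // units are handled here to keep the sketch short; Yorc also accepts
    // binary units and golang durations.
    func toBytes(s string) (float64, error) {
        units := map[string]float64{"B": 1, "KB": 1e3, "MB": 1e6, "GB": 1e9, "TB": 1e12}
        fields := strings.Fields(strings.ToUpper(s))
        if len(fields) != 2 || units[fields[1]] == 0 {
            return 0, fmt.Errorf("cannot parse quantity %q", s)
        }
        n, err := strconv.ParseFloat(fields[0], 64)
        if err != nil {
            return 0, err
        }
        return n * units[fields[1]], nil
    }

    // atLeast implements a comparison filter such as ``host.mem_size >= 4 GB``.
    func atLeast(name, min string) bool {
        have, errHave := toBytes(labels[name])
        want, errWant := toBytes(min)
        return errHave == nil && errWant == nil && have >= want
    }

    func main() {
        fmt.Println(hasLabel("gpu"))                       // true
        fmt.Println(!equals("os.distribution", "windows")) // true (the != filter)
        fmt.Println(inSet("environment", "Q&A", "dev"))    // true
        fmt.Println(atLeast("host.mem_size", "4 GB"))      // true
    }

Running this prints ``true`` four times, mirroring the example filters listed above.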
Implicit filters & labels
^^^^^^^^^^^^^^^^^^^^^^^^^

TOSCA allows specifying requirements on the Compute hardware and on the Compute operating system. These requirements are expressed
through the capabilities named ``host`` and ``os`` of the TOSCA Compute node. If they are specified in the topology, Yorc will
automatically add a filter ``host.<property> >= <value>`` or ``os.<property> = <value>``. This allows selecting hosts that match the
required criteria.

This means that it is strongly recommended to add the following labels to your hosts:

* ``host.num_cpus`` (e.g. host.num_cpus=4)
* ``host.cpu_frequency`` (e.g. host.cpu_frequency=3 GHz)
* ``host.disk_size`` (e.g. host.disk_size=50 GB)
* ``host.mem_size`` (e.g. host.mem_size=4GB)
* ``os.architecture`` (e.g. os.architecture=x86_64)
* ``os.type`` (e.g. os.type=linux)
* ``os.distribution`` (e.g. os.distribution=ubuntu)
* ``os.version`` (e.g. os.version=17.10)

Some labels are also automatically exposed as TOSCA Compute instance attributes:

* if present, a label named ``private_address`` will be used as the ``private_address`` and ``ip_address`` attributes of the Compute.
  If not set, the connection host will be used instead. This allows using a different network for the applicative communication and
  for the orchestrator communication.
* if present, a label named ``public_address`` will be used as the ``public_address`` attribute of the Compute.
* if present, the following labels will fill the ``networks`` attribute of the Compute node:

  * ``networks.<idx>.network_name`` (e.g. ``networks.0.network_name``)
  * ``networks.<idx>.network_id`` (e.g. ``networks.0.network_id``)
  * ``networks.<idx>.addresses`` as a comma-separated list of addresses (e.g. ``networks.0.addresses``)

The hosts pool resource labels (``host.num_cpus``, ``host.disk_size``, ``host.mem_size``) are automatically decreased when a host is
allocated and increased back when it is released, but only if the corresponding TOSCA ``host`` capability properties are set on the
Compute nodes of your Alien4Cloud applications. If you apply a new configuration on allocated hosts with new host resource labels,
the values will be recalculated based on the resources of existing allocations.

.. _yorc_infras_slurm_section:

Slurm
-----

.. only:: html

   |prod|

`Slurm <https://slurm.schedmd.com/>`_ is an open source, fault-tolerant, and highly scalable cluster management and job scheduling
system for large and small Linux clusters. It is widely used in High Performance Computing and it is the default scheduler of the
Bull Super Computer Suite. Yorc interacts with Slurm to allocate nodes on its cluster.

Resources based scheduling
~~~~~~~~~~~~~~~~~~~~~~~~~~

TOSCA allows specifying requirements on Compute nodes. If specified, the ``num_cpus`` and ``mem_size`` requirements are used to
allocate only the required resources on compute nodes. This allows sharing a Slurm-managed compute node across several deployments.
If they are not specified, a whole compute node will be allocated.

Yorc also supports `Slurm GRES <https://slurm.schedmd.com/gres.html>`_ based scheduling. This is generally used to request a host
with a specific type of resource (consumable or not) such as GPUs.
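As an illustration of the idea, the following Go sketch shows how such ``host`` requirements might be translated into options for a
Slurm allocation command (``salloc``/``srun``). The ``hostRequirements`` type and the mapping below are assumptions made for this
example, not Yorc's actual implementation.

.. code-block:: go

    // Illustrative sketch: translating TOSCA ``host`` capability requirements
    // into Slurm allocation options. This mapping is an assumption for the
    // example, not Yorc's actual implementation.
    package main

    import "fmt"

    // hostRequirements mirrors the TOSCA Compute ``host`` capability properties
    // discussed above, plus an optional GRES request.
    type hostRequirements struct {
        NumCPUs int    // host capability num_cpus
        MemSize string // host capability mem_size, in Slurm's --mem format, e.g. "4G"
        Gres    string // generic resource request, e.g. "gpu:1"
    }

    // sallocArgs builds standard salloc/srun options from the requirements so
    // that only the requested resources are reserved, letting a node be shared
    // across deployments. When nothing is requested, a whole node would be
    // allocated instead.
    func sallocArgs(r hostRequirements) []string {
        args := []string{"--nodes=1"}
        if r.NumCPUs > 0 {
            args = append(args, fmt.Sprintf("--cpus-per-task=%d", r.NumCPUs))
        }
        if r.MemSize != "" {
            args = append(args, "--mem="+r.MemSize)
        }
        if r.Gres != "" {
            args = append(args, "--gres="+r.Gres)
        }
        return args
    }

    func main() {
        req := hostRequirements{NumCPUs: 4, MemSize: "4G", Gres: "gpu:1"}
        fmt.Println("salloc", sallocArgs(req))
        // prints: salloc [--nodes=1 --cpus-per-task=4 --mem=4G --gres=gpu:1]
    }

``--cpus-per-task``, ``--mem`` and ``--gres`` are standard Slurm options; the exact way Yorc drives Slurm may differ.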
Future work
~~~~~~~~~~~

* We plan to soon work on modeling Slurm Jobs in TOSCA and executing them thanks to Yorc.
* We also plan to support Singularity, a container system similar to Docker but designed to integrate well with HPC environments.
  As it will leverage some Bull HPC proprietary integration with Slurm, this feature will be part of a premium version of Yorc.

.. _yorc_infras_google_section:

Google Cloud Platform
---------------------

.. only:: html

   |dev|

The Google Cloud Platform integration within Yorc allows provisioning Compute nodes on top of
`Google Compute Engine <https://cloud.google.com/compute/>`_. This part is ready for production.

It is planned to soon support the following features and have them production-ready:

* Persistent Disks provisioning
* Virtual Private Cloud Networks provisioning

Future work
~~~~~~~~~~~

The following feature is planned to be supported in a next release:

* `Google Kubernetes Engine <https://cloud.google.com/kubernetes-engine/>`_ to deploy containers

.. _yorc_infras_aws_section:

AWS
---

.. only:: html

   |dev|

The AWS integration within Yorc allows provisioning Compute nodes and Elastic IPs on top of
`AWS EC2 <https://aws.amazon.com/ec2/>`_. This part is ready for production, but we plan to soon support the following features to
complete the integration:

* Elastic Block Store provisioning
* Networks provisioning with Virtual Private Cloud

Future work
~~~~~~~~~~~

* We plan to work on modeling `AWS Batch Jobs <https://aws.amazon.com/batch/>`_ in TOSCA and executing them thanks to Yorc.
* We plan to work on `AWS ECS <https://aws.amazon.com/ecs/>`_ to deploy containers.

.. _yorc_infras_openstack_section:

OpenStack
---------

.. only:: html

   |prod|

The `OpenStack <https://www.openstack.org/>`_ integration within Yorc is production-ready. We support Compute, Block Storage,
Virtual Networks and Floating IPs provisioning.

Future work
~~~~~~~~~~~

* We plan to work on modeling OpenStack Mistral workflows in TOSCA and executing them thanks to Yorc.
* We plan to work on OpenStack Zun to deploy containers directly on top of OpenStack.

.. _yorc_infras_kubernetes_section:

Kubernetes
----------

.. only:: html

   |incubation|

Kubernetes support is currently at a Proof of Concept stage. We are working on a complete refactoring of this part.

.. |prod| image:: https://img.shields.io/badge/stability-production%20ready-green.svg
.. |dev| image:: https://img.shields.io/badge/stability-stable%20but%20some%20features%20missing-yellow.svg
.. |incubation| image:: https://img.shields.io/badge/stability-incubating-orange.svg