Diffstat (limited to 'playbooks')
15 files changed, 1289 insertions, 521 deletions
diff --git a/playbooks/provisioning/openstack/README.md b/playbooks/provisioning/openstack/README.md index 5e45add51..a2f553f4c 100644 --- a/playbooks/provisioning/openstack/README.md +++ b/playbooks/provisioning/openstack/README.md @@ -1,595 +1,258 @@  # OpenStack Provisioning -This repository contains playbooks and Heat templates to provision +This directory contains [Ansible][ansible] playbooks and roles to create  OpenStack resources (servers, networking, volumes, security groups, -etc.). The result is an environment ready for openshift-ansible. +etc.). The result is an environment ready for OpenShift installation +via [openshift-ansible]. -## Dependencies for localhost (ansible control/admin node) +We provide everything necessary to be able to install OpenShift on +OpenStack (including the DNS and load balancer servers when +necessary). In addition, we work on providing integration with the +OpenStack-native services (storage, lbaas, baremetal as a service, +dns, etc.). -* [Ansible 2.3](https://pypi.python.org/pypi/ansible) -* [Ansible-galaxy](https://pypi.python.org/pypi/ansible-galaxy-local-deps) -* [jinja2](http://jinja.pocoo.org/docs/2.9/) -* [shade](https://pypi.python.org/pypi/shade) -* python-jmespath / [jmespath](https://pypi.python.org/pypi/jmespath) -* python-dns / [dnspython](https://pypi.python.org/pypi/dnspython) -* Become (sudo) is not required. -**NOTE**: You can use a Docker image with all dependencies set up. -Find more in the [Deployment section](#deployment). +## OpenStack Requirements -### Optional Dependencies for localhost -**Note**: When using rhel images, `rhel-7-server-openstack-10-rpms` repository is required in order to install these packages. +Before you start the installation, you need to have an OpenStack +environment to connect to. You can use a public cloud or an OpenStack +within your organisation. It is also possible to +use [Devstack][devstack] or [TripleO][tripleo]. In the case of +TripleO, we will be running on top of the **overcloud**. -* `python-openstackclient` -* `python-heatclient` +The OpenStack release must be Newton (for Red Hat OpenStack this is +version 10) or newer. It must also satisfy these requirements: -## Dependencies for OpenStack hosted cluster nodes (servers) +* Heat (Orchestration) must be available +* The deployment image (CentOS 7 or RHEL 7) must be loaded +* The deployment flavor must be available to your user +  - `m1.medium` / 4GB RAM + 40GB disk should be enough for testing +  - look at +    the [Minimum Hardware Requirements page][hardware-requirements] +    for production +* The keypair for SSH must be available in openstack +* `keystonerc` file that lets you talk to the openstack services +   * NOTE: only Keystone V2 is currently supported -There are no additional dependencies for the cluster nodes. Required -configuration steps are done by Heat given a specific user data config -that normally should not be changed. +Optional: +* External Neutron network with a floating IP address pool -## Required galaxy modules -In order to pull in external dependencies for DNS configuration steps, -the following commads need to be executed: +## Installation -    ansible-galaxy install \ -      -r openshift-ansible-contrib/playbooks/provisioning/openstack/galaxy-requirements.yaml \ -      -p openshift-ansible-contrib/roles +There are four main parts to the installation: -Alternatively you can install directly from github: +1. [Preparing Ansible and dependencies](#1-preparing-ansible-and-dependencies) +2. 
[Configuring the desired OpenStack environment and OpenShift cluster](#2-configuring-the-openstack-environment-and-openshift-cluster) +3. [Creating the OpenStack resources (VMs, networking, etc.)](#3-creating-the-openstack-resources-vms-networking-etc) +4. [Installing OpenShift](#4-installing-openshift) -    ansible-galaxy install git+https://github.com/redhat-cop/infra-ansible,master \ -      -p openshift-ansible-contrib/roles +This guide is going to install [OpenShift Origin][origin] +with [CentOS 7][centos7] images with minimal customisation. -Notes: -* This assumes we're in the directory that contains the clonned -openshift-ansible-contrib repo in its root path. -* When trying to install a different version, the previous one must be removed first -(`infra-ansible` directory from [roles](https://github.com/openshift/openshift-ansible-contrib/tree/master/roles)). -Otherwise, even if there are differences between the two versions, installation of the newer version is skipped. +We will create the VMs for running OpenShift, in a new Neutron +network, assign Floating IP addresses and configure DNS. -## What does it do +The OpenShift cluster will have a single Master node that will run +`etcd`, a single Infra node and two App nodes. -* Create Nova servers with floating IP addresses attached -* Assigns Cinder volumes to the servers -* Set up an `openshift` user with sudo privileges -* Optionally attach Red Hat subscriptions -* Sets up a bind-based DNS server or configures the cluster servers to use an external DNS server. -* Supports mixed in-stack/external DNS servers for dynamic updates. -* When deploying more than one master, sets up a HAproxy server +You can look at +the [Advanced Configuration page][advanced-configuration] for +additional options. -## Set up -### Copy the sample inventory +### 1. Preparing Ansible and dependencies -    cp -r openshift-ansible-contrib/playbooks/provisioning/openstack/sample-inventory inventory +First, you need to select where to run [Ansible][ansible] from (the +*Ansible host*). This can be the computer you read this guide on or an +OpenStack VM you'll create specifically for this purpose. -### Copy ansible config +We will use +a +[Docker image that has all the dependencies installed][control-host-image] to +make things easier. If you don't want to use Docker, take a look at +the [Ansible host dependencies][ansible-dependencies] and make sure +they're installed. -    cp openshift-ansible-contrib/playbooks/provisioning/openstack/sample-inventory/ansible.cfg ansible.cfg +Your *Ansible host* needs to have the following: -### Update `inventory/group_vars/all.yml` +1. Docker +2. `keystonerc` file with your OpenStack credentials +3. SSH private key for logging in to your OpenShift nodes -#### DNS configuration variables +Assuming your private key is `~/.ssh/id_rsa` and `keystonerc` in your +current directory: -Pay special attention to the values in the first paragraph -- these -will depend on your OpenStack environment. - -Note that the provsisioning playbooks update the original Neutron subnet -created with the Heat stack to point to the configured DNS servers. -So the provisioned cluster nodes will start using those natively as -default nameservers. Technically, this allows to deploy OpenShift clusters -without dnsmasq proxies. - -The `env_id` and `public_dns_domain` will form the cluster's DNS domain all -your servers will be under. With the default values, this will be -`openshift.example.com`. For workloads, the default subdomain is 'apps'. 
-That sudomain can be set as well by the `openshift_app_domain` variable in -the inventory. - -The `openstack_<role name>_hostname` is a set of variables used for customising -hostnames of servers with a given role. When such a variable stays commented, -default hostname (usually the role name) is used. - -The `public_dns_nameservers` is a list of DNS servers accessible from all -the created Nova servers. These will be serving as your DNS forwarders for -external FQDNs that do not belong to the cluster's DNS domain and its subdomains. -If you're unsure what to put in here, you can try the google or opendns servers, -but note that some organizations may be blocking them. - -The `openshift_use_dnsmasq` controls either dnsmasq is deployed or not. -By default, dnsmasq is deployed and comes as the hosts' /etc/resolv.conf file -first nameserver entry that points to the local host instance of the dnsmasq -daemon that in turn proxies DNS requests to the authoritative DNS server. -When Network Manager is enabled for provisioned cluster nodes, which is -normally the case, you should not change the defaults and always deploy dnsmasq. - -`external_nsupdate_keys` describes an external authoritative DNS server(s) -processing dynamic records updates in the public and private cluster views: - -    external_nsupdate_keys: -      public: -        key_secret: <some nsupdate key> -        key_algorithm: 'hmac-md5' -        key_name: 'update-key' -        server: <public DNS server IP> -      private: -        key_secret: <some nsupdate key 2> -        key_algorithm: 'hmac-sha256' -        server: <public or private DNS server IP> - -Here, for the public view section, we specified another key algorithm and -optional `key_name`, which normally defaults to the cluster's DNS domain. -This just illustrates a compatibility mode with a DNS service deployed -by OpenShift on OSP10 reference architecture, and used in a mixed mode with -another external DNS server. - -Another example defines an external DNS server for the public view -additionally to the in-stack DNS server used for the private view only: - -    external_nsupdate_keys: -      public: -        key_secret: <some nsupdate key> -        key_algorithm: 'hmac-sha256' -        server: <public DNS server IP> - -Here, updates matching the public view will be hitting the given public -server IP. While updates matching the private view will be sent to the -auto evaluated in-stack DNS server's **public** IP. - -Note, for the in-stack DNS server, private view updates may be sent only -via the public IP of the server. You can not send updates via the private -IP yet. This forces the in-stack private server to have a floating IP. -See also the [security notes](#security-notes) - -#### Other configuration variables - -`openstack_ssh_key` is a Nova keypair - you can see your keypairs with -`openstack keypair list`. This guide assumes that its corresponding private -key is `~/.ssh/openshift`, stored on the ansible admin (control) node. - -`openstack_default_image_name` is the default name of the Glance image the -servers will use. You can see your images with `openstack image list`. -In order to set a different image for a role, uncomment the line with the -corresponding variable (e.g. `openstack_lb_image_name` for load balancer) and -set its value to another available image name. `openstack_default_image_name` -must stay defined as it is used as a default value for the rest of the roles. - -`openstack_default_flavor` is the default Nova flavor the servers will use. 
-You can see your flavors with `openstack flavor list`. -In order to set a different flavor for a role, uncomment the line with the -corresponding variable (e.g. `openstack_lb_flavor` for load balancer) and -set its value to another available flavor. `openstack_default_flavor` must -stay defined as it is used as a default value for the rest of the roles. - -`openstack_external_network_name` is the name of the Neutron network -providing external connectivity. It is often called `public`, -`external` or `ext-net`. You can see your networks with `openstack -network list`. - -`openstack_private_network_name` is the name of the private Neutron network -providing admin/control access for ansible. It can be merged with other -cluster networks, there are no special requirements for networking. - -The `openstack_num_masters`, `openstack_num_infra` and -`openstack_num_nodes` values specify the number of Master, Infra and -App nodes to create. - -The `openshift_cluster_node_labels` defines custom labels for your openshift -cluster node groups. It currently supports app and infra node groups. -The default value of this variable sets `region: primary` to app nodes and -`region: infra` to infra nodes. -An example of setting a customised label: -``` -openshift_cluster_node_labels: -  app: -    mylabel: myvalue +```bash +$ sudo docker run -it -v ~/.ssh:/mnt/.ssh:Z \ +     -v $PWD/keystonerc:/root/.config/openstack/keystonerc.sh:Z \ +     redhatcop/control-host-openstack bash  ``` -The `openstack_nodes_to_remove` allows you to specify the numerical indexes -of App nodes that should be removed; for example, ['0', '2'], - -The `docker_volume_size` is the default Docker volume size the servers will use. -In order to set a different volume size for a role, -uncomment the line with the corresponding variable (e. g. `docker_master_volume_size` -for master) and change its value. `docker_volume_size` must stay defined as it is -used as a default value for some of the servers (master, infra, app node). -The rest of the roles (etcd, load balancer, dns) have their defaults hard-coded. - -**Note**: If the `ephemeral_volumes` is set to `true`, the `*_volume_size` variables -will be ignored and the deployment will not create any cinder volumes. - -The `openstack_flat_secgrp`, controls Neutron security groups creation for Heat -stacks. Set it to true, if you experience issues with sec group rules -quotas. It trades security for number of rules, by sharing the same set -of firewall rules for master, node, etcd and infra nodes. - -The `required_packages` variable also provides a list of the additional -prerequisite packages to be installed before to deploy an OpenShift cluster. -Those are ignored though, if the `manage_packages: False`. - -The `openstack_inventory` controls either a static inventory will be created after the -cluster nodes provisioned on OpenStack cloud. Note, the fully dynamic inventory -is yet to be supported, so the static inventory will be created anyway. - -The `openstack_inventory_path` points the directory to host the generated static inventory. -It should point to the copied example inventory directory, otherwise ti creates -a new one for you. 
- -#### Multi-master configuration - -Please refer to the official documentation for the -[multi-master setup](https://docs.openshift.com/container-platform/3.6/install_config/install/advanced_install.html#multiple-masters) -and define the corresponding [inventory -variables](https://docs.openshift.com/container-platform/3.6/install_config/install/advanced_install.html#configuring-cluster-variables) -in `inventory/group_vars/OSEv3.yml`. For example, given a load balancer node -under the ansible group named `ext_lb`: - -    openshift_master_cluster_method: native -    openshift_master_cluster_hostname: "{{ groups.ext_lb.0 }}" -    openshift_master_cluster_public_hostname: "{{ groups.ext_lb.0 }}" - -#### Provider Network - -Normally, the playbooks create a new Neutron network and subnet and attach -floating IP addresses to each node. If you have a provider network set up, this -is all unnecessary as you can just access servers that are placed in the -provider network directly. - -To use a provider network, set its name in `openstack_provider_network_name` in -`inventory/group_vars/all.yml`. - -If you set the provider network name, the `openstack_external_network_name` and -`openstack_private_network_name` fields will be ignored. - -**NOTE**: this will not update the nodes' DNS, so running openshift-ansible -right after provisioning will fail (unless you're using an external DNS server -your provider network knows about). You must make sure your nodes are able to -resolve each other by name. - -#### Security notes - -Configure required `*_ingress_cidr` variables to restrict public access -to provisioned servers from your laptop (a /32 notation should be used) -or your trusted network. The most important is the `node_ingress_cidr` -that restricts public access to the deployed DNS server and cluster -nodes' ephemeral ports range. - -Note, the command ``curl https://api.ipify.org`` helps fiding an external -IP address of your box (the ansible admin node). - -There is also the `manage_packages` variable (defaults to True) you -may want to turn off in order to speed up the provisioning tasks. This may -be the case for development environments. When turned off, the servers will -be provisioned omitting the ``yum update`` command. This brings security -implications though, and is not recommended for production deployments. - -##### DNS servers security options - -Aside from `node_ingress_cidr` restricting public access to in-stack DNS -servers, there are following (bind/named specific) DNS security -options available: - -    named_public_recursion: 'no' -    named_private_recursion: 'yes' - -External DNS servers, which is not included in the 'dns' hosts group, -are not managed. It is up to you to configure such ones. - -### Configure the OpenShift parameters - -Finally, you need to update the DNS entry in -`inventory/group_vars/OSEv3.yml` (look at -`openshift_master_default_subdomain`). - -In addition, this is the place where you can customise your OpenShift -installation for example by specifying the authentication. 
- -The full list of options is available in this sample inventory: - -https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example - -Note, that in order to deploy OpenShift origin, you should update the following -variables for the `inventory/group_vars/OSEv3.yml`, `all.yml`: - -    deployment_type: origin -    openshift_deployment_type: "{{ deployment_type }}" - - -#### Setting a custom entrypoint - -In order to set a custom entrypoint, update `openshift_master_cluster_public_hostname` - -    openshift_master_cluster_public_hostname: api.openshift.example.com - -Note than an empty hostname does not work, so if your domain is `openshift.example.com`, -you cannot set this value to simply `openshift.example.com`. - -### Creating and using a Cinder volume for the OpenShift registry - -You can optionally have the playbooks create a Cinder volume and set -it up as the OpenShift hosted registry. - -To do that you need specify the desired Cinder volume name and size in -Gigabytes in `inventory/group_vars/all.yml`: - -    cinder_hosted_registry_name: cinder-registry -    cinder_hosted_registry_size_gb: 10 - -With this, the playbooks will create the volume and set up its -filesystem. If there is an existing volume of the same name, we will -use it but keep the existing data on it. - -To use the volume for the registry, you must first configure it with -the OpenStack credentials by putting the following to `OSEv3.yml`: - -    openshift_cloudprovider_openstack_username: "{{ lookup('env','OS_USERNAME') }}" -    openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}" -    openshift_cloudprovider_openstack_auth_url: "{{ lookup('env','OS_AUTH_URL') }}" -    openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_TENANT_NAME') }}" - -This will use the credentials from your shell environment. If you want -to enter them explicitly, you can. You can also use credentials -different from the provisioning ones (say for quota or access control -reasons). - -**NOTE**: If you're testing this on (DevStack)[devstack], you must -explicitly set your Keystone API version to v2 (e.g. -`OS_AUTH_URL=http://10.34.37.47/identity/v2.0`) instead of the default -value provided by `openrc`. You may also encounter the following issue -with Cinder: - -https://github.com/kubernetes/kubernetes/issues/50461 - -You can read the (OpenShift documentation on configuring -OpenStack)[openstack] for more information. - -[devstack]: https://docs.openstack.org/devstack/latest/ -[openstack]: https://docs.openshift.org/latest/install_config/configuring_openstack.html - - -Next, we need to instruct OpenShift to use the Cinder volume for it's -registry. Again in `OSEv3.yml`: +This will create the container, add your SSH key and source your +`keystonerc`. It should be set up for the installation. -    #openshift_hosted_registry_storage_kind: openstack -    #openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce'] -    #openshift_hosted_registry_storage_openstack_filesystem: xfs +You can verify that everything is in order: -The filesystem value here will be used in the initial formatting of -the volume. +```bash +$ less .ssh/id_rsa +$ ansible --version +$ openstack image list +``` -### Use an existing Cinder volume for the OpenShift registry - -You can also use a pre-existing Cinder volume for the storage of your -OpenShift registry. - -To do that, you need to have a Cinder volume. 
You can create one by -running: - -    openstack volume create --size <volume size in gb> <volume name> - -The volume needs to have a file system created before you put it to -use. - -As with the automatically-created volume, you have to set up the -OpenStack credentials in `inventory/group_vars/OSEv3.yml` as well as -registry values: - -    #openshift_hosted_registry_storage_kind: openstack -    #openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce'] -    #openshift_hosted_registry_storage_openstack_filesystem: xfs -    #openshift_hosted_registry_storage_openstack_volumeID: e0ba2d73-d2f9-4514-a3b2-a0ced507fa05 -    #openshift_hosted_registry_storage_volume_size: 10Gi - -Note the `openshift_hosted_registry_storage_openstack_volumeID` and -`openshift_hosted_registry_storage_volume_size` values: these need to -be added in addition to the previous variables. - -The **Cinder volume ID**, **filesystem** and **volume size** variables -must correspond to the values in your volume. The volume ID must be -the **UUID** of the Cinder volume, *not its name*. - -We can do formate the volume for you if you ask for it in -`inventory/group_vars/all.yml`: - -    prepare_and_format_registry_volume: true - -**NOTE:** doing so **will destroy any data that's currently on the volume**! - -You can also run the registry setup playbook directly: - -   ansible-playbook -i inventory playbooks/provisioning/openstack/prepare-and-format-cinder-volume.yaml - -(the provisioning phase must be completed, first) - - - -### Configure static inventory and access via a bastion node - -Example inventory variables: - -    openstack_use_bastion: true -    bastion_ingress_cidr: "{{openstack_subnet_prefix}}.0/24" -    openstack_private_ssh_key: ~/.ssh/openshift -    openstack_inventory: static -    openstack_inventory_path: ../../../../inventory -    openstack_ssh_config_path: /tmp/ssh.config.openshift.ansible.openshift.example.com - -The `openstack_subnet_prefix` is the openstack private network for your cluster. -And the `bastion_ingress_cidr` defines accepted range for SSH connections to nodes -additionally to the `ssh_ingress_cidr`` (see the security notes above). - -The SSH config will be stored on the ansible control node by the -gitven path. Ansible uses it automatically. To access the cluster nodes with -that ssh config, use the `-F` prefix, f.e.: - -    ssh -F /tmp/ssh.config.openshift.ansible.openshift.example.com master-0.openshift.example.com echo OK - -Note, relative paths will not work for the `openstack_ssh_config_path`, but it -works for the `openstack_private_ssh_key` and `openstack_inventory_path`. In this -guide, the latter points to the current directory, where you run ansible commands -from. - -To verify nodes connectivity, use the command: - -    ansible -v -i inventory/hosts -m ping all - -If something is broken, double-check the inventory variables, paths and the -generated `<openstack_inventory_path>/hosts` and `openstack_ssh_config_path` files. - -The `inventory: dynamic` can be used instead to access cluster nodes directly via -floating IPs. In this mode you can not use a bastion node and should specify -the dynamic inventory file in your ansible commands , like `-i openstack.py`. -## Deployment +### 2. Configuring the OpenStack Environment and OpenShift Cluster -### Using Docker on the Ansible host +The configuration is all done in an Ansible inventory directory. We +will clone the [openshift-ansible-contrib][contrib] repository and set +things up for a minimal installation. 
-If you don't want to worry about the dependencies, you can use the -[OpenStack Control Host image][control-host-image]. -[control-host-image]: https://hub.docker.com/r/redhatcop/control-host-openstack/ +``` +$ git clone https://github.com/openshift/openshift-ansible-contrib +$ cp -r openshift-ansible-contrib/playbooks/provisioning/openstack/sample-inventory/ inventory +``` -It has all the dependencies installed, but you'll need to map your -code and credentials to it. Assuming your SSH keys live in `~/.ssh` -and everything else is in your current directory (i.e. `ansible.cfg`, -`keystonerc`, `inventory`, `openshift-ansible`, -`openshift-ansible-contrib`), this is how you run the deployment: +If you're testing multiple configurations, you can have multiple +inventories and switch between them. -    sudo docker run -it -v ~/.ssh:/mnt/.ssh:Z \ -        -v $PWD:/root/openshift:Z \ -        -v $PWD/keystonerc:/root/.config/openstack/keystonerc.sh:Z \ -        redhatcop/control-host-openstack bash +#### OpenStack Configuration -(feel free to replace `$PWD` with an actual path to your inventory and -checkouts, but note that relative paths don't work) +The OpenStack configuration is in `inventory/group_vars/all.yml`. -The first run may take a few minutes while the image is being -downloaded. After that, you'll be inside the container and you can run -the playbooks: +Open the file and plug in the image, flavor and network configuration +corresponding to your OpenStack installation. -    cd openshift -    ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/provision.yaml +```bash +$ vi inventory/group_vars/all.yml +``` +1. Set the `openstack_ssh_public_key` to your OpenStack keypair name. +   - See `openstack keypair list` to find the keypairs registered with +   OpenShift. +   - This must correspond to your private SSH key in `~/.ssh/id_rsa` +2. Set the `openstack_external_network_name` to the floating IP +   network of your openstack. +   - See `openstack network list` for the list of networks. +   - It's often called `public`, `external` or `ext-net`. +3. Set the `openstack_default_image_name` to the image you want your +   OpenShift VMs to run. +   - See `openstack image list` for the list of available images. +4. Set the `openstack_default_flavor` to the flavor you want your +   OpenShift VMs to use. +   - See `openstack flavor list` for the list of available flavors. + +**NOTE**: In most OpenStack environments, you will also need to +configure the forwarders for the DNS server we create. This depends on +your environment. + +Launch a VM in your OpenStack and look at its `/etc/resolv.conf` and +put the IP addresses into `public_dns_nameservers` in +`inventory/group_vars/all.yml`. -### Run the playbook -Assuming your OpenStack (Keystone) credentials are in the `keystonerc` -this is how you stat the provisioning process from your ansible control node: +#### OpenShift configuration -    . keystonerc -    ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/provision.yaml +The OpenShift configuration is in `inventory/group_vars/OSEv3.yml`. -Note, here you start with an empty inventory. The static inventory will be populated -with data so you can omit providing additional arguments for future ansible commands. +The default options will mostly work, but unless you used the large +flavors for a production-ready environment, openshift-ansible's +hardware check will fail. -If bastion enabled, the generates SSH config must be applied for ansible. 
-Otherwise, it is auto included by the previous step. In order to execute it -as a separate playbook, use the following command: +Let's disable those checks by putting this in +`inventory/group_vars/OSEv3.yml`: -    ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/post-provision-openstack.yml +```yaml +openshift_disable_check: disk_availability,memory_availability +``` -The first infra node then becomes a bastion node as well and proxies access -for future ansible commands. The post-provision step also configures Satellite, -if requested, and DNS server, and ensures other OpenShift requirements to be met. +**NOTE**: The default authentication method will allow **any username +and password** in! If you're running this in a public place, you need +to set up access control. -### Running Custom Post-Provision Actions +Feel free to look at +the [Sample OpenShift Inventory][sample-openshift-inventory] and +the [advanced configuration][advanced-configuration]. -A custom playbook can be run like this: -``` -ansible-playbook --private-key ~/.ssh/openshift -i inventory/ openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/custom-playbook.yml -``` +### 3. Creating the OpenStack resources (VMs, networking, etc.) -If you'd like to limit the run to one particular host, you can do so as follows: +We will install the DNS server roles using ansible galaxy and then run +the openstack provisioning playbook. The `ansible.cfg` file we provide +has useful defaults -- copy it to the directory you're going to run +Ansible from. +```bash +$ ansible-galaxy install -r openshift-ansible-contrib/playbooks/provisioning/openstack/galaxy-requirements.yaml -p openshift-ansible-contrib/roles +$ cp openshift-ansible-contrib/playbooks/provisioning/openstack/ansible.cfg ansible.cfg  ``` -ansible-playbook --private-key ~/.ssh/openshift -i inventory/ openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/custom-playbook.yml -l app-node-0.openshift.example.com -``` +(you will only need to do this once) -You can also create your own custom playbook. Here's one example that adds additional YUM repositories: +Then run the provisioning playbook -- this will create the OpenStack +resources: -``` ---- -- hosts: app -  tasks: - -  # enable EPL -  - name: Add repository -    yum_repository: -      name: epel -      description: EPEL YUM repo -      baseurl: https://download.fedoraproject.org/pub/epel/$releasever/$basearch/ +```bash +$ ansible-playbook -i inventory openshift-ansible-contrib/playbooks/provisioning/openstack/provision.yaml  ``` -This example runs against app nodes. The list of options include: +If you're using multiple inventories, make sure you pass the path to +the right one to `-i`. -  - cluster_hosts (all hosts: app, infra, masters, dns, lb) -  - OSEv3 (app, infra, masters) -  - app -  - dns -  - masters -  - infra_hosts -Please consider contributing your custom playbook back to openshift-ansible-contrib! +### 4. Installing OpenShift -A library of custom post-provision actions exists in `openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions`. Playbooks include: +We will use the `openshift-ansible` project to install openshift on +top of the OpenStack nodes we have prepared: -##### add-yum-repos.yml - -[add-yum-repos.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-yum-repos.yml) adds a list of custom yum repositories to every node in the cluster. 
- -### Install OpenShift - -Once it succeeds, you can install openshift by running: - -    ansible-playbook openshift-ansible/playbooks/byo/config.yml - -### Access UI - -OpenShift UI may be accessed via the 1st master node FQDN, port 8443. - -When using a bastion, you may want to make an SSH tunnel from your control node -to access UI on the `https://localhost:8443`, with this inventory variable: - -   openshift_ui_ssh_tunnel: True +```bash +$ git clone https://github.com/openshift/openshift-ansible +$ ansible-playbook -i inventory openshift-ansible/playbooks/byo/config.yml +``` -Note, this requires sudo rights on the ansible control node and an absolute path -for the `openstack_private_ssh_key`. You should also update the control node's -`/etc/hosts`: -    127.0.0.1 master-0.openshift.example.com +### Next Steps -In order to access UI, the ssh-tunnel service will be created and started on the -control node. Make sure to remove these changes and the service manually, when not -needed anymore. +And that's it! You should have a small but functional OpenShift +cluster now. -## Scale Deployment up/down +Take a look at [how to access the cluster][accessing-openshift] +and [how to remove it][uninstall-openshift] as well as the more +advanced configuration: -### Scaling up +* [Accessing the OpenShift cluster][accessing-openshift] +* [Removing the OpenShift cluster][uninstall-openshift] +* Set Up Authentication (TODO) +* [Multiple Masters with a load balancer][loadbalancer] +* [External Dns][external-dns] +* Multiple Clusters (TODO) +* [Cinder Registry][cinder-registry] +* [Bastion Node][bastion] -One can scale up the number of application nodes by executing the ansible playbook -`openshift-ansible-contrib/playbooks/provisioning/openstack/scale-up.yaml`. -This process can be done even if there is currently no deployment available. -The `increment_by` variable is used to specify by how much the deployment should -be scaled up (if none exists, it serves as a target number of application nodes). -The path to `openshift-ansible` directory can be customised by the `openshift_ansible_dir` -variable. Its value must be an absolute path to `openshift-ansible` and it cannot -contain the '/' symbol at the end.  
-Usage: +[ansible]: https://www.ansible.com/ +[openshift-ansible]: https://github.com/openshift/openshift-ansible +[devstack]: https://docs.openstack.org/devstack/ +[tripleo]: http://tripleo.org/ +[ansible-dependencies]: ./advanced-configuration.md#dependencies-for-localhost-ansible-controladmin-node +[contrib]: https://github.com/openshift/openshift-ansible-contrib +[control-host-image]: https://hub.docker.com/r/redhatcop/control-host-openstack/ +[hardware-requirements]: https://docs.openshift.org/latest/install_config/install/prerequisites.html#hardware +[origin]: https://www.openshift.org/ +[centos7]: https://www.centos.org/ +[sample-openshift-inventory]: https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.example +[advanced-configuration]: ./advanced-configuration.md +[accessing-openshift]: ./advanced-configuration.md#accessing-the-openshift-cluster +[uninstall-openshift]: ./advanced-configuration.md#removing-the-openshift-cluster +[loadbalancer]: ./advanced-configuration.md#multi-master-configuration +[external-dns]: ./advanced-configuration.md#dns-configuration-variables +[cinder-registry]: ./advanced-configuration.md#creating-and-using-a-cinder-volume-for-the-openshift-registry +[bastion]: ./advanced-configuration.md#configure-static-inventory-and-access-via-a-bastion-node -``` -ansible-playbook -i <path to inventory> openshift-ansible-contrib/playbooks/provisioning/openstack/scale-up.yaml` [-e increment_by=<number>] [-e openshift_ansible_dir=<path to openshift-ansible>] -``` -Note: This playbook works only without a bastion node (`openstack_use_bastion: False`).  ## License -As the rest of the openshift-ansible-contrib repository, the code here is -licensed under Apache 2. +Like the rest of the openshift-ansible-contrib repository, the code +here is licensed under Apache 2. diff --git a/playbooks/provisioning/openstack/advanced-configuration.md b/playbooks/provisioning/openstack/advanced-configuration.md new file mode 100644 index 000000000..72bb95254 --- /dev/null +++ b/playbooks/provisioning/openstack/advanced-configuration.md @@ -0,0 +1,773 @@ +## Dependencies for localhost (ansible control/admin node) + +* [Ansible 2.3](https://pypi.python.org/pypi/ansible) +* [Ansible-galaxy](https://pypi.python.org/pypi/ansible-galaxy-local-deps) +* [jinja2](http://jinja.pocoo.org/docs/2.9/) +* [shade](https://pypi.python.org/pypi/shade) +* python-jmespath / [jmespath](https://pypi.python.org/pypi/jmespath) +* python-dns / [dnspython](https://pypi.python.org/pypi/dnspython) +* Become (sudo) is not required. + +**NOTE**: You can use a Docker image with all dependencies set up. +Find more in the [Deployment section](#deployment). + +### Optional Dependencies for localhost +**Note**: When using rhel images, `rhel-7-server-openstack-10-rpms` repository is required in order to install these packages. + +* `python-openstackclient` +* `python-heatclient` + +## Dependencies for OpenStack hosted cluster nodes (servers) + +There are no additional dependencies for the cluster nodes. Required +configuration steps are done by Heat given a specific user data config +that normally should not be changed. 
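To install the localhost Python dependencies listed above without the Docker image, a short play against localhost can be used. The sketch below is only illustrative: the package names follow the PyPI names given above, and `--user` avoids the need for sudo:

```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Install Python dependencies for the provisioning playbooks
      pip:
        name: "{{ item }}"
        extra_args: --user
      with_items:
        - shade
        - jmespath
        - dnspython
        - jinja2
```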
+ +## Required galaxy modules + +In order to pull in external dependencies for DNS configuration steps, +the following commads need to be executed: + +    ansible-galaxy install \ +      -r openshift-ansible-contrib/playbooks/provisioning/openstack/galaxy-requirements.yaml \ +      -p openshift-ansible-contrib/roles + +Alternatively you can install directly from github: + +    ansible-galaxy install git+https://github.com/redhat-cop/infra-ansible,master \ +      -p openshift-ansible-contrib/roles + +Notes: +* This assumes we're in the directory that contains the clonned +openshift-ansible-contrib repo in its root path. +* When trying to install a different version, the previous one must be removed first +(`infra-ansible` directory from [roles](https://github.com/openshift/openshift-ansible-contrib/tree/master/roles)). +Otherwise, even if there are differences between the two versions, installation of the newer version is skipped. + + +## Accessing the OpenShift Cluster + +### Use the Cluster DNS + +In addition to the OpenShift nodes, we created a DNS server with all +the necessary entries. We will configure your *Ansible host* to use +this new DNS and talk to the deployed OpenShift. + +First, get the DNS IP address: + +```bash +$ openstack server show dns-0.openshift.example.com --format value --column addresses +openshift-ansible-openshift.example.com-net=192.168.99.11, 10.40.128.129 +``` + +Note the floating IP address (it's `10.40.128.129` in this case) -- if +you're not sure, try pinging them both -- it's the one that responds +to pings. + +Next, edit your `/etc/resolv.conf` as root and put `nameserver DNS_IP` as your +**first entry**. + +If your `/etc/resolv.conf` currently looks like this: + +``` +; generated by /usr/sbin/dhclient-script +search openstacklocal +nameserver 192.168.0.3 +nameserver 192.168.0.2 +``` + +Change it to this: + +``` +; generated by /usr/sbin/dhclient-script +search openstacklocal +nameserver 10.40.128.129 +nameserver 192.168.0.3 +nameserver 192.168.0.2 +``` + +### Get the `oc` Client + +**NOTE**: You can skip this section if you're using the Docker image +-- it already has the `oc` binary. + +You need to download the OpenShift command line client (called `oc`). +You can download and extract `openshift-origin-client-tools` from the +OpenShift release page: + +https://github.com/openshift/origin/releases/latest/ + +Or you can now copy it from the master node: + +    $ ansible -i inventory masters[0] -m fetch -a "src=/bin/oc dest=oc" + +Either way, find the `oc` binary and put it in your `PATH`. + + +### Logging in Using the Command Line + + +``` +oc login --insecure-skip-tls-verify=true https://master-0.openshift.example.com:8443 -u user -p password +oc new-project test +oc new-app --template=cakephp-mysql-example +oc status -v +curl http://cakephp-mysql-example-test.apps.openshift.example.com +``` + +This will trigger an image build. You can run `oc logs -f +bc/cakephp-mysql-example` to follow its progress. 
+ +Wait until the build has finished and both pods are deployed and running: + +``` +$ oc status -v +In project test on server https://master-0.openshift.example.com:8443 + +http://cakephp-mysql-example-test.apps.openshift.example.com (svc/cakephp-mysql-example) +  dc/cakephp-mysql-example deploys istag/cakephp-mysql-example:latest <- +    bc/cakephp-mysql-example source builds https://github.com/openshift/cakephp-ex.git on openshift/php:7.0 +    deployment #1 deployed about a minute ago - 1 pod + +svc/mysql - 172.30.144.36:3306 +  dc/mysql deploys openshift/mysql:5.7 +    deployment #1 deployed 3 minutes ago - 1 pod + +Info: +  * pod/cakephp-mysql-example-1-build has no liveness probe to verify pods are still running. +    try: oc set probe pod/cakephp-mysql-example-1-build --liveness ... +View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'. + +``` + +You can now look at the deployed app using its route: + +``` +$ curl http://cakephp-mysql-example-test.apps.openshift.example.com +``` + +Its `title` should say: "Welcome to OpenShift". + + +### Accessing the UI + +You can also access the OpenShift cluster with a web browser by going to: + +https://master-0.openshift.example.com:8443 + +Note that for this to work, the OpenShift nodes must be accessible +from your computer and its DNS configuration must use the cluster's +DNS. + + +## Removing the OpenShift Cluster + +Everything in the cluster is contained within a Heat stack. To +completely remove the cluster and all the related OpenStack resources, +run this command: + +```bash +openstack stack delete --wait --yes openshift.example.com +``` + + +## DNS configuration variables + +Pay special attention to the values in the first paragraph -- these +will depend on your OpenStack environment. + +Note that the provisioning playbooks update the original Neutron subnet +created with the Heat stack to point to the configured DNS servers. +So the provisioned cluster nodes will start using those natively as +default nameservers. Technically, this allows deploying OpenShift clusters +without dnsmasq proxies. + +The `env_id` and `public_dns_domain` will form the cluster's DNS domain that all +your servers will be under. With the default values, this will be +`openshift.example.com`. For workloads, the default subdomain is 'apps'. +That subdomain can also be set with the `openshift_app_domain` variable in +the inventory. + +The `openstack_<role name>_hostname` is a set of variables used for customising +hostnames of servers with a given role. When such a variable is left commented out, +the default hostname (usually the role name) is used. + +The `public_dns_nameservers` is a list of DNS servers accessible from all +the created Nova servers. These will be serving as your DNS forwarders for +external FQDNs that do not belong to the cluster's DNS domain and its subdomains. +If you're unsure what to put in here, you can try the Google or OpenDNS servers, +but note that some organizations may be blocking them. + +The `openshift_use_dnsmasq` controls whether dnsmasq is deployed. +By default, dnsmasq is deployed and becomes the first nameserver entry in the hosts' /etc/resolv.conf file, +pointing to the local instance of the dnsmasq +daemon, which in turn proxies DNS requests to the authoritative DNS server. +When Network Manager is enabled for provisioned cluster nodes, which is +normally the case, you should not change the defaults and always deploy dnsmasq. 
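Putting the basic DNS variables together, an illustrative `inventory/group_vars/all.yml` snippet might look like the sketch below; the domain, subdomain and forwarder values are placeholders you would replace with ones from your environment:

```yaml
env_id: openshift                # together with public_dns_domain this gives openshift.example.com
public_dns_domain: example.com
openshift_app_domain: apps       # workloads end up under apps.openshift.example.com
public_dns_nameservers:          # forwarders for FQDNs outside the cluster domain
  - 8.8.8.8
  - 8.8.4.4
openshift_use_dnsmasq: true      # keep the default when Network Manager is in use
```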
+ +`external_nsupdate_keys` describes an external authoritative DNS server(s) +processing dynamic records updates in the public and private cluster views: + +    external_nsupdate_keys: +      public: +        key_secret: <some nsupdate key> +        key_algorithm: 'hmac-md5' +        key_name: 'update-key' +        server: <public DNS server IP> +      private: +        key_secret: <some nsupdate key 2> +        key_algorithm: 'hmac-sha256' +        server: <public or private DNS server IP> + +Here, for the public view section, we specified another key algorithm and +optional `key_name`, which normally defaults to the cluster's DNS domain. +This just illustrates a compatibility mode with a DNS service deployed +by OpenShift on OSP10 reference architecture, and used in a mixed mode with +another external DNS server. + +Another example defines an external DNS server for the public view +additionally to the in-stack DNS server used for the private view only: + +    external_nsupdate_keys: +      public: +        key_secret: <some nsupdate key> +        key_algorithm: 'hmac-sha256' +        server: <public DNS server IP> + +Here, updates matching the public view will be hitting the given public +server IP. While updates matching the private view will be sent to the +auto evaluated in-stack DNS server's **public** IP. + +Note, for the in-stack DNS server, private view updates may be sent only +via the public IP of the server. You can not send updates via the private +IP yet. This forces the in-stack private server to have a floating IP. +See also the [security notes](#security-notes) + +## Flannel networking + +In order to configure the +[flannel networking](https://docs.openshift.com/container-platform/3.6/install_config/configuring_sdn.html#using-flannel), +uncomment and adjust the appropriate `inventory/group_vars/OSEv3.yml` group vars. +Note that the `osm_cluster_network_cidr` must not overlap with the default +Docker bridge subnet of 172.17.0.0/16. Or you should change the docker0 default +CIDR range otherwise. For example, by adding `--bip=192.168.2.1/24` to +`DOCKER_NETWORK_OPTIONS` located in `/etc/sysconfig/docker-network`. + +Also note that the flannel network will be provisioned on a separate isolated Neutron +subnet defined from `osm_cluster_network_cidr` and having ports security disabled. +Use the `openstack_private_data_network_name` variable to define the network +name for the heat stack resource. + +After the cluster deployment done, you should run an additional post installation +step for flannel and docker iptables configuration: + +    ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/post-install.yml + +## Other configuration variables + +`openstack_ssh_public_key` is a Nova keypair - you can see your +keypairs with `openstack keypair list`. It must correspond to the +private SSH key Ansible will use to log into the created VMs. This is +`~/.ssh/id_rsa` by default, but you can use a different key by passing +`--private-key` to `ansible-playbook`. + +`openstack_default_image_name` is the default name of the Glance image the +servers will use. You can see your images with `openstack image list`. +In order to set a different image for a role, uncomment the line with the +corresponding variable (e.g. `openstack_lb_image_name` for load balancer) and +set its value to another available image name. `openstack_default_image_name` +must stay defined as it is used as a default value for the rest of the roles. 
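For illustration, a hypothetical per-role override that keeps the default image for most servers but gives the load balancer a different one might look like this (the image names are placeholders taken from `openstack image list`):

```yaml
openstack_default_image_name: centos-7-cloud   # used by every role without an explicit override
openstack_lb_image_name: rhel-7-cloud          # per-role override for the load balancer
```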
+ +`openstack_default_flavor` is the default Nova flavor the servers will use. +You can see your flavors with `openstack flavor list`. +In order to set a different flavor for a role, uncomment the line with the +corresponding variable (e.g. `openstack_lb_flavor` for load balancer) and +set its value to another available flavor. `openstack_default_flavor` must +stay defined as it is used as a default value for the rest of the roles. + +`openstack_external_network_name` is the name of the Neutron network +providing external connectivity. It is often called `public`, +`external` or `ext-net`. You can see your networks with `openstack +network list`. + +`openstack_private_network_name` is the name of the private Neutron network +providing admin/control access for ansible. It can be merged with other +cluster networks; there are no special requirements for networking. + +The `openstack_num_masters`, `openstack_num_infra` and +`openstack_num_nodes` values specify the number of Master, Infra and +App nodes to create. + +The `openshift_cluster_node_labels` defines custom labels for your OpenShift +cluster node groups. It currently supports app and infra node groups. +The default value of this variable sets `region: primary` to app nodes and +`region: infra` to infra nodes. +An example of setting a customised label: +``` +openshift_cluster_node_labels: +  app: +    mylabel: myvalue +``` + +The `openstack_nodes_to_remove` allows you to specify the numerical indexes +of App nodes that should be removed; for example, `['0', '2']`. + +The `docker_volume_size` is the default Docker volume size the servers will use. +In order to set a different volume size for a role, +uncomment the line with the corresponding variable (e.g. `docker_master_volume_size` +for master) and change its value. `docker_volume_size` must stay defined as it is +used as a default value for some of the servers (master, infra, app node). +The rest of the roles (etcd, load balancer, dns) have their defaults hard-coded. + +**Note**: If `ephemeral_volumes` is set to `true`, the `*_volume_size` variables +will be ignored and the deployment will not create any cinder volumes. + +The `openstack_flat_secgrp` controls Neutron security group creation for Heat +stacks. Set it to true if you experience issues with security group rule +quotas. It trades security for the number of rules by sharing the same set +of firewall rules for master, node, etcd and infra nodes. + +The `required_packages` variable also provides a list of the additional +prerequisite packages to be installed before deploying an OpenShift cluster. +They are ignored, though, if `manage_packages: False` is set. + +The `openstack_inventory` controls whether a static inventory will be created after the +cluster nodes are provisioned on the OpenStack cloud. Note that a fully dynamic inventory +is not yet supported, so the static inventory will be created anyway. + +The `openstack_inventory_path` points to the directory that will host the generated static inventory. +It should point to the copied example inventory directory, otherwise it creates +a new one for you. + +## Multi-master configuration + +Please refer to the official documentation for the +[multi-master setup](https://docs.openshift.com/container-platform/3.6/install_config/install/advanced_install.html#multiple-masters) +and define the corresponding [inventory +variables](https://docs.openshift.com/container-platform/3.6/install_config/install/advanced_install.html#configuring-cluster-variables) +in `inventory/group_vars/OSEv3.yml`. 
For example, given a load balancer node +under the ansible group named `ext_lb`: + +    openshift_master_cluster_method: native +    openshift_master_cluster_hostname: "{{ groups.ext_lb.0 }}" +    openshift_master_cluster_public_hostname: "{{ groups.ext_lb.0 }}" + +## Provider Network + +Normally, the playbooks create a new Neutron network and subnet and attach +floating IP addresses to each node. If you have a provider network set up, this +is all unnecessary as you can just access servers that are placed in the +provider network directly. + +To use a provider network, set its name in `openstack_provider_network_name` in +`inventory/group_vars/all.yml`. + +If you set the provider network name, the `openstack_external_network_name` and +`openstack_private_network_name` fields will be ignored. + +**NOTE**: this will not update the nodes' DNS, so running openshift-ansible +right after provisioning will fail (unless you're using an external DNS server +your provider network knows about). You must make sure your nodes are able to +resolve each other by name. + +## Security notes + +Configure the required `*_ingress_cidr` variables to restrict public access +to provisioned servers to your laptop (use /32 notation) +or your trusted network. The most important is `node_ingress_cidr`, +which restricts public access to the deployed DNS server and cluster +nodes' ephemeral port range. + +Note that the command ``curl https://api.ipify.org`` helps find the external +IP address of your box (the ansible admin node). + +There is also the `manage_packages` variable (defaults to True) you +may want to turn off in order to speed up the provisioning tasks. This may +be the case for development environments. When turned off, the servers will +be provisioned omitting the ``yum update`` command. This brings security +implications though, and is not recommended for production deployments. + +### DNS servers security options + +Aside from `node_ingress_cidr` restricting public access to in-stack DNS +servers, the following (bind/named specific) DNS security +options are available: + +    named_public_recursion: 'no' +    named_private_recursion: 'yes' + +External DNS servers, which are not included in the 'dns' hosts group, +are not managed. It is up to you to configure them. + +## Configure the OpenShift parameters + +Finally, you need to update the DNS entry in +`inventory/group_vars/OSEv3.yml` (look at +`openshift_master_default_subdomain`). + +In addition, this is the place where you can customise your OpenShift +installation, for example by specifying the authentication method. + +The full list of options is available in this sample inventory: + +https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example + +Note that in order to deploy OpenShift Origin, you should update the following +variables in `inventory/group_vars/OSEv3.yml` and `all.yml`: + +    deployment_type: origin +    openshift_deployment_type: "{{ deployment_type }}" + + +## Setting a custom entrypoint + +In order to set a custom entrypoint, update `openshift_master_cluster_public_hostname`: + +    openshift_master_cluster_public_hostname: api.openshift.example.com + +Note that an empty hostname does not work, so if your domain is `openshift.example.com`, +you cannot set this value to simply `openshift.example.com`. + +## Creating and using a Cinder volume for the OpenShift registry + +You can optionally have the playbooks create a Cinder volume and set +it up as the OpenShift hosted registry. 
+ +To do that, you need to specify the desired Cinder volume name and size in +Gigabytes in `inventory/group_vars/all.yml`: + +    cinder_hosted_registry_name: cinder-registry +    cinder_hosted_registry_size_gb: 10 + +With this, the playbooks will create the volume and set up its +filesystem. If there is an existing volume of the same name, we will +use it but keep the existing data on it. + +To use the volume for the registry, you must first configure it with +the OpenStack credentials by putting the following into `OSEv3.yml`: + +    openshift_cloudprovider_openstack_username: "{{ lookup('env','OS_USERNAME') }}" +    openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}" +    openshift_cloudprovider_openstack_auth_url: "{{ lookup('env','OS_AUTH_URL') }}" +    openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_TENANT_NAME') }}" + +This will use the credentials from your shell environment. If you want +to enter them explicitly, you can. You can also use credentials +different from the provisioning ones (say for quota or access control +reasons). + +**NOTE**: If you're testing this on [DevStack][devstack], you must +explicitly set your Keystone API version to v2 (e.g. +`OS_AUTH_URL=http://10.34.37.47/identity/v2.0`) instead of the default +value provided by `openrc`. You may also encounter the following issue +with Cinder: + +https://github.com/kubernetes/kubernetes/issues/50461 + +You can read the [OpenShift documentation on configuring +OpenStack][openstack] for more information. + +[devstack]: https://docs.openstack.org/devstack/latest/ +[openstack]: https://docs.openshift.org/latest/install_config/configuring_openstack.html + + +Next, we need to instruct OpenShift to use the Cinder volume for its +registry. Again in `OSEv3.yml`: + +    #openshift_hosted_registry_storage_kind: openstack +    #openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce'] +    #openshift_hosted_registry_storage_openstack_filesystem: xfs + +The filesystem value here will be used in the initial formatting of +the volume. + +If you're using the dynamic inventory, you must uncomment these two values as +well: + +    #openshift_hosted_registry_storage_openstack_volumeID: "{{ lookup('os_cinder', cinder_hosted_registry_name).id }}" +    #openshift_hosted_registry_storage_volume_size: "{{ cinder_hosted_registry_size_gb }}Gi" + +But note that they use the `os_cinder` lookup plugin we provide, so you must +tell Ansible where to find it either in `ansible.cfg` (the one we provide is +configured properly) or by exporting the +`ANSIBLE_LOOKUP_PLUGINS=openshift-ansible-contrib/lookup_plugins` environment +variable. + + + +## Use an existing Cinder volume for the OpenShift registry + +You can also use a pre-existing Cinder volume for the storage of your +OpenShift registry. + +To do that, you need to have a Cinder volume. You can create one by +running: + +    openstack volume create --size <volume size in gb> <volume name> + +The volume needs to have a file system created before you put it to +use. 
+ +As with the automatically-created volume, you have to set up the +OpenStack credentials in `inventory/group_vars/OSEv3.yml` as well as +registry values: + +    #openshift_hosted_registry_storage_kind: openstack +    #openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce'] +    #openshift_hosted_registry_storage_openstack_filesystem: xfs +    #openshift_hosted_registry_storage_openstack_volumeID: e0ba2d73-d2f9-4514-a3b2-a0ced507fa05 +    #openshift_hosted_registry_storage_volume_size: 10Gi + +Note the `openshift_hosted_registry_storage_openstack_volumeID` and +`openshift_hosted_registry_storage_volume_size` values: these need to +be added in addition to the previous variables. + +The **Cinder volume ID**, **filesystem** and **volume size** variables +must correspond to the values in your volume. The volume ID must be +the **UUID** of the Cinder volume, *not its name*. + +We can format the volume for you if you ask for it in +`inventory/group_vars/all.yml`: + +    prepare_and_format_registry_volume: true + +**NOTE:** doing so **will destroy any data that's currently on the volume**! + +You can also run the registry setup playbook directly: + +    ansible-playbook -i inventory playbooks/provisioning/openstack/prepare-and-format-cinder-volume.yaml + +(the provisioning phase must be completed first) + + + +## Configure static inventory and access via a bastion node + +Example inventory variables: + +    openstack_use_bastion: true +    bastion_ingress_cidr: "{{openstack_subnet_prefix}}.0/24" +    openstack_private_ssh_key: ~/.ssh/id_rsa +    openstack_inventory: static +    openstack_inventory_path: ../../../../inventory +    openstack_ssh_config_path: /tmp/ssh.config.openshift.ansible.openshift.example.com + +The `openstack_subnet_prefix` is the prefix of the OpenStack private network for your cluster. +The `bastion_ingress_cidr` defines the accepted range for SSH connections to nodes +in addition to the `ssh_ingress_cidr` (see the security notes above). + +The SSH config will be stored on the ansible control node at the +given path. Ansible uses it automatically. To access the cluster nodes with +that ssh config, use the `-F` option, e.g.: + +    ssh -F /tmp/ssh.config.openshift.ansible.openshift.example.com master-0.openshift.example.com echo OK + +Note that relative paths will not work for the `openstack_ssh_config_path`, but they +work for the `openstack_private_ssh_key` and `openstack_inventory_path`. In this +guide, the latter points to the current directory, where you run ansible commands +from. + +To verify node connectivity, use the command: + +    ansible -v -i inventory/hosts -m ping all + +If something is broken, double-check the inventory variables, paths and the +generated `<openstack_inventory_path>/hosts` and `openstack_ssh_config_path` files. + +The `inventory: dynamic` can be used instead to access cluster nodes directly via +floating IPs. In this mode you cannot use a bastion node and should specify +the dynamic inventory file in your ansible commands, like `-i openstack.py`. + +## Using Docker on the Ansible host + +If you don't want to worry about the dependencies, you can use the +[OpenStack Control Host image][control-host-image]. + +[control-host-image]: https://hub.docker.com/r/redhatcop/control-host-openstack/ + +It has all the dependencies installed, but you'll need to map your +code and credentials to it. Assuming your SSH keys live in `~/.ssh` +and everything else is in your current directory (i.e. 
`ansible.cfg`, +`keystonerc`, `inventory`, `openshift-ansible`, +`openshift-ansible-contrib`), this is how you run the deployment: + +    sudo docker run -it -v ~/.ssh:/mnt/.ssh:Z \ +        -v $PWD:/root/openshift:Z \ +        -v $PWD/keystonerc:/root/.config/openstack/keystonerc.sh:Z \ +        redhatcop/control-host-openstack bash + +(feel free to replace `$PWD` with an actual path to your inventory and +checkouts, but note that relative paths don't work) + +The first run may take a few minutes while the image is being +downloaded. After that, you'll be inside the container and you can run +the playbooks: + +    cd openshift +    ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/provision.yaml + + +### Run the playbook + +Assuming your OpenStack (Keystone) credentials are in the `keystonerc` file, +this is how you start the provisioning process from your Ansible control node: + +    . keystonerc +    ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/provision.yaml + +Note that you start here with an empty inventory. The static inventory will be populated +with data, so you can omit additional arguments for future ansible commands. + +If the bastion is enabled, the generated SSH config must be applied for Ansible. +Otherwise, it is included automatically by the previous step. In order to execute it +as a separate playbook, use the following command: + +    ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/post-provision-openstack.yml + +The first infra node then becomes a bastion node as well and proxies access +for future ansible commands. The post-provision step also configures Satellite +(if requested) and the DNS server, and ensures that other OpenShift requirements are met. + + +## Running Custom Post-Provision Actions + +A custom playbook can be run like this: + +``` +ansible-playbook --private-key ~/.ssh/openshift -i inventory/ openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/custom-playbook.yml +``` + +If you'd like to limit the run to one particular host, you can do so as follows: + +``` +ansible-playbook --private-key ~/.ssh/openshift -i inventory/ openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/custom-playbook.yml -l app-node-0.openshift.example.com +``` + +You can also create your own custom playbook. Here are a few examples: + +### Adding additional YUM repositories + +``` +--- +- hosts: app +  tasks: + +  # enable EPEL +  - name: Add repository +    yum_repository: +      name: epel +      description: EPEL YUM repo +      baseurl: https://download.fedoraproject.org/pub/epel/$releasever/$basearch/ +``` + +This example runs against the app nodes. The list of host group options includes: + +  - cluster_hosts (all hosts: app, infra, masters, dns, lb) +  - OSEv3 (app, infra, masters) +  - app +  - dns +  - masters +  - infra_hosts + +### Attaching additional RHN pools + +``` +--- +- hosts: cluster_hosts +  tasks: +  - name: Attach additional RHN pool +    become: true +    command: "/usr/bin/subscription-manager attach --pool=<pool ID>" +    register: attach_rhn_pool_result +    until: attach_rhn_pool_result.rc == 0 +    retries: 10 +    delay: 1 +``` + +This playbook runs against all cluster nodes. To help work around slow connectivity +problems, the task is retried 10 times in case of initial failure. +Note that in order for this example to work in your deployment, your servers must use the RHEL image.
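+
+If you would rather not hardcode the pool ID, a small variation of the same playbook
+(a sketch; `rhn_pool_id` is a hypothetical variable you would pass on the command line
+with `-e rhn_pool_id=<pool ID>`) could take it as a parameter. The bundled
+`add-rhn-pools.yml` playbook described below does something similar with a list of pools.
+
+```
+---
+- hosts: cluster_hosts
+  vars:
+    rhn_pool_id: ""
+  tasks:
+  - name: Attach additional RHN pool
+    become: true
+    command: "/usr/bin/subscription-manager attach --pool={{ rhn_pool_id }}"
+    register: attach_rhn_pool_result
+    until: attach_rhn_pool_result.rc == 0
+    retries: 10
+    delay: 1
+```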
+ +### Adding extra Docker registry URLs + +This playbook is located in the [custom-actions](https://github.com/openshift/openshift-ansible-contrib/tree/master/playbooks/provisioning/openstack/custom-actions) directory. + +It adds the URLs passed as arguments to the Docker registries configuration file (`/etc/containers/registries.conf`). +Going into more detail, the configuration file (which is in YAML format) is loaded into an ansible variable +([lines 27-30](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml#L27-L30)) +and in its structure, the `registries` and `insecure_registries` sections are expanded with the newly added items +([lines 56-76](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml#L56-L76)). +The new content is then saved back into the original file +([lines 78-82](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml#L78-L82)) +and docker is restarted. + +Example usage: +``` +ansible-playbook -i <inventory> openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml --extra-vars '{"registries": "reg1", "insecure_registries": ["ins_reg1","ins_reg2"]}' +``` + +### Adding extra CAs to the trust chain + +This playbook is also located in the [custom-actions](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions) directory. +It copies the passed CA files to the trust chain location and updates the trust chain on each selected host. + +Example usage: +``` +ansible-playbook -i <inventory> openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/add-cas.yml --extra-vars '{"ca_files": [<absolute path to ca1 file>, <absolute path to ca2 file>]}' +``` + +Please consider contributing your custom playbook back to openshift-ansible-contrib! + +A library of custom post-provision actions exists in `openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions`. Playbooks include: + +* [add-yum-repos.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-yum-repos.yml): adds a list of custom yum repositories to every node in the cluster +* [add-rhn-pools.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-rhn-pools.yml): attaches a list of additional RHN pools to every node in the cluster +* [add-docker-registry.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml): adds a list of docker registries to the docker configuration on every node in the cluster +* [add-cas.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-cas.yml): adds a list of CAs to the trust chain on every node in the cluster + + +## Install OpenShift + +Once provisioning succeeds, you can install OpenShift by running: + +    ansible-playbook openshift-ansible/playbooks/byo/config.yml + +## Access UI + +The OpenShift UI may be accessed via the first master node's FQDN on port 8443.
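+
+For example, with the sample domain used elsewhere in this guide, the UI would be
+reachable at:
+
+    https://master-0.openshift.example.com:8443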
+ +When using a bastion, you may want to create an SSH tunnel from your control node +so you can access the UI at `https://localhost:8443`. Enable it with this inventory variable: + +    openshift_ui_ssh_tunnel: True + +Note that this requires sudo rights on the Ansible control node and an absolute path +for the `openstack_private_ssh_key`. You should also update the control node's +`/etc/hosts`: + +    127.0.0.1 master-0.openshift.example.com + +In order to access the UI, an ssh-tunnel service will be created and started on the +control node. Make sure to remove these changes and the service manually when they are no +longer needed. + +## Scale Deployment up/down + +### Scaling up + +One can scale up the number of application nodes by executing the ansible playbook +`openshift-ansible-contrib/playbooks/provisioning/openstack/scale-up.yaml`. +This process can be done even if there is currently no deployment available. +The `increment_by` variable is used to specify by how much the deployment should +be scaled up (if none exists, it serves as a target number of application nodes). +The path to the `openshift-ansible` directory can be customised with the `openshift_ansible_dir` +variable. Its value must be an absolute path to `openshift-ansible` and must not +end with a trailing '/'. + +Usage: + +``` +ansible-playbook -i <path to inventory> openshift-ansible-contrib/playbooks/provisioning/openstack/scale-up.yaml [-e increment_by=<number>] [-e openshift_ansible_dir=<path to openshift-ansible>] +``` + +Note: This playbook works only without a bastion node (`openstack_use_bastion: False`). diff --git a/playbooks/provisioning/openstack/sample-inventory/ansible.cfg b/playbooks/provisioning/openstack/ansible.cfg index 81d8ae10c..a21f023ea 100644 --- a/playbooks/provisioning/openstack/sample-inventory/ansible.cfg +++ b/playbooks/provisioning/openstack/ansible.cfg @@ -1,6 +1,7 @@  # config file for ansible -- http://ansible.com/  # ==============================================  [defaults] +ansible_user = openshift  forks = 50  # work around privilege escalation timeouts in ansible  timeout = 30 @@ -14,6 +15,8 @@ fact_caching_connection = .ansible/cached_facts  fact_caching_timeout = 900  stdout_callback = skippy  callback_whitelist = profile_tasks +lookup_plugins = openshift-ansible-contrib/lookup_plugins +  [ssh_connection]  ssh_args = -o ControlMaster=auto -o ControlPersist=900s -o GSSAPIAuthentication=no diff --git a/playbooks/provisioning/openstack/custom-actions/add-cas.yml b/playbooks/provisioning/openstack/custom-actions/add-cas.yml new file mode 100644 index 000000000..b2c195f91 --- /dev/null +++ b/playbooks/provisioning/openstack/custom-actions/add-cas.yml @@ -0,0 +1,13 @@ +--- +- hosts: cluster_hosts +  become: true +  vars: +    ca_files: [] +  tasks: +  - name: Copy CAs to the trusted CAs location +    with_items: "{{ ca_files }}" +    copy: +      src: "{{ item }}" +      dest: /etc/pki/ca-trust/source/anchors/ +  - name: Update trusted CAs +    shell: 'update-ca-trust enable && update-ca-trust extract' diff --git a/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml b/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml new file mode 100644 index 000000000..e118a71dc --- /dev/null +++ b/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml @@ -0,0 +1,90 @@ +--- +- hosts: OSEv3 +  become: true +  vars: +    registries: [] +    insecure_registries: [] + +  tasks: +  - name: Check if docker is even installed +    command: docker + +  - name: Install 
atomic-registries package +    yum: +      name: atomic-registries +      state: latest + +  - name: Get registry configuration file +    register: file_result +    stat: +      path: /etc/containers/registries.conf + +  - name: Check if it exists +    assert: +      that: 'file_result.stat.exists' +      msg: "Configuration file does not exist." + +  - name: Load configuration file +    shell: cat /etc/containers/registries.conf +    register: file_content + +  - name: Store file content into a variable +    set_fact: +      docker_conf: "{{ file_content.stdout | from_yaml }}" + +  - name: Make sure that docker file content is a dictionary +    when: '(docker_conf is string) and (not docker_conf)' +    set_fact: +      docker_conf: {} + +  - name: Make sure that registries is a list +    when: 'registries is string' +    set_fact: +      registries_list: [ "{{ registries }}" ] + +  - name: Make sure that insecure_registries is a list +    when: 'insecure_registries is string' +    set_fact: +      insecure_registries_list: [ "{{ insecure_registries }}" ] + +  - name: Set default values if there are no registries defined +    set_fact: +      docker_conf_registries: "{{ [] if docker_conf['registries'] is not defined else docker_conf['registries'] }}" +      docker_conf_insecure_registries: "{{ [] if docker_conf['insecure_registries'] is not defined else docker_conf['insecure_registries'] }}" + +  - name: Add other registries +    when: 'registries_list is not defined' +    register: registries_merge_result +    set_fact: +      docker_conf: "{{ docker_conf | combine({'registries': (docker_conf_registries + registries) | unique}, recursive=True) }}" + +  - name: Add other registries (if registries had to be converted) +    when: 'registries_merge_result|skipped' +    set_fact: +      docker_conf: "{{ docker_conf | combine({'registries': (docker_conf_registries + registries_list) | unique}, recursive=True) }}" + +  - name: Add insecure registries +    when: 'insecure_registries_list is not defined' +    register: insecure_registries_merge_result +    set_fact: +      docker_conf: "{{ docker_conf | combine({'insecure_registries': (docker_conf_insecure_registries + insecure_registries) | unique }, recursive=True) }}" + +  - name: Add insecure registries (if insecure_registries had to be converted) +    when: 'insecure_registries_merge_result|skipped' +    set_fact: +      docker_conf: "{{ docker_conf | combine({'insecure_registries': (docker_conf_insecure_registries + insecure_registries_list) | unique }, recursive=True) }}" + +  - name: Load variable back to file +    copy: +      content: "{{ docker_conf | to_yaml }}" +      dest: /etc/containers/registries.conf + +  - name: Restart registries service +    service: +      name: registries +      state: restarted + +  - name: Restart docker +    service: +      name: docker +      state: restarted diff --git a/playbooks/provisioning/openstack/custom-actions/add-rhn-pools.yml b/playbooks/provisioning/openstack/custom-actions/add-rhn-pools.yml new file mode 100644 index 000000000..d17c1e335 --- /dev/null +++ b/playbooks/provisioning/openstack/custom-actions/add-rhn-pools.yml @@ -0,0 +1,13 @@ +--- +- hosts: cluster_hosts +  vars: +    rhn_pools: [] +  tasks: +  - name: Attach additional RHN pools +    become: true +    with_items: "{{ rhn_pools }}" +    command: "/usr/bin/subscription-manager attach --pool={{ item }}" +    register: attach_rhn_pools_result +    until: attach_rhn_pools_result.rc == 0 +    retries: 10 +    delay: 1 diff --git 
a/playbooks/provisioning/openstack/galaxy-requirements.yaml b/playbooks/provisioning/openstack/galaxy-requirements.yaml index 93dd14ec2..1d745dcc3 100644 --- a/playbooks/provisioning/openstack/galaxy-requirements.yaml +++ b/playbooks/provisioning/openstack/galaxy-requirements.yaml @@ -4,3 +4,7 @@  # From 'infra-ansible'  - src: https://github.com/redhat-cop/infra-ansible    version: master + +# From 'openshift-ansible' +- src: https://github.com/openshift/openshift-ansible +  version: master diff --git a/playbooks/provisioning/openstack/net_vars_check.yaml b/playbooks/provisioning/openstack/net_vars_check.yaml new file mode 100644 index 000000000..68afde415 --- /dev/null +++ b/playbooks/provisioning/openstack/net_vars_check.yaml @@ -0,0 +1,14 @@ +--- +- name: Check the provider network configuration +  fail: +    msg: "Flannel SDN requires a dedicated containers data network and can not work over a provider network" +  when: +    - openstack_provider_network_name is defined +    - openstack_private_data_network_name is defined + +- name: Check the flannel network configuration +  fail: +    msg: "A dedicated containers data network is only supported with Flannel SDN" +  when: +    - openstack_private_data_network_name is defined +    - not openshift_use_flannel|default(False)|bool diff --git a/playbooks/provisioning/openstack/post-install.yml b/playbooks/provisioning/openstack/post-install.yml new file mode 100644 index 000000000..417813e2a --- /dev/null +++ b/playbooks/provisioning/openstack/post-install.yml @@ -0,0 +1,57 @@ +--- +- hosts: OSEv3 +  gather_facts: False +  become: True +  tasks: +    - name: Save iptables rules to a backup file +      when: openshift_use_flannel|default(False)|bool +      shell: iptables-save > /etc/sysconfig/iptables.orig-$(date +%Y%m%d%H%M%S) + +# Enable iptables service on app nodes to persist custom rules (flannel SDN) +# FIXME(bogdando) w/a https://bugzilla.redhat.com/show_bug.cgi?id=1490820 +- hosts: app +  gather_facts: False +  become: True +  vars: +    os_firewall_allow: +      - service: dnsmasq tcp +        port: 53/tcp +      - service: dnsmasq udp +        port: 53/udp +  tasks: +    - when: openshift_use_flannel|default(False)|bool +      block: +        - include_role: +            name: openshift-ansible/roles/os_firewall +        - include_role: +            name: openshift-ansible/roles/lib_os_firewall +        - name: set allow rules for dnsmasq +          os_firewall_manage_iptables: +            name: "{{ item.service }}" +            action: add +            protocol: "{{ item.port.split('/')[1] }}" +            port: "{{ item.port.split('/')[0] }}" +          with_items: "{{ os_firewall_allow }}" + +- hosts: OSEv3 +  gather_facts: False +  become: True +  tasks: +    - name: Apply post-install iptables hacks for Flannel SDN (the best effort) +      when: openshift_use_flannel|default(False)|bool +      block: +        - name: set allow/masquerade rules for for flannel/docker +          shell: >- +            (iptables-save | grep -q custom-flannel-docker-1) || +            iptables -A DOCKER -w +            -p all -j ACCEPT +            -m comment --comment "custom-flannel-docker-1"; +            (iptables-save | grep -q custom-flannel-docker-2) || +            iptables -t nat -A POSTROUTING -w +            -o {{flannel_interface|default('eth1')}} +            -m comment --comment "custom-flannel-docker-2" +            -j MASQUERADE + +        # NOTE(bogdando) the rules will not be restored, when iptables service unit is disabled & 
masked +        - name: Persist in-memory iptables rules (w/o dynamic KUBE rules) +          shell: iptables-save | grep -v KUBE > /etc/sysconfig/iptables diff --git a/playbooks/provisioning/openstack/post-provision-openstack.yml b/playbooks/provisioning/openstack/post-provision-openstack.yml index a80e8d829..e460fbf12 100644 --- a/playbooks/provisioning/openstack/post-provision-openstack.yml +++ b/playbooks/provisioning/openstack/post-provision-openstack.yml @@ -76,6 +76,16 @@    hosts: OSEv3    gather_facts: true    become: true +  vars: +    interface: "{{ flannel_interface|default('eth1') }}" +    interface_file: /etc/sysconfig/network-scripts/ifcfg-{{ interface }} +    interface_config: +      DEVICE: "{{ interface }}" +      TYPE: Ethernet +      BOOTPROTO: dhcp +      ONBOOT: 'yes' +      DEFTROUTE: 'no' +      PEERDNS: 'no'    pre_tasks:      - name: "Include DNS configuration to ensure proper name resolution"        lineinfile: @@ -83,6 +93,21 @@          dest: /etc/sysconfig/network          regexp: "IP4_NAMESERVERS={{ hostvars['localhost'].private_dns_server }}"          line: "IP4_NAMESERVERS={{ hostvars['localhost'].private_dns_server }}" +    - name: "Configure the flannel interface options" +      when: openshift_use_flannel|default(False)|bool +      block: +        - file: +            dest: "{{ interface_file }}" +            state: touch +            mode: 0644 +            owner: root +            group: root +        - lineinfile: +            state: present +            dest: "{{ interface_file }}" +            regexp: "{{ item.key }}=" +            line: "{{ item.key }}={{ item.value }}" +          with_dict: "{{ interface_config }}"    roles:      - node-network-manager diff --git a/playbooks/provisioning/openstack/prerequisites.yml b/playbooks/provisioning/openstack/prerequisites.yml index f2f720f8b..11a31411e 100644 --- a/playbooks/provisioning/openstack/prerequisites.yml +++ b/playbooks/provisioning/openstack/prerequisites.yml @@ -2,6 +2,9 @@  - hosts: localhost    tasks: +  # Sanity check of inventory variables +  - include: net_vars_check.yaml +    # Check ansible    - name: Check Ansible version      assert: diff --git a/playbooks/provisioning/openstack/sample-inventory/group_vars/OSEv3.yml b/playbooks/provisioning/openstack/sample-inventory/group_vars/OSEv3.yml index 7d7683c62..949a323a7 100644 --- a/playbooks/provisioning/openstack/sample-inventory/group_vars/OSEv3.yml +++ b/playbooks/provisioning/openstack/sample-inventory/group_vars/OSEv3.yml @@ -27,9 +27,14 @@ openshift_hosted_registry_wait: True  #openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce']  #openshift_hosted_registry_storage_openstack_filesystem: xfs -## Configure this if you're attaching a Cinder volume you've set up. +## NOTE(shadower): This won't work until the openshift-ansible issue #5657 is fixed: +## https://github.com/openshift/openshift-ansible/issues/5657  ## If you're using the `cinder_hosted_registry_name` option from -## `all.yml`, this will be configured automaticaly. 
+## `all.yml`, uncomment these lines: +#openshift_hosted_registry_storage_openstack_volumeID: "{{ lookup('os_cinder', cinder_hosted_registry_name).id }}" +#openshift_hosted_registry_storage_volume_size: "{{ cinder_hosted_registry_size_gb }}Gi" + +## If you're using a Cinder volume you've set up yourself, uncomment these lines:  #openshift_hosted_registry_storage_openstack_volumeID: e0ba2d73-d2f9-4514-a3b2-a0ced507fa05  #openshift_hosted_registry_storage_volume_size: 10Gi @@ -46,3 +51,9 @@ openshift_override_hostname_check: true  # NOTE(shadower): Always switch to root on the OSEv3 nodes.  # openshift-ansible requires an explicit `become`.  ansible_become: true + +# # Flannel networking +#osm_cluster_network_cidr: 10.128.0.0/14 +#openshift_use_openshift_sdn: false +#openshift_use_flannel: true +#flannel_interface: eth1 diff --git a/playbooks/provisioning/openstack/sample-inventory/group_vars/all.yml b/playbooks/provisioning/openstack/sample-inventory/group_vars/all.yml index 12f64f401..83289307d 100644 --- a/playbooks/provisioning/openstack/sample-inventory/group_vars/all.yml +++ b/playbooks/provisioning/openstack/sample-inventory/group_vars/all.yml @@ -15,6 +15,10 @@ public_dns_nameservers: []  openstack_ssh_public_key: "openshift"  openstack_external_network_name: "public"  #openstack_private_network_name:  "openshift-ansible-{{ stack_name }}-net" +# # A dedicated Neutron network name for containers data network +# # Configures the data network to be separated from openstack_private_network_name +# # NOTE: this is only supported with Flannel SDN yet +#openstack_private_data_network_name: "openshift-ansible-{{ stack_name }}-data-net"  ## If you want to use a provider network, set its name here.  ## NOTE: the `openstack_external_network_name` and @@ -62,6 +66,11 @@ openstack_default_flavor: "m1.medium"  #docker_lb_volume_size: "5"  docker_volume_size: "15" +## Specify server group policies for master and infra nodes. Nova must be configured to +## enable these policies. 'anti-affinity' will ensure that each VM is launched on a +## different physical host. +#openstack_master_server_group_policies: [anti-affinity] +#openstack_infra_server_group_policies: [anti-affinity]  ## Create a Cinder volume and use it for the OpenShift registry.  ## NOTE: the openstack credentials and hosted registry options must be set in OSEv3.yml! diff --git a/playbooks/provisioning/openstack/sample-inventory/inventory.py b/playbooks/provisioning/openstack/sample-inventory/inventory.py new file mode 100755 index 000000000..6a1b74b3d --- /dev/null +++ b/playbooks/provisioning/openstack/sample-inventory/inventory.py @@ -0,0 +1,88 @@ +#!/usr/bin/env python + +from __future__ import print_function + +import json + +import shade + + +if __name__ == '__main__': +    cloud = shade.openstack_cloud() + +    inventory = {} + +    # TODO(shadower): filter the servers based on the `OPENSHIFT_CLUSTER` +    # environment variable. 
+    cluster_hosts = [ +        server for server in cloud.list_servers() +        if 'metadata' in server and 'clusterid' in server.metadata] + +    masters = [server.name for server in cluster_hosts +               if server.metadata['host-type'] == 'master'] + +    etcd = [server.name for server in cluster_hosts +            if server.metadata['host-type'] == 'etcd'] +    if not etcd: +        etcd = masters + +    infra_hosts = [server.name for server in cluster_hosts +                   if server.metadata['host-type'] == 'node' and +                   server.metadata['sub-host-type'] == 'infra'] + +    app = [server.name for server in cluster_hosts +           if server.metadata['host-type'] == 'node' and +           server.metadata['sub-host-type'] == 'app'] + +    nodes = list(set(masters + infra_hosts + app)) + +    dns = [server.name for server in cluster_hosts +           if server.metadata['host-type'] == 'dns'] + +    lb = [server.name for server in cluster_hosts +          if server.metadata['host-type'] == 'lb'] + +    osev3 = list(set(nodes + etcd + lb)) + +    groups = [server.metadata.group for server in cluster_hosts +              if 'group' in server.metadata] + +    inventory['cluster_hosts'] = {'hosts': [s.name for s in cluster_hosts]} +    inventory['OSEv3'] = {'hosts': osev3} +    inventory['masters'] = {'hosts': masters} +    inventory['etcd'] = {'hosts': etcd} +    inventory['nodes'] = {'hosts': nodes} +    inventory['infra_hosts'] = {'hosts': infra_hosts} +    inventory['app'] = {'hosts': app} +    inventory['dns'] = {'hosts': dns} +    inventory['lb'] = {'hosts': lb} + +    for server in cluster_hosts: +        if 'group' in server.metadata: +            group = server.metadata.group +            if group not in inventory: +                inventory[group] = {'hosts': []} +            inventory[group]['hosts'].append(server.name) + +    inventory['_meta'] = {'hostvars': {}} + +    for server in cluster_hosts: +        ssh_ip_address = server.public_v4 or server.private_v4 +        vars = { +            'ansible_host': ssh_ip_address +        } + +        public_v4 = server.public_v4 or server.private_v4 +        if public_v4: +            vars['public_v4'] = public_v4 +        # TODO(shadower): what about multiple networks? +        if server.private_v4: +            vars['private_v4'] = server.private_v4 + +        node_labels = server.metadata.get('node_labels') +        if node_labels: +            vars['openshift_node_labels'] = node_labels + +        inventory['_meta']['hostvars'][server.name] = vars + +    print(json.dumps(inventory, indent=4, sort_keys=True)) diff --git a/playbooks/provisioning/openstack/stack_params.yaml b/playbooks/provisioning/openstack/stack_params.yaml index 484c06889..a4da31bfe 100644 --- a/playbooks/provisioning/openstack/stack_params.yaml +++ b/playbooks/provisioning/openstack/stack_params.yaml @@ -36,6 +36,8 @@ num_masters: "{{ openstack_num_masters }}"  num_nodes: "{{ openstack_num_nodes }}"  num_infra: "{{ openstack_num_infra }}"  num_dns: "{{ openstack_num_dns | default(1) }}" +master_server_group_policies: "{{ openstack_master_server_group_policies | default([]) | to_yaml }}" +infra_server_group_policies: "{{ openstack_infra_server_group_policies | default([]) | to_yaml }}"  master_volume_size: "{{ docker_master_volume_size | default(docker_volume_size) }}"  infra_volume_size: "{{ docker_infra_volume_size | default(docker_volume_size) }}"  node_volume_size: "{{ docker_node_volume_size | default(docker_volume_size) }}"  | 
