73 files changed, 1039 insertions, 172 deletions
diff --git a/.tito/packages/openshift-ansible b/.tito/packages/openshift-ansible index 7cc242470..6f00b8437 100644 --- a/.tito/packages/openshift-ansible +++ b/.tito/packages/openshift-ansible @@ -1 +1 @@ -3.6.89.2-1 ./ +3.6.100-1 ./ @@ -26,16 +26,18 @@ tito build --rpm ## Build an openshift-ansible container image +**NOTE**: the examples below use "openshift-ansible" as the name of the image to build for simplicity and illustration purposes, and also to prevent potential confusion between custom built images and official releases. See [README_CONTAINER_IMAGE.md](README_CONTAINER_IMAGE.md) for details about the released container images for openshift-ansible. + To build a container image of `openshift-ansible` using standalone **Docker**: cd openshift-ansible - docker build -f images/installer/Dockerfile -t openshift/openshift-ansible . + docker build -f images/installer/Dockerfile -t openshift-ansible . ### Building on OpenShift To build an openshift-ansible image using an **OpenShift** [build and image stream](https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html) the straightforward command would be: - oc new-build docker.io/aweiteka/playbook2image~https://github.com/openshift/openshift-ansible + oc new-build registry.centos.org/openshift/playbook2image~https://github.com/openshift/openshift-ansible However: because the `Dockerfile` for this repository is not in the top level directory, and because we can't change the build context to the `images/installer` path as it would cause the build to fail, the `oc new-app` command above will create a build configuration using the *source to image* strategy, which is the default approach of the [playbook2image](https://github.com/openshift/playbook2image) base image. This does build an image successfully, but unfortunately the resulting image will be missing some customizations that are handled by the [Dockerfile](images/installer/Dockerfile) in this repo. @@ -48,7 +50,7 @@ At the time of this writing there is no straightforward option to [set the docke ``` curl -s https://raw.githubusercontent.com/openshift/openshift-ansible/master/images/installer/Dockerfile | oc new-build -D - \ - --docker-image=docker.io/aweiteka/playbook2image \ + --docker-image=registry.centos.org/openshift/playbook2image \ https://github.com/openshift/openshift-ansible ``` @@ -76,5 +78,5 @@ Once the container image is built, we can import it into the OSTree storage: ``` -atomic pull --storage ostree docker:openshift/openshift-ansible:latest +atomic pull --storage ostree docker:openshift-ansible:latest ``` diff --git a/README_CONTAINER_IMAGE.md b/README_CONTAINER_IMAGE.md index 0d7f7f4af..cf3b432df 100644 --- a/README_CONTAINER_IMAGE.md +++ b/README_CONTAINER_IMAGE.md @@ -6,6 +6,12 @@ The image is designed to **run as a non-root user**. The container's UID is mapp **Note**: at this time there are known issues that prevent to run this image for installation/upgrade purposes (i.e. run one of the config/upgrade playbooks) from within one of the hosts that is also an installation target at the same time: if the playbook you want to run attempts to manage the docker daemon and restart it (like install/upgrade playbooks do) this would kill the container itself during its operation. +## A note about the name of the image + +The released container images for openshift-ansible follow the naming scheme determined by OpenShift's `imageConfig.format` configuration option. 
This means that the released image name is `openshift/origin-ansible` instead of `openshift/openshift-ansible`. + +This provides consistency with other images used by the platform and it's also a requirement for some use cases like using the image from [`oc cluster up`](https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md). + ## Usage The `playbook2image` base image provides several options to control the behaviour of the containers. For more details on these options see the [playbook2image](https://github.com/openshift/playbook2image) documentation. @@ -26,7 +32,7 @@ Here is an example of how to run a containerized `openshift-ansible` playbook th -e INVENTORY_FILE=/tmp/inventory \ -e PLAYBOOK_FILE=playbooks/byo/openshift-checks/certificate_expiry/default.yaml \ -e OPTS="-v" -t \ - openshift/openshift-ansible + openshift/origin-ansible You might want to adjust some of the options in the example to match your environment and/or preferences. For example: you might want to create a separate directory on the host where you'll copy the ssh key and inventory files prior to invocation to avoid unwanted SELinux re-labeling of the original files or paths (see below). @@ -46,7 +52,7 @@ Here is a detailed explanation of the options used in the command above: Further usage examples are available in the [examples directory](examples/) with samples of how to use the image from within OpenShift. -Additional usage information for images built from `playbook2image` like this one can be found in the [playbook2image examples](https://github.com/aweiteka/playbook2image/tree/master/examples). +Additional usage information for images built from `playbook2image` like this one can be found in the [playbook2image examples](https://github.com/openshift/playbook2image/tree/master/examples). ## Running openshift-ansible as a System Container @@ -59,8 +65,8 @@ If the inventory file needs additional files then it can use the path `/var/lib/ Run the ansible system container: ```sh -atomic install --system --set INVENTORY_FILE=$(pwd)/inventory.origin openshift/openshift-ansible -systemctl start openshift-ansible +atomic install --system --set INVENTORY_FILE=$(pwd)/inventory.origin openshift/origin-ansible +systemctl start origin-ansible ``` The `INVENTORY_FILE` variable says to the installer what inventory file on the host will be bind mounted inside the container. In the example above, a file called `inventory.origin` in the current directory is used as the inventory file for the installer. @@ -68,5 +74,5 @@ The `INVENTORY_FILE` variable says to the installer what inventory file on the h And to finally cleanup the container: ``` -atomic uninstall openshift-ansible +atomic uninstall origin-ansible ``` diff --git a/ansible.cfg b/ansible.cfg index 034733684..0c74d63da 100644 --- a/ansible.cfg +++ b/ansible.cfg @@ -14,6 +14,7 @@ callback_plugins = callback_plugins/ forks = 20 host_key_checking = False retry_files_enabled = False +retry_files_save_path = ~/ansible-installer-retries nocows = True # Uncomment to use the provided BYO inventory diff --git a/examples/certificate-check-upload.yaml b/examples/certificate-check-upload.yaml index 8b560447f..1794cb096 100644 --- a/examples/certificate-check-upload.yaml +++ b/examples/certificate-check-upload.yaml @@ -4,10 +4,10 @@ # The generated reports are uploaded to a location in the master # hosts, using the playbook 'easy-mode-upload.yaml'. # -# This example uses the openshift/openshift-ansible container image. 
+# This example uses the openshift/origin-ansible container image. # (see README_CONTAINER_IMAGE.md in the top level dir for more details). # -# The following objects are xpected to be configured before the creation +# The following objects are expected to be configured before the creation # of this Job: # - A ConfigMap named 'inventory' with a key named 'hosts' that # contains the the Ansible inventory file @@ -28,7 +28,7 @@ spec: spec: containers: - name: openshift-ansible - image: openshift/openshift-ansible + image: openshift/origin-ansible env: - name: PLAYBOOK_FILE value: playbooks/certificate_expiry/easy-mode-upload.yaml diff --git a/examples/certificate-check-volume.yaml b/examples/certificate-check-volume.yaml index f6613bcd8..dd0a89c8e 100644 --- a/examples/certificate-check-volume.yaml +++ b/examples/certificate-check-volume.yaml @@ -4,10 +4,10 @@ # The generated reports are stored in a Persistent Volume using # the playbook 'html_and_json_timestamp.yaml'. # -# This example uses the openshift/openshift-ansible container image. +# This example uses the openshift/origin-ansible container image. # (see README_CONTAINER_IMAGE.md in the top level dir for more details). # -# The following objects are xpected to be configured before the creation +# The following objects are expected to be configured before the creation # of this Job: # - A ConfigMap named 'inventory' with a key named 'hosts' that # contains the the Ansible inventory file @@ -30,7 +30,7 @@ spec: spec: containers: - name: openshift-ansible - image: openshift/openshift-ansible + image: openshift/origin-ansible env: - name: PLAYBOOK_FILE value: playbooks/certificate_expiry/html_and_json_timestamp.yaml diff --git a/examples/scheduled-certcheck-upload.yaml b/examples/scheduled-certcheck-upload.yaml index b0a97361b..05890a357 100644 --- a/examples/scheduled-certcheck-upload.yaml +++ b/examples/scheduled-certcheck-upload.yaml @@ -28,7 +28,7 @@ spec: spec: containers: - name: openshift-ansible - image: openshift/openshift-ansible + image: openshift/origin-ansible env: - name: PLAYBOOK_FILE value: playbooks/certificate_expiry/easy-mode-upload.yaml diff --git a/examples/scheduled-certcheck-volume.yaml b/examples/scheduled-certcheck-volume.yaml index 74cdc9e7f..2f26e8809 100644 --- a/examples/scheduled-certcheck-volume.yaml +++ b/examples/scheduled-certcheck-volume.yaml @@ -28,7 +28,7 @@ spec: spec: containers: - name: openshift-ansible - image: openshift/openshift-ansible + image: openshift/origin-ansible env: - name: PLAYBOOK_FILE value: playbooks/certificate_expiry/html_and_json_timestamp.yaml diff --git a/hack/build-images.sh b/hack/build-images.sh index 3e9896caa..ce421178f 100755 --- a/hack/build-images.sh +++ b/hack/build-images.sh @@ -7,7 +7,7 @@ set -o pipefail STARTTIME=$(date +%s) source_root=$(dirname "${0}")/.. -prefix="openshift/openshift-ansible" +prefix="openshift/origin-ansible" version="latest" verbose=false options="-f images/installer/Dockerfile" @@ -44,7 +44,7 @@ if [ "$help" = true ]; then echo "Options: " echo " --prefix=PREFIX" echo " The prefix to use for the image names." - echo " default: openshift/openshift-ansible" + echo " default: openshift/origin-ansible" echo echo " --version=VERSION" echo " The version used to tag the image" diff --git a/hack/push-release.sh b/hack/push-release.sh index 8639143af..131ed83ca 100755 --- a/hack/push-release.sh +++ b/hack/push-release.sh @@ -12,7 +12,7 @@ set -o pipefail STARTTIME=$(date +%s) OS_ROOT=$(dirname "${BASH_SOURCE}")/.. 
-PREFIX="${PREFIX:-openshift/openshift-ansible}" +PREFIX="${PREFIX:-openshift/origin-ansible}" # Go to the top of the tree. cd "${OS_ROOT}" diff --git a/images/installer/Dockerfile b/images/installer/Dockerfile index f6af018ca..915dfe377 100644 --- a/images/installer/Dockerfile +++ b/images/installer/Dockerfile @@ -1,11 +1,11 @@ # Using playbook2image as a base -# See https://github.com/aweiteka/playbook2image for details on the image +# See https://github.com/openshift/playbook2image for details on the image # including documentation for the settings/env vars referenced below -FROM docker.io/aweiteka/playbook2image:latest +FROM registry.centos.org/openshift/playbook2image:latest MAINTAINER OpenShift Team <dev@lists.openshift.redhat.com> -LABEL name="openshift-ansible" \ +LABEL name="openshift/origin-ansible" \ summary="OpenShift's installation and configuration tool" \ description="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster" \ url="https://github.com/openshift/openshift-ansible" \ @@ -22,7 +22,7 @@ USER root # configurations for the two images. RUN mkdir -p /usr/share/ansible/ && ln -s /opt/app-root/src /usr/share/ansible/openshift-ansible -RUN INSTALL_PKGS="skopeo" && \ +RUN INSTALL_PKGS="skopeo openssl java-1.8.0-openjdk-headless httpd-tools" && \ yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \ rpm -V $INSTALL_PKGS && \ yum clean all diff --git a/images/installer/Dockerfile.rhel7 b/images/installer/Dockerfile.rhel7 index 00841e660..9d7eeec24 100644 --- a/images/installer/Dockerfile.rhel7 +++ b/images/installer/Dockerfile.rhel7 @@ -2,7 +2,7 @@ FROM openshift3/playbook2image MAINTAINER OpenShift Team <dev@lists.openshift.redhat.com> -LABEL name="openshift3/openshift-ansible" \ +LABEL name="openshift3/ose-ansible" \ summary="OpenShift's installation and configuration tool" \ description="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster" \ url="https://github.com/openshift/openshift-ansible" \ diff --git a/images/installer/system-container/root/exports/config.json.template b/images/installer/system-container/root/exports/config.json.template index 383e3696e..397ac941a 100644 --- a/images/installer/system-container/root/exports/config.json.template +++ b/images/installer/system-container/root/exports/config.json.template @@ -102,7 +102,7 @@ }, { "type": "bind", - "source": "$SSH_ROOT", + "source": "$HOME_ROOT/.ssh", "destination": "/opt/app-root/src/.ssh", "options": [ "bind", @@ -112,8 +112,8 @@ }, { "type": "bind", - "source": "$SSH_ROOT", - "destination": "/root/.ssh", + "source": "$HOME_ROOT", + "destination": "/root", "options": [ "bind", "rw", @@ -171,6 +171,16 @@ ] }, { + "destination": "/etc/resolv.conf", + "type": "bind", + "source": "/etc/resolv.conf", + "options": [ + "ro", + "rbind", + "rprivate" + ] + }, + { "destination": "/sys/fs/cgroup", "type": "cgroup", "source": "cgroup", diff --git a/images/installer/system-container/root/exports/manifest.json b/images/installer/system-container/root/exports/manifest.json index 1db845965..f735494d4 100644 --- a/images/installer/system-container/root/exports/manifest.json +++ b/images/installer/system-container/root/exports/manifest.json @@ -5,7 +5,7 @@ "VAR_LIB_OPENSHIFT_INSTALLER" : "/var/lib/openshift-installer", "VAR_LOG_OPENSHIFT_LOG": "/var/log/ansible.log", "PLAYBOOK_FILE": "/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml", - "SSH_ROOT": "/root/.ssh", 
+ "HOME_ROOT": "/root", "INVENTORY_FILE": "/dev/null" } } diff --git a/openshift-ansible.spec b/openshift-ansible.spec index 4ea13e15c..914cf84ae 100644 --- a/openshift-ansible.spec +++ b/openshift-ansible.spec @@ -9,7 +9,7 @@ %global __requires_exclude ^/usr/bin/ansible-playbook$ Name: openshift-ansible -Version: 3.6.89.2 +Version: 3.6.100 Release: 1%{?dist} Summary: Openshift and Atomic Enterprise Ansible License: ASL 2.0 @@ -280,6 +280,63 @@ Atomic OpenShift Utilities includes %changelog +* Tue Jun 13 2017 Jenkins CD Merge Bot <smunilla@redhat.com> 3.6.100-1 +- Change default key for gce (hekumar@redhat.com) +- set etcd working directory for embedded etcd (jchaloup@redhat.com) +- Add daemon-reload handler to openshift_node and notify when /etc/systemd + files have been updated. (abutcher@redhat.com) +- Use volume.beta.kubernetes.io annotation for storage-classes + (per.carlson@vegvesen.no) +- Correct master-config update during upgrade (rteague@redhat.com) + +* Mon Jun 12 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.99-1 +- Replace repoquery with module (jchaloup@redhat.com) +- Consider previous value of 'changed' when updating (rhcarvalho@gmail.com) +- Improve code readability (rhcarvalho@gmail.com) +- Disable excluder only on nodes that are not masters (jchaloup@redhat.com) +- Added includes to specify openshift version for libvirt cluster create. + Otherwise bin/cluster create fails on unknown version for libvirt deployment. + (schulthess@puzzle.ch) +- docker checks: finish and refactor (lmeyer@redhat.com) +- oc_secret: allow use of force for secret type (jarrpa@redhat.com) +- add docker storage, docker driver checks (jvallejo@redhat.com) +- Add dependency and use same storageclass name as upstream + (hekumar@redhat.com) +- Add documentation (hekumar@redhat.com) +- Install default storageclass in AWS & GCE envs (hekumar@redhat.com) + +* Fri Jun 09 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.98-1 +- + +* Fri Jun 09 2017 Scott Dodson <sdodson@redhat.com> 3.6.97-1 +- Updated to using oo_random_word for secret gen (ewolinet@redhat.com) +- Updating kibana to store session and oauth secrets for reuse, fix oauthclient + generation for ops (ewolinet@redhat.com) + +* Thu Jun 08 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.89.5-1 +- Rename container image to origin-ansible / ose-ansible (pep@redhat.com) + +* Thu Jun 08 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.89.4-1 +- Guard check for container install based on openshift dictionary key + (ayoung@redhat.com) +- Separate client config removal in uninstall s.t. ansible_ssh_user is removed + from with_items. (abutcher@redhat.com) +- Remove supported/implemented barrier for registry object storage providers. + (abutcher@redhat.com) +- Add node unit file on upgrade (smilner@redhat.com) +- fix up openshift-ansible for use with 'oc cluster up' (jcantril@redhat.com) +- specify all logging index mappings for kibana (jcantril@redhat.com) +- openshift-master: set r_etcd_common_etcd_runtime (gscrivan@redhat.com) +- rename daemon.json to container-daemon.json (smilner@redhat.com) +- Updating probe timeout and exposing variable to adjust timeout in image + (ewolinet@redhat.com) +- Do not attempt to override openstack nodename (jdetiber@redhat.com) +- Update image stream to openshift/origin:2c55ade (skuznets@redhat.com) + +* Wed Jun 07 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.89.3-1 +- Use local openshift.master.loopback_url when generating initial master + loopback kubeconfigs. 
(abutcher@redhat.com) + * Tue Jun 06 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.89.2-1 - diff --git a/playbooks/adhoc/uninstall.yml b/playbooks/adhoc/uninstall.yml index 1c8257162..97d835eae 100644 --- a/playbooks/adhoc/uninstall.yml +++ b/playbooks/adhoc/uninstall.yml @@ -393,10 +393,19 @@ - "{{ directories.results | default([]) }}" - files + - set_fact: + client_users: "{{ [ansible_ssh_user, 'root'] | unique }}" + + - name: Remove client kubeconfigs + file: + path: "~{{ item }}/.kube" + state: absent + with_items: + - "{{ client_users }}" + - name: Remove remaining files file: path={{ item }} state=absent with_items: - - "~{{ ansible_ssh_user }}/.kube" - /etc/ansible/facts.d/openshift.fact - /etc/atomic-enterprise - /etc/corosync @@ -421,7 +430,6 @@ - /etc/sysconfig/origin-master - /etc/sysconfig/origin-master-api - /etc/sysconfig/origin-master-controllers - - /root/.kube - /usr/share/openshift/examples - /var/lib/atomic-enterprise - /var/lib/openshift diff --git a/playbooks/common/openshift-cluster/evaluate_groups.yml b/playbooks/common/openshift-cluster/evaluate_groups.yml index 46932b27f..c28ce4c14 100644 --- a/playbooks/common/openshift-cluster/evaluate_groups.yml +++ b/playbooks/common/openshift-cluster/evaluate_groups.yml @@ -155,5 +155,5 @@ groups: oo_glusterfs_to_config ansible_ssh_user: "{{ g_ssh_user | default(omit) }}" ansible_become: "{{ g_sudo | default(omit) }}" - with_items: "{{ g_glusterfs_hosts | union(g_glusterfs_registry_hosts) | default([]) }}" + with_items: "{{ g_glusterfs_hosts | union(g_glusterfs_registry_hosts | default([])) }}" changed_when: no diff --git a/playbooks/common/openshift-cluster/openshift_hosted.yml b/playbooks/common/openshift-cluster/openshift_hosted.yml index 5db71b857..8d94b6509 100644 --- a/playbooks/common/openshift-cluster/openshift_hosted.yml +++ b/playbooks/common/openshift-cluster/openshift_hosted.yml @@ -45,6 +45,8 @@ - role: cockpit-ui when: ( openshift.common.version_gte_3_3_or_1_3 | bool ) and ( openshift_hosted_manage_registry | default(true) | bool ) and not (openshift.docker.hosted_registry_insecure | default(false) | bool) + - role: openshift_default_storage_class + when: openshift_cloudprovider_kind is defined and (openshift_cloudprovider_kind == 'aws' or openshift_cloudprovider_kind == 'gce') - name: Update master-config for publicLoggingURL hosts: oo_masters_to_config:!oo_first_master diff --git a/playbooks/common/openshift-cluster/upgrades/disable_node_excluders.yml b/playbooks/common/openshift-cluster/upgrades/disable_node_excluders.yml index 7988e97ab..a66301c0d 100644 --- a/playbooks/common/openshift-cluster/upgrades/disable_node_excluders.yml +++ b/playbooks/common/openshift-cluster/upgrades/disable_node_excluders.yml @@ -1,6 +1,6 @@ --- - name: Disable excluders - hosts: oo_nodes_to_config + hosts: oo_nodes_to_upgrade:!oo_masters_to_config gather_facts: no roles: - role: openshift_excluder diff --git a/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml b/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml index 1b437dce9..9b4a8e413 100644 --- a/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml +++ b/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml @@ -14,24 +14,26 @@ - when: not openshift.common.is_containerized | bool block: - name: Check latest available OpenShift RPM version - command: > - {{ repoquery_cmd }} --qf '%{version}' "{{ openshift.common.service_type }}" - failed_when: false - changed_when: false - 
register: avail_openshift_version + repoquery: + name: "{{ openshift.common.service_type }}" + ignore_excluders: true + register: repoquery_out - name: Fail when unable to determine available OpenShift RPM version fail: msg: "Unable to determine available OpenShift RPM version" when: - - avail_openshift_version.stdout == '' + - not repoquery_out.results.package_found + + - name: Set fact avail_openshift_version + set_fact: + avail_openshift_version: "{{ repoquery_out.results.versions.available_versions.0 }}" - name: Verify OpenShift RPMs are available for upgrade fail: - msg: "OpenShift {{ avail_openshift_version.stdout }} is available, but {{ openshift_upgrade_target }} or greater is required" + msg: "OpenShift {{ avail_openshift_version }} is available, but {{ openshift_upgrade_target }} or greater is required" when: - - not avail_openshift_version | skipped - - avail_openshift_version.stdout | default('0.0', True) | version_compare(openshift_release, '<') + - avail_openshift_version | default('0.0', True) | version_compare(openshift_release, '<') - name: Fail when openshift version does not meet minium requirement for Origin upgrade fail: diff --git a/playbooks/common/openshift-cluster/upgrades/v3_5/master_config_upgrade.yml b/playbooks/common/openshift-cluster/upgrades/v3_5/master_config_upgrade.yml new file mode 100644 index 000000000..ed89dbe8d --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_5/master_config_upgrade.yml @@ -0,0 +1,16 @@ +--- +- modify_yaml: + dest: "{{ openshift.common.config_base}}/master/master-config.yaml" + yaml_key: 'admissionConfig.pluginConfig' + yaml_value: "{{ openshift.master.admission_plugin_config }}" + when: "'admission_plugin_config' in openshift.master" + +- modify_yaml: + dest: "{{ openshift.common.config_base}}/master/master-config.yaml" + yaml_key: 'admissionConfig.pluginOrderOverride' + yaml_value: + +- modify_yaml: + dest: "{{ openshift.common.config_base}}/master/master-config.yaml" + yaml_key: 'kubernetesMasterConfig.admissionConfig' + yaml_value: diff --git a/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml index 21e1d440d..74c2964aa 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml @@ -115,6 +115,8 @@ - include: ../cleanup_unused_images.yml - include: ../upgrade_control_plane.yml + vars: + master_config_hook: "v3_5/master_config_upgrade.yml" - include: ../post_control_plane.yml diff --git a/playbooks/common/openshift-cluster/upgrades/v3_6/master_config_upgrade.yml b/playbooks/common/openshift-cluster/upgrades/v3_6/master_config_upgrade.yml new file mode 100644 index 000000000..ed89dbe8d --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_6/master_config_upgrade.yml @@ -0,0 +1,16 @@ +--- +- modify_yaml: + dest: "{{ openshift.common.config_base}}/master/master-config.yaml" + yaml_key: 'admissionConfig.pluginConfig' + yaml_value: "{{ openshift.master.admission_plugin_config }}" + when: "'admission_plugin_config' in openshift.master" + +- modify_yaml: + dest: "{{ openshift.common.config_base}}/master/master-config.yaml" + yaml_key: 'admissionConfig.pluginOrderOverride' + yaml_value: + +- modify_yaml: + dest: "{{ openshift.common.config_base}}/master/master-config.yaml" + yaml_key: 'kubernetesMasterConfig.admissionConfig' + yaml_value: diff --git 
a/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml index e34259b00..a66fb51ff 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml @@ -115,6 +115,8 @@ - include: ../cleanup_unused_images.yml - include: ../upgrade_control_plane.yml + vars: + master_config_hook: "v3_6/master_config_upgrade.yml" - include: ../post_control_plane.yml diff --git a/playbooks/common/openshift-master/config.yml b/playbooks/common/openshift-master/config.yml index 60cf56108..ddc4db8f8 100644 --- a/playbooks/common/openshift-master/config.yml +++ b/playbooks/common/openshift-master/config.yml @@ -117,6 +117,7 @@ | oo_collect('openshift.common.hostname') | default(none, true) }}" openshift_master_hosts: "{{ groups.oo_masters_to_config }}" + r_etcd_common_etcd_runtime: "{{ openshift.common.etcd_runtime }}" etcd_ca_host: "{{ groups.oo_etcd_to_config.0 }}" etcd_cert_subdir: "openshift-master-{{ openshift.common.hostname }}" etcd_cert_config_dir: "{{ openshift.common.config_base }}/master" diff --git a/playbooks/common/openshift-node/config.yml b/playbooks/common/openshift-node/config.yml index 792ffb4e2..acebabc91 100644 --- a/playbooks/common/openshift-node/config.yml +++ b/playbooks/common/openshift-node/config.yml @@ -32,7 +32,7 @@ ansible_ssh_user: "{{ g_ssh_user | default(omit) }}" ansible_become: "{{ g_sudo | default(omit) }}" with_items: "{{ groups.oo_nodes_to_config | default([]) }}" - when: hostvars[item].openshift.common is defined and hostvars[item].openshift.common.is_containerized | bool and (item in groups.oo_nodes_to_config and item in groups.oo_masters_to_config) + when: hostvars[item].openshift is defined and hostvars[item].openshift.common is defined and hostvars[item].openshift.common.is_containerized | bool and (item in groups.oo_nodes_to_config and item in groups.oo_masters_to_config) changed_when: False - name: Configure containerized nodes diff --git a/playbooks/libvirt/openshift-cluster/config.yml b/playbooks/libvirt/openshift-cluster/config.yml index f782d6dab..477213f4e 100644 --- a/playbooks/libvirt/openshift-cluster/config.yml +++ b/playbooks/libvirt/openshift-cluster/config.yml @@ -3,6 +3,8 @@ # is localhost, so no hostname value (or public_hostname) value is getting # assigned +- include: ../../common/openshift-cluster/std_include.yml + - hosts: localhost gather_facts: no tasks: diff --git a/requirements.txt b/requirements.txt index 734ee6201..dae460713 100644 --- a/requirements.txt +++ b/requirements.txt @@ -7,3 +7,4 @@ pyOpenSSL==16.2.0 # We need to disable ruamel.yaml for now because of test failures #ruamel.yaml six==1.10.0 +passlib==1.6.5 diff --git a/roles/docker/README.md b/roles/docker/README.md index 4a9f21f22..19908c036 100644 --- a/roles/docker/README.md +++ b/roles/docker/README.md @@ -3,7 +3,7 @@ Docker Ensures docker package or system container is installed, and optionally raises timeout for systemd-udevd.service to 5 minutes. 
-daemon.json items may be found at https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file +container-daemon.json items may be found at https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file Requirements ------------ diff --git a/roles/docker/tasks/systemcontainer_docker.yml b/roles/docker/tasks/systemcontainer_docker.yml index f0f5a40dd..650f06f86 100644 --- a/roles/docker/tasks/systemcontainer_docker.yml +++ b/roles/docker/tasks/systemcontainer_docker.yml @@ -130,8 +130,8 @@ dest: "{{ container_engine_systemd_dir }}/custom.conf" src: systemcontainercustom.conf.j2 -# Set local versions of facts that must be in json format for daemon.json -# NOTE: When jinja2.9+ is used the daemon.json file can move to using tojson +# Set local versions of facts that must be in json format for container-daemon.json +# NOTE: When jinja2.9+ is used the container-daemon.json file can move to using tojson - set_fact: l_docker_insecure_registries: "{{ docker_insecure_registries | default([]) | to_json }}" l_docker_log_options: "{{ docker_log_options | default({}) | to_json }}" @@ -139,10 +139,12 @@ l_docker_blocked_registries: "{{ docker_blocked_registries | default([]) | to_json }}" l_docker_selinux_enabled: "{{ docker_selinux_enabled | default(true) | to_json }}" -# Configure container-engine using the daemon.json file +# Configure container-engine using the container-daemon.json file +# NOTE: daemon.json and container-daemon.json have been seperated to avoid +# collision. - name: Configure Container Engine template: - dest: "{{ docker_conf_dir }}/daemon.json" + dest: "{{ docker_conf_dir }}/container-daemon.json" src: daemon.json # Enable and start the container-engine service diff --git a/roles/etcd/tasks/system_container.yml b/roles/etcd/tasks/system_container.yml index 72ffadbd2..f1d948d16 100644 --- a/roles/etcd/tasks/system_container.yml +++ b/roles/etcd/tasks/system_container.yml @@ -15,6 +15,56 @@ {%- endif -%} {% endfor -%} +- name: Check etcd system container package + command: > + atomic containers list --no-trunc -a -f container=etcd -f backend=ostree + register: etcd_result + +- name: Unmask etcd service + systemd: + name: etcd + state: stopped + enabled: yes + masked: no + daemon_reload: yes + register: task_result + failed_when: task_result|failed and 'could not' not in task_result.msg|lower + when: "'etcd' in etcd_result.stdout" + +- name: Disable etcd_container + systemd: + name: etcd_container + state: stopped + enabled: no + masked: yes + daemon_reload: yes + register: task_result + failed_when: task_result|failed and 'could not' not in task_result.msg|lower + +- name: Check for previous etcd data store + stat: + path: "{{ etcd_data_dir }}/member/" + register: src_datastore + +- name: Check for etcd system container data store + stat: + path: "{{ r_etcd_common_system_container_host_dir }}/etcd.etcd/member" + register: dest_datastore + +- name: Ensure that etcd system container data dirs exist + file: path="{{ item }}" state=directory + with_items: + - "{{ r_etcd_common_system_container_host_dir }}/etc" + - "{{ r_etcd_common_system_container_host_dir }}/etcd.etcd" + +- name: Copy etcd data store + command: > + cp -a {{ etcd_data_dir }}/member + {{ r_etcd_common_system_container_host_dir }}/etcd.etcd/member + when: + - src_datastore.stat.exists + - not dest_datastore.stat.exists + - name: Install or Update Etcd system container package oc_atomic_container: name: etcd @@ -35,3 +85,5 @@ - ETCD_PEER_CA_FILE={{ 
etcd_system_container_conf_dir }}/ca.crt - ETCD_PEER_CERT_FILE={{ etcd_system_container_conf_dir }}/peer.crt - ETCD_PEER_KEY_FILE={{ etcd_system_container_conf_dir }}/peer.key + - ETCD_TRUSTED_CA_FILE={{ etcd_system_container_conf_dir }}/ca.crt + - ETCD_PEER_TRUSTED_CA_FILE={{ etcd_system_container_conf_dir }}/ca.crt diff --git a/roles/etcd_common/defaults/main.yml b/roles/etcd_common/defaults/main.yml index e1a080b34..14e712fcf 100644 --- a/roles/etcd_common/defaults/main.yml +++ b/roles/etcd_common/defaults/main.yml @@ -1,9 +1,11 @@ --- # runc, docker, host r_etcd_common_etcd_runtime: "docker" +r_etcd_common_embedded_etcd: false # etcd server vars -etcd_conf_dir: "{{ '/etc/etcd' if r_etcd_common_etcd_runtime != 'runc' else '/var/lib/etcd/etcd.etcd/etc' }}" +etcd_conf_dir: '/etc/etcd' +r_etcd_common_system_container_host_dir: /var/lib/etcd/etcd.etcd etcd_system_container_conf_dir: /var/lib/etcd/etc etcd_conf_file: "{{ etcd_conf_dir }}/etcd.conf" etcd_ca_file: "{{ etcd_conf_dir }}/ca.crt" @@ -40,7 +42,7 @@ etcd_is_containerized: False etcd_is_thirdparty: False # etcd dir vars -etcd_data_dir: /var/lib/etcd/ +etcd_data_dir: "{{ '/var/lib/origin/openshift.local.etcd' if r_etcd_common_embedded_etcd | bool else '/var/lib/etcd/' }}" # etcd ports and protocols etcd_client_port: 2379 diff --git a/roles/etcd_server_certificates/tasks/main.yml b/roles/etcd_server_certificates/tasks/main.yml index 3ac7f3401..4795188a6 100644 --- a/roles/etcd_server_certificates/tasks/main.yml +++ b/roles/etcd_server_certificates/tasks/main.yml @@ -5,11 +5,14 @@ - name: Check status of etcd certificates stat: - path: "{{ etcd_cert_config_dir }}/{{ item }}" + path: "{{ item }}" with_items: - - "{{ etcd_cert_prefix }}server.crt" - - "{{ etcd_cert_prefix }}peer.crt" - - "{{ etcd_cert_prefix }}ca.crt" + - "{{ etcd_cert_config_dir }}/{{ etcd_cert_prefix }}server.crt" + - "{{ etcd_cert_config_dir }}/{{ etcd_cert_prefix }}peer.crt" + - "{{ etcd_cert_config_dir }}/{{ etcd_cert_prefix }}ca.crt" + - "{{ etcd_system_container_cert_config_dir }}/{{ etcd_cert_prefix }}server.crt" + - "{{ etcd_system_container_cert_config_dir }}/{{ etcd_cert_prefix }}peer.crt" + - "{{ etcd_system_container_cert_config_dir }}/{{ etcd_cert_prefix }}ca.crt" register: g_etcd_server_cert_stat_result when: not etcd_certificates_redeploy | default(false) | bool @@ -132,8 +135,11 @@ - name: Ensure certificate directory exists file: - path: "{{ etcd_cert_config_dir }}" + path: "{{ item }}" state: directory + with_items: + - "{{ etcd_cert_config_dir }}" + - "{{ etcd_system_container_cert_config_dir }}" when: etcd_server_certs_missing | bool - name: Unarchive cert tarball @@ -164,15 +170,28 @@ - name: Ensure ca directory exists file: - path: "{{ etcd_ca_dir }}" + path: "{{ item }}" state: directory + with_items: + - "{{ etcd_ca_dir }}" + - "{{ etcd_system_container_cert_config_dir }}/ca" when: etcd_server_certs_missing | bool -- name: Unarchive etcd ca cert tarballs +- name: Unarchive cert tarball for the system container + unarchive: + src: "{{ g_etcd_server_mktemp.stdout }}/{{ etcd_cert_subdir }}.tgz" + dest: "{{ etcd_system_container_cert_config_dir }}" + when: + - etcd_server_certs_missing | bool + - r_etcd_common_etcd_runtime == 'runc' + +- name: Unarchive etcd ca cert tarballs for the system container unarchive: src: "{{ g_etcd_server_mktemp.stdout }}/{{ etcd_ca_name }}.tgz" - dest: "{{ etcd_ca_dir }}" - when: etcd_server_certs_missing | bool + dest: "{{ etcd_system_container_cert_config_dir }}/ca" + when: + - etcd_server_certs_missing | bool + - 
r_etcd_common_etcd_runtime == 'runc' - name: Delete temporary directory local_action: file path="{{ g_etcd_server_mktemp.stdout }}" state=absent diff --git a/roles/etcd_upgrade/defaults/main.yml b/roles/etcd_upgrade/defaults/main.yml index 01ad8a268..b61bf526c 100644 --- a/roles/etcd_upgrade/defaults/main.yml +++ b/roles/etcd_upgrade/defaults/main.yml @@ -1,8 +1,8 @@ --- r_etcd_upgrade_action: upgrade r_etcd_upgrade_mechanism: rpm -r_etcd_upgrade_embedded_etcd: False - +r_etcd_upgrade_embedded_etcd: false +r_etcd_common_embedded_etcd: "{{ r_etcd_upgrade_embedded_etcd }}" # etcd run on a host => use etcdctl command directly # etcd run as a docker container => use docker exec # etcd run as a runc container => use runc exec diff --git a/roles/etcd_upgrade/meta/main.yml b/roles/etcd_upgrade/meta/main.yml index 018bdc8d7..afdb0267f 100644 --- a/roles/etcd_upgrade/meta/main.yml +++ b/roles/etcd_upgrade/meta/main.yml @@ -14,3 +14,4 @@ galaxy_info: - system dependencies: - role: etcd_common + r_etcd_common_embedded_etcd: "{{ r_etcd_upgrade_embedded_etcd }}" diff --git a/roles/lib_openshift/library/oc_obj.py b/roles/lib_openshift/library/oc_obj.py index a73318725..56af303cc 100644 --- a/roles/lib_openshift/library/oc_obj.py +++ b/roles/lib_openshift/library/oc_obj.py @@ -1461,7 +1461,12 @@ class OCObject(OpenShiftCLI): def delete(self): '''delete the object''' - return self._delete(self.kind, name=self.name, selector=self.selector) + results = self._delete(self.kind, name=self.name, selector=self.selector) + if (results['returncode'] != 0 and 'stderr' in results and + '\"{}\" not found'.format(self.name) in results['stderr']): + results['returncode'] = 0 + + return results def create(self, files=None, content=None): ''' @@ -1545,7 +1550,8 @@ class OCObject(OpenShiftCLI): if state == 'absent': # verify its not in our results if (params['name'] is not None or params['selector'] is not None) and \ - (len(api_rval['results']) == 0 or len(api_rval['results'][0].get('items', [])) == 0): + (len(api_rval['results']) == 0 or \ + ('items' in api_rval['results'][0] and len(api_rval['results'][0]['items']) == 0)): return {'changed': False, 'state': state} if check_mode: diff --git a/roles/lib_openshift/library/oc_secret.py b/roles/lib_openshift/library/oc_secret.py index 62b162afd..d762e0c38 100644 --- a/roles/lib_openshift/library/oc_secret.py +++ b/roles/lib_openshift/library/oc_secret.py @@ -1601,7 +1601,7 @@ class OCSecret(OpenShiftCLI): '''delete a secret by name''' return self._delete('secrets', self.name) - def create(self, files=None, contents=None): + def create(self, files=None, contents=None, force=False): '''Create a secret ''' if not files: files = Utils.create_tmp_files_from_contents(contents) @@ -1610,6 +1610,8 @@ class OCSecret(OpenShiftCLI): cmd = ['secrets', 'new', self.name] if self.type is not None: cmd.append("--type=%s" % (self.type)) + if force: + cmd.append('--confirm') cmd.extend(secrets) results = self.openshift_cmd(cmd) @@ -1622,7 +1624,7 @@ class OCSecret(OpenShiftCLI): This receives a list of file names and converts it into a secret. The secret is then written to disk and passed into the `oc replace` command. 
''' - secret = self.prep_secret(files) + secret = self.prep_secret(files, force) if secret['returncode'] != 0: return secret @@ -1634,7 +1636,7 @@ class OCSecret(OpenShiftCLI): return self._replace(sfile_path, force=force) - def prep_secret(self, files=None, contents=None): + def prep_secret(self, files=None, contents=None, force=False): ''' return what the secret would look like if created This is accomplished by passing -ojson. This will most likely change in the future ''' @@ -1645,6 +1647,8 @@ class OCSecret(OpenShiftCLI): cmd = ['-ojson', 'secrets', 'new', self.name] if self.type is not None: cmd.extend(["--type=%s" % (self.type)]) + if force: + cmd.append('--confirm') cmd.extend(secrets) return self.openshift_cmd(cmd, output=True) @@ -1707,7 +1711,7 @@ class OCSecret(OpenShiftCLI): return {'changed': True, 'msg': 'Would have performed a create.'} - api_rval = ocsecret.create(files, params['contents']) + api_rval = ocsecret.create(files, params['contents'], force=params['force']) # Remove files if files and params['delete_after']: @@ -1724,7 +1728,7 @@ class OCSecret(OpenShiftCLI): ######## # Update ######## - secret = ocsecret.prep_secret(params['files'], params['contents']) + secret = ocsecret.prep_secret(params['files'], params['contents'], force=params['force']) if secret['returncode'] != 0: return {'failed': True, 'msg': secret} diff --git a/roles/lib_openshift/src/class/oc_obj.py b/roles/lib_openshift/src/class/oc_obj.py index 6f0da3d5c..5e423bea9 100644 --- a/roles/lib_openshift/src/class/oc_obj.py +++ b/roles/lib_openshift/src/class/oc_obj.py @@ -33,7 +33,12 @@ class OCObject(OpenShiftCLI): def delete(self): '''delete the object''' - return self._delete(self.kind, name=self.name, selector=self.selector) + results = self._delete(self.kind, name=self.name, selector=self.selector) + if (results['returncode'] != 0 and 'stderr' in results and + '\"{}\" not found'.format(self.name) in results['stderr']): + results['returncode'] = 0 + + return results def create(self, files=None, content=None): ''' @@ -117,7 +122,8 @@ class OCObject(OpenShiftCLI): if state == 'absent': # verify its not in our results if (params['name'] is not None or params['selector'] is not None) and \ - (len(api_rval['results']) == 0 or len(api_rval['results'][0].get('items', [])) == 0): + (len(api_rval['results']) == 0 or \ + ('items' in api_rval['results'][0] and len(api_rval['results'][0]['items']) == 0)): return {'changed': False, 'state': state} if check_mode: diff --git a/roles/lib_openshift/src/class/oc_secret.py b/roles/lib_openshift/src/class/oc_secret.py index ee83580df..4ee6443e9 100644 --- a/roles/lib_openshift/src/class/oc_secret.py +++ b/roles/lib_openshift/src/class/oc_secret.py @@ -44,7 +44,7 @@ class OCSecret(OpenShiftCLI): '''delete a secret by name''' return self._delete('secrets', self.name) - def create(self, files=None, contents=None): + def create(self, files=None, contents=None, force=False): '''Create a secret ''' if not files: files = Utils.create_tmp_files_from_contents(contents) @@ -53,6 +53,8 @@ class OCSecret(OpenShiftCLI): cmd = ['secrets', 'new', self.name] if self.type is not None: cmd.append("--type=%s" % (self.type)) + if force: + cmd.append('--confirm') cmd.extend(secrets) results = self.openshift_cmd(cmd) @@ -65,7 +67,7 @@ class OCSecret(OpenShiftCLI): This receives a list of file names and converts it into a secret. The secret is then written to disk and passed into the `oc replace` command. 
''' - secret = self.prep_secret(files) + secret = self.prep_secret(files, force) if secret['returncode'] != 0: return secret @@ -77,7 +79,7 @@ class OCSecret(OpenShiftCLI): return self._replace(sfile_path, force=force) - def prep_secret(self, files=None, contents=None): + def prep_secret(self, files=None, contents=None, force=False): ''' return what the secret would look like if created This is accomplished by passing -ojson. This will most likely change in the future ''' @@ -88,6 +90,8 @@ class OCSecret(OpenShiftCLI): cmd = ['-ojson', 'secrets', 'new', self.name] if self.type is not None: cmd.extend(["--type=%s" % (self.type)]) + if force: + cmd.append('--confirm') cmd.extend(secrets) return self.openshift_cmd(cmd, output=True) @@ -150,7 +154,7 @@ class OCSecret(OpenShiftCLI): return {'changed': True, 'msg': 'Would have performed a create.'} - api_rval = ocsecret.create(files, params['contents']) + api_rval = ocsecret.create(files, params['contents'], force=params['force']) # Remove files if files and params['delete_after']: @@ -167,7 +171,7 @@ class OCSecret(OpenShiftCLI): ######## # Update ######## - secret = ocsecret.prep_secret(params['files'], params['contents']) + secret = ocsecret.prep_secret(params['files'], params['contents'], force=params['force']) if secret['returncode'] != 0: return {'failed': True, 'msg': secret} diff --git a/roles/lib_openshift/src/test/unit/test_oc_secret.py b/roles/lib_openshift/src/test/unit/test_oc_secret.py index 09cc4a374..323b3423c 100755 --- a/roles/lib_openshift/src/test/unit/test_oc_secret.py +++ b/roles/lib_openshift/src/test/unit/test_oc_secret.py @@ -48,6 +48,7 @@ class OCSecretTest(unittest.TestCase): 'debug': False, 'files': None, 'delete_after': True, + 'force': False, } # Return values of our mocked function call. These get returned once per call. 
diff --git a/roles/openshift_ca/tasks/main.yml b/roles/openshift_ca/tasks/main.yml index c7b906949..b9a7ec32f 100644 --- a/roles/openshift_ca/tasks/main.yml +++ b/roles/openshift_ca/tasks/main.yml @@ -108,6 +108,38 @@ delegate_to: "{{ openshift_ca_host }}" run_once: true +- name: Test local loopback context + command: > + {{ hostvars[openshift_ca_host].openshift.common.client_binary }} config view + --config={{ openshift_master_loopback_config }} + changed_when: false + register: loopback_config + delegate_to: "{{ openshift_ca_host }}" + run_once: true + +- name: Generate the loopback master client config + command: > + {{ hostvars[openshift_ca_host].openshift.common.client_binary }} adm create-api-client-config + {% for named_ca_certificate in openshift.master.named_certificates | default([]) | oo_collect('cafile') %} + --certificate-authority {{ named_ca_certificate }} + {% endfor %} + --certificate-authority={{ openshift_ca_cert }} + --client-dir={{ openshift_ca_config_dir }} + --groups=system:masters,system:openshift-master + --master={{ hostvars[openshift_ca_host].openshift.master.loopback_api_url }} + --public-master={{ hostvars[openshift_ca_host].openshift.master.loopback_api_url }} + --signer-cert={{ openshift_ca_cert }} + --signer-key={{ openshift_ca_key }} + --signer-serial={{ openshift_ca_serial }} + --user=system:openshift-master + --basename=openshift-master + {% if openshift_version | oo_version_gte_3_5_or_1_5(openshift.common.deployment_type) | bool %} + --expire-days={{ openshift_master_cert_expire_days }} + {% endif %} + when: loopback_context_string not in loopback_config.stdout + delegate_to: "{{ openshift_ca_host }}" + run_once: true + - name: Restore original serviceaccount keys copy: src: "{{ item }}.keep" diff --git a/roles/openshift_ca/vars/main.yml b/roles/openshift_ca/vars/main.yml index a32e385ec..d04c1766d 100644 --- a/roles/openshift_ca/vars/main.yml +++ b/roles/openshift_ca/vars/main.yml @@ -4,3 +4,6 @@ openshift_ca_cert: "{{ openshift_ca_config_dir }}/ca.crt" openshift_ca_key: "{{ openshift_ca_config_dir }}/ca.key" openshift_ca_serial: "{{ openshift_ca_config_dir }}/ca.serial.txt" openshift_version: "{{ openshift_pkg_version | default('') }}" + +openshift_master_loopback_config: "{{ openshift_ca_config_dir }}/openshift-master.kubeconfig" +loopback_context_string: "current-context: {{ openshift.master.loopback_context_name }}" diff --git a/roles/openshift_default_storage_class/README.md b/roles/openshift_default_storage_class/README.md new file mode 100644 index 000000000..198163127 --- /dev/null +++ b/roles/openshift_default_storage_class/README.md @@ -0,0 +1,39 @@ +openshift_master_storage_class +========= + +A role that deploys configuratons for Openshift StorageClass + +Requirements +------------ + +None + +Role Variables +-------------- + +openshift_storageclass_name: Name of the storage class to create +openshift_storageclass_provisioner: The kubernetes provisioner to use +openshift_storageclass_type: type of storage to use. 
This is different among clouds/providers + +Dependencies +------------ + + +Example Playbook +---------------- + +- role: openshift_default_storage_class + openshift_storageclass_name: awsEBS + openshift_storageclass_provisioner: kubernetes.io/aws-ebs + openshift_storageclass_type: gp2 + + +License +------- + +Apache + +Author Information +------------------ + +Openshift Operations diff --git a/roles/openshift_default_storage_class/defaults/main.yml b/roles/openshift_default_storage_class/defaults/main.yml new file mode 100644 index 000000000..66ffd2a73 --- /dev/null +++ b/roles/openshift_default_storage_class/defaults/main.yml @@ -0,0 +1,14 @@ +--- +openshift_storageclass_defaults: + aws: + name: gp2 + provisioner: kubernetes.io/aws-ebs + type: gp2 + gce: + name: standard + provisioner: kubernetes.io/gce-pd + type: pd-standard + +openshift_storageclass_name: "{{ openshift_storageclass_defaults[openshift_cloudprovider_kind]['name'] }}" +openshift_storageclass_provisioner: "{{ openshift_storageclass_defaults[openshift_cloudprovider_kind]['provisioner'] }}" +openshift_storageclass_type: "{{ openshift_storageclass_defaults[openshift_cloudprovider_kind]['type'] }}" diff --git a/roles/openshift_default_storage_class/meta/main.yml b/roles/openshift_default_storage_class/meta/main.yml new file mode 100644 index 000000000..d7d57fe39 --- /dev/null +++ b/roles/openshift_default_storage_class/meta/main.yml @@ -0,0 +1,15 @@ +--- +galaxy_info: + author: Openshift Operations + description: This role configures the StorageClass in Openshift + company: Red Hat + license: Apache + min_ansible_version: 2.2 + platforms: + - name: EL + verisons: + - 7 + categories: + - cloud +dependencies: +- role: lib_openshift diff --git a/roles/openshift_default_storage_class/tasks/main.yml b/roles/openshift_default_storage_class/tasks/main.yml new file mode 100644 index 000000000..408fc17c7 --- /dev/null +++ b/roles/openshift_default_storage_class/tasks/main.yml @@ -0,0 +1,19 @@ +--- +# Install default storage classes in GCE & AWS +- name: Ensure storageclass object + oc_obj: + kind: storageclass + name: "{{ openshift_storageclass_name }}" + content: + path: /tmp/openshift_storageclass + data: + kind: StorageClass + apiVersion: storage.k8s.io/v1beta1 + metadata: + name: "{{ openshift_storageclass_name }}" + annotations: + storageclass.beta.kubernetes.io/is-default-class: "true" + provisioner: "{{ openshift_storageclass_provisioner }}" + parameters: + type: "{{ openshift_storageclass_type }}" + run_once: true diff --git a/roles/openshift_default_storage_class/vars/main.yml b/roles/openshift_default_storage_class/vars/main.yml new file mode 100644 index 000000000..ed97d539c --- /dev/null +++ b/roles/openshift_default_storage_class/vars/main.yml @@ -0,0 +1 @@ +--- diff --git a/roles/openshift_etcd_facts/vars/main.yml b/roles/openshift_etcd_facts/vars/main.yml index 82db36eba..b3ecd57a6 100644 --- a/roles/openshift_etcd_facts/vars/main.yml +++ b/roles/openshift_etcd_facts/vars/main.yml @@ -5,6 +5,7 @@ etcd_hostname: "{{ openshift.common.hostname }}" etcd_ip: "{{ openshift.common.ip }}" etcd_cert_subdir: "etcd-{{ openshift.common.hostname }}" etcd_cert_prefix: -etcd_cert_config_dir: "{{ '/etc/etcd' if not openshift.common.is_etcd_system_container | bool else '/var/lib/etcd/etcd.etcd/etc' }}" +etcd_cert_config_dir: "/etc/etcd" +etcd_system_container_cert_config_dir: /var/lib/etcd/etcd.etcd/etc etcd_peer_url_scheme: https etcd_url_scheme: https diff --git 
a/roles/openshift_examples/files/examples/v3.6/image-streams/image-streams-centos7.json b/roles/openshift_examples/files/examples/v3.6/image-streams/image-streams-centos7.json index a81dbb654..2583018b7 100644 --- a/roles/openshift_examples/files/examples/v3.6/image-streams/image-streams-centos7.json +++ b/roles/openshift_examples/files/examples/v3.6/image-streams/image-streams-centos7.json @@ -103,7 +103,7 @@ }, "from": { "kind": "ImageStreamTag", - "name": "4" + "name": "6" } }, { @@ -137,6 +137,22 @@ "kind": "DockerImage", "name": "centos/nodejs-4-centos7:latest" } + }, + { + "name": "6", + "annotations": { + "openshift.io/display-name": "Node.js 6", + "description": "Build and run Node.js 6 applications on CentOS 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/6/README.md.", + "iconClass": "icon-nodejs", + "tags": "builder,nodejs", + "supports":"nodejs:6,nodejs", + "version": "6", + "sampleRepo": "https://github.com/openshift/nodejs-ex.git" + }, + "from": { + "kind": "DockerImage", + "name": "centos/nodejs-6-centos7:latest" + } } ] } @@ -407,7 +423,7 @@ "iconClass": "icon-wildfly", "tags": "builder,wildfly,java", "supports":"jee,java", - "sampleRepo": "https://github.com/bparees/openshift-jee-sample.git" + "sampleRepo": "https://github.com/openshift/openshift-jee-sample.git" }, "from": { "kind": "ImageStreamTag", @@ -423,7 +439,7 @@ "tags": "builder,wildfly,java", "supports":"wildfly:8.1,jee,java", "version": "8.1", - "sampleRepo": "https://github.com/bparees/openshift-jee-sample.git" + "sampleRepo": "https://github.com/openshift/openshift-jee-sample.git" }, "from": { "kind": "DockerImage", @@ -439,7 +455,7 @@ "tags": "builder,wildfly,java", "supports":"wildfly:9.0,jee,java", "version": "9.0", - "sampleRepo": "https://github.com/bparees/openshift-jee-sample.git" + "sampleRepo": "https://github.com/openshift/openshift-jee-sample.git" }, "from": { "kind": "DockerImage", @@ -455,7 +471,7 @@ "tags": "builder,wildfly,java", "supports":"wildfly:10.0,jee,java", "version": "10.0", - "sampleRepo": "https://github.com/bparees/openshift-jee-sample.git" + "sampleRepo": "https://github.com/openshift/openshift-jee-sample.git" }, "from": { "kind": "DockerImage", @@ -471,7 +487,7 @@ "tags": "builder,wildfly,java", "supports":"wildfly:10.1,jee,java", "version": "10.1", - "sampleRepo": "https://github.com/bparees/openshift-jee-sample.git" + "sampleRepo": "https://github.com/openshift/openshift-jee-sample.git" }, "from": { "kind": "DockerImage", diff --git a/roles/openshift_examples/files/examples/v3.6/image-streams/image-streams-rhel7.json b/roles/openshift_examples/files/examples/v3.6/image-streams/image-streams-rhel7.json index 2ed0efe1e..b65f0a5e3 100644 --- a/roles/openshift_examples/files/examples/v3.6/image-streams/image-streams-rhel7.json +++ b/roles/openshift_examples/files/examples/v3.6/image-streams/image-streams-rhel7.json @@ -103,7 +103,7 @@ }, "from": { "kind": "ImageStreamTag", - "name": "4" + "name": "6" } }, { @@ -137,6 +137,22 @@ "kind": "DockerImage", "name": "registry.access.redhat.com/rhscl/nodejs-4-rhel7:latest" } + }, + { + "name": "6", + "annotations": { + "openshift.io/display-name": "Node.js 6", + "description": "Build and run Node.js 6 applications on RHEL 7. 
For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container.", + "iconClass": "icon-nodejs", + "tags": "builder,nodejs", + "supports":"nodejs:6,nodejs", + "version": "6", + "sampleRepo": "https://github.com/openshift/nodejs-ex.git" + }, + "from": { + "kind": "DockerImage", + "name": "registry.access.redhat.com/rhscl/nodejs-6-rhel7:latest" + } } ] } @@ -253,7 +269,7 @@ "tags": "hidden,builder,php", "supports":"php:5.5,php", "version": "5.5", - "sampleRepo": "https://github.com/openshift/cakephp-ex.git" + "sampleRepo": "https://github.com/openshift/cakephp-ex.git" }, "from": { "kind": "DockerImage", diff --git a/roles/openshift_facts/library/openshift_facts.py b/roles/openshift_facts/library/openshift_facts.py index 514c06500..cfe092a28 100755 --- a/roles/openshift_facts/library/openshift_facts.py +++ b/roles/openshift_facts/library/openshift_facts.py @@ -193,8 +193,7 @@ def hostname_valid(hostname): """ if (not hostname or hostname.startswith('localhost') or - hostname.endswith('localdomain') or - hostname.endswith('novalocal')): + hostname.endswith('localdomain')): return False return True @@ -1041,10 +1040,13 @@ def set_sdn_facts_if_unset(facts, system_facts): def set_nodename(facts): """ set nodename """ if 'node' in facts and 'common' in facts: - if 'cloudprovider' in facts and facts['cloudprovider']['kind'] == 'openstack': - facts['node']['nodename'] = facts['provider']['metadata']['hostname'].replace('.novalocal', '') - elif 'cloudprovider' in facts and facts['cloudprovider']['kind'] == 'gce': + if 'cloudprovider' in facts and facts['cloudprovider']['kind'] == 'gce': facts['node']['nodename'] = facts['provider']['metadata']['instance']['hostname'].split('.')[0] + + # TODO: The openstack cloudprovider nodename setting was too opinionaed. + # It needs to be generalized before it can be enabled again. + # elif 'cloudprovider' in facts and facts['cloudprovider']['kind'] == 'openstack': + # facts['node']['nodename'] = facts['provider']['metadata']['hostname'].replace('.novalocal', '') else: facts['node']['nodename'] = facts['common']['hostname'].lower() return facts diff --git a/roles/openshift_health_checker/openshift_checks/docker_image_availability.py b/roles/openshift_health_checker/openshift_checks/docker_image_availability.py index 4588ed634..27e6fe383 100644 --- a/roles/openshift_health_checker/openshift_checks/docker_image_availability.py +++ b/roles/openshift_health_checker/openshift_checks/docker_image_availability.py @@ -1,8 +1,9 @@ # pylint: disable=missing-docstring from openshift_checks import OpenShiftCheck, get_var +from openshift_checks.mixins import DockerHostMixin -class DockerImageAvailability(OpenShiftCheck): +class DockerImageAvailability(DockerHostMixin, OpenShiftCheck): """Check that required Docker images are available. 
This check attempts to ensure that required docker images are @@ -36,19 +37,11 @@ class DockerImageAvailability(OpenShiftCheck): def run(self, tmp, task_vars): msg, failed, changed = self.ensure_dependencies(task_vars) - - # exit early if Skopeo update fails if failed: - if "No package matching" in msg: - msg = "Ensure that all required dependencies can be installed via `yum`.\n" return { "failed": True, "changed": changed, - "msg": ( - "Unable to update or install required dependency packages on this host;\n" - "These are required in order to check Docker image availability:" - "\n {deps}\n{msg}" - ).format(deps=',\n '.join(self.dependencies), msg=msg), + "msg": "Some dependencies are required in order to check Docker image availability.\n" + msg } required_images = self.required_images(task_vars) @@ -168,12 +161,3 @@ class DockerImageAvailability(OpenShiftCheck): args = {"_raw_params": cmd_str} result = self.module_executor("command", args, task_vars) return not result.get("failed", False) and result.get("rc", 0) == 0 - - # ensures that the skopeo and python-docker-py packages exist - # check is skipped on atomic installations - def ensure_dependencies(self, task_vars): - if get_var(task_vars, "openshift", "common", "is_atomic"): - return "", False, False - - result = self.module_executor("yum", {"name": self.dependencies, "state": "latest"}, task_vars) - return result.get("msg", ""), result.get("failed", False) or result.get("rc", 0) != 0, result.get("changed") diff --git a/roles/openshift_health_checker/openshift_checks/docker_storage.py b/roles/openshift_health_checker/openshift_checks/docker_storage.py new file mode 100644 index 000000000..7f1751b36 --- /dev/null +++ b/roles/openshift_health_checker/openshift_checks/docker_storage.py @@ -0,0 +1,185 @@ +"""Check Docker storage driver and usage.""" +import json +import re +from openshift_checks import OpenShiftCheck, OpenShiftCheckException, get_var +from openshift_checks.mixins import DockerHostMixin + + +class DockerStorage(DockerHostMixin, OpenShiftCheck): + """Check Docker storage driver compatibility. + + This check ensures that Docker is using a supported storage driver, + and that loopback is not being used (if using devicemapper). + Also that storage usage is not above threshold. + """ + + name = "docker_storage" + tags = ["pre-install", "health", "preflight"] + + dependencies = ["python-docker-py"] + storage_drivers = ["devicemapper", "overlay2"] + max_thinpool_data_usage_percent = 90.0 + max_thinpool_meta_usage_percent = 90.0 + + # pylint: disable=too-many-return-statements + # Reason: permanent stylistic exception; + # it is clearer to return on failures and there are just many ways to fail here. + def run(self, tmp, task_vars): + msg, failed, changed = self.ensure_dependencies(task_vars) + if failed: + return { + "failed": True, + "changed": changed, + "msg": "Some dependencies are required in order to query docker storage on host:\n" + msg + } + + # attempt to get the docker info hash from the API + info = self.execute_module("docker_info", {}, task_vars) + if info.get("failed"): + return {"failed": True, "changed": changed, + "msg": "Failed to query Docker API. 
Is docker running on this host?"} + if not info.get("info"): # this would be very strange + return {"failed": True, "changed": changed, + "msg": "Docker API query missing info:\n{}".format(json.dumps(info))} + info = info["info"] + + # check if the storage driver we saw is valid + driver = info.get("Driver", "[NONE]") + if driver not in self.storage_drivers: + msg = ( + "Detected unsupported Docker storage driver '{driver}'.\n" + "Supported storage drivers are: {drivers}" + ).format(driver=driver, drivers=', '.join(self.storage_drivers)) + return {"failed": True, "changed": changed, "msg": msg} + + # driver status info is a list of tuples; convert to dict and validate based on driver + driver_status = {item[0]: item[1] for item in info.get("DriverStatus", [])} + if driver == "devicemapper": + if driver_status.get("Data loop file"): + msg = ( + "Use of loopback devices with the Docker devicemapper storage driver\n" + "(the default storage configuration) is unsupported in production.\n" + "Please use docker-storage-setup to configure a backing storage volume.\n" + "See http://red.ht/2rNperO for further information." + ) + return {"failed": True, "changed": changed, "msg": msg} + result = self._check_dm_usage(driver_status, task_vars) + result['changed'] = result.get('changed', False) or changed + return result + + # TODO(lmeyer): determine how to check usage for overlay2 + + return {"changed": changed} + + def _check_dm_usage(self, driver_status, task_vars): + """ + Backing assumptions: We expect devicemapper to be backed by an auto-expanding thin pool + implemented as an LV in an LVM2 VG. This is how docker-storage-setup currently configures + devicemapper storage. The LV is "thin" because it does not use all available storage + from its VG, instead expanding as needed; so to determine available space, we gather + current usage as the Docker API reports for the driver as well as space available for + expansion in the pool's VG. + Usage within the LV is divided into pools allocated to data and metadata, either of which + could run out of space first; so we check both. 
+ """ + vals = dict( + vg_free=self._get_vg_free(driver_status.get("Pool Name"), task_vars), + data_used=driver_status.get("Data Space Used"), + data_total=driver_status.get("Data Space Total"), + metadata_used=driver_status.get("Metadata Space Used"), + metadata_total=driver_status.get("Metadata Space Total"), + ) + + # convert all human-readable strings to bytes + for key, value in vals.copy().items(): + try: + vals[key + "_bytes"] = self._convert_to_bytes(value) + except ValueError as err: # unlikely to hit this from API info, but just to be safe + return { + "failed": True, + "values": vals, + "msg": "Could not interpret {} value '{}' as bytes: {}".format(key, value, str(err)) + } + + # determine the threshold percentages which usage should not exceed + for name, default in [("data", self.max_thinpool_data_usage_percent), + ("metadata", self.max_thinpool_meta_usage_percent)]: + percent = get_var(task_vars, "max_thinpool_" + name + "_usage_percent", default=default) + try: + vals[name + "_threshold"] = float(percent) + except ValueError: + return { + "failed": True, + "msg": "Specified thinpool {} usage limit '{}' is not a percentage".format(name, percent) + } + + # test whether the thresholds are exceeded + messages = [] + for name in ["data", "metadata"]: + vals[name + "_pct_used"] = 100 * vals[name + "_used_bytes"] / ( + vals[name + "_total_bytes"] + vals["vg_free_bytes"]) + if vals[name + "_pct_used"] > vals[name + "_threshold"]: + messages.append( + "Docker thinpool {name} usage percentage {pct:.1f} " + "is higher than threshold {thresh:.1f}.".format( + name=name, + pct=vals[name + "_pct_used"], + thresh=vals[name + "_threshold"], + )) + vals["failed"] = True + + vals["msg"] = "\n".join(messages or ["Thinpool usage is within thresholds."]) + return vals + + def _get_vg_free(self, pool, task_vars): + # Determine which VG to examine according to the pool name, the only indicator currently + # available from the Docker API driver info. We assume a name that looks like + # "vg--name-docker--pool"; vg and lv names with inner hyphens doubled, joined by a hyphen. + match = re.match(r'((?:[^-]|--)+)-(?!-)', pool) # matches up to the first single hyphen + if not match: # unlikely, but... be clear if we assumed wrong + raise OpenShiftCheckException( + "This host's Docker reports it is using a storage pool named '{}'.\n" + "However this name does not have the expected format of 'vgname-lvname'\n" + "so the available storage in the VG cannot be determined.".format(pool) + ) + vg_name = match.groups()[0].replace("--", "-") + vgs_cmd = "/sbin/vgs --noheadings -o vg_free --select vg_name=" + vg_name + # should return free space like " 12.00g" if the VG exists; empty if it does not + + ret = self.execute_module("command", {"_raw_params": vgs_cmd}, task_vars) + if ret.get("failed") or ret.get("rc", 0) != 0: + raise OpenShiftCheckException( + "Is LVM installed? Failed to run /sbin/vgs " + "to determine docker storage usage:\n" + ret.get("msg", "") + ) + size = ret.get("stdout", "").strip() + if not size: + raise OpenShiftCheckException( + "This host's Docker reports it is using a storage pool named '{pool}'.\n" + "which we expect to come from local VG '{vg}'.\n" + "However, /sbin/vgs did not find this VG. 
Is Docker for this host" + "running and using the storage on the host?".format(pool=pool, vg=vg_name) + ) + return size + + @staticmethod + def _convert_to_bytes(string): + units = dict( + b=1, + k=1024, + m=1024**2, + g=1024**3, + t=1024**4, + p=1024**5, + ) + string = string or "" + match = re.match(r'(\d+(?:\.\d+)?)\s*(\w)?', string) # float followed by optional unit + if not match: + raise ValueError("Cannot convert to a byte size: " + string) + + number, unit = match.groups() + multiplier = 1 if not unit else units.get(unit.lower()) + if not multiplier: + raise ValueError("Cannot convert to a byte size: " + string) + + return float(number) * multiplier diff --git a/roles/openshift_health_checker/openshift_checks/mixins.py b/roles/openshift_health_checker/openshift_checks/mixins.py index 20d160eaf..7f3d78cc4 100644 --- a/roles/openshift_health_checker/openshift_checks/mixins.py +++ b/roles/openshift_health_checker/openshift_checks/mixins.py @@ -1,4 +1,3 @@ -# pylint: disable=missing-docstring,too-few-public-methods """ Mixin classes meant to be used with subclasses of OpenShiftCheck. """ @@ -8,8 +7,49 @@ from openshift_checks import get_var class NotContainerizedMixin(object): """Mixin for checks that are only active when not in containerized mode.""" + # permanent # pylint: disable=too-few-public-methods + # Reason: The mixin is not intended to stand on its own as a class. @classmethod def is_active(cls, task_vars): + """Only run on non-containerized hosts.""" is_containerized = get_var(task_vars, "openshift", "common", "is_containerized") return super(NotContainerizedMixin, cls).is_active(task_vars) and not is_containerized + + +class DockerHostMixin(object): + """Mixin for checks that are only active on hosts that require Docker.""" + + dependencies = [] + + @classmethod + def is_active(cls, task_vars): + """Only run on hosts that depend on Docker.""" + is_containerized = get_var(task_vars, "openshift", "common", "is_containerized") + is_node = "nodes" in get_var(task_vars, "group_names", default=[]) + return super(DockerHostMixin, cls).is_active(task_vars) and (is_containerized or is_node) + + def ensure_dependencies(self, task_vars): + """ + Ensure that docker-related packages exist, but not on atomic hosts + (which would not be able to install but should already have them). + Returns: msg, failed, changed + """ + if get_var(task_vars, "openshift", "common", "is_atomic"): + return "", False, False + + # NOTE: we would use the "package" module but it's actually an action plugin + # and it's not clear how to invoke one of those. 
This is about the same anyway: + pkg_manager = get_var(task_vars, "ansible_pkg_mgr", default="yum") + result = self.module_executor(pkg_manager, {"name": self.dependencies, "state": "present"}, task_vars) + msg = result.get("msg", "") + if result.get("failed"): + if "No package matching" in msg: + msg = "Ensure that all required dependencies can be installed via `yum`.\n" + msg = ( + "Unable to install required packages on this host:\n" + " {deps}\n{msg}" + ).format(deps=',\n '.join(self.dependencies), msg=msg) + failed = result.get("failed", False) or result.get("rc", 0) != 0 + changed = result.get("changed", False) + return msg, failed, changed diff --git a/roles/openshift_health_checker/test/docker_image_availability_test.py b/roles/openshift_health_checker/test/docker_image_availability_test.py index 0379cafb5..197c65f51 100644 --- a/roles/openshift_health_checker/test/docker_image_availability_test.py +++ b/roles/openshift_health_checker/test/docker_image_availability_test.py @@ -3,19 +3,25 @@ import pytest from openshift_checks.docker_image_availability import DockerImageAvailability -@pytest.mark.parametrize('deployment_type,is_active', [ - ("origin", True), - ("openshift-enterprise", True), - ("enterprise", False), - ("online", False), - ("invalid", False), - ("", False), +@pytest.mark.parametrize('deployment_type, is_containerized, group_names, expect_active', [ + ("origin", True, [], True), + ("openshift-enterprise", True, [], True), + ("enterprise", True, [], False), + ("online", True, [], False), + ("invalid", True, [], False), + ("", True, [], False), + ("origin", False, [], False), + ("openshift-enterprise", False, [], False), + ("origin", False, ["nodes", "masters"], True), + ("openshift-enterprise", False, ["etcd"], False), ]) -def test_is_active(deployment_type, is_active): +def test_is_active(deployment_type, is_containerized, group_names, expect_active): task_vars = dict( + openshift=dict(common=dict(is_containerized=is_containerized)), openshift_deployment_type=deployment_type, + group_names=group_names, ) - assert DockerImageAvailability.is_active(task_vars=task_vars) == is_active + assert DockerImageAvailability.is_active(task_vars=task_vars) == expect_active @pytest.mark.parametrize("is_containerized,is_atomic", [ diff --git a/roles/openshift_health_checker/test/docker_storage_test.py b/roles/openshift_health_checker/test/docker_storage_test.py new file mode 100644 index 000000000..292a323db --- /dev/null +++ b/roles/openshift_health_checker/test/docker_storage_test.py @@ -0,0 +1,224 @@ +import pytest + +from openshift_checks import OpenShiftCheckException +from openshift_checks.docker_storage import DockerStorage + + +def dummy_check(execute_module=None): + def dummy_exec(self, status, task_vars): + raise Exception("dummy executor called") + return DockerStorage(execute_module=execute_module or dummy_exec) + + +@pytest.mark.parametrize('is_containerized, group_names, is_active', [ + (False, ["masters", "etcd"], False), + (False, ["masters", "nodes"], True), + (True, ["etcd"], True), +]) +def test_is_active(is_containerized, group_names, is_active): + task_vars = dict( + openshift=dict(common=dict(is_containerized=is_containerized)), + group_names=group_names, + ) + assert DockerStorage.is_active(task_vars=task_vars) == is_active + + +non_atomic_task_vars = {"openshift": {"common": {"is_atomic": False}}} + + +@pytest.mark.parametrize('docker_info, failed, expect_msg', [ + ( + dict(failed=True, msg="Error connecting: Error while fetching server API version"), + True, 
+ ["Is docker running on this host?"], + ), + ( + dict(msg="I have no info"), + True, + ["missing info"], + ), + ( + dict(info={ + "Driver": "devicemapper", + "DriverStatus": [("Pool Name", "docker-docker--pool")], + }), + False, + [], + ), + ( + dict(info={ + "Driver": "devicemapper", + "DriverStatus": [("Data loop file", "true")], + }), + True, + ["loopback devices with the Docker devicemapper storage driver"], + ), + ( + dict(info={ + "Driver": "overlay2", + "DriverStatus": [] + }), + False, + [], + ), + ( + dict(info={ + "Driver": "overlay", + }), + True, + ["unsupported Docker storage driver"], + ), + ( + dict(info={ + "Driver": "unsupported", + }), + True, + ["unsupported Docker storage driver"], + ), +]) +def test_check_storage_driver(docker_info, failed, expect_msg): + def execute_module(module_name, args, tmp=None, task_vars=None): + if module_name == "yum": + return {} + if module_name != "docker_info": + raise ValueError("not expecting module " + module_name) + return docker_info + + check = dummy_check(execute_module=execute_module) + check._check_dm_usage = lambda status, task_vars: dict() # stub out for this test + result = check.run(tmp=None, task_vars=non_atomic_task_vars) + + if failed: + assert result["failed"] + else: + assert not result.get("failed", False) + + for word in expect_msg: + assert word in result["msg"] + + +enough_space = { + "Pool Name": "docker--vg-docker--pool", + "Data Space Used": "19.92 MB", + "Data Space Total": "8.535 GB", + "Metadata Space Used": "40.96 kB", + "Metadata Space Total": "25.17 MB", +} + +not_enough_space = { + "Pool Name": "docker--vg-docker--pool", + "Data Space Used": "10 GB", + "Data Space Total": "10 GB", + "Metadata Space Used": "42 kB", + "Metadata Space Total": "43 kB", +} + + +@pytest.mark.parametrize('task_vars, driver_status, vg_free, success, expect_msg', [ + ( + {"max_thinpool_data_usage_percent": "not a float"}, + enough_space, + "12g", + False, + ["is not a percentage"], + ), + ( + {}, + {}, # empty values from driver status + "bogus", # also does not parse as bytes + False, + ["Could not interpret", "as bytes"], + ), + ( + {}, + enough_space, + "12.00g", + True, + [], + ), + ( + {}, + not_enough_space, + "0.00", + False, + ["data usage", "metadata usage", "higher than threshold"], + ), +]) +def test_dm_usage(task_vars, driver_status, vg_free, success, expect_msg): + check = dummy_check() + check._get_vg_free = lambda pool, task_vars: vg_free + result = check._check_dm_usage(driver_status, task_vars) + result_success = not result.get("failed") + + assert result_success is success + for msg in expect_msg: + assert msg in result["msg"] + + +@pytest.mark.parametrize('pool, command_returns, raises, returns', [ + ( + "foo-bar", + { # vgs missing + "msg": "[Errno 2] No such file or directory", + "failed": True, + "cmd": "/sbin/vgs", + "rc": 2, + }, + "Failed to run /sbin/vgs", + None, + ), + ( + "foo", # no hyphen in name - should not happen + {}, + "name does not have the expected format", + None, + ), + ( + "foo-bar", + dict(stdout=" 4.00g\n"), + None, + "4.00g", + ), + ( + "foo-bar", + dict(stdout="\n"), # no matching VG + "vgs did not find this VG", + None, + ) +]) +def test_vg_free(pool, command_returns, raises, returns): + def execute_module(module_name, args, tmp=None, task_vars=None): + if module_name != "command": + raise ValueError("not expecting module " + module_name) + return command_returns + + check = dummy_check(execute_module=execute_module) + if raises: + with pytest.raises(OpenShiftCheckException) as err: + 
check._get_vg_free(pool, {}) + assert raises in str(err.value) + else: + ret = check._get_vg_free(pool, {}) + assert ret == returns + + +@pytest.mark.parametrize('string, expect_bytes', [ + ("12", 12.0), + ("12 k", 12.0 * 1024), + ("42.42 MB", 42.42 * 1024**2), + ("12g", 12.0 * 1024**3), +]) +def test_convert_to_bytes(string, expect_bytes): + got = DockerStorage._convert_to_bytes(string) + assert got == expect_bytes + + +@pytest.mark.parametrize('string', [ + "bork", + "42 Qs", +]) +def test_convert_to_bytes_error(string): + with pytest.raises(ValueError) as err: + DockerStorage._convert_to_bytes(string) + assert "Cannot convert" in str(err.value) + assert string in str(err.value) diff --git a/roles/openshift_hosted/tasks/registry/storage/object_storage.yml b/roles/openshift_hosted/tasks/registry/storage/object_storage.yml index 3dde83bee..8aaba0f3c 100644 --- a/roles/openshift_hosted/tasks/registry/storage/object_storage.yml +++ b/roles/openshift_hosted/tasks/registry/storage/object_storage.yml @@ -1,20 +1,4 @@ --- -- name: Assert supported openshift.hosted.registry.storage.provider - assert: - that: - - openshift.hosted.registry.storage.provider in ['azure_blob', 's3', 'swift'] - msg: > - Object Storage Provider: "{{ openshift.hosted.registry.storage.provider }}" - is not currently supported - -- name: Assert implemented openshift.hosted.registry.storage.provider - assert: - that: - - openshift.hosted.registry.storage.provider not in ['azure_blob', 'swift'] - msg: > - Support for provider: "{{ openshift.hosted.registry.storage.provider }}" - not implemented yet - - include: s3.yml when: openshift.hosted.registry.storage.provider == 's3' diff --git a/roles/openshift_logging/tasks/generate_certs.yaml b/roles/openshift_logging/tasks/generate_certs.yaml index 040356e3d..9c8f0986a 100644 --- a/roles/openshift_logging/tasks/generate_certs.yaml +++ b/roles/openshift_logging/tasks/generate_certs.yaml @@ -17,7 +17,7 @@ - name: Generate certificates command: > - {{ openshift.common.admin_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig ca create-signer-cert + {{ openshift.common.client_binary }} adm --config={{ mktemp.stdout }}/admin.kubeconfig ca create-signer-cert --key={{generated_certs_dir}}/ca.key --cert={{generated_certs_dir}}/ca.crt --serial={{generated_certs_dir}}/ca.serial.txt --name=logging-signer-test check_mode: no diff --git a/roles/openshift_logging/tasks/procure_server_certs.yaml b/roles/openshift_logging/tasks/procure_server_certs.yaml index 7ab140357..00de0ca06 100644 --- a/roles/openshift_logging/tasks/procure_server_certs.yaml +++ b/roles/openshift_logging/tasks/procure_server_certs.yaml @@ -27,7 +27,7 @@ - name: Creating signed server cert and key for {{ cert_info.procure_component }} command: > - {{ openshift.common.admin_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig ca create-server-cert + {{ openshift.common.client_binary }} adm --config={{ mktemp.stdout }}/admin.kubeconfig ca create-server-cert --key={{generated_certs_dir}}/{{cert_info.procure_component}}.key --cert={{generated_certs_dir}}/{{cert_info.procure_component}}.crt --hostnames={{cert_info.hostnames|quote}} --signer-cert={{generated_certs_dir}}/ca.crt --signer-key={{generated_certs_dir}}/ca.key --signer-serial={{generated_certs_dir}}/ca.serial.txt diff --git a/roles/openshift_logging_elasticsearch/tasks/main.yaml b/roles/openshift_logging_elasticsearch/tasks/main.yaml index 7e88a7498..f1d15b76d 100644 --- a/roles/openshift_logging_elasticsearch/tasks/main.yaml +++ 
b/roles/openshift_logging_elasticsearch/tasks/main.yaml @@ -217,7 +217,7 @@ access_modes: "{{ openshift_logging_elasticsearch_pvc_access_modes | list }}" pv_selector: "{{ openshift_logging_elasticsearch_pvc_pv_selector }}" annotations: - volume.alpha.kubernetes.io/storage-class: "dynamic" + volume.beta.kubernetes.io/storage-class: "dynamic" when: - openshift_logging_elasticsearch_storage_type == "pvc" - openshift_logging_elasticsearch_pvc_dynamic diff --git a/roles/openshift_logging_elasticsearch/templates/elasticsearch.yml.j2 b/roles/openshift_logging_elasticsearch/templates/elasticsearch.yml.j2 index 681f5a7e6..58c325c8a 100644 --- a/roles/openshift_logging_elasticsearch/templates/elasticsearch.yml.j2 +++ b/roles/openshift_logging_elasticsearch/templates/elasticsearch.yml.j2 @@ -38,6 +38,7 @@ gateway: io.fabric8.elasticsearch.authentication.users: ["system.logging.kibana", "system.logging.fluentd", "system.logging.curator", "system.admin"] io.fabric8.elasticsearch.kibana.mapping.app: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json io.fabric8.elasticsearch.kibana.mapping.ops: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json +io.fabric8.elasticsearch.kibana.mapping.empty: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json openshift.config: use_common_data_model: true diff --git a/roles/openshift_logging_elasticsearch/templates/es.j2 b/roles/openshift_logging_elasticsearch/templates/es.j2 index e129205ca..bd2289f0d 100644 --- a/roles/openshift_logging_elasticsearch/templates/es.j2 +++ b/roles/openshift_logging_elasticsearch/templates/es.j2 @@ -84,6 +84,9 @@ spec: name: "RECOVER_AFTER_TIME" value: "{{openshift_logging_elasticsearch_recover_after_time}}" - + name: "READINESS_PROBE_TIMEOUT" + value: "30" + - name: "IS_MASTER" value: "{% if deploy_type in ['data-master', 'master'] %}true{% else %}false{% endif %}" @@ -104,8 +107,8 @@ spec: exec: command: - "/usr/share/elasticsearch/probe/readiness.sh" - initialDelaySeconds: 5 - timeoutSeconds: 4 + initialDelaySeconds: 10 + timeoutSeconds: 30 periodSeconds: 5 volumes: - name: elasticsearch diff --git a/roles/openshift_logging_kibana/tasks/main.yaml b/roles/openshift_logging_kibana/tasks/main.yaml index d13255386..bae55ffaa 100644 --- a/roles/openshift_logging_kibana/tasks/main.yaml +++ b/roles/openshift_logging_kibana/tasks/main.yaml @@ -43,6 +43,31 @@ kibana_name: "{{ 'logging-kibana' ~ ( (openshift_logging_kibana_ops_deployment | default(false) | bool) | ternary('-ops', '')) }}" kibana_component: "{{ 'kibana' ~ ( (openshift_logging_kibana_ops_deployment | default(false) | bool) | ternary('-ops', '')) }}" +# Check {{ generated_certs_dir }} for session_secret and oauth_secret +- name: Checking for session_secret + stat: path="{{generated_certs_dir}}/session_secret" + register: session_secret_file + +- name: Checking for oauth_secret + stat: path="{{generated_certs_dir}}/oauth_secret" + register: oauth_secret_file + +# gen session_secret if necessary +- name: Generate session secret + copy: + content: "{{ 200 | oo_random_word }}" + dest: "{{ generated_certs_dir }}/session_secret" + when: + - not session_secret_file.stat.exists + +# gen oauth_secret if necessary +- name: Generate oauth secret + copy: + content: "{{ 64 | oo_random_word }}" + dest: "{{ generated_certs_dir }}/oauth_secret" + when: + - not oauth_secret_file.stat.exists + - name: Retrieving the cert to use when generating secrets for the logging components slurp: src: "{{ 
generated_certs_dir }}/{{ item.file }}" @@ -52,6 +77,8 @@ - { name: "kibana_internal_key", file: "kibana-internal.key"} - { name: "kibana_internal_cert", file: "kibana-internal.crt"} - { name: "server_tls", file: "server-tls.json"} + - { name: "session_secret", file: "session_secret" } + - { name: "oauth_secret", file: "oauth_secret" } # services - name: Set {{ kibana_name }} service @@ -120,19 +147,16 @@ files: - "{{ tempdir }}/templates/kibana-route.yaml" -# gen session_secret -- if necessary -# TODO: make idempotent -- name: Generate proxy session - set_fact: - session_secret: "{{ 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789' | random_word(200) }}" - check_mode: no +# preserve list of current hostnames +- name: Get current oauthclient hostnames + oc_obj: + state: list + name: kibana-proxy + namespace: "{{ openshift_logging_namespace }}" + kind: oauthclient + register: oauth_client_list -# gen oauth_secret -- if necessary -# TODO: make idempotent -- name: Generate oauth client secret - set_fact: - oauth_secret: "{{ 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789' | random_word(64) }}" - check_mode: no +- set_fact: proxy_hostnames={{ oauth_client_list.results.results[0].redirectURIs | default ([]) + ['https://' ~ openshift_logging_kibana_hostname] }} # create oauth client - name: Create oauth-client template @@ -140,8 +164,8 @@ src: oauth-client.j2 dest: "{{ tempdir }}/templates/oauth-client.yml" vars: - kibana_hostname: "{{ openshift_logging_kibana_hostname }}" - secret: "{{ oauth_secret }}" + kibana_hostnames: "{{ proxy_hostnames | unique }}" + secret: "{{ key_pairs | entry_from_named_pair('oauth_secret') | b64decode }}" - name: Set kibana-proxy oauth-client oc_obj: @@ -183,9 +207,9 @@ # path: "{{ generated_certs_dir }}/server-tls.json" contents: - path: oauth-secret - data: "{{ oauth_secret }}" + data: "{{ key_pairs | entry_from_named_pair('oauth_secret') | b64decode }}" - path: session-secret - data: "{{ session_secret }}" + data: "{{ key_pairs | entry_from_named_pair('session_secret') | b64decode }}" - path: server-key data: "{{ key_pairs | entry_from_named_pair('kibana_internal_key') | b64decode }}" - path: server-cert diff --git a/roles/openshift_logging_kibana/templates/oauth-client.j2 b/roles/openshift_logging_kibana/templates/oauth-client.j2 index 6767f6d89..c80ff3d30 100644 --- a/roles/openshift_logging_kibana/templates/oauth-client.j2 +++ b/roles/openshift_logging_kibana/templates/oauth-client.j2 @@ -4,9 +4,11 @@ metadata: name: kibana-proxy labels: logging-infra: support -secret: {{secret}} +secret: {{ secret }} redirectURIs: -- https://{{kibana_hostname}} +{% for host in kibana_hostnames %} +- {{ host }} +{% endfor %} scopeRestrictions: - literals: - user:info diff --git a/roles/openshift_master_certificates/tasks/main.yml b/roles/openshift_master_certificates/tasks/main.yml index 9706da24b..62413536b 100644 --- a/roles/openshift_master_certificates/tasks/main.yml +++ b/roles/openshift_master_certificates/tasks/main.yml @@ -71,7 +71,7 @@ delegate_to: "{{ openshift_ca_host }}" run_once: true -- name: Generate the master client config +- name: Generate the loopback master client config command: > {{ hostvars[openshift_ca_host].openshift.common.client_binary }} adm create-api-client-config {% for named_ca_certificate in openshift.master.named_certificates | default([]) | oo_collect('cafile') %} @@ -80,8 +80,8 @@ --certificate-authority={{ openshift_ca_cert }} --client-dir={{ openshift_generated_configs_dir }}/master-{{ 
hostvars[item].openshift.common.hostname }} --groups=system:masters,system:openshift-master - --master={{ openshift.master.api_url }} - --public-master={{ openshift.master.public_api_url }} + --master={{ hostvars[item].openshift.master.loopback_api_url }} + --public-master={{ hostvars[item].openshift.master.loopback_api_url }} --signer-cert={{ openshift_ca_cert }} --signer-key={{ openshift_ca_key }} --signer-serial={{ openshift_ca_serial }} diff --git a/roles/openshift_metrics/tasks/generate_certificates.yaml b/roles/openshift_metrics/tasks/generate_certificates.yaml index 7af3f9467..3dc15d58b 100644 --- a/roles/openshift_metrics/tasks/generate_certificates.yaml +++ b/roles/openshift_metrics/tasks/generate_certificates.yaml @@ -1,7 +1,7 @@ --- - name: generate ca certificate chain command: > - {{ openshift.common.admin_binary }} ca create-signer-cert + {{ openshift.common.client_binary }} adm ca create-signer-cert --config={{ mktemp.stdout }}/admin.kubeconfig --key='{{ mktemp.stdout }}/ca.key' --cert='{{ mktemp.stdout }}/ca.crt' diff --git a/roles/openshift_metrics/tasks/install_cassandra.yaml b/roles/openshift_metrics/tasks/install_cassandra.yaml index 3b4e8560f..0b6fe7c9e 100644 --- a/roles/openshift_metrics/tasks/install_cassandra.yaml +++ b/roles/openshift_metrics/tasks/install_cassandra.yaml @@ -50,7 +50,7 @@ labels: metrics-infra: hawkular-cassandra annotations: - volume.alpha.kubernetes.io/storage-class: dynamic + volume.beta.kubernetes.io/storage-class: dynamic access_modes: "{{ openshift_metrics_cassandra_pvc_access | list }}" size: "{{ openshift_metrics_cassandra_pvc_size }}" with_sequence: count={{ openshift_metrics_cassandra_replicas }} diff --git a/roles/openshift_metrics/tasks/setup_certificate.yaml b/roles/openshift_metrics/tasks/setup_certificate.yaml index 199968579..2d880f4d6 100644 --- a/roles/openshift_metrics/tasks/setup_certificate.yaml +++ b/roles/openshift_metrics/tasks/setup_certificate.yaml @@ -1,7 +1,7 @@ --- - name: generate {{ component }} keys command: > - {{ openshift.common.admin_binary }} ca create-server-cert + {{ openshift.common.client_binary }} adm ca create-server-cert --config={{ mktemp.stdout }}/admin.kubeconfig --key='{{ mktemp.stdout }}/{{ component }}.key' --cert='{{ mktemp.stdout }}/{{ component }}.crt' diff --git a/roles/openshift_node/handlers/main.yml b/roles/openshift_node/handlers/main.yml index 4dcf1eef8..a6bd12d4e 100644 --- a/roles/openshift_node/handlers/main.yml +++ b/roles/openshift_node/handlers/main.yml @@ -1,6 +1,8 @@ --- - name: restart openvswitch - systemd: name=openvswitch state=restarted + systemd: + name: openvswitch + state: restarted when: (not skip_node_svc_handlers | default(False) | bool) and not (ovs_service_status_changed | default(false) | bool) and openshift.common.use_openshift_sdn | bool notify: - restart openvswitch pause @@ -10,8 +12,13 @@ when: (not skip_node_svc_handlers | default(False) | bool) and openshift.common.is_containerized | bool - name: restart node - systemd: name={{ openshift.common.service_type }}-node state=restarted + systemd: + name: "{{ openshift.common.service_type }}-node" + state: restarted when: (not skip_node_svc_handlers | default(False) | bool) and not (node_service_status_changed | default(false) | bool) - name: reload sysctl.conf command: /sbin/sysctl -p + +- name: reload systemd units + command: systemctl daemon-reload diff --git a/roles/openshift_node/tasks/systemd_units.yml b/roles/openshift_node/tasks/systemd_units.yml index f58c803c4..e3ce5df3d 100644 --- 
a/roles/openshift_node/tasks/systemd_units.yml +++ b/roles/openshift_node/tasks/systemd_units.yml @@ -8,6 +8,9 @@ src: openshift.docker.node.dep.service register: install_node_dep_result when: openshift.common.is_containerized | bool + notify: + - reload systemd units + - restart node - block: - name: Pre-pull node image @@ -21,6 +24,9 @@ dest: "/etc/systemd/system/{{ openshift.common.service_type }}-node.service" src: openshift.docker.node.service register: install_node_result + notify: + - reload systemd units + - restart node when: - openshift.common.is_containerized | bool - not openshift.common.is_node_system_container | bool @@ -31,6 +37,9 @@ src: "{{ openshift.common.service_type }}-node.service.j2" register: install_node_result when: not openshift.common.is_containerized | bool + notify: + - reload systemd units + - restart node - name: Create the openvswitch service env file template: @@ -39,6 +48,7 @@ when: openshift.common.is_containerized | bool register: install_ovs_sysconfig notify: + - reload systemd units - restart openvswitch - name: Install Node system container @@ -67,6 +77,7 @@ when: openshift.common.use_openshift_sdn | default(true) | bool register: install_oom_fix_result notify: + - reload systemd units - restart openvswitch - block: @@ -81,6 +92,7 @@ dest: "/etc/systemd/system/openvswitch.service" src: openvswitch.docker.service notify: + - reload systemd units - restart openvswitch when: - openshift.common.is_containerized | bool @@ -119,8 +131,3 @@ when: ('http_proxy' in openshift.common and openshift.common.http_proxy != '') notify: - restart node - -- name: Reload systemd units - command: systemctl daemon-reload - notify: - - restart node diff --git a/roles/openshift_node_upgrade/tasks/rpm_upgrade.yml b/roles/openshift_node_upgrade/tasks/rpm_upgrade.yml index 480e87d58..06a2d16ba 100644 --- a/roles/openshift_node_upgrade/tasks/rpm_upgrade.yml +++ b/roles/openshift_node_upgrade/tasks/rpm_upgrade.yml @@ -12,3 +12,18 @@ - name: Ensure python-yaml present for config upgrade package: name=PyYAML state=present when: not openshift.common.is_atomic | bool + +- name: Install Node service file + template: + dest: "/etc/systemd/system/{{ openshift.common.service_type }}-node.service" + src: "{{ openshift.common.service_type }}-node.service.j2" + register: l_node_unit + +# NOTE: This is needed to make sure we are using the correct set +# of systemd unit files. The RPMs lay down defaults but +# the install/upgrade may override them in /etc/systemd/system/. +# NOTE: We don't use the systemd module as some versions of the module +# require a service to be part of the call. +- name: Reload systemd units + command: systemctl daemon-reload + when: l_node_unit | changed diff --git a/roles/openshift_node_upgrade/templates/atomic-openshift-node.service.j2 b/roles/openshift_node_upgrade/templates/atomic-openshift-node.service.j2 new file mode 120000 index 000000000..6041fb13a --- /dev/null +++ b/roles/openshift_node_upgrade/templates/atomic-openshift-node.service.j2 @@ -0,0 +1 @@ +../../openshift_node/templates/atomic-openshift-node.service.j2
\ No newline at end of file
diff --git a/roles/openshift_node_upgrade/templates/origin-node.service.j2 b/roles/openshift_node_upgrade/templates/origin-node.service.j2
new file mode 120000
index 000000000..79c45a303
--- /dev/null
+++ b/roles/openshift_node_upgrade/templates/origin-node.service.j2
@@ -0,0 +1 @@
+../../openshift_node/templates/origin-node.service.j2
\ No newline at end of file
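
Not part of the patch itself: for readers reviewing the new `DockerStorage` check added above, here is a minimal standalone Python sketch of the storage math it performs, using the pool-name regex, byte-conversion table, and usage formula from the diff, with sample values taken from the accompanying `docker_storage_test.py`. The helper names (`to_bytes`, `vg_name_from_pool`, `thinpool_pct_used`) are illustrative only and do not exist in the repository.

```python
# Illustrative sketch only -- these helpers are not in openshift-ansible; they mirror
# the logic in roles/openshift_health_checker/openshift_checks/docker_storage.py.
import re

UNITS = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3, "t": 1024**4, "p": 1024**5}


def to_bytes(size):
    """Parse Docker/vgs size strings such as '19.92 MB' or '12.00g' into bytes."""
    match = re.match(r'(\d+(?:\.\d+)?)\s*(\w)?', size or "")
    if not match:
        raise ValueError("Cannot convert to a byte size: " + size)
    number, unit = match.groups()
    multiplier = 1 if not unit else UNITS.get(unit.lower())
    if not multiplier:
        raise ValueError("Cannot convert to a byte size: " + size)
    return float(number) * multiplier


def vg_name_from_pool(pool):
    """Decode the VG name from a pool name like 'docker--vg-docker--pool':
    LVM doubles hyphens inside names and joins vg and lv with a single hyphen."""
    match = re.match(r'((?:[^-]|--)+)-(?!-)', pool)  # match up to the first single hyphen
    if not match:
        raise ValueError("unexpected pool name: " + pool)
    return match.group(1).replace("--", "-")


def thinpool_pct_used(used, total, vg_free):
    """Usage percentage, counting free VG space as room for the thin pool to grow."""
    return 100 * to_bytes(used) / (to_bytes(total) + to_bytes(vg_free))


# Values borrowed from the 'enough_space' fixture in docker_storage_test.py.
print(vg_name_from_pool("docker--vg-docker--pool"))           # -> docker-vg
pct = thinpool_pct_used("19.92 MB", "8.535 GB", "12.00g")
print("data usage: %.1f%% (default threshold 90.0%%)" % pct)  # well below threshold
```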