186 files changed, 3030 insertions, 1950 deletions
diff --git a/.redhat-ci.sh b/.redhat-ci.sh index 29d64e4d5..fce8c1d52 100755 --- a/.redhat-ci.sh +++ b/.redhat-ci.sh @@ -1,10 +1,9 @@ #!/bin/bash set -xeuo pipefail -# F25 currently has 2.2.1, so install from pypi -pip install ansible==2.2.2.0 +pip install -r requirements.txt -# do a simple ping to make sure the nodes are available +# ping the nodes to check they're responding and register their ostree versions ansible -vvv -i .redhat-ci.inventory nodes -a 'rpm-ostree status' upload_journals() { diff --git a/.redhat-ci.yml b/.redhat-ci.yml index 887cc6ef0..6dac7b256 100644 --- a/.redhat-ci.yml +++ b/.redhat-ci.yml @@ -24,7 +24,7 @@ env: OPENSHIFT_IMAGE_TAG: v3.6.0-alpha.1 tests: - - sh .redhat-ci.sh + - ./.redhat-ci.sh artifacts: - journals/ diff --git a/.tito/packages/openshift-ansible b/.tito/packages/openshift-ansible index 03abd6d62..837491217 100644 --- a/.tito/packages/openshift-ansible +++ b/.tito/packages/openshift-ansible @@ -1 +1 @@ -3.6.39-1 ./ +3.6.58-1 ./ @@ -42,3 +42,22 @@ The progress of the build can be monitored with: Once built, the image will be visible in the Image Stream created by the same command: oc describe imagestream openshift-ansible + +## Build the Atomic System Container + +A system container runs using runC instead of Docker and it is managed +by the [atomic](https://github.com/projectatomic/atomic/) tool. As it +doesn't require Docker to run, the installer can run on a node of the +cluster without interfering with the Docker daemon that is configured +by the installer itself. + +The first step is to build the [container image](#build-an-openshift-ansible-container-image) +as described before. The container image already contains all the +required files to run as a system container. + +Once the container image is built, we can import it into the OSTree +storage: + +``` +atomic pull --storage ostree docker:openshift/openshift-ansible:latest +``` diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index a000802e2..1c0fa73ad 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -3,92 +3,103 @@ Thank you for contributing to OpenShift Ansible. This document explains how the repository is organized, and how to submit contributions. -## Introduction +**Table of Contents** -Before submitting code changes, get familiarized with these documents: +<!-- TOC depthFrom:2 depthTo:4 withLinks:1 updateOnSave:1 orderedList:0 --> -- [Core Concepts](https://github.com/openshift/openshift-ansible/blob/master/docs/core_concepts_guide.adoc) -- [Best Practices Guide](https://github.com/openshift/openshift-ansible/blob/master/docs/best_practices_guide.adoc) -- [Style Guide](https://github.com/openshift/openshift-ansible/blob/master/docs/style_guide.adoc) +- [Introduction](#introduction) +- [Submitting contributions](#submitting-contributions) +- [Running tests and other verification tasks](#running-tests-and-other-verification-tasks) + - [Running only specific tasks](#running-only-specific-tasks) +- [Appendix](#appendix) + - [Tricks](#tricks) + - [Activating a virtualenv managed by tox](#activating-a-virtualenv-managed-by-tox) + - [Limiting the unit tests that are run](#limiting-the-unit-tests-that-are-run) + - [Finding unused Python code](#finding-unused-python-code) -## Repository structure +<!-- /TOC --> -### Ansible +## Introduction -``` -. -├── inventory Contains dynamic inventory scripts, and examples of -│ Ansible inventories. -├── library Contains Python modules used by the playbooks. -├── playbooks Contains Ansible playbooks targeting multiple use cases. 
-└── roles Contains Ansible roles, units of shared behavior among - playbooks. -``` +Before submitting code changes, familiarize yourself with these documents: -#### Ansible plugins +- [Core Concepts](docs/core_concepts_guide.adoc) +- [Best Practices Guide](docs/best_practices_guide.adoc) +- [Style Guide](docs/style_guide.adoc) +- [Repository Structure](docs/repo_structure.md) -These are plugins used in playbooks and roles: +Please consider opening an issue, or discussing it on an existing one, if you are +planning to work on something larger, to make sure your time investment is +something that can be merged into the repository. -``` -. -├── ansible-profile -├── callback_plugins -├── filter_plugins -└── lookup_plugins -``` +## Submitting contributions -### Scripts +1. [Fork](https://help.github.com/articles/fork-a-repo/) this repository and + [create a work branch in your fork](https://help.github.com/articles/github-flow/). +2. Go through the documents mentioned in the [introduction](#introduction). +3. Make changes and commit. You may want to review your changes and + [run tests](#running-tests-and-other-verification-tasks) before pushing your + branch. +4. [Open a Pull Request](https://help.github.com/articles/creating-a-pull-request/). + Give it a meaningful title explaining the changes you are proposing, and + then add further details in the description. + +One of the repository maintainers will then review the PR and trigger tests, and +possibly start a discussion that goes on until the PR is ready to be merged. +This process is further explained in the +[Pull Request process](docs/pull_requests.md) document. + +If you get no timely feedback from a project contributor / maintainer, sorry for +the delay. You can help us speed up triaging, reviewing and eventually merging +contributions by requesting a review or tagging in a comment +[someone who has worked on the files](https://help.github.com/articles/tracing-changes-in-a-file/) +you're proposing changes to. -``` -. -├── bin [DEPRECATED] Contains the `bin/cluster` script, a -│ wrapper around the Ansible playbooks that ensures proper -│ configuration, and facilitates installing, updating, -│ destroying and configuring OpenShift clusters. -│ Note: this tool is kept in the repository for legacy -│ reasons and will be removed at some point. -└── utils Contains the `atomic-openshift-installer` command, an - interactive CLI utility to install OpenShift across a - set of hosts. -``` +--- -### Documentation +**Note**: during the review process, you may add new commits to address review +comments or change existing commits. However, before getting your PR merged, +please [squash commits](https://help.github.com/articles/about-git-rebase/) to a +minimum set of meaningful commits. -``` -. -└── docs Contains documentation for this repository. -``` +If you've broken your work up into a set of sequential changes and each commit +passes the tests on its own, then that's fine. If you've got commits fixing typos +or other problems introduced by previous commits in the same PR, then those +should be squashed before merging (see the rebase sketch below). -### Tests +If you are new to Git, these links might help: -``` -. -└── test Contains tests. -``` +- https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History +- http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html -## Building openshift-ansible RPMs and container images +--- -See the [build instructions in BUILD.md](BUILD.md). 
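To illustrate the squashing guidance above, here is a minimal sketch, assuming the work branch is checked out and was created from `origin/master`; the branch names are illustrative, not project policy:

```sh
# Fetch the latest upstream state, then interactively rebase the branch;
# in the editor, mark typo/fixup commits as "squash" or "fixup".
git fetch origin
git rebase -i origin/master

# The branch history was rewritten, so updating the PR requires a forced push.
git push --force-with-lease
```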
+## Running tests and other verification tasks -## Running tests +We use [`tox`](http://readthedocs.org/docs/tox/) to manage virtualenvs where +tests and other verification tasks are run. We use +[`pytest`](https://docs.pytest.org/) as our test runner. -We use [tox](http://readthedocs.org/docs/tox/) to manage virtualenvs and run -tests. Alternatively, tests can be run using -[detox](https://pypi.python.org/pypi/detox/) which allows for running tests in -parallel. +As an alternative to `tox`, one can use +[`detox`](https://pypi.python.org/pypi/detox/) for running verification tasks in +parallel. Note that while `detox` may be useful in development to make use of +multiple cores, it can be buggy at times and produce flakes, so we do not use +it in our [CI](docs/continuous_integration.md) jobs. -Note: while `detox` may be useful in development to make use of multiple cores, -it can be buggy at times and produce flakes, thus we do not use it in our CI. +``` +pip install tox +``` +To run all tests and verification tasks: ``` -pip install tox detox +tox ``` --- -Note: before running `tox` or `detox`, ensure that the only virtualenvs within -the repository root are the ones managed by `tox`, those in a `.tox` +**Note**: before running `tox` or `detox`, ensure that the only virtualenvs +within the repository root are the ones managed by `tox`, those in a `.tox` subdirectory. Use this command to list paths that are likely part of a virtualenv not managed @@ -105,45 +116,52 @@ potentially fail. --- -List the test environments available: +### Running only specific tasks + +The [tox configuration](tox.ini) describes environments based on either Python 2 +or Python 3. Each environment is associated with a command that is executed in +the context of a virtualenv, with a specific version of Python, installed +dependencies, environment variables and so on. To list the environments +available: ``` tox -l ``` -Run all of the tests and linters with: +To run the command of a particular environment, e.g., `flake8` on Python 2.7: ``` -tox +tox -e py27-flake8 ``` -Run all of the tests linters in parallel (may flake): +To run the command of a particular environment in a clean virtualenv, e.g., +`pylint` on Python 3.5: ``` -detox +tox -re py35-pylint ``` -### Run only unit tests or some specific linter +The `-r` flag recreates existing environments, which is useful for forcing dependencies to +be reinstalled. -Run a particular test environment (`flake8` on Python 2.7 in this case): +## Appendix -``` -tox -e py27-flake8 -``` +### Tricks -Run a particular test environment in a clean virtualenv (`pylint` on Python 3.5 -in this case): +Here are some useful tips that might improve your workflow while working on this repository. -``` -tox -re py35-pylint -``` +#### Git Hooks -### Tricks +Git hooks are included in this repository to aid in development. Check +out the README in the +[hack/hooks](http://github.com/openshift/openshift-ansible/blob/master/hack/hooks/README.md) +directory for more information. 
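As a concrete example of that workflow, here is a minimal sketch of installing the `verify_generated_modules` hook added later in this changeset; the relative symlink path assumes the commands are run from the repository root:

```sh
# Symlink the hook into the local clone and make sure it is executable,
# following the two steps in the hack/hooks README.
ln -s ../../hack/hooks/verify_generated_modules/pre-commit .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
```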
#### Activating a virtualenv managed by tox -If you want to enter a virtualenv created by tox to do additional -testing/debugging (py27-flake8 env in this case): +If you want to enter a virtualenv created by tox to do additional debugging, you +can activate it just like any other virtualenv (py27-flake8 environment in this +example): ``` source .tox/py27-flake8/bin/activate @@ -182,32 +200,7 @@ $ tox -e py27-unit -- roles/lib_openshift/src/test/unit/test_oc_project.py -k te Among other things, this can be used for instance to see the coverage levels of individual modules as we work on improving tests. -## Submitting contributions - -1. Go through the guides from the [introduction](#Introduction). -2. Fork this repository, and create a work branch in your fork. -3. Make changes and commit. You may want to review your changes and run tests - before pushing your branch. -4. Open a Pull Request. - -One of the repository maintainers will then review the PR and submit it for -testing. - -The `default` test job is publicly accessible at -https://ci.openshift.redhat.com/jenkins/job/openshift-ansible/. The other jobs -are run on a different Jenkins host that is not publicly accessible, however the -test results are posted to S3 buckets when complete. - -The test output of each job is also posted to the Pull Request as comments. - -A trend of the time taken by merge jobs is available at -https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_openshift_ansible/buildTimeTrend. - ---- - -## Appendix - -### Finding unused Python code +#### Finding unused Python code If you are contributing Python code, you can use the tool [`vulture`](https://pypi.python.org/pypi/vulture) to verify that you are not @@ -54,7 +54,7 @@ you are not running a stable release. *** Requirements: - - Ansible >= 2.2.0 + - Ansible >= 2.2.2.0 - Jinja >= 2.7 - pyOpenSSL - python-lxml @@ -83,7 +83,10 @@ See [README_CONTAINER_IMAGE.md](README_CONTAINER_IMAGE.md) for information on ho See the [hooks documentation](HOOKS.md). - ## Contributing See the [contribution guide](CONTRIBUTING.md). + +## Building openshift-ansible RPMs and container images + +See the [build instructions](BUILD.md). diff --git a/README_CONTAINER_IMAGE.md b/README_CONTAINER_IMAGE.md index 29a99db3f..b78073100 100644 --- a/README_CONTAINER_IMAGE.md +++ b/README_CONTAINER_IMAGE.md @@ -47,3 +47,26 @@ Here is a detailed explanation of the options used in the command above: Further usage examples are available in the [examples directory](examples/) with samples of how to use the image from within OpenShift. Additional usage information for images built from `playbook2image` like this one can be found in the [playbook2image examples](https://github.com/aweiteka/playbook2image/tree/master/examples). + +## Running openshift-ansible as a System Container + +To build the system container, see [BUILD.md](BUILD.md). + +Copy the host machine's SSH public key to the master and node machines in the cluster. + +If the inventory file references additional files, they can be placed under `/var/lib/openshift-installer`, which is bind-mounted into the container from the host (the host path is controllable with `VAR_LIB_OPENSHIFT_INSTALLER`). + +Run the Ansible system container: + +```sh +atomic install --system --set INVENTORY_FILE=$(pwd)/inventory.origin openshift/openshift-ansible +systemctl start openshift-ansible +``` + +The `INVENTORY_FILE` variable tells the installer which inventory file on the host will be bind-mounted inside the container. 
In the example above, a file called `inventory.origin` in the current directory is used as the inventory file for the installer. + +Finally, to clean up the container: + +``` +atomic uninstall openshift-ansible +``` diff --git a/callback_plugins/aa_version_requirement.py b/callback_plugins/aa_version_requirement.py index f31445381..20bdd9056 100644 --- a/callback_plugins/aa_version_requirement.py +++ b/callback_plugins/aa_version_requirement.py @@ -7,7 +7,6 @@ The plugin is named with leading `aa_` to ensure this plugin is loaded first (alphanumerically) by Ansible. """ import sys -from subprocess import check_output from ansible import __version__ if __version__ < '2.0': @@ -30,13 +29,8 @@ else: # Set to minimum required Ansible version -REQUIRED_VERSION = '2.2.0.0' -DESCRIPTION = "Supported versions: %s or newer (except 2.2.1.0)" % REQUIRED_VERSION -FAIL_ON_2_2_1_0 = "There are known issues with Ansible version 2.2.1.0 which " \ - "are impacting OpenShift-Ansible. Please use Ansible " \ - "version 2.2.0.0 or a version greater than 2.2.1.0. " \ - "See this issue for more details: " \ - "https://github.com/openshift/openshift-ansible/issues/3111" +REQUIRED_VERSION = '2.2.2.0' +DESCRIPTION = "Supported versions: %s or newer" % REQUIRED_VERSION def version_requirement(version): @@ -64,13 +58,3 @@ class CallbackModule(CallbackBase): 'FATAL: Current Ansible version (%s) is not supported. %s' % (__version__, DESCRIPTION), color='red') sys.exit(1) - - if __version__ == '2.2.1.0': - rpm_ver = str(check_output(["rpm", "-qa", "ansible"])) - patched_ansible = '2.2.1.0-2' - - if patched_ansible not in rpm_ver: - display( - 'FATAL: Current Ansible version (%s) is not supported. %s' - % (__version__, FAIL_ON_2_2_1_0), color='red') - sys.exit(1) diff --git a/docs/best_practices_guide.adoc b/docs/best_practices_guide.adoc index 7f3d85d40..4ecd535e4 100644 --- a/docs/best_practices_guide.adoc +++ b/docs/best_practices_guide.adoc @@ -11,22 +11,6 @@ All new pull requests created against this repository MUST comply with this guid This guide complies with https://www.ietf.org/rfc/rfc2119.txt[RFC2119]. -== Pull Requests - - - -[[All-pull-requests-MUST-pass-the-build-bot-before-they-are-merged]] -[cols="2v,v"] -|=== -| <<All-pull-requests-MUST-pass-the-build-bot-before-they-are-merged, Rule>> -| All pull requests MUST pass the build bot *before* they are merged. -|=== - -The purpose of this rule is to avoid cases where the build bot will fail pull requests for code modified in a previous pull request. - -The tooling is flexible enough that exceptions can be made so that the tool the build bot is running will ignore certain areas or certain checks, but the build bot itself must pass for the pull request to be merged. 
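Stepping back to the `aa_version_requirement.py` change above: it raises the minimum supported Ansible to 2.2.2.0 and drops the 2.2.1.0 special case. As a hedged sketch (not part of this changeset), the same gate can be checked from a shell before running any playbooks, using the same string comparison the plugin applies:

```sh
# Sketch: query the installed Ansible the same way the plugin does.
python -c "
from ansible import __version__
REQUIRED_VERSION = '2.2.2.0'  # value set in aa_version_requirement.py
print('ok' if __version__ >= REQUIRED_VERSION else 'too old: ' + __version__)
"
```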
- - == Python @@ -509,12 +493,12 @@ The Ansible `package` module calls the associated package manager for the underl # tasks.yml - name: Install etcd (for etcdctl) yum: name=etcd state=latest - when: "ansible_pkg_mgr == yum" + when: ansible_pkg_mgr == "yum" register: install_result - name: Install etcd (for etcdctl) dnf: name=etcd state=latest - when: "ansible_pkg_mgr == dnf" + when: ansible_pkg_mgr == "dnf" register: install_result ---- diff --git a/docs/pull_requests.md b/docs/pull_requests.md new file mode 100644 index 000000000..953563fb2 --- /dev/null +++ b/docs/pull_requests.md @@ -0,0 +1,86 @@ +# Pull Request process + +Pull Requests in the `openshift-ansible` project follow a +[Continuous](https://en.wikipedia.org/wiki/Continuous_integration) +[Integration](https://martinfowler.com/articles/continuousIntegration.html) +process that is similar to the process observed in other repositories such as +[`origin`](https://github.com/openshift/origin). + +Whenever a +[Pull Request is opened](../CONTRIBUTING.md#submitting-contributions), some +automated test jobs must be successfully run before the PR can be merged. + +Some of these jobs are automatically triggered, e.g., Travis and Coveralls. +Other jobs need to be manually triggered by a member of the +[Team OpenShift Ansible Contributors](https://github.com/orgs/openshift/teams/team-openshift-ansible-contributors). + +## Triggering tests + +We currently have two different Jenkins infrastructures and, while that remains +the case, two different commands, each triggering its own set of test jobs. We are working on +simplifying the workflow towards a single infrastructure in the future. + +- **Test jobs on the older infrastructure** + + Members of the [OpenShift organization](https://github.com/orgs/openshift/people) + can trigger the set of test jobs in the older infrastructure by writing a + comment with the exact text `aos-ci-test` and nothing else. + + The Jenkins host is not publicly accessible. Test results are posted to S3 + buckets when complete, and links are available both at the bottom of the Pull + Request page and as comments posted by + [@openshift-bot](https://github.com/openshift-bot). + +- **Test jobs on the newer infrastructure** + + Members of the + [Team OpenShift Ansible Contributors](https://github.com/orgs/openshift/teams/team-openshift-ansible-contributors) + can trigger the set of test jobs in the newer infrastructure by writing a + comment containing `[test]` anywhere in the comment body. + + The [Jenkins host](https://ci.openshift.redhat.com/jenkins/job/test_pull_request_openshift_ansible/) + is publicly accessible. As with the older infrastructure, the result of each + job is also posted to the Pull Request as comments and summarized at the + bottom of the Pull Request page. + +## Triggering merge + +After a PR has been properly reviewed and a set of +[required jobs](https://github.com/openshift/aos-cd-jobs/blob/master/sjb/test_status_config.yml) +has reported success, it can be tagged for merge by a member of the +[Team OpenShift Ansible Contributors](https://github.com/orgs/openshift/teams/team-openshift-ansible-contributors) +by writing a comment containing `[merge]` anywhere in the comment body. + +Tagging a Pull Request for merge puts it in an automated merge queue. The +[@openshift-bot](https://github.com/openshift-bot) monitors the queue and merges +PRs that pass all of the required tests. + +### Manual merges + +The normal process described above should be followed: `aos-ci-test` and +`[test]` / `[merge]`. 
+ +In exceptional cases, such as when known problems with the merge queue prevent +PRs from being merged, a PR may be manually merged if _all_ of these conditions +are true: + +- [ ] Travis job must have passed (as enforced by GitHub) +- [ ] Must have passed `aos-ci-test` (as enforced by GitHub) +- [ ] Must have a positive review (as enforced by GitHub) +- [ ] Must have failed the `[merge]` queue with a reported flake at least twice +- [ ] Must have [issues labeled kind/test-flake](https://github.com/openshift/origin/issues?q=is%3Aopen+is%3Aissue+label%3Akind%2Ftest-flake) in [Origin](https://github.com/openshift/origin) linked in comments for the failures +- [ ] Content must not have changed since all of the above conditions have been met (no rebases, no new commits) + +This exception is temporary and should be completely removed in the future once +the merge queue has become more stable. + +Only members of the +[Team OpenShift Ansible Committers](https://github.com/orgs/openshift/teams/team-openshift-ansible-committers) +can perform manual merges. + +## Useful links + +- Repository containing Jenkins job definitions: https://github.com/openshift/aos-cd-jobs +- List of required successful jobs before merge: https://github.com/openshift/aos-cd-jobs/blob/master/sjb/test_status_config.yml +- Source code of the bot responsible for testing and merging PRs: https://github.com/openshift/test-pull-requests/ +- Trend of the time taken by merge jobs: https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_openshift_ansible/buildTimeTrend diff --git a/docs/repo_structure.md b/docs/repo_structure.md new file mode 100644 index 000000000..693837fba --- /dev/null +++ b/docs/repo_structure.md @@ -0,0 +1,54 @@ +# Repository structure + +### Ansible + +``` +. +├── inventory Contains dynamic inventory scripts, and examples of +│ Ansible inventories. +├── library Contains Python modules used by the playbooks. +├── playbooks Contains Ansible playbooks targeting multiple use cases. +└── roles Contains Ansible roles, units of shared behavior among + playbooks. +``` + +#### Ansible plugins + +These are plugins used in playbooks and roles: + +``` +. +├── ansible-profile +├── callback_plugins +├── filter_plugins +└── lookup_plugins +``` + +### Scripts + +``` +. +├── bin [DEPRECATED] Contains the `bin/cluster` script, a +│ wrapper around the Ansible playbooks that ensures proper +│ configuration, and facilitates installing, updating, +│ destroying and configuring OpenShift clusters. +│ Note: this tool is kept in the repository for legacy +│ reasons and will be removed at some point. +└── utils Contains the `atomic-openshift-installer` command, an + interactive CLI utility to install OpenShift across a + set of hosts. +``` + +### Documentation + +``` +. +└── docs Contains documentation for this repository. +``` + +### Tests + +``` +. +└── test Contains tests. 
+``` diff --git a/examples/README.md b/examples/README.md index 0e412244d..d54752fb9 100644 --- a/examples/README.md +++ b/examples/README.md @@ -69,19 +69,19 @@ To run these examples we prepare the inventory and ssh keys as in the other exam Additionally we allocate a `PersistentVolumeClaim` to store the reports: - oc create -f - <<PVC - --- - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: certcheck-reports - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi - PVC + oc create -f - <<PVC + --- + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: certcheck-reports + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + PVC With that we can run the `Job` once: diff --git a/examples/certificate-check-upload.yaml b/examples/certificate-check-upload.yaml index b10a0b614..8b560447f 100644 --- a/examples/certificate-check-upload.yaml +++ b/examples/certificate-check-upload.yaml @@ -20,28 +20,34 @@ kind: Job metadata: name: certificate-check spec: - containers: - - name: openshift-ansible - image: openshift/openshift-ansible - env: - - name: PLAYBOOK_FILE - value: playbooks/certificate_expiry/easy-mode-upload.yaml - - name: INVENTORY_FILE - value: /tmp/inventory/hosts # from configmap vol below - - name: ANSIBLE_PRIVATE_KEY_FILE # from secret vol below - value: /opt/app-root/src/.ssh/id_rsa/ssh-privatekey - - name: CERT_EXPIRY_WARN_DAYS - value: "45" # must be a string, don't forget the quotes - volumeMounts: - - name: sshkey - mountPath: /opt/app-root/src/.ssh/id_rsa - - name: inventory - mountPath: /tmp/inventory - volumes: - - name: sshkey - secret: - secretName: sshkey - - name: inventory - configMap: - name: inventory - restartPolicy: Never + parallelism: 1 + completions: 1 + template: + metadata: + name: certificate-check + spec: + containers: + - name: openshift-ansible + image: openshift/openshift-ansible + env: + - name: PLAYBOOK_FILE + value: playbooks/certificate_expiry/easy-mode-upload.yaml + - name: INVENTORY_FILE + value: /tmp/inventory/hosts # from configmap vol below + - name: ANSIBLE_PRIVATE_KEY_FILE # from secret vol below + value: /opt/app-root/src/.ssh/id_rsa/ssh-privatekey + - name: CERT_EXPIRY_WARN_DAYS + value: "45" # must be a string, don't forget the quotes + volumeMounts: + - name: sshkey + mountPath: /opt/app-root/src/.ssh/id_rsa + - name: inventory + mountPath: /tmp/inventory + volumes: + - name: sshkey + secret: + secretName: sshkey + - name: inventory + configMap: + name: inventory + restartPolicy: Never diff --git a/examples/certificate-check-volume.yaml b/examples/certificate-check-volume.yaml index c19dc1f88..f6613bcd8 100644 --- a/examples/certificate-check-volume.yaml +++ b/examples/certificate-check-volume.yaml @@ -22,33 +22,39 @@ kind: Job metadata: name: certificate-check spec: - containers: - - name: openshift-ansible - image: openshift/openshift-ansible - env: - - name: PLAYBOOK_FILE - value: playbooks/certificate_expiry/html_and_json_timestamp.yaml - - name: INVENTORY_FILE - value: /tmp/inventory/hosts # from configmap vol below - - name: ANSIBLE_PRIVATE_KEY_FILE # from secret vol below - value: /opt/app-root/src/.ssh/id_rsa/ssh-privatekey - - name: CERT_EXPIRY_WARN_DAYS - value: "45" # must be a string, don't forget the quotes - volumeMounts: - - name: sshkey - mountPath: /opt/app-root/src/.ssh/id_rsa - - name: inventory - mountPath: /tmp/inventory - - name: reports - mountPath: /var/lib/certcheck - volumes: - - name: sshkey - secret: - secretName: sshkey - - name: 
inventory - configMap: - name: inventory - - name: reports - persistentVolumeClaim: - claimName: certcheck-reports - restartPolicy: Never + parallelism: 1 + completions: 1 + template: + metadata: + name: certificate-check + spec: + containers: + - name: openshift-ansible + image: openshift/openshift-ansible + env: + - name: PLAYBOOK_FILE + value: playbooks/certificate_expiry/html_and_json_timestamp.yaml + - name: INVENTORY_FILE + value: /tmp/inventory/hosts # from configmap vol below + - name: ANSIBLE_PRIVATE_KEY_FILE # from secret vol below + value: /opt/app-root/src/.ssh/id_rsa/ssh-privatekey + - name: CERT_EXPIRY_WARN_DAYS + value: "45" # must be a string, don't forget the quotes + volumeMounts: + - name: sshkey + mountPath: /opt/app-root/src/.ssh/id_rsa + - name: inventory + mountPath: /tmp/inventory + - name: reports + mountPath: /var/lib/certcheck + volumes: + - name: sshkey + secret: + secretName: sshkey + - name: inventory + configMap: + name: inventory + - name: reports + persistentVolumeClaim: + claimName: certcheck-reports + restartPolicy: Never diff --git a/filter_plugins/oo_filters.py b/filter_plugins/oo_filters.py index 10c8600ba..d61184c48 100644 --- a/filter_plugins/oo_filters.py +++ b/filter_plugins/oo_filters.py @@ -11,6 +11,7 @@ import pdb import random import re +from base64 import b64encode from collections import Mapping # pylint no-name-in-module and import-error disabled here because pylint # fails to properly detect the packages when installed in a virtualenv @@ -672,8 +673,7 @@ def oo_generate_secret(num_bytes): if not isinstance(num_bytes, int): raise errors.AnsibleFilterError("|failed expects num_bytes is int") - secret = os.urandom(num_bytes) - return secret.encode('base-64').strip() + return b64encode(os.urandom(num_bytes)).decode('utf-8') def to_padded_yaml(data, level=0, indent=2, **kw): diff --git a/hack/hooks/README.md b/hack/hooks/README.md new file mode 100644 index 000000000..ef870540a --- /dev/null +++ b/hack/hooks/README.md @@ -0,0 +1,37 @@ +# OpenShift-Ansible Git Hooks + +## Introduction + +This `hack` sub-directory holds +[git commit hooks](https://www.atlassian.com/git/tutorials/git-hooks#conceptual-overview) +you may use when working on openshift-ansible contributions. See the +README in each sub-directory for an overview of what each hook does +and whether the hook has any specific usage or setup instructions. + +## Usage + +Basic git hook usage is simple: + +1) Copy (or symbolically link) the hook to the `$REPO_ROOT/.git/hooks/` directory +2) Make the hook executable (`chmod +x $PATH_TO_HOOK`) + +## Multiple Hooks of the Same Type + +If you want to install multiple hooks of the same type, for example, +multiple `pre-commit` hooks, you will need some kind of *hook +dispatcher*. 
For an example of an easy-to-use hook dispatcher, check +out this gist by carlos-jenkins: + +* [multihooks.py](https://gist.github.com/carlos-jenkins/89da9dcf9e0d528ac978311938aade43) + +## Contributing Hooks + +If you want to contribute a new hook, there are only a few criteria +that must be met: + +* The hook **MUST** include a README describing the purpose of the hook +* The README **MUST** describe special setup instructions if they are required +* The hook **MUST** be in a sub-directory of this directory +* The hook file **MUST** be named following the standard git hook + naming pattern (i.e., pre-commit hooks **MUST** be called + `pre-commit`) diff --git a/hack/hooks/verify_generated_modules/README.md b/hack/hooks/verify_generated_modules/README.md new file mode 100644 index 000000000..093fcf76a --- /dev/null +++ b/hack/hooks/verify_generated_modules/README.md @@ -0,0 +1,19 @@ +# Verify Generated Modules + +Pre-commit hook for verifying that generated library modules match +their EXPECTED content. Library modules are generated from fragments +under the `roles/lib_(openshift|utils)/src/` directories. + +If the attempted commit modified files under the +`roles/lib_(openshift|utils)/` directories this script will run the +`generate.py --verify` command. + +This script will **NOT RUN** if module source fragments are modified +but *not part of the commit*. I.e., you can still make commits if you +modified module fragments AND other files but are *not committing +the module fragments*. + +# Setup Instructions + +Installation follows the standard procedure: copy the hook to the `.git/hooks/` +directory and ensure it is executable. diff --git a/hack/hooks/verify_generated_modules/pre-commit b/hack/hooks/verify_generated_modules/pre-commit new file mode 100755 index 000000000..8a319fd7e --- /dev/null +++ b/hack/hooks/verify_generated_modules/pre-commit @@ -0,0 +1,55 @@ +#!/bin/sh + +###################################################################### +# Pre-commit hook for verifying that generated library modules match +# their EXPECTED content. Library modules are generated from fragments +# under the 'roles/lib_(openshift|utils)/src/' directories. +# +# If the attempted commit modified files under the +# 'roles/lib_(openshift|utils)/' directories this script will run the +# 'generate.py --verify' command. +# +# This script will NOT RUN if module source fragments are modified but +# not part of the commit. I.e., you can still make commits if you +# modified module fragments AND other files but are not committing +# the module fragments. + +# Did the commit modify any source module files? +CHANGES=`git diff-index --stat --cached HEAD | grep -E '^ roles/lib_(openshift|utils)/src/(class|doc|ansible|lib)/'` +RET_CODE=$? +ABORT=0 + +if [ "${RET_CODE}" -eq "0" ]; then + # Modifications detected. Run the verification scripts. + + # Which was it? + if $(echo $CHANGES | grep -q 'roles/lib_openshift/'); then + echo "Validating lib_openshift..." + ./roles/lib_openshift/src/generate.py --verify + if [ "${?}" -ne "0" ]; then + ABORT=1 + fi + fi + + if $(echo $CHANGES | grep -q 'roles/lib_utils/'); then + echo "Validating lib_utils..." + ./roles/lib_utils/src/generate.py --verify + if [ "${?}" -ne "0" ]; then + ABORT=1 + fi + fi + + if [ "${ABORT}" -eq "1" ]; then + cat <<EOF + +ERROR: Module verification failed. Generated files do not match fragments. 
+ +Choices to continue: + 1) Run './roles/lib_(openshift|utils)/src/generate.py' from the root of + the repo to regenerate the files + 2) Skip verification with '--no-verify' option to 'git commit' +EOF + fi +fi + +exit $ABORT diff --git a/Dockerfile b/images/installer/Dockerfile index eecf3630b..1df887f32 100644 --- a/Dockerfile +++ b/images/installer/Dockerfile @@ -16,6 +16,12 @@ LABEL name="openshift-ansible" \ USER root +# Create a symlink to /opt/app-root/src so that files under /usr/share/ansible are accessible. +# This is required since the system container by default uses the playbook under +# /usr/share/ansible/openshift-ansible. With this change we won't need to keep two different +# configurations for the two images. +RUN mkdir -p /usr/share/ansible/ && ln -s /opt/app-root/src /usr/share/ansible/openshift-ansible + RUN INSTALL_PKGS="skopeo" && \ yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \ rpm -V $INSTALL_PKGS && \ @@ -39,4 +45,7 @@ ADD . /tmp/src # as per the INSTALL_OC environment setting above RUN /usr/libexec/s2i/assemble +# Add files for running as a system container +COPY system-container/root / + CMD [ "/usr/libexec/s2i/run" ] diff --git a/Dockerfile.rhel7 b/images/installer/Dockerfile.rhel7 index 0d5a6038a..00841e660 100644 --- a/Dockerfile.rhel7 +++ b/images/installer/Dockerfile.rhel7 @@ -20,9 +20,10 @@ LABEL name="openshift3/openshift-ansible" \ # because all content and dependencies (like 'oc') is already # installed via yum. USER root -RUN INSTALL_PKGS="atomic-openshift-utils atomic-openshift-clients" && \ +RUN INSTALL_PKGS="atomic-openshift-utils atomic-openshift-clients python-boto" && \ yum repolist > /dev/null && \ yum-config-manager --enable rhel-7-server-ose-3.4-rpms && \ + yum-config-manager --enable rhel-7-server-rh-common-rpms && \ yum install -y $INSTALL_PKGS && \ yum clean all @@ -38,4 +39,7 @@ ENV PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \ WORK_DIR=/usr/share/ansible/openshift-ansible \ OPTS="-v" +# Add files for running as a system container +COPY system-container/root / + CMD [ "/usr/libexec/s2i/run" ] diff --git a/images/installer/system-container/README.md b/images/installer/system-container/README.md new file mode 100644 index 000000000..dc95307e5 --- /dev/null +++ b/images/installer/system-container/README.md @@ -0,0 +1,13 @@ +# System container installer + +These files are needed to run the installer using an [Atomic System container](http://www.projectatomic.io/blog/2016/09/intro-to-system-containers/). + +* config.json.template - Template of the configuration file used for running containers. + +* manifest.json - Used to define various settings for the system container, such as the default values to use for the installation. + +* run-system-container.sh - Entrypoint to the container. + +* service.template - Template file for the systemd service. + +* tmpfiles.template - Template file for systemd-tmpfiles. 
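To show how these exported files fit together, here is a hedged sketch that combines the commands documented elsewhere in this changeset (BUILD.md and README_CONTAINER_IMAGE.md); the inventory path is an assumption:

```sh
# Import the image into OSTree storage, then install the system container.
atomic pull --storage ostree docker:openshift/openshift-ansible:latest

# atomic renders config.json.template, service.template and tmpfiles.template,
# substituting the manifest.json defaults; --set overrides a default
# (INVENTORY_FILE defaults to /dev/null).
atomic install --system --set INVENTORY_FILE=/root/inventory openshift/openshift-ansible

# The rendered systemd unit then drives the playbook run.
systemctl start openshift-ansible
```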
diff --git a/images/installer/system-container/root/exports/config.json.template b/images/installer/system-container/root/exports/config.json.template new file mode 100644 index 000000000..383e3696e --- /dev/null +++ b/images/installer/system-container/root/exports/config.json.template @@ -0,0 +1,223 @@ +{ + "ociVersion": "1.0.0", + "platform": { + "os": "linux", + "arch": "amd64" + }, + "process": { + "terminal": false, + "consoleSize": { + "height": 0, + "width": 0 + }, + "user": { + "uid": 0, + "gid": 0 + }, + "args": [ + "/usr/local/bin/run-system-container.sh" + ], + "env": [ + "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", + "TERM=xterm", + "OPTS=$OPTS", + "PLAYBOOK_FILE=$PLAYBOOK_FILE" + ], + "cwd": "/opt/app-root/src/", + "rlimits": [ + { + "type": "RLIMIT_NOFILE", + "hard": 1024, + "soft": 1024 + } + ], + "noNewPrivileges": true + }, + "root": { + "path": "rootfs", + "readonly": true + }, + "mounts": [ + { + "destination": "/proc", + "type": "proc", + "source": "proc" + }, + { + "destination": "/dev", + "type": "tmpfs", + "source": "tmpfs", + "options": [ + "nosuid", + "strictatime", + "mode=755", + "size=65536k" + ] + }, + { + "destination": "/dev/pts", + "type": "devpts", + "source": "devpts", + "options": [ + "nosuid", + "noexec", + "newinstance", + "ptmxmode=0666", + "mode=0620", + "gid=5" + ] + }, + { + "destination": "/dev/shm", + "type": "tmpfs", + "source": "shm", + "options": [ + "nosuid", + "noexec", + "nodev", + "mode=1777", + "size=65536k" + ] + }, + { + "destination": "/dev/mqueue", + "type": "mqueue", + "source": "mqueue", + "options": [ + "nosuid", + "noexec", + "nodev" + ] + }, + { + "destination": "/sys", + "type": "sysfs", + "source": "sysfs", + "options": [ + "nosuid", + "noexec", + "nodev", + "ro" + ] + }, + { + "type": "bind", + "source": "$SSH_ROOT", + "destination": "/opt/app-root/src/.ssh", + "options": [ + "bind", + "rw", + "mode=755" + ] + }, + { + "type": "bind", + "source": "$SSH_ROOT", + "destination": "/root/.ssh", + "options": [ + "bind", + "rw", + "mode=755" + ] + }, + { + "type": "bind", + "source": "$VAR_LIB_OPENSHIFT_INSTALLER", + "destination": "/var/lib/openshift-installer", + "options": [ + "bind", + "rw", + "mode=755" + ] + }, + { + "type": "bind", + "source": "$VAR_LOG_OPENSHIFT_LOG", + "destination": "/var/log/ansible.log", + "options": [ + "bind", + "rw", + "mode=755" + ] + }, + { + "destination": "/root/.ansible", + "type": "tmpfs", + "source": "tmpfs", + "options": [ + "nosuid", + "strictatime", + "mode=755" + ] + }, + { + "destination": "/tmp", + "type": "tmpfs", + "source": "tmpfs", + "options": [ + "nosuid", + "strictatime", + "mode=755" + ] + }, + { + "type": "bind", + "source": "$INVENTORY_FILE", + "destination": "/etc/ansible/hosts", + "options": [ + "bind", + "rw", + "mode=755" + ] + }, + { + "destination": "/sys/fs/cgroup", + "type": "cgroup", + "source": "cgroup", + "options": [ + "nosuid", + "noexec", + "nodev", + "relatime", + "ro" + ] + } + ], + "hooks": { + + }, + "linux": { + "resources": { + "devices": [ + { + "allow": false, + "access": "rwm" + } + ] + }, + "namespaces": [ + { + "type": "pid" + }, + { + "type": "mount" + } + ], + "maskedPaths": [ + "/proc/kcore", + "/proc/latency_stats", + "/proc/timer_list", + "/proc/timer_stats", + "/proc/sched_debug", + "/sys/firmware" + ], + "readonlyPaths": [ + "/proc/asound", + "/proc/bus", + "/proc/fs", + "/proc/irq", + "/proc/sys", + "/proc/sysrq-trigger" + ] + } +} diff --git a/images/installer/system-container/root/exports/manifest.json 
b/images/installer/system-container/root/exports/manifest.json new file mode 100644 index 000000000..1db845965 --- /dev/null +++ b/images/installer/system-container/root/exports/manifest.json @@ -0,0 +1,11 @@ +{ + "version": "1.0", + "defaultValues": { + "OPTS": "", + "VAR_LIB_OPENSHIFT_INSTALLER" : "/var/lib/openshift-installer", + "VAR_LOG_OPENSHIFT_LOG": "/var/log/ansible.log", + "PLAYBOOK_FILE": "/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml", + "SSH_ROOT": "/root/.ssh", + "INVENTORY_FILE": "/dev/null" + } +} diff --git a/images/installer/system-container/root/exports/service.template b/images/installer/system-container/root/exports/service.template new file mode 100644 index 000000000..bf5316af6 --- /dev/null +++ b/images/installer/system-container/root/exports/service.template @@ -0,0 +1,6 @@ +[Service] +ExecStart=$EXEC_START +ExecStop=-$EXEC_STOP +Restart=no +WorkingDirectory=$DESTDIR +Type=oneshot diff --git a/images/installer/system-container/root/exports/tmpfiles.template b/images/installer/system-container/root/exports/tmpfiles.template new file mode 100644 index 000000000..b1f6caf47 --- /dev/null +++ b/images/installer/system-container/root/exports/tmpfiles.template @@ -0,0 +1,2 @@ +d $VAR_LIB_OPENSHIFT_INSTALLER - - - - - +f $VAR_LOG_OPENSHIFT_LOG - - - - - diff --git a/images/installer/system-container/root/usr/local/bin/run-system-container.sh b/images/installer/system-container/root/usr/local/bin/run-system-container.sh new file mode 100755 index 000000000..9ce7c7328 --- /dev/null +++ b/images/installer/system-container/root/usr/local/bin/run-system-container.sh @@ -0,0 +1,4 @@ +#!/bin/sh + +export ANSIBLE_LOG_PATH=/var/log/ansible.log +exec ansible-playbook -i /etc/ansible/hosts ${OPTS} ${PLAYBOOK_FILE} diff --git a/inventory/byo/hosts.origin.example b/inventory/byo/hosts.origin.example index f70971537..db6ac12ce 100644 --- a/inventory/byo/hosts.origin.example +++ b/inventory/byo/hosts.origin.example @@ -78,6 +78,18 @@ openshift_release=v1.5 #openshift_docker_blocked_registries=registry.hacker.com # Disable pushing to dockerhub #openshift_docker_disable_push_dockerhub=True +# Use Docker inside a System Container. Note that this is a tech preview and should +# not be used to upgrade! +# The following options for docker are ignored: +# - docker_version +# - docker_upgrade +# The following options must not be used +# - openshift_docker_options +#openshift_docker_use_system_container=False +# Force the registry to use for the system container. By default the registry +# will be built off of the deployment type and ansible_distribution. Only +# use this option if you are sure you know what you are doing! +#openshift_docker_systemcontainer_image_registry_override="registry.example.com" # Items added, as is, to end of /etc/sysconfig/docker OPTIONS # Default value: "--log-driver=journald" #openshift_docker_options="-l warn --ipv6=false" @@ -571,10 +583,17 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', # network blocks should be private and should not conflict with network blocks # in your infrastructure that pods may require access to. Can not be changed # after deployment. +# +# WARNING : Do not pick subnets that overlap with the default Docker bridge subnet of +# 172.17.0.0/16. Your installation will fail and/or your configuration change will +# cause the Pod SDN or Cluster SDN to fail. 
+# +# WORKAROUND : If you must use an overlapping subnet, you can configure a non conflicting +# docker0 CIDR range by adding '--bip=192.168.2.1/24' to DOCKER_NETWORK_OPTIONS +# environment variable located in /etc/sysconfig/docker-network. #osm_cluster_network_cidr=10.128.0.0/14 #openshift_portal_net=172.30.0.0/16 - # ExternalIPNetworkCIDRs controls what values are acceptable for the # service external IP field. If empty, no externalIP may be set. It # may contain a list of CIDRs which are checked for access. If a CIDR diff --git a/inventory/byo/hosts.ose.example b/inventory/byo/hosts.ose.example index f5e0de1b0..42097a593 100644 --- a/inventory/byo/hosts.ose.example +++ b/inventory/byo/hosts.ose.example @@ -78,6 +78,18 @@ openshift_release=v3.5 #openshift_docker_blocked_registries=registry.hacker.com # Disable pushing to dockerhub #openshift_docker_disable_push_dockerhub=True +# Use Docker inside a System Container. Note that this is a tech preview and should +# not be used to upgrade! +# The following options for docker are ignored: +# - docker_version +# - docker_upgrade +# The following options must not be used +# - openshift_docker_options +#openshift_docker_use_system_container=False +# Force the registry to use for the system container. By default the registry +# will be built off of the deployment type and ansible_distribution. Only +# use this option if you are sure you know what you are doing! +#openshift_docker_systemcontainer_image_registry_override="registry.example.com" # Items added, as is, to end of /etc/sysconfig/docker OPTIONS # Default value: "--log-driver=journald" #openshift_docker_options="-l warn --ipv6=false" @@ -572,10 +584,17 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', # network blocks should be private and should not conflict with network blocks # in your infrastructure that pods may require access to. Can not be changed # after deployment. +# +# WARNING : Do not pick subnets that overlap with the default Docker bridge subnet of +# 172.17.0.0/16. Your installation will fail and/or your configuration change will +# cause the Pod SDN or Cluster SDN to fail. +# +# WORKAROUND : If you must use an overlapping subnet, you can configure a non conflicting +# docker0 CIDR range by adding '--bip=192.168.2.1/24' to DOCKER_NETWORK_OPTIONS +# environment variable located in /etc/sysconfig/docker-network. #osm_cluster_network_cidr=10.128.0.0/14 #openshift_portal_net=172.30.0.0/16 - # ExternalIPNetworkCIDRs controls what values are acceptable for the # service external IP field. If empty, no externalIP may be set. It # may contain a list of CIDRs which are checked for access. 
If a CIDR diff --git a/openshift-ansible.spec b/openshift-ansible.spec index 0c081a635..bbf70eb7b 100644 --- a/openshift-ansible.spec +++ b/openshift-ansible.spec @@ -9,7 +9,7 @@ %global __requires_exclude ^/usr/bin/ansible-playbook$ Name: openshift-ansible -Version: 3.6.39 +Version: 3.6.58 Release: 1%{?dist} Summary: Openshift and Atomic Enterprise Ansible License: ASL 2.0 @@ -17,7 +17,7 @@ URL: https://github.com/openshift/openshift-ansible Source0: https://github.com/openshift/openshift-ansible/archive/%{commit}/%{name}-%{version}.tar.gz BuildArch: noarch -Requires: ansible >= 2.2.0.0-1 +Requires: ansible >= 2.2.2.0 Requires: python2 Requires: python-six Requires: tar @@ -25,6 +25,7 @@ Requires: openshift-ansible-docs = %{version} Requires: java-1.8.0-openjdk-headless Requires: httpd-tools Requires: libselinux-python +Requires: python-passlib %description Openshift and Atomic Enterprise Ansible @@ -273,6 +274,110 @@ Atomic OpenShift Utilities includes %changelog +* Mon May 08 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.58-1 +- Moving Dockerfile content to images dir (jupierce@redhat.com) + +* Mon May 08 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.57-1 +- + +* Sun May 07 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.56-1 +- + +* Sat May 06 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.55-1 +- Fix 1448368, and some other minors issues (ghuang@redhat.com) +- mux startup is broken without this fix (rmeggins@redhat.com) +- Dockerfile: create symlink for /opt/app-root/src (gscrivan@redhat.com) +- docs: Add basic system container dev docs (smilner@redhat.com) +- installer: Add system container variable for log saving (smilner@redhat.com) +- installer: support running as a system container (gscrivan@redhat.com) + +* Fri May 05 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.54-1 +- Allow oc_ modules to pass unicode results (rteague@redhat.com) +- Ensure repo cache is clean on the first run (rteague@redhat.com) +- move etcdctl.yml from etcd to etcd_common role (jchaloup@redhat.com) +- Modified pick from release-1.5 for updating hawkular htpasswd generation + (ewolinet@redhat.com) + +* Thu May 04 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.53-1 +- Correctly setting the primary and replica shard count settings + (ewolinet@redhat.com) +- System container docker (smilner@redhat.com) +- Stop logging AWS credentials in master role. (dgoodwin@redhat.com) +- Remove set operations from openshift_master_certificates iteration. + (abutcher@redhat.com) +- Refactor system fact gathering to avoid dictionary size change during + iteration. (abutcher@redhat.com) +- Refactor secret generation for python3. (abutcher@redhat.com) +- redhat-ci: use requirements.txt (jlebon@redhat.com) + +* Wed May 03 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.52-1 +- Making mux with_items list evaluate as empty if didnt get objects before + (ewolinet@redhat.com) +- etcd Upgrade Refactor (rteague@redhat.com) +- v3.3 Upgrade Refactor (rteague@redhat.com) +- v3.4 Upgrade Refactor (rteague@redhat.com) +- v3.5 Upgrade Refactor (rteague@redhat.com) +- v3.6 Upgrade Refactor (rteague@redhat.com) +- Fix variants for v3.6 (rteague@redhat.com) +- Normalizing groups. 
(kwoodson@redhat.com) +- Use openshift_ca_host's hostnames to sign the CA (sdodson@redhat.com) + +* Tue May 02 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.51-1 +- Remove std_include from playbooks/byo/rhel_subscribe.yml + (abutcher@redhat.com) +- Adding way to add labels and nodeselectors to logging project + (ewolinet@redhat.com) + +* Tue May 02 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.50-1 +- Don't double quote when conditions (sdodson@redhat.com) +- Remove jinja template delimeters from when conditions (sdodson@redhat.com) +- move excluder upgrade validation tasks under openshift_excluder role + (jchaloup@redhat.com) +- Fix test compatibility with OpenSSL 1.1.0 (pierre- + louis.bonicoli@libregerbil.fr) + +* Mon May 01 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.49-1 +- Warn users about conflicts with docker0 CIDR range (lpsantil@gmail.com) +- Bump ansible rpm dependency to 2.2.2.0 (sdodson@redhat.com) + +* Mon May 01 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.48-1 +- + +* Mon May 01 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.47-1 +- + +* Mon May 01 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.46-1 +- Contrib: Hook to verify modules match assembled fragments + (tbielawa@redhat.com) + +* Mon May 01 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.45-1 +- + +* Sun Apr 30 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.44-1 +- Refactor etcd roles (jchaloup@redhat.com) + +* Sat Apr 29 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.43-1 +- Document the Pull Request process (rhcarvalho@gmail.com) +- Add Table of Contents (rhcarvalho@gmail.com) +- Improve Contribution Guide (rhcarvalho@gmail.com) +- Replace absolute with relative URLs (rhcarvalho@gmail.com) +- Move repo structure to a separate document (rhcarvalho@gmail.com) +- Remove outdated information about PRs (rhcarvalho@gmail.com) +- Move link to BUILD.md to README.md (rhcarvalho@gmail.com) +- Adding checks for starting mux for 2.2.0 (ewolinet@redhat.com) +- Fix OpenShift registry deployment on OSE 3.2 (lhuard@amadeus.com) + +* Fri Apr 28 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.42-1 +- Fix certificate check Job examples (pep@redhat.com) +- Add python-boto requirement (pep@redhat.com) + +* Thu Apr 27 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.41-1 +- Add bool for proper conditional handling (rteague@redhat.com) + +* Thu Apr 27 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.40-1 +- Fix cluster creation with `bin/cluster` when there’s no glusterfs node + (lhuard@amadeus.com) + * Thu Apr 27 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.39-1 - Move container build instructions to BUILD.md (pep@redhat.com) - Elaborate container image usage instructions (pep@redhat.com) @@ -317,7 +422,7 @@ Atomic OpenShift Utilities includes - Remove v1.5 and v1.6 metrics/logging templates (sdodson@redhat.com) * Sun Apr 23 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.35-1 -- +- * Fri Apr 21 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.34-1 - GlusterFS: provide default for groups.oo_glusterfs_to_config in with_items diff --git a/playbooks/adhoc/create_pv/create_pv.yaml b/playbooks/adhoc/create_pv/create_pv.yaml index 81c1ee653..64f861c6a 100644 --- a/playbooks/adhoc/create_pv/create_pv.yaml +++ b/playbooks/adhoc/create_pv/create_pv.yaml @@ -20,7 +20,7 @@ pre_tasks: - fail: msg: "This playbook requires {{item}} to be set." 
- when: "{{ item }} is not defined or {{ item }} == ''" + when: item is not defined or item == '' with_items: - cli_volume_size - cli_device_name diff --git a/playbooks/adhoc/docker_loopback_to_lvm/docker_loopback_to_direct_lvm.yml b/playbooks/adhoc/docker_loopback_to_lvm/docker_loopback_to_direct_lvm.yml index f638fab83..507ac0f05 100644 --- a/playbooks/adhoc/docker_loopback_to_lvm/docker_loopback_to_direct_lvm.yml +++ b/playbooks/adhoc/docker_loopback_to_lvm/docker_loopback_to_direct_lvm.yml @@ -33,7 +33,7 @@ pre_tasks: - fail: msg: "This playbook requires {{item}} to be set." - when: "{{ item }} is not defined or {{ item }} == ''" + when: item is not defined or item == '' with_items: - cli_tag_name - cli_volume_size diff --git a/playbooks/adhoc/docker_loopback_to_lvm/ops-docker-loopback-to-direct-lvm.yml b/playbooks/adhoc/docker_loopback_to_lvm/ops-docker-loopback-to-direct-lvm.yml index d988a28b0..3059d3dc5 100755 --- a/playbooks/adhoc/docker_loopback_to_lvm/ops-docker-loopback-to-direct-lvm.yml +++ b/playbooks/adhoc/docker_loopback_to_lvm/ops-docker-loopback-to-direct-lvm.yml @@ -24,7 +24,7 @@ pre_tasks: - fail: msg: "This playbook requires {{item}} to be set." - when: "{{ item }} is not defined or {{ item }} == ''" + when: item is not defined or item == '' with_items: - cli_docker_device diff --git a/playbooks/adhoc/docker_storage_cleanup/docker_storage_cleanup.yml b/playbooks/adhoc/docker_storage_cleanup/docker_storage_cleanup.yml index b6dde357e..5e12cd181 100644 --- a/playbooks/adhoc/docker_storage_cleanup/docker_storage_cleanup.yml +++ b/playbooks/adhoc/docker_storage_cleanup/docker_storage_cleanup.yml @@ -25,7 +25,7 @@ - fail: msg: "This playbook requires {{item}} to be set." - when: "{{ item }} is not defined or {{ item }} == ''" + when: item is not defined or item == '' with_items: - cli_tag_name diff --git a/playbooks/adhoc/grow_docker_vg/grow_docker_vg.yml b/playbooks/adhoc/grow_docker_vg/grow_docker_vg.yml index 598f1966d..eb8440d1b 100644 --- a/playbooks/adhoc/grow_docker_vg/grow_docker_vg.yml +++ b/playbooks/adhoc/grow_docker_vg/grow_docker_vg.yml @@ -42,7 +42,7 @@ pre_tasks: - fail: msg: "This playbook requires {{item}} to be set." 
- when: "{{ item }} is not defined or {{ item }} == ''" + when: item is not defined or item == '' with_items: - cli_tag_name - cli_volume_size diff --git a/playbooks/adhoc/uninstall.yml b/playbooks/adhoc/uninstall.yml index ffdcd0ce1..beaf20b07 100644 --- a/playbooks/adhoc/uninstall.yml +++ b/playbooks/adhoc/uninstall.yml @@ -125,7 +125,7 @@ - name: Remove flannel package package: name=flannel state=absent when: openshift_use_flannel | default(false) | bool - when: "{{ not is_atomic | bool }}" + when: not is_atomic | bool - shell: systemctl reset-failed changed_when: False @@ -146,7 +146,7 @@ - lbr0 - vlinuxbr - vovsbr - when: "{{ openshift_remove_all | default(true) | bool }}" + when: openshift_remove_all | default(true) | bool - shell: atomic uninstall "{{ item }}"-master-api changed_when: False @@ -239,7 +239,7 @@ changed_when: False failed_when: False with_items: "{{ images_to_delete.results }}" - when: "{{ openshift_uninstall_images | default(True) | bool }}" + when: openshift_uninstall_images | default(True) | bool - name: remove sdn drop files file: @@ -252,7 +252,7 @@ - /etc/sysconfig/openshift-node - /etc/sysconfig/openvswitch - /run/openshift-sdn - when: "{{ openshift_remove_all | default(True) | bool }}" + when: openshift_remove_all | default(True) | bool - find: path={{ item }} file_type=file register: files diff --git a/playbooks/byo/openshift-cluster/initialize_groups.yml b/playbooks/byo/openshift-cluster/initialize_groups.yml index 2785dcc3b..2a725510a 100644 --- a/playbooks/byo/openshift-cluster/initialize_groups.yml +++ b/playbooks/byo/openshift-cluster/initialize_groups.yml @@ -8,17 +8,3 @@ - always tasks: - include_vars: cluster_hosts.yml - - name: Evaluate group l_oo_all_hosts - add_host: - name: "{{ item }}" - groups: l_oo_all_hosts - with_items: "{{ g_all_hosts | default([]) }}" - changed_when: no - -- name: Create initial host groups for all hosts - hosts: l_oo_all_hosts - gather_facts: no - tags: - - always - tasks: - - include_vars: cluster_hosts.yml diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade.yml b/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade.yml index 690b663f4..697a18c4d 100644 --- a/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade.yml +++ b/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade.yml @@ -4,106 +4,4 @@ # - include: ../../initialize_groups.yml -- include: ../../../../common/openshift-cluster/upgrades/init.yml - tags: - - pre_upgrade - -- name: Configure the upgrade target for the common upgrade tasks - hosts: l_oo_all_hosts - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_upgrade_target: "{{ '1.3' if deployment_type == 'origin' else '3.3' }}" - openshift_upgrade_min: "{{ '1.2' if deployment_type == 'origin' else '3.2' }}" - -# Pre-upgrade - -- include: ../../../../common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml - tags: - - pre_upgrade - -- name: Update repos and initialize facts on all hosts - hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config - tags: - - pre_upgrade - roles: - - openshift_repos - -- name: Set openshift_no_proxy_internal_hostnames - hosts: oo_masters_to_config:oo_nodes_to_upgrade - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] - | union(groups['oo_masters_to_config']) - | union(groups['oo_etcd_to_config'] | default([]))) - | oo_collect('openshift.common.hostname') | default([]) | join (',') - }}" - when: "{{ (openshift_http_proxy is defined 
or openshift_https_proxy is defined) and - openshift_generate_no_proxy_hosts | default(True) | bool }}" - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_inventory_vars.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/disable_excluder.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/initialize_openshift_version.yml - tags: - - pre_upgrade - vars: - # Request specific openshift_release and let the openshift_version role handle converting this - # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if - # defined, and overriding the normal behavior of protecting the installed version - openshift_release: "{{ openshift_upgrade_target }}" - openshift_protect_installed_version: False - - # We skip the docker role at this point in upgrade to prevent - # unintended package, container, or config upgrades which trigger - # docker restarts. At this early stage of upgrade we can assume - # docker is configured and running. - skip_docker_role: True - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_control_plane_running.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-master/validate_restart.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_docker_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/gate_checks.yml - tags: - - pre_upgrade - -# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. - -# Separate step so we can execute in parallel and clear out anything unused -# before we get into the serialized upgrade process which will then remove -# remaining images if possible. 
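The recurring cleanup in these hunks strips the Jinja2 delimiters from `when:` clauses. Ansible already evaluates `when:` as a raw Jinja2 expression, so wrapping it in `"{{ ... }}"` is redundant, and newer Ansible releases warn that conditionals should not contain templating delimiters. A minimal before/after sketch, reusing the flannel task from the uninstall hunk above:

```
# Deprecated style: templated conditional
- name: Remove flannel package
  package: name=flannel state=absent
  when: "{{ not is_atomic | bool }}"

# Preferred style: bare Jinja2 expression
- name: Remove flannel package
  package: name=flannel state=absent
  when: not is_atomic | bool
```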
-- name: Cleanup unused Docker images - hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config - tasks: - - include: ../../../../common/openshift-cluster/upgrades/cleanup_unused_images.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_control_plane.yml - vars: - master_config_hook: "v3_3/master_config_upgrade.yml" - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_nodes.yml - vars: - node_config_hook: "v3_3/node_config_upgrade.yml" - -- include: ../../../../common/openshift-cluster/upgrades/post_control_plane.yml +- include: ../../../../common/openshift-cluster/upgrades/v3_3/upgrade.yml diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade_control_plane.yml b/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade_control_plane.yml index fca2c04f3..4d284c279 100644 --- a/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade_control_plane.yml +++ b/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade_control_plane.yml @@ -13,101 +13,4 @@ # - include: ../../initialize_groups.yml -- include: ../../../../common/openshift-cluster/upgrades/init.yml - tags: - - pre_upgrade - -- name: Configure the upgrade target for the common upgrade tasks - hosts: l_oo_all_hosts - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_upgrade_target: "{{ '1.3' if deployment_type == 'origin' else '3.3' }}" - openshift_upgrade_min: "{{ '1.2' if deployment_type == 'origin' else '3.2' }}" - -# Pre-upgrade -- include: ../../../../common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml - tags: - - pre_upgrade - -- name: Update repos on control plane hosts - hosts: oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config - tags: - - pre_upgrade - roles: - - openshift_repos - -- name: Set openshift_no_proxy_internal_hostnames - hosts: oo_masters_to_config:oo_nodes_to_upgrade - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] - | union(groups['oo_masters_to_config']) - | union(groups['oo_etcd_to_config'] | default([]))) - | oo_collect('openshift.common.hostname') | default([]) | join (',') - }}" - when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and - openshift_generate_no_proxy_hosts | default(True) | bool }}" - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_inventory_vars.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/disable_excluder.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/initialize_openshift_version.yml - tags: - - pre_upgrade - vars: - # Request specific openshift_release and let the openshift_version role handle converting this - # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if - # defined, and overriding the normal behavior of protecting the installed version - openshift_release: "{{ openshift_upgrade_target }}" - openshift_protect_installed_version: False - - # We skip the docker role at this point in upgrade to prevent - # unintended package, container, or config upgrades which trigger - # docker restarts. At this early stage of upgrade we can assume - # docker is configured and running. 
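The net effect of the v3_3 changes is that each byo entry point becomes a thin wrapper: it evaluates the inventory groups and then delegates everything else to the shared playbook under `playbooks/common`. A sketch of the whole wrapper playbook as it looks after this patch, using the include paths from the hunks above:

```
---
# Evaluate host groups, then hand off to the common upgrade playbook.
- include: ../../initialize_groups.yml

- include: ../../../../common/openshift-cluster/upgrades/v3_3/upgrade.yml
```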
- skip_docker_role: True - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_control_plane_running.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-master/validate_restart.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_docker_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/gate_checks.yml - tags: - - pre_upgrade - -# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. - -# Separate step so we can execute in parallel and clear out anything unused -# before we get into the serialized upgrade process which will then remove -# remaining images if possible. -- name: Cleanup unused Docker images - hosts: oo_masters_to_config:oo_etcd_to_config - tasks: - - include: ../../../../common/openshift-cluster/upgrades/cleanup_unused_images.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_control_plane.yml - vars: - master_config_hook: "v3_3/master_config_upgrade.yml" - -- include: ../../../../common/openshift-cluster/upgrades/post_control_plane.yml +- include: ../../../../common/openshift-cluster/upgrades/v3_3/upgrade_control_plane.yml diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade_nodes.yml b/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade_nodes.yml index d171ac3cd..180a2821f 100644 --- a/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade_nodes.yml +++ b/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade_nodes.yml @@ -6,103 +6,4 @@ # - include: ../../initialize_groups.yml -- include: ../../../../common/openshift-cluster/upgrades/init.yml - tags: - - pre_upgrade - -- name: Configure the upgrade target for the common upgrade tasks - hosts: l_oo_all_hosts - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_upgrade_target: "{{ '1.3' if deployment_type == 'origin' else '3.3' }}" - openshift_upgrade_min: "{{ '1.2' if deployment_type == 'origin' else '3.2' }}" - -# Pre-upgrade -- include: ../../../../common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml - tags: - - pre_upgrade - -- name: Update repos on nodes - hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config - roles: - - openshift_repos - tags: - - pre_upgrade - -- name: Set openshift_no_proxy_internal_hostnames - hosts: oo_masters_to_config:oo_nodes_to_upgrade - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_upgrade'] - | union(groups['oo_masters_to_config']) - | union(groups['oo_etcd_to_config'] | default([]))) - | oo_collect('openshift.common.hostname') | default([]) | join (',') - }}" - when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and - openshift_generate_no_proxy_hosts | default(True) | bool }}" - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_inventory_vars.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/disable_excluder.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/initialize_openshift_version.yml - tags: - - pre_upgrade - vars: - # Request specific openshift_release and let the openshift_version role handle converting this - # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if - # defined, and overriding the 
normal behavior of protecting the installed version - openshift_release: "{{ openshift_upgrade_target }}" - openshift_protect_installed_version: False - - # We skip the docker role at this point in upgrade to prevent - # unintended package, container, or config upgrades which trigger - # docker restarts. At this early stage of upgrade we can assume - # docker is configured and running. - skip_docker_role: True - -- name: Verify masters are already upgraded - hosts: oo_masters_to_config - tags: - - pre_upgrade - tasks: - - fail: msg="Master running {{ openshift.common.version }} must be upgraded to {{ openshift_version }} before node upgrade can be run." - when: openshift.common.version != openshift_version - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_control_plane_running.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_docker_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/gate_checks.yml - tags: - - pre_upgrade - -# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. - -# Separate step so we can execute in parallel and clear out anything unused -# before we get into the serialized upgrade process which will then remove -# remaining images if possible. -- name: Cleanup unused Docker images - hosts: oo_nodes_to_upgrade - tasks: - - include: ../../../../common/openshift-cluster/upgrades/cleanup_unused_images.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_nodes.yml - vars: - node_config_hook: "v3_3/node_config_upgrade.yml" +- include: ../../../../common/openshift-cluster/upgrades/v3_3/upgrade_nodes.yml diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade.yml b/playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade.yml index 217163802..8cce91b3f 100644 --- a/playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade.yml +++ b/playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade.yml @@ -4,104 +4,4 @@ # - include: ../../initialize_groups.yml -- include: ../../../../common/openshift-cluster/upgrades/init.yml - tags: - - pre_upgrade - -- name: Configure the upgrade target for the common upgrade tasks - hosts: l_oo_all_hosts - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_upgrade_target: "{{ '1.4' if deployment_type == 'origin' else '3.4' }}" - openshift_upgrade_min: "{{ '1.3' if deployment_type == 'origin' else '3.3' }}" - -# Pre-upgrade - -- include: ../../../../common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml - tags: - - pre_upgrade - -- name: Update repos and initialize facts on all hosts - hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config - tags: - - pre_upgrade - roles: - - openshift_repos - -- name: Set openshift_no_proxy_internal_hostnames - hosts: oo_masters_to_config:oo_nodes_to_upgrade - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] - | union(groups['oo_masters_to_config']) - | union(groups['oo_etcd_to_config'] | default([]))) - | oo_collect('openshift.common.hostname') | default([]) | join (',') - }}" - when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and - openshift_generate_no_proxy_hosts | default(True) | bool }}" - -- include: 
../../../../common/openshift-cluster/upgrades/pre/verify_inventory_vars.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/disable_excluder.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/initialize_openshift_version.yml - tags: - - pre_upgrade - vars: - # Request specific openshift_release and let the openshift_version role handle converting this - # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if - # defined, and overriding the normal behavior of protecting the installed version - openshift_release: "{{ openshift_upgrade_target }}" - openshift_protect_installed_version: False - - # We skip the docker role at this point in upgrade to prevent - # unintended package, container, or config upgrades which trigger - # docker restarts. At this early stage of upgrade we can assume - # docker is configured and running. - skip_docker_role: True - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_control_plane_running.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-master/validate_restart.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_docker_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/gate_checks.yml - tags: - - pre_upgrade - -# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. - -# Separate step so we can execute in parallel and clear out anything unused -# before we get into the serialized upgrade process which will then remove -# remaining images if possible. -- name: Cleanup unused Docker images - hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config - tasks: - - include: ../../../../common/openshift-cluster/upgrades/cleanup_unused_images.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_control_plane.yml - vars: - master_config_hook: "v3_4/master_config_upgrade.yml" - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_nodes.yml - -- include: ../../../../common/openshift-cluster/upgrades/post_control_plane.yml +- include: ../../../../common/openshift-cluster/upgrades/v3_4/upgrade.yml diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade_control_plane.yml b/playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade_control_plane.yml index d21c195bf..8e5d0f5f9 100644 --- a/playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade_control_plane.yml +++ b/playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade_control_plane.yml @@ -13,101 +13,4 @@ # - include: ../../initialize_groups.yml -- include: ../../../../common/openshift-cluster/upgrades/init.yml - tags: - - pre_upgrade - -- name: Configure the upgrade target for the common upgrade tasks - hosts: l_oo_all_hosts - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_upgrade_target: "{{ '1.4' if deployment_type == 'origin' else '3.4' }}" - openshift_upgrade_min: "{{ '1.3' if deployment_type == 'origin' else '3.3' }}" - -# Pre-upgrade -- include: ../../../../common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml - tags: - - pre_upgrade - -- name: Update repos on control plane hosts - hosts: oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config - tags: - - pre_upgrade - roles: - - openshift_repos - -- name: Set openshift_no_proxy_internal_hostnames - hosts: 
oo_masters_to_config:oo_nodes_to_upgrade - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] - | union(groups['oo_masters_to_config']) - | union(groups['oo_etcd_to_config'] | default([]))) - | oo_collect('openshift.common.hostname') | default([]) | join (',') - }}" - when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and - openshift_generate_no_proxy_hosts | default(True) | bool }}" - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_inventory_vars.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/disable_excluder.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/initialize_openshift_version.yml - tags: - - pre_upgrade - vars: - # Request specific openshift_release and let the openshift_version role handle converting this - # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if - # defined, and overriding the normal behavior of protecting the installed version - openshift_release: "{{ openshift_upgrade_target }}" - openshift_protect_installed_version: False - - # We skip the docker role at this point in upgrade to prevent - # unintended package, container, or config upgrades which trigger - # docker restarts. At this early stage of upgrade we can assume - # docker is configured and running. - skip_docker_role: True - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_control_plane_running.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-master/validate_restart.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_docker_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/gate_checks.yml - tags: - - pre_upgrade - -# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. - -# Separate step so we can execute in parallel and clear out anything unused -# before we get into the serialized upgrade process which will then remove -# remaining images if possible. 
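The `openshift_no_proxy_internal_hostnames` fact that keeps reappearing in these removed plays is built from two custom filters, `oo_select_keys` and `oo_collect`, which ship in this repository's `filter_plugins` directory (path assumed): the pipeline selects the hostvars of every node, master, and etcd host, collects each host's `openshift.common.hostname`, and joins the result into a comma-separated no-proxy list. A condensed sketch, with the `when:` condition written in the bare style the rest of this patch adopts:

```
- set_fact:
    openshift_no_proxy_internal_hostnames: "{{ hostvars
                                           | oo_select_keys(groups['oo_nodes_to_config']
                                                            | union(groups['oo_masters_to_config'])
                                                            | union(groups['oo_etcd_to_config'] | default([])))
                                           | oo_collect('openshift.common.hostname')
                                           | default([]) | join(',') }}"
  when: (openshift_http_proxy is defined or openshift_https_proxy is defined) and
        openshift_generate_no_proxy_hosts | default(True) | bool
```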
-- name: Cleanup unused Docker images - hosts: oo_masters_to_config:oo_etcd_to_config - tasks: - - include: ../../../../common/openshift-cluster/upgrades/cleanup_unused_images.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_control_plane.yml - vars: - master_config_hook: "v3_4/master_config_upgrade.yml" - -- include: ../../../../common/openshift-cluster/upgrades/post_control_plane.yml +- include: ../../../../common/openshift-cluster/upgrades/v3_4/upgrade_control_plane.yml diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade_nodes.yml b/playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade_nodes.yml index 7bb66611c..d5329b858 100644 --- a/playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade_nodes.yml +++ b/playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade_nodes.yml @@ -6,101 +6,4 @@ # - include: ../../initialize_groups.yml -- include: ../../../../common/openshift-cluster/upgrades/init.yml - tags: - - pre_upgrade - -- name: Configure the upgrade target for the common upgrade tasks - hosts: l_oo_all_hosts - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_upgrade_target: "{{ '1.4' if deployment_type == 'origin' else '3.4' }}" - openshift_upgrade_min: "{{ '1.3' if deployment_type == 'origin' else '3.3' }}" - -# Pre-upgrade -- include: ../../../../common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml - tags: - - pre_upgrade - -- name: Update repos on nodes - hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config - roles: - - openshift_repos - tags: - - pre_upgrade - -- name: Set openshift_no_proxy_internal_hostnames - hosts: oo_masters_to_config:oo_nodes_to_upgrade - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_upgrade'] - | union(groups['oo_masters_to_config']) - | union(groups['oo_etcd_to_config'] | default([]))) - | oo_collect('openshift.common.hostname') | default([]) | join (',') - }}" - when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and - openshift_generate_no_proxy_hosts | default(True) | bool }}" - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_inventory_vars.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/disable_excluder.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/initialize_openshift_version.yml - tags: - - pre_upgrade - vars: - # Request specific openshift_release and let the openshift_version role handle converting this - # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if - # defined, and overriding the normal behavior of protecting the installed version - openshift_release: "{{ openshift_upgrade_target }}" - openshift_protect_installed_version: False - - # We skip the docker role at this point in upgrade to prevent - # unintended package, container, or config upgrades which trigger - # docker restarts. At this early stage of upgrade we can assume - # docker is configured and running. - skip_docker_role: True - -- name: Verify masters are already upgraded - hosts: oo_masters_to_config - tags: - - pre_upgrade - tasks: - - fail: msg="Master running {{ openshift.common.version }} must be upgraded to {{ openshift_version }} before node upgrade can be run." 
- when: openshift.common.version != openshift_version - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_control_plane_running.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_docker_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/gate_checks.yml - tags: - - pre_upgrade - -# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. - -# Separate step so we can execute in parallel and clear out anything unused -# before we get into the serialized upgrade process which will then remove -# remaining images if possible. -- name: Cleanup unused Docker images - hosts: oo_nodes_to_upgrade - tasks: - - include: ../../../../common/openshift-cluster/upgrades/cleanup_unused_images.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_nodes.yml +- include: ../../../../common/openshift-cluster/upgrades/v3_4/upgrade_nodes.yml diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_5/roles b/playbooks/byo/openshift-cluster/upgrades/v3_5/roles deleted file mode 120000 index 6bc1a7aef..000000000 --- a/playbooks/byo/openshift-cluster/upgrades/v3_5/roles +++ /dev/null @@ -1 +0,0 @@ -../../../../../roles
\ No newline at end of file diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade.yml b/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade.yml index f0900e04e..f44d55ad2 100644 --- a/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade.yml +++ b/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade.yml @@ -4,110 +4,4 @@ # - include: ../../initialize_groups.yml -- include: ../../../../common/openshift-cluster/upgrades/init.yml - tags: - - pre_upgrade - -- name: Configure the upgrade target for the common upgrade tasks - hosts: l_oo_all_hosts - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_upgrade_target: "{{ '1.5' if deployment_type == 'origin' else '3.5' }}" - openshift_upgrade_min: "{{ '1.4' if deployment_type == 'origin' else '3.4' }}" - -# Pre-upgrade - -- include: ../../../../common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml - tags: - - pre_upgrade - -- name: Update repos and initialize facts on all hosts - hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config - tags: - - pre_upgrade - roles: - - openshift_repos - -- name: Set openshift_no_proxy_internal_hostnames - hosts: oo_masters_to_config:oo_nodes_to_upgrade - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] - | union(groups['oo_masters_to_config']) - | union(groups['oo_etcd_to_config'] | default([]))) - | oo_collect('openshift.common.hostname') | default([]) | join (',') - }}" - when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and - openshift_generate_no_proxy_hosts | default(True) | bool }}" - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_inventory_vars.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/disable_excluder.yml - tags: - - pre_upgrade - -# Note: During upgrade the openshift excluder is not unexcluded inside the initialize_openshift_version.yml play. -# So it is necassary to run the play after running disable_excluder.yml. -- include: ../../../../common/openshift-cluster/initialize_openshift_version.yml - tags: - - pre_upgrade - vars: - # Request specific openshift_release and let the openshift_version role handle converting this - # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if - # defined, and overriding the normal behavior of protecting the installed version - openshift_release: "{{ openshift_upgrade_target }}" - openshift_protect_installed_version: False - - # We skip the docker role at this point in upgrade to prevent - # unintended package, container, or config upgrades which trigger - # docker restarts. At this early stage of upgrade we can assume - # docker is configured and running. 
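Each upgrade playbook pins its version window before any checks run: a `set_fact` play maps `deployment_type` to the matching target/minimum pair, and the later `initialize_openshift_version.yml` include consumes the target as `openshift_release` with `openshift_protect_installed_version: False`. Condensed from the v3_5 hunk above:

```
- name: Configure the upgrade target for the common upgrade tasks
  hosts: l_oo_all_hosts
  tags:
    - pre_upgrade
  tasks:
    - set_fact:
        openshift_upgrade_target: "{{ '1.5' if deployment_type == 'origin' else '3.5' }}"
        openshift_upgrade_min: "{{ '1.4' if deployment_type == 'origin' else '3.4' }}"
```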
- skip_docker_role: True - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_control_plane_running.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-master/validate_restart.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_docker_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/v3_5/validator.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/gate_checks.yml - tags: - - pre_upgrade - -# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. - -# Separate step so we can execute in parallel and clear out anything unused -# before we get into the serialized upgrade process which will then remove -# remaining images if possible. -- name: Cleanup unused Docker images - hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config - tasks: - - include: ../../../../common/openshift-cluster/upgrades/cleanup_unused_images.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_control_plane.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_nodes.yml - -- include: ../../../../common/openshift-cluster/upgrades/post_control_plane.yml - -- include: ../../../../common/openshift-cluster/upgrades/v3_5/storage_upgrade.yml +- include: ../../../../common/openshift-cluster/upgrades/v3_5/upgrade.yml diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml b/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml index e8d834a04..2377713fa 100644 --- a/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml +++ b/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml @@ -13,105 +13,4 @@ # - include: ../../initialize_groups.yml -- include: ../../../../common/openshift-cluster/upgrades/init.yml - tags: - - pre_upgrade - -# Configure the upgrade target for the common upgrade tasks: -- hosts: l_oo_all_hosts - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_upgrade_target: "{{ '1.5' if deployment_type == 'origin' else '3.5' }}" - openshift_upgrade_min: "{{ '1.4' if deployment_type == 'origin' else '3.4' }}" - -# Pre-upgrade -- include: ../../../../common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml - tags: - - pre_upgrade - -- name: Update repos on control plane hosts - hosts: oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config - tags: - - pre_upgrade - roles: - - openshift_repos - -- name: Set openshift_no_proxy_internal_hostnames - hosts: oo_masters_to_config:oo_nodes_to_upgrade - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] - | union(groups['oo_masters_to_config']) - | union(groups['oo_etcd_to_config'] | default([]))) - | oo_collect('openshift.common.hostname') | default([]) | join (',') - }}" - when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and - openshift_generate_no_proxy_hosts | default(True) | bool }}" - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_inventory_vars.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/disable_excluder.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/initialize_openshift_version.yml - tags: - - pre_upgrade - 
vars: - # Request specific openshift_release and let the openshift_version role handle converting this - # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if - # defined, and overriding the normal behavior of protecting the installed version - openshift_release: "{{ openshift_upgrade_target }}" - openshift_protect_installed_version: False - - # We skip the docker role at this point in upgrade to prevent - # unintended package, container, or config upgrades which trigger - # docker restarts. At this early stage of upgrade we can assume - # docker is configured and running. - skip_docker_role: True - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_control_plane_running.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-master/validate_restart.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_docker_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/v3_5/validator.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/gate_checks.yml - tags: - - pre_upgrade - -# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. - -# Separate step so we can execute in parallel and clear out anything unused -# before we get into the serialized upgrade process which will then remove -# remaining images if possible. -- name: Cleanup unused Docker images - hosts: oo_masters_to_config:oo_etcd_to_config - tasks: - - include: ../../../../common/openshift-cluster/upgrades/cleanup_unused_images.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_control_plane.yml - -- include: ../../../../common/openshift-cluster/upgrades/post_control_plane.yml - -- include: ../../../../common/openshift-cluster/upgrades/v3_5/storage_upgrade.yml +- include: ../../../../common/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade_nodes.yml b/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade_nodes.yml index c2a4debc8..5b3f6ab06 100644 --- a/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade_nodes.yml +++ b/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade_nodes.yml @@ -6,101 +6,4 @@ # - include: ../../initialize_groups.yml -- include: ../../../../common/openshift-cluster/upgrades/init.yml - tags: - - pre_upgrade - -# Configure the upgrade target for the common upgrade tasks: -- hosts: l_oo_all_hosts - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_upgrade_target: "{{ '1.5' if deployment_type == 'origin' else '3.5' }}" - openshift_upgrade_min: "{{ '1.4' if deployment_type == 'origin' else '3.4' }}" - -# Pre-upgrade -- include: ../../../../common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml - tags: - - pre_upgrade - -- name: Update repos on nodes - hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config - roles: - - openshift_repos - tags: - - pre_upgrade - -- name: Set openshift_no_proxy_internal_hostnames - hosts: oo_masters_to_config:oo_nodes_to_upgrade - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_upgrade'] - | union(groups['oo_masters_to_config']) - | union(groups['oo_etcd_to_config'] | default([]))) - | oo_collect('openshift.common.hostname') | 
default([]) | join (',') - }}" - when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and - openshift_generate_no_proxy_hosts | default(True) | bool }}" - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_inventory_vars.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/disable_excluder.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/initialize_openshift_version.yml - tags: - - pre_upgrade - vars: - # Request specific openshift_release and let the openshift_version role handle converting this - # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if - # defined, and overriding the normal behavior of protecting the installed version - openshift_release: "{{ openshift_upgrade_target }}" - openshift_protect_installed_version: False - - # We skip the docker role at this point in upgrade to prevent - # unintended package, container, or config upgrades which trigger - # docker restarts. At this early stage of upgrade we can assume - # docker is configured and running. - skip_docker_role: True - -- name: Verify masters are already upgraded - hosts: oo_masters_to_config - tags: - - pre_upgrade - tasks: - - fail: msg="Master running {{ openshift.common.version }} must be upgraded to {{ openshift_version }} before node upgrade can be run." - when: openshift.common.version != openshift_version - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_control_plane_running.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_docker_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/gate_checks.yml - tags: - - pre_upgrade - -# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. - -# Separate step so we can execute in parallel and clear out anything unused -# before we get into the serialized upgrade process which will then remove -# remaining images if possible. -- name: Cleanup unused Docker images - hosts: oo_nodes_to_upgrade - tasks: - - include: ../../../../common/openshift-cluster/upgrades/cleanup_unused_images.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_nodes.yml +- include: ../../../../common/openshift-cluster/upgrades/v3_5/upgrade_nodes.yml diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_6/roles b/playbooks/byo/openshift-cluster/upgrades/v3_6/roles deleted file mode 120000 index 6bc1a7aef..000000000 --- a/playbooks/byo/openshift-cluster/upgrades/v3_6/roles +++ /dev/null @@ -1 +0,0 @@ -../../../../../roles
\ No newline at end of file diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.yml b/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.yml index 763e79e01..40120b3e8 100644 --- a/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.yml +++ b/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.yml @@ -4,110 +4,4 @@ # - include: ../../initialize_groups.yml -- include: ../../../../common/openshift-cluster/upgrades/init.yml - tags: - - pre_upgrade - -- name: Configure the upgrade target for the common upgrade tasks - hosts: l_oo_all_hosts - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_upgrade_target: '3.6' - openshift_upgrade_min: "{{ '1.5' if deployment_type == 'origin' else '3.5' }}" - -# Pre-upgrade - -- include: ../../../../common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml - tags: - - pre_upgrade - -- name: Update repos and initialize facts on all hosts - hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config - tags: - - pre_upgrade - roles: - - openshift_repos - -- name: Set openshift_no_proxy_internal_hostnames - hosts: oo_masters_to_config:oo_nodes_to_upgrade - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] - | union(groups['oo_masters_to_config']) - | union(groups['oo_etcd_to_config'] | default([]))) - | oo_collect('openshift.common.hostname') | default([]) | join (',') - }}" - when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and - openshift_generate_no_proxy_hosts | default(True) | bool }}" - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_inventory_vars.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/disable_excluder.yml - tags: - - pre_upgrade - -# Note: During upgrade the openshift excluder is not unexcluded inside the initialize_openshift_version.yml play. -# So it is necassary to run the play after running disable_excluder.yml. -- include: ../../../../common/openshift-cluster/initialize_openshift_version.yml - tags: - - pre_upgrade - vars: - # Request specific openshift_release and let the openshift_version role handle converting this - # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if - # defined, and overriding the normal behavior of protecting the installed version - openshift_release: "{{ openshift_upgrade_target }}" - openshift_protect_installed_version: False - - # We skip the docker role at this point in upgrade to prevent - # unintended package, container, or config upgrades which trigger - # docker restarts. At this early stage of upgrade we can assume - # docker is configured and running. - skip_docker_role: True - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_control_plane_running.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-master/validate_restart.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_docker_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/v3_6/validator.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/gate_checks.yml - tags: - - pre_upgrade - -# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. 
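For orientation, this is the execution order the byo playbook used to spell out, and which the common v3_6 playbook is presumably expected to preserve: pre-upgrade gates, parallel image cleanup, then the serialized control-plane upgrade, node upgrade, post-upgrade steps, and the storage migration. In outline, using the include paths from the removed lines:

```
- include: ../../../../common/openshift-cluster/upgrades/upgrade_control_plane.yml

- include: ../../../../common/openshift-cluster/upgrades/upgrade_nodes.yml

- include: ../../../../common/openshift-cluster/upgrades/post_control_plane.yml

- include: ../../../../common/openshift-cluster/upgrades/v3_6/storage_upgrade.yml
```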
- -# Separate step so we can execute in parallel and clear out anything unused -# before we get into the serialized upgrade process which will then remove -# remaining images if possible. -- name: Cleanup unused Docker images - hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config - tasks: - - include: ../../../../common/openshift-cluster/upgrades/cleanup_unused_images.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_control_plane.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_nodes.yml - -- include: ../../../../common/openshift-cluster/upgrades/post_control_plane.yml - -- include: ../../../../common/openshift-cluster/upgrades/v3_6/storage_upgrade.yml +- include: ../../../../common/openshift-cluster/upgrades/v3_6/upgrade.yml diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml b/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml index 7a1377be2..408a4c631 100644 --- a/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml +++ b/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml @@ -13,105 +13,4 @@ # - include: ../../initialize_groups.yml -- include: ../../../../common/openshift-cluster/upgrades/init.yml - tags: - - pre_upgrade - -# Configure the upgrade target for the common upgrade tasks: -- hosts: l_oo_all_hosts - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_upgrade_target: '3.6' - openshift_upgrade_min: "{{ '1.5' if deployment_type == 'origin' else '3.5' }}" - -# Pre-upgrade -- include: ../../../../common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml - tags: - - pre_upgrade - -- name: Update repos on control plane hosts - hosts: oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config - tags: - - pre_upgrade - roles: - - openshift_repos - -- name: Set openshift_no_proxy_internal_hostnames - hosts: oo_masters_to_config:oo_nodes_to_upgrade - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] - | union(groups['oo_masters_to_config']) - | union(groups['oo_etcd_to_config'] | default([]))) - | oo_collect('openshift.common.hostname') | default([]) | join (',') - }}" - when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and - openshift_generate_no_proxy_hosts | default(True) | bool }}" - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_inventory_vars.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/disable_excluder.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/initialize_openshift_version.yml - tags: - - pre_upgrade - vars: - # Request specific openshift_release and let the openshift_version role handle converting this - # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if - # defined, and overriding the normal behavior of protecting the installed version - openshift_release: "{{ openshift_upgrade_target }}" - openshift_protect_installed_version: False - - # We skip the docker role at this point in upgrade to prevent - # unintended package, container, or config upgrades which trigger - # docker restarts. At this early stage of upgrade we can assume - # docker is configured and running. 
- skip_docker_role: True - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_control_plane_running.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-master/validate_restart.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_docker_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/v3_6/validator.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/gate_checks.yml - tags: - - pre_upgrade - -# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. - -# Separate step so we can execute in parallel and clear out anything unused -# before we get into the serialized upgrade process which will then remove -# remaining images if possible. -- name: Cleanup unused Docker images - hosts: oo_masters_to_config:oo_etcd_to_config - tasks: - - include: ../../../../common/openshift-cluster/upgrades/cleanup_unused_images.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_control_plane.yml - -- include: ../../../../common/openshift-cluster/upgrades/post_control_plane.yml - -- include: ../../../../common/openshift-cluster/upgrades/v3_6/storage_upgrade.yml +- include: ../../../../common/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml b/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml index 065746493..b5f42b804 100644 --- a/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml +++ b/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml @@ -6,101 +6,4 @@ # - include: ../../initialize_groups.yml -- include: ../../../../common/openshift-cluster/upgrades/init.yml - tags: - - pre_upgrade - -# Configure the upgrade target for the common upgrade tasks: -- hosts: l_oo_all_hosts - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_upgrade_target: '3.6' - openshift_upgrade_min: "{{ '1.5' if deployment_type == 'origin' else '3.5' }}" - -# Pre-upgrade -- include: ../../../../common/openshift-cluster/upgrades/initialize_nodes_to_upgrade.yml - tags: - - pre_upgrade - -- name: Update repos on nodes - hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config - roles: - - openshift_repos - tags: - - pre_upgrade - -- name: Set openshift_no_proxy_internal_hostnames - hosts: oo_masters_to_config:oo_nodes_to_upgrade - tags: - - pre_upgrade - tasks: - - set_fact: - openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_upgrade'] - | union(groups['oo_masters_to_config']) - | union(groups['oo_etcd_to_config'] | default([]))) - | oo_collect('openshift.common.hostname') | default([]) | join (',') - }}" - when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and - openshift_generate_no_proxy_hosts | default(True) | bool }}" - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_inventory_vars.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/disable_excluder.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/initialize_openshift_version.yml - tags: - - pre_upgrade - vars: - # Request specific openshift_release and let the openshift_version role handle converting this - # to a more specific version, respecting 
openshift_image_tag and openshift_pkg_version if - # defined, and overriding the normal behavior of protecting the installed version - openshift_release: "{{ openshift_upgrade_target }}" - openshift_protect_installed_version: False - - # We skip the docker role at this point in upgrade to prevent - # unintended package, container, or config upgrades which trigger - # docker restarts. At this early stage of upgrade we can assume - # docker is configured and running. - skip_docker_role: True - -- name: Verify masters are already upgraded - hosts: oo_masters_to_config - tags: - - pre_upgrade - tasks: - - fail: msg="Master running {{ openshift.common.version }} must be upgraded to {{ openshift_version }} before node upgrade can be run." - when: openshift.common.version != openshift_version - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_control_plane_running.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/verify_docker_upgrade_targets.yml - tags: - - pre_upgrade - -- include: ../../../../common/openshift-cluster/upgrades/pre/gate_checks.yml - tags: - - pre_upgrade - -# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. - -# Separate step so we can execute in parallel and clear out anything unused -# before we get into the serialized upgrade process which will then remove -# remaining images if possible. -- name: Cleanup unused Docker images - hosts: oo_nodes_to_upgrade - tasks: - - include: ../../../../common/openshift-cluster/upgrades/cleanup_unused_images.yml - -- include: ../../../../common/openshift-cluster/upgrades/upgrade_nodes.yml +- include: ../../../../common/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml diff --git a/playbooks/byo/openshift-etcd/config.yml b/playbooks/byo/openshift-etcd/config.yml new file mode 100644 index 000000000..dd3f47a4d --- /dev/null +++ b/playbooks/byo/openshift-etcd/config.yml @@ -0,0 +1,14 @@ +--- +- include: ../openshift-cluster/initialize_groups.yml + tags: + - always + +- include: ../../common/openshift-cluster/std_include.yml + tags: + - always + +- include: ../../common/openshift-etcd/config.yml + vars: + openshift_cluster_id: "{{ cluster_id | default('default') }}" + openshift_debug_level: "{{ debug_level | default(2) }}" + openshift_deployment_subtype: "{{ deployment_subtype | default(none) }}" diff --git a/playbooks/byo/openshift-preflight/check.yml b/playbooks/byo/openshift-preflight/check.yml index c5f05d0f0..04a55308a 100644 --- a/playbooks/byo/openshift-preflight/check.yml +++ b/playbooks/byo/openshift-preflight/check.yml @@ -1,5 +1,7 @@ --- -- hosts: OSEv3 +- include: ../openshift-cluster/initialize_groups.yml + +- hosts: g_all_hosts name: run OpenShift health checks roles: - openshift_health_checker diff --git a/playbooks/byo/openshift_facts.yml b/playbooks/byo/openshift_facts.yml index 3b10323d6..75b606e61 100644 --- a/playbooks/byo/openshift_facts.yml +++ b/playbooks/byo/openshift_facts.yml @@ -8,7 +8,7 @@ - always - name: Gather Cluster facts - hosts: OSEv3 + hosts: g_all_hosts roles: - openshift_facts tasks: diff --git a/playbooks/byo/rhel_subscribe.yml b/playbooks/byo/rhel_subscribe.yml index 777743def..aec87cf82 100644 --- a/playbooks/byo/rhel_subscribe.yml +++ b/playbooks/byo/rhel_subscribe.yml @@ -3,12 +3,8 @@ tags: - always -- include: ../common/openshift-cluster/std_include.yml - tags: - - always - - name: Subscribe hosts, update 
repos and update OS packages - hosts: l_oo_all_hosts + hosts: g_all_hosts roles: - role: rhel_subscribe when: deployment_type in ['atomic-enterprise', 'enterprise', 'openshift-enterprise'] and diff --git a/playbooks/common/openshift-cluster/evaluate_groups.yml b/playbooks/common/openshift-cluster/evaluate_groups.yml index 6aac70f63..17a177644 100644 --- a/playbooks/common/openshift-cluster/evaluate_groups.yml +++ b/playbooks/common/openshift-cluster/evaluate_groups.yml @@ -5,33 +5,40 @@ become: no gather_facts: no tasks: - - fail: + - name: Evaluate groups - g_etcd_hosts required + fail: msg: This playbook requires g_etcd_hosts to be set - when: "{{ g_etcd_hosts is not defined }}" + when: g_etcd_hosts is not defined - - fail: + - name: Evaluate groups - g_master_hosts or g_new_master_hosts required + fail: msg: This playbook requires g_master_hosts or g_new_master_hosts to be set - when: "{{ g_master_hosts is not defined and g_new_master_hosts is not defined }}" + when: g_master_hosts is not defined and g_new_master_hosts is not defined - - fail: + - name: Evaluate groups - g_node_hosts or g_new_node_hosts required + fail: msg: This playbook requires g_node_hosts or g_new_node_hosts to be set - when: "{{ g_node_hosts is not defined and g_new_node_hosts is not defined }}" + when: g_node_hosts is not defined and g_new_node_hosts is not defined - - fail: + - name: Evaluate groups - g_lb_hosts required + fail: msg: This playbook requires g_lb_hosts to be set - when: "{{ g_lb_hosts is not defined }}" + when: g_lb_hosts is not defined - - fail: + - name: Evaluate groups - g_nfs_hosts required + fail: msg: This playbook requires g_nfs_hosts to be set - when: "{{ g_nfs_hosts is not defined }}" + when: g_nfs_hosts is not defined - - fail: + - name: Evaluate groups - g_nfs_hosts is single host + fail: msg: The nfs group must be limited to one host - when: "{{ (groups[g_nfs_hosts] | default([])) | length > 1 }}" + when: (groups[g_nfs_hosts] | default([])) | length > 1 - - fail: + - name: Evaluate groups - g_glusterfs_hosts required + fail: msg: This playbook requires g_glusterfs_hosts to be set - when: "{{ g_glusterfs_hosts is not defined }}" + when: g_glusterfs_hosts is not defined - name: Evaluate oo_all_hosts add_host: @@ -51,13 +58,13 @@ with_items: "{{ g_master_hosts | union(g_new_master_hosts) | default([]) }}" changed_when: no - - name: Evaluate oo_etcd_to_config + - name: Evaluate oo_first_master add_host: - name: "{{ item }}" - groups: oo_etcd_to_config + name: "{{ g_master_hosts[0] }}" + groups: oo_first_master ansible_ssh_user: "{{ g_ssh_user | default(omit) }}" ansible_become: "{{ g_sudo | default(omit) }}" - with_items: "{{ g_etcd_hosts | default([]) }}" + when: g_master_hosts|length > 0 changed_when: no - name: Evaluate oo_masters_to_config @@ -69,41 +76,59 @@ with_items: "{{ g_new_master_hosts | default(g_master_hosts | default([], true), true) }}" changed_when: no - - name: Evaluate oo_nodes_to_config + - name: Evaluate oo_etcd_to_config add_host: name: "{{ item }}" - groups: oo_nodes_to_config + groups: oo_etcd_to_config ansible_ssh_user: "{{ g_ssh_user | default(omit) }}" ansible_become: "{{ g_sudo | default(omit) }}" - with_items: "{{ g_new_node_hosts | default(g_node_hosts | default([], true), true) }}" + with_items: "{{ g_etcd_hosts | default([]) }}" changed_when: no - # Skip adding the master to oo_nodes_to_config when g_new_node_hosts is - - name: Add master to oo_nodes_to_config add_host: - name: "{{ item }}" - groups: oo_nodes_to_config + - name: Evaluate oo_first_etcd add_host: - name:
"{{ g_etcd_hosts[0] }}" + groups: oo_first_etcd ansible_ssh_user: "{{ g_ssh_user | default(omit) }}" ansible_become: "{{ g_sudo | default(omit) }}" - with_items: "{{ g_master_hosts | default([]) }}" - when: "{{ g_nodeonmaster | default(false) | bool and not g_new_node_hosts | default(false) | bool }}" + when: g_etcd_hosts|length > 0 changed_when: no - - name: Evaluate oo_first_etcd + # We use two groups one for hosts we're upgrading which doesn't include embedded etcd + # The other for backing up which includes the embedded etcd host, there's no need to + # upgrade embedded etcd that just happens when the master is updated. + - name: Evaluate oo_etcd_hosts_to_upgrade add_host: - name: "{{ g_etcd_hosts[0] }}" - groups: oo_first_etcd + name: "{{ item }}" + groups: oo_etcd_hosts_to_upgrade + with_items: "{{ groups.oo_etcd_to_config if groups.oo_etcd_to_config is defined and groups.oo_etcd_to_config | length > 0 else [] }}" + changed_when: False + + - name: Evaluate oo_etcd_hosts_to_backup + add_host: + name: "{{ item }}" + groups: oo_etcd_hosts_to_backup + with_items: "{{ groups.oo_etcd_to_config if groups.oo_etcd_to_config is defined and groups.oo_etcd_to_config | length > 0 else groups.oo_first_master }}" + changed_when: False + + - name: Evaluate oo_nodes_to_config + add_host: + name: "{{ item }}" + groups: oo_nodes_to_config ansible_ssh_user: "{{ g_ssh_user | default(omit) }}" - when: "{{ g_etcd_hosts|length > 0 }}" + ansible_become: "{{ g_sudo | default(omit) }}" + with_items: "{{ g_new_node_hosts | default(g_node_hosts | default([], true), true) }}" changed_when: no - - name: Evaluate oo_first_master + # Skip adding the master to oo_nodes_to_config when g_new_node_hosts is + - name: Add master to oo_nodes_to_config add_host: - name: "{{ g_master_hosts[0] }}" - groups: oo_first_master + name: "{{ item }}" + groups: oo_nodes_to_config ansible_ssh_user: "{{ g_ssh_user | default(omit) }}" ansible_become: "{{ g_sudo | default(omit) }}" - when: "{{ g_master_hosts|length > 0 }}" + with_items: "{{ g_master_hosts | default([]) }}" + when: g_nodeonmaster | default(false) | bool and not g_new_node_hosts | default(false) | bool changed_when: no - name: Evaluate oo_lb_to_config diff --git a/playbooks/common/openshift-cluster/initialize_openshift_version.yml b/playbooks/common/openshift-cluster/initialize_openshift_version.yml index 07b38920f..f4e52869e 100644 --- a/playbooks/common/openshift-cluster/initialize_openshift_version.yml +++ b/playbooks/common/openshift-cluster/initialize_openshift_version.yml @@ -1,13 +1,14 @@ --- # NOTE: requires openshift_facts be run - name: Verify compatible yum/subscription-manager combination - hosts: l_oo_all_hosts + hosts: oo_all_hosts gather_facts: no tasks: # See: # https://bugzilla.redhat.com/show_bug.cgi?id=1395047 # https://bugzilla.redhat.com/show_bug.cgi?id=1282961 # https://github.com/openshift/openshift-ansible/issues/1138 + # Consider the repoquery module for this work - name: Check for bad combinations of yum and subscription-manager command: > {{ repoquery_cmd }} --installed --qf '%{version}' "yum" @@ -16,7 +17,7 @@ when: not openshift.common.is_atomic | bool - fail: msg: Incompatible versions of yum and subscription-manager found. You may need to update yum and yum-utils. - when: "not openshift.common.is_atomic | bool and 'Plugin \"search-disabled-repos\" requires API 2.7. Supported API is 2.6.' in yum_ver_test.stdout" + when: not openshift.common.is_atomic | bool and 'Plugin \"search-disabled-repos\" requires API 2.7. Supported API is 2.6.' 
in yum_ver_test.stdout - name: Determine openshift_version to configure on first master hosts: oo_first_master diff --git a/playbooks/common/openshift-cluster/upgrades/disable_excluder.yml b/playbooks/common/openshift-cluster/upgrades/disable_excluder.yml index a30952929..02042c1ef 100644 --- a/playbooks/common/openshift-cluster/upgrades/disable_excluder.yml +++ b/playbooks/common/openshift-cluster/upgrades/disable_excluder.yml @@ -3,15 +3,10 @@ hosts: oo_masters_to_config:oo_nodes_to_config gather_facts: no tasks: - - include: pre/validate_excluder.yml - vars: - excluder: "{{ openshift.common.service_type }}-docker-excluder" - when: enable_docker_excluder | default(enable_excluders) | default(True) | bool - - include: pre/validate_excluder.yml - vars: - excluder: "{{ openshift.common.service_type }}-excluder" - when: enable_openshift_excluder | default(enable_excluders) | default(True) | bool - + # verify the excluders can be upgraded + - include_role: + name: openshift_excluder + tasks_from: verify_upgrade # disable excluders based on their status - include_role: diff --git a/playbooks/common/openshift-cluster/upgrades/etcd/backup.yml b/playbooks/common/openshift-cluster/upgrades/etcd/backup.yml index fb51a0061..9d0333ca8 100644 --- a/playbooks/common/openshift-cluster/upgrades/etcd/backup.yml +++ b/playbooks/common/openshift-cluster/upgrades/etcd/backup.yml @@ -1,6 +1,6 @@ --- - name: Backup etcd - hosts: etcd_hosts_to_backup + hosts: oo_etcd_hosts_to_backup vars: embedded_etcd: "{{ groups.oo_etcd_to_config | default([]) | length == 0 }}" etcdctl_command: "{{ 'etcdctl' if not openshift.common.is_containerized or embedded_etcd else 'docker exec etcd_container etcdctl' if not openshift.common.is_etcd_system_container else 'runc exec etcd etcdctl' }}" @@ -87,10 +87,10 @@ tasks: - set_fact: etcd_backup_completed: "{{ hostvars - | oo_select_keys(groups.etcd_hosts_to_backup) + | oo_select_keys(groups.oo_etcd_hosts_to_backup) | oo_collect('inventory_hostname', {'etcd_backup_complete': true}) }}" - set_fact: - etcd_backup_failed: "{{ groups.etcd_hosts_to_backup | difference(etcd_backup_completed) }}" + etcd_backup_failed: "{{ groups.oo_etcd_hosts_to_backup | difference(etcd_backup_completed) }}" - fail: msg: "Upgrade cannot continue. The following hosts did not complete etcd backup: {{ etcd_backup_failed | join(',') }}" when: etcd_backup_failed | length > 0 diff --git a/playbooks/common/openshift-cluster/upgrades/etcd/main.yml b/playbooks/common/openshift-cluster/upgrades/etcd/main.yml index fa86d29fb..d9b59edcb 100644 --- a/playbooks/common/openshift-cluster/upgrades/etcd/main.yml +++ b/playbooks/common/openshift-cluster/upgrades/etcd/main.yml @@ -5,32 +5,6 @@ # mirrored packages on your own because only the GA and latest versions are # available in the repos. So for Fedora we'll simply skip this, sorry. -- include: ../../evaluate_groups.yml - tags: - - always - -# We use two groups one for hosts we're upgrading which doesn't include embedded etcd -# The other for backing up which includes the embedded etcd host, there's no need to -# upgrade embedded etcd that just happens when the master is updated. 
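The `disable_excluder.yml` hunk above replaces ad-hoc task-file includes with `include_role` plus `tasks_from`, which runs a single task file out of a role while still loading that role's defaults and vars; the etcd play below gets the same treatment with `etcd_common`. The pattern, verbatim from the excluder hunk:

```
- include_role:
    name: openshift_excluder
    tasks_from: verify_upgrade
```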
-- name: Evaluate additional groups for etcd - hosts: localhost - connection: local - become: no - tasks: - - name: Evaluate etcd_hosts_to_upgrade - add_host: - name: "{{ item }}" - groups: etcd_hosts_to_upgrade - with_items: "{{ groups.oo_etcd_to_config if groups.oo_etcd_to_config is defined and groups.oo_etcd_to_config | length > 0 else [] }}" - changed_when: False - - - name: Evaluate etcd_hosts_to_backup - add_host: - name: "{{ item }}" - groups: etcd_hosts_to_backup - with_items: "{{ groups.oo_etcd_to_config if groups.oo_etcd_to_config is defined and groups.oo_etcd_to_config | length > 0 else groups.oo_first_master }}" - changed_when: False - - name: Backup etcd before upgrading anything include: backup.yml vars: @@ -38,9 +12,11 @@ when: openshift_etcd_backup | default(true) | bool - name: Drop etcdctl profiles - hosts: etcd_hosts_to_upgrade + hosts: oo_etcd_hosts_to_upgrade tasks: - - include: roles/etcd/tasks/etcdctl.yml + - include_role: + name: etcd_common + tasks_from: etcdctl.yml - name: Perform etcd upgrade include: ./upgrade.yml diff --git a/playbooks/common/openshift-cluster/upgrades/etcd/upgrade.yml b/playbooks/common/openshift-cluster/upgrades/etcd/upgrade.yml index a9b5b94e6..45e301315 100644 --- a/playbooks/common/openshift-cluster/upgrades/etcd/upgrade.yml +++ b/playbooks/common/openshift-cluster/upgrades/etcd/upgrade.yml @@ -1,6 +1,6 @@ --- - name: Determine etcd version - hosts: etcd_hosts_to_upgrade + hosts: oo_etcd_hosts_to_upgrade tasks: - name: Record RPM based etcd version command: rpm -qa --qf '%{version}' etcd\* @@ -43,7 +43,7 @@ # I really dislike this copy/pasta but I wasn't able to find a way to get it to loop # through hosts, then loop through tasks only when appropriate - name: Upgrade to 2.1 - hosts: etcd_hosts_to_upgrade + hosts: oo_etcd_hosts_to_upgrade serial: 1 vars: upgrade_version: '2.1' @@ -52,7 +52,7 @@ when: etcd_rpm_version.stdout | default('99') | version_compare('2.1','<') and ansible_distribution == 'RedHat' and not openshift.common.is_containerized | bool - name: Upgrade RPM hosts to 2.2 - hosts: etcd_hosts_to_upgrade + hosts: oo_etcd_hosts_to_upgrade serial: 1 vars: upgrade_version: '2.2' @@ -61,7 +61,7 @@ when: etcd_rpm_version.stdout | default('99') | version_compare('2.2','<') and ansible_distribution == 'RedHat' and not openshift.common.is_containerized | bool - name: Upgrade containerized hosts to 2.2.5 - hosts: etcd_hosts_to_upgrade + hosts: oo_etcd_hosts_to_upgrade serial: 1 vars: upgrade_version: 2.2.5 @@ -70,7 +70,7 @@ when: etcd_container_version.stdout | default('99') | version_compare('2.2','<') and openshift.common.is_containerized | bool - name: Upgrade RPM hosts to 2.3 - hosts: etcd_hosts_to_upgrade + hosts: oo_etcd_hosts_to_upgrade serial: 1 vars: upgrade_version: '2.3' @@ -79,7 +79,7 @@ when: etcd_rpm_version.stdout | default('99') | version_compare('2.3','<') and ansible_distribution == 'RedHat' and not openshift.common.is_containerized | bool - name: Upgrade containerized hosts to 2.3.7 - hosts: etcd_hosts_to_upgrade + hosts: oo_etcd_hosts_to_upgrade serial: 1 vars: upgrade_version: 2.3.7 @@ -88,7 +88,7 @@ when: etcd_container_version.stdout | default('99') | version_compare('2.3','<') and openshift.common.is_containerized | bool - name: Upgrade RPM hosts to 3.0 - hosts: etcd_hosts_to_upgrade + hosts: oo_etcd_hosts_to_upgrade serial: 1 vars: upgrade_version: '3.0' @@ -97,7 +97,7 @@ when: etcd_rpm_version.stdout | default('99') | version_compare('3.0','<') and ansible_distribution == 'RedHat' and not 
openshift.common.is_containerized | bool - name: Upgrade containerized hosts to etcd3 image - hosts: etcd_hosts_to_upgrade + hosts: oo_etcd_hosts_to_upgrade serial: 1 vars: upgrade_version: 3.0.15 @@ -106,7 +106,7 @@ when: etcd_container_version.stdout | default('99') | version_compare('3.0','<') and openshift.common.is_containerized | bool - name: Upgrade fedora to latest - hosts: etcd_hosts_to_upgrade + hosts: oo_etcd_hosts_to_upgrade serial: 1 tasks: - include: fedora_tasks.yml diff --git a/playbooks/common/openshift-cluster/upgrades/init.yml b/playbooks/common/openshift-cluster/upgrades/init.yml index cbf6d58b3..0f421928b 100644 --- a/playbooks/common/openshift-cluster/upgrades/init.yml +++ b/playbooks/common/openshift-cluster/upgrades/init.yml @@ -10,17 +10,6 @@ - include: ../initialize_facts.yml -- name: Ensure clean repo cache in the event repos have been changed manually - hosts: oo_all_hosts - tags: - - pre_upgrade - tasks: - - name: Clean package cache - command: "{{ ansible_pkg_mgr }} clean all" - when: not openshift.common.is_atomic | bool - args: - warn: no - - name: Ensure firewall is not switched during upgrade hosts: oo_all_hosts tasks: diff --git a/playbooks/common/openshift-cluster/upgrades/upgrade_control_plane.yml b/playbooks/common/openshift-cluster/upgrades/upgrade_control_plane.yml index c6e799261..0ad934d2d 100644 --- a/playbooks/common/openshift-cluster/upgrades/upgrade_control_plane.yml +++ b/playbooks/common/openshift-cluster/upgrades/upgrade_control_plane.yml @@ -2,17 +2,6 @@ ############################################################################### # Upgrade Masters ############################################################################### -- name: Evaluate additional groups for upgrade - hosts: localhost - connection: local - become: no - tasks: - - name: Evaluate etcd_hosts_to_backup - add_host: - name: "{{ item }}" - groups: etcd_hosts_to_backup - with_items: "{{ groups.oo_etcd_to_config if groups.oo_etcd_to_config is defined and groups.oo_etcd_to_config | length > 0 else groups.oo_first_master }}" - changed_when: False # If facts cache were for some reason deleted, this fact may not be set, and if not set # it will always default to true. 
This causes problems for the etcd data dir fact detection diff --git a/playbooks/common/openshift-cluster/upgrades/upgrade_scheduler.yml b/playbooks/common/openshift-cluster/upgrades/upgrade_scheduler.yml index 88f2ddc78..83d2cec81 100644 --- a/playbooks/common/openshift-cluster/upgrades/upgrade_scheduler.yml +++ b/playbooks/common/openshift-cluster/upgrades/upgrade_scheduler.yml @@ -63,12 +63,12 @@ - block: - debug: msg: "WARNING: openshift_master_scheduler_predicates is set to defaults from an earlier release of OpenShift current defaults are: {{ openshift_master_scheduler_default_predicates }}" - when: "{{ openshift_master_scheduler_predicates in older_predicates + older_predicates_no_region + [prev_predicates] + [prev_predicates_no_region] }}" + when: openshift_master_scheduler_predicates in older_predicates + older_predicates_no_region + [prev_predicates] + [prev_predicates_no_region] - debug: msg: "WARNING: openshift_master_scheduler_predicates does not match current defaults of: {{ openshift_master_scheduler_default_predicates }}" - when: "{{ openshift_master_scheduler_predicates != openshift_master_scheduler_default_predicates }}" - when: "{{ openshift_master_scheduler_predicates | default(none) is not none }}" + when: openshift_master_scheduler_predicates != openshift_master_scheduler_default_predicates + when: openshift_master_scheduler_predicates | default(none) is not none # Handle cases where openshift_master_predicates is not defined - block: @@ -87,7 +87,7 @@ when: "{{ openshift_master_scheduler_current_predicates != default_predicates_no_region and openshift_master_scheduler_current_predicates in older_predicates_no_region + [prev_predicates_no_region] }}" - when: "{{ openshift_master_scheduler_predicates | default(none) is none }}" + when: openshift_master_scheduler_predicates | default(none) is none # Upgrade priorities @@ -120,12 +120,12 @@ - block: - debug: msg: "WARNING: openshift_master_scheduler_priorities is set to defaults from an earlier release of OpenShift current defaults are: {{ openshift_master_scheduler_default_priorities }}" - when: "{{ openshift_master_scheduler_priorities in older_priorities + older_priorities_no_zone + [prev_priorities] + [prev_priorities_no_zone] }}" + when: openshift_master_scheduler_priorities in older_priorities + older_priorities_no_zone + [prev_priorities] + [prev_priorities_no_zone] - debug: msg: "WARNING: openshift_master_scheduler_priorities does not match current defaults of: {{ openshift_master_scheduler_default_priorities }}" - when: "{{ openshift_master_scheduler_priorities != openshift_master_scheduler_default_priorities }}" - when: "{{ openshift_master_scheduler_priorities | default(none) is not none }}" + when: openshift_master_scheduler_priorities != openshift_master_scheduler_default_priorities + when: openshift_master_scheduler_priorities | default(none) is not none # Handle cases where openshift_master_priorities is not defined - block: @@ -144,7 +144,7 @@ when: "{{ openshift_master_scheduler_current_priorities != default_priorities_no_zone and openshift_master_scheduler_current_priorities in older_priorities_no_zone + [prev_priorities_no_zone] }}" - when: "{{ openshift_master_scheduler_priorities | default(none) is none }}" + when: openshift_master_scheduler_priorities | default(none) is none # Update scheduler diff --git a/playbooks/common/openshift-cluster/upgrades/v3_3/master_config_upgrade.yml b/playbooks/common/openshift-cluster/upgrades/v3_3/master_config_upgrade.yml index 68c71a132..d69472fad 100644 --- 
a/playbooks/common/openshift-cluster/upgrades/v3_3/master_config_upgrade.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_3/master_config_upgrade.yml @@ -53,7 +53,7 @@ dest: "{{ openshift.common.config_base}}/master/master-config.yaml" yaml_key: 'admissionConfig.pluginConfig' yaml_value: "{{ openshift.master.admission_plugin_config }}" - when: "{{ 'admission_plugin_config' in openshift.master }}" + when: "'admission_plugin_config' in openshift.master" - modify_yaml: dest: "{{ openshift.common.config_base}}/master/master-config.yaml" diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_3/roles b/playbooks/common/openshift-cluster/upgrades/v3_3/roles index 6bc1a7aef..6bc1a7aef 120000 --- a/playbooks/byo/openshift-cluster/upgrades/v3_3/roles +++ b/playbooks/common/openshift-cluster/upgrades/v3_3/roles diff --git a/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade.yml b/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade.yml new file mode 100644 index 000000000..be18c1edd --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade.yml @@ -0,0 +1,107 @@ +--- +# +# Full Control Plane + Nodes Upgrade +# +- include: ../init.yml + tags: + - pre_upgrade + +- name: Configure the upgrade target for the common upgrade tasks + hosts: oo_all_hosts + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_upgrade_target: "{{ '1.3' if deployment_type == 'origin' else '3.3' }}" + openshift_upgrade_min: "{{ '1.2' if deployment_type == 'origin' else '3.2' }}" + +# Pre-upgrade + +- include: ../initialize_nodes_to_upgrade.yml + tags: + - pre_upgrade + +- name: Update repos and initialize facts on all hosts + hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config + tags: + - pre_upgrade + roles: + - openshift_repos + +- name: Set openshift_no_proxy_internal_hostnames + hosts: oo_masters_to_config:oo_nodes_to_upgrade + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] + | union(groups['oo_masters_to_config']) + | union(groups['oo_etcd_to_config'] | default([]))) + | oo_collect('openshift.common.hostname') | default([]) | join (',') + }}" + when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and + openshift_generate_no_proxy_hosts | default(True) | bool }}" + +- include: ../pre/verify_inventory_vars.yml + tags: + - pre_upgrade + +- include: ../disable_excluder.yml + tags: + - pre_upgrade + +- include: ../../initialize_openshift_version.yml + tags: + - pre_upgrade + vars: + # Request specific openshift_release and let the openshift_version role handle converting this + # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if + # defined, and overriding the normal behavior of protecting the installed version + openshift_release: "{{ openshift_upgrade_target }}" + openshift_protect_installed_version: False + + # We skip the docker role at this point in upgrade to prevent + # unintended package, container, or config upgrades which trigger + # docker restarts. At this early stage of upgrade we can assume + # docker is configured and running. 
+ skip_docker_role: True + +- include: ../pre/verify_control_plane_running.yml + tags: + - pre_upgrade + +- include: ../../../openshift-master/validate_restart.yml + tags: + - pre_upgrade + +- include: ../pre/verify_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/verify_docker_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/gate_checks.yml + tags: + - pre_upgrade + +# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. + +# Separate step so we can execute in parallel and clear out anything unused +# before we get into the serialized upgrade process which will then remove +# remaining images if possible. +- name: Cleanup unused Docker images + hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config + tasks: + - include: ../cleanup_unused_images.yml + +- include: ../upgrade_control_plane.yml + vars: + master_config_hook: "v3_3/master_config_upgrade.yml" + +- include: ../upgrade_nodes.yml + vars: + node_config_hook: "v3_3/node_config_upgrade.yml" + +- include: ../post_control_plane.yml diff --git a/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_control_plane.yml b/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_control_plane.yml new file mode 100644 index 000000000..20dffb44b --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_control_plane.yml @@ -0,0 +1,111 @@ +--- +# +# Control Plane Upgrade Playbook +# +# Upgrades masters and Docker (only on standalone etcd hosts) +# +# This upgrade does not include: +# - node service running on masters +# - docker running on masters +# - node service running on dedicated nodes +# +# You can run the upgrade_nodes.yml playbook after this to upgrade these components separately. +# +- include: ../init.yml + tags: + - pre_upgrade + +- name: Configure the upgrade target for the common upgrade tasks + hosts: oo_all_hosts + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_upgrade_target: "{{ '1.3' if deployment_type == 'origin' else '3.3' }}" + openshift_upgrade_min: "{{ '1.2' if deployment_type == 'origin' else '3.2' }}" + +# Pre-upgrade +- include: ../initialize_nodes_to_upgrade.yml + tags: + - pre_upgrade + +- name: Update repos on control plane hosts + hosts: oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config + tags: + - pre_upgrade + roles: + - openshift_repos + +- name: Set openshift_no_proxy_internal_hostnames + hosts: oo_masters_to_config:oo_nodes_to_upgrade + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] + | union(groups['oo_masters_to_config']) + | union(groups['oo_etcd_to_config'] | default([]))) + | oo_collect('openshift.common.hostname') | default([]) | join (',') + }}" + when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and + openshift_generate_no_proxy_hosts | default(True) | bool }}" + +- include: ../pre/verify_inventory_vars.yml + tags: + - pre_upgrade + +- include: ../disable_excluder.yml + tags: + - pre_upgrade + +- include: ../../initialize_openshift_version.yml + tags: + - pre_upgrade + vars: + # Request specific openshift_release and let the openshift_version role handle converting this + # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if + # defined, and overriding the normal behavior of protecting the installed version + openshift_release: "{{ openshift_upgrade_target }}" + openshift_protect_installed_version: False + + # We skip the docker role at this 
point in upgrade to prevent + # unintended package, container, or config upgrades which trigger + # docker restarts. At this early stage of upgrade we can assume + # docker is configured and running. + skip_docker_role: True + +- include: ../pre/verify_control_plane_running.yml + tags: + - pre_upgrade + +- include: ../../../openshift-master/validate_restart.yml + tags: + - pre_upgrade + +- include: ../pre/verify_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/verify_docker_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/gate_checks.yml + tags: + - pre_upgrade + +# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. + +# Separate step so we can execute in parallel and clear out anything unused +# before we get into the serialized upgrade process which will then remove +# remaining images if possible. +- name: Cleanup unused Docker images + hosts: oo_masters_to_config:oo_etcd_to_config + tasks: + - include: ../cleanup_unused_images.yml + +- include: ../upgrade_control_plane.yml + vars: + master_config_hook: "v3_3/master_config_upgrade.yml" + +- include: ../post_control_plane.yml diff --git a/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_nodes.yml b/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_nodes.yml new file mode 100644 index 000000000..14aaf70d6 --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_nodes.yml @@ -0,0 +1,106 @@ +--- +# +# Node Upgrade Playbook +# +# Upgrades nodes only, but requires the control plane to have already been upgraded. +# +- include: ../init.yml + tags: + - pre_upgrade + +- name: Configure the upgrade target for the common upgrade tasks + hosts: oo_all_hosts + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_upgrade_target: "{{ '1.3' if deployment_type == 'origin' else '3.3' }}" + openshift_upgrade_min: "{{ '1.2' if deployment_type == 'origin' else '3.2' }}" + +# Pre-upgrade +- include: ../initialize_nodes_to_upgrade.yml + tags: + - pre_upgrade + +- name: Update repos on nodes + hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config + roles: + - openshift_repos + tags: + - pre_upgrade + +- name: Set openshift_no_proxy_internal_hostnames + hosts: oo_masters_to_config:oo_nodes_to_upgrade + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_upgrade'] + | union(groups['oo_masters_to_config']) + | union(groups['oo_etcd_to_config'] | default([]))) + | oo_collect('openshift.common.hostname') | default([]) | join (',') + }}" + when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and + openshift_generate_no_proxy_hosts | default(True) | bool }}" + +- include: ../pre/verify_inventory_vars.yml + tags: + - pre_upgrade + +- include: ../disable_excluder.yml + tags: + - pre_upgrade + +- include: ../../initialize_openshift_version.yml + tags: + - pre_upgrade + vars: + # Request specific openshift_release and let the openshift_version role handle converting this + # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if + # defined, and overriding the normal behavior of protecting the installed version + openshift_release: "{{ openshift_upgrade_target }}" + openshift_protect_installed_version: False + + # We skip the docker role at this point in upgrade to prevent + # unintended package, container, or config upgrades which trigger + # docker restarts. 
At this early stage of upgrade we can assume + # docker is configured and running. + skip_docker_role: True + +- name: Verify masters are already upgraded + hosts: oo_masters_to_config + tags: + - pre_upgrade + tasks: + - fail: msg="Master running {{ openshift.common.version }} must be upgraded to {{ openshift_version }} before node upgrade can be run." + when: openshift.common.version != openshift_version + +- include: ../pre/verify_control_plane_running.yml + tags: + - pre_upgrade + +- include: ../pre/verify_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/verify_docker_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/gate_checks.yml + tags: + - pre_upgrade + +# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. + +# Separate step so we can execute in parallel and clear out anything unused +# before we get into the serialized upgrade process which will then remove +# remaining images if possible. +- name: Cleanup unused Docker images + hosts: oo_nodes_to_upgrade + tasks: + - include: ../cleanup_unused_images.yml + +- include: ../upgrade_nodes.yml + vars: + node_config_hook: "v3_3/node_config_upgrade.yml" diff --git a/playbooks/common/openshift-cluster/upgrades/v3_4/master_config_upgrade.yml b/playbooks/common/openshift-cluster/upgrades/v3_4/master_config_upgrade.yml index 43c2ffcd4..ed89dbe8d 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_4/master_config_upgrade.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_4/master_config_upgrade.yml @@ -3,7 +3,7 @@ dest: "{{ openshift.common.config_base}}/master/master-config.yaml" yaml_key: 'admissionConfig.pluginConfig' yaml_value: "{{ openshift.master.admission_plugin_config }}" - when: "{{ 'admission_plugin_config' in openshift.master }}" + when: "'admission_plugin_config' in openshift.master" - modify_yaml: dest: "{{ openshift.common.config_base}}/master/master-config.yaml" diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_4/roles b/playbooks/common/openshift-cluster/upgrades/v3_4/roles index 6bc1a7aef..6bc1a7aef 120000 --- a/playbooks/byo/openshift-cluster/upgrades/v3_4/roles +++ b/playbooks/common/openshift-cluster/upgrades/v3_4/roles diff --git a/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade.yml b/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade.yml new file mode 100644 index 000000000..5d6455bef --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade.yml @@ -0,0 +1,105 @@ +--- +# +# Full Control Plane + Nodes Upgrade +# +- include: ../init.yml + tags: + - pre_upgrade + +- name: Configure the upgrade target for the common upgrade tasks + hosts: oo_all_hosts + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_upgrade_target: "{{ '1.4' if deployment_type == 'origin' else '3.4' }}" + openshift_upgrade_min: "{{ '1.3' if deployment_type == 'origin' else '3.3' }}" + +# Pre-upgrade + +- include: ../initialize_nodes_to_upgrade.yml + tags: + - pre_upgrade + +- name: Update repos and initialize facts on all hosts + hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config + tags: + - pre_upgrade + roles: + - openshift_repos + +- name: Set openshift_no_proxy_internal_hostnames + hosts: oo_masters_to_config:oo_nodes_to_upgrade + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] + | union(groups['oo_masters_to_config']) + | union(groups['oo_etcd_to_config'] | default([]))) + | 
oo_collect('openshift.common.hostname') | default([]) | join (',') + }}" + when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and + openshift_generate_no_proxy_hosts | default(True) | bool }}" + +- include: ../pre/verify_inventory_vars.yml + tags: + - pre_upgrade + +- include: ../disable_excluder.yml + tags: + - pre_upgrade + +- include: ../../initialize_openshift_version.yml + tags: + - pre_upgrade + vars: + # Request specific openshift_release and let the openshift_version role handle converting this + # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if + # defined, and overriding the normal behavior of protecting the installed version + openshift_release: "{{ openshift_upgrade_target }}" + openshift_protect_installed_version: False + + # We skip the docker role at this point in upgrade to prevent + # unintended package, container, or config upgrades which trigger + # docker restarts. At this early stage of upgrade we can assume + # docker is configured and running. + skip_docker_role: True + +- include: ../pre/verify_control_plane_running.yml + tags: + - pre_upgrade + +- include: ../../../openshift-master/validate_restart.yml + tags: + - pre_upgrade + +- include: ../pre/verify_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/verify_docker_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/gate_checks.yml + tags: + - pre_upgrade + +# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. + +# Separate step so we can execute in parallel and clear out anything unused +# before we get into the serialized upgrade process which will then remove +# remaining images if possible. +- name: Cleanup unused Docker images + hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config + tasks: + - include: ../cleanup_unused_images.yml + +- include: ../upgrade_control_plane.yml + vars: + master_config_hook: "v3_4/master_config_upgrade.yml" + +- include: ../upgrade_nodes.yml + +- include: ../post_control_plane.yml diff --git a/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_control_plane.yml b/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_control_plane.yml new file mode 100644 index 000000000..c76920586 --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_control_plane.yml @@ -0,0 +1,111 @@ +--- +# +# Control Plane Upgrade Playbook +# +# Upgrades masters and Docker (only on standalone etcd hosts) +# +# This upgrade does not include: +# - node service running on masters +# - docker running on masters +# - node service running on dedicated nodes +# +# You can run the upgrade_nodes.yml playbook after this to upgrade these components separately. 
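Note how the v3_3 files and the v3_4 upgrade.yml above pass a version-specific `master_config_hook` into the shared control-plane playbook. The hook is presumably consumed along these lines inside `../upgrade_control_plane.yml` (a sketch under that assumption, not a quote of the real file):

```
# Assumed hook consumption: the shared master-upgrade play includes the
# version-specific tasks file only when the caller defined one.
- name: Upgrade master (illustrative excerpt)
  hosts: oo_masters_to_config
  serial: 1
  tasks:
  - include: "{{ master_config_hook }}"
    when: master_config_hook is defined
```

This keeps each per-release playbook down to its version facts plus the hooks that actually differ.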
+# +- include: ../init.yml + tags: + - pre_upgrade + +- name: Configure the upgrade target for the common upgrade tasks + hosts: oo_all_hosts + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_upgrade_target: "{{ '1.4' if deployment_type == 'origin' else '3.4' }}" + openshift_upgrade_min: "{{ '1.3' if deployment_type == 'origin' else '3.3' }}" + +# Pre-upgrade +- include: ../initialize_nodes_to_upgrade.yml + tags: + - pre_upgrade + +- name: Update repos on control plane hosts + hosts: oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config + tags: + - pre_upgrade + roles: + - openshift_repos + +- name: Set openshift_no_proxy_internal_hostnames + hosts: oo_masters_to_config:oo_nodes_to_upgrade + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] + | union(groups['oo_masters_to_config']) + | union(groups['oo_etcd_to_config'] | default([]))) + | oo_collect('openshift.common.hostname') | default([]) | join (',') + }}" + when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and + openshift_generate_no_proxy_hosts | default(True) | bool }}" + +- include: ../pre/verify_inventory_vars.yml + tags: + - pre_upgrade + +- include: ../disable_excluder.yml + tags: + - pre_upgrade + +- include: ../../initialize_openshift_version.yml + tags: + - pre_upgrade + vars: + # Request specific openshift_release and let the openshift_version role handle converting this + # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if + # defined, and overriding the normal behavior of protecting the installed version + openshift_release: "{{ openshift_upgrade_target }}" + openshift_protect_installed_version: False + + # We skip the docker role at this point in upgrade to prevent + # unintended package, container, or config upgrades which trigger + # docker restarts. At this early stage of upgrade we can assume + # docker is configured and running. + skip_docker_role: True + +- include: ../pre/verify_control_plane_running.yml + tags: + - pre_upgrade + +- include: ../../../openshift-master/validate_restart.yml + tags: + - pre_upgrade + +- include: ../pre/verify_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/verify_docker_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/gate_checks.yml + tags: + - pre_upgrade + +# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. + +# Separate step so we can execute in parallel and clear out anything unused +# before we get into the serialized upgrade process which will then remove +# remaining images if possible. +- name: Cleanup unused Docker images + hosts: oo_masters_to_config:oo_etcd_to_config + tasks: + - include: ../cleanup_unused_images.yml + +- include: ../upgrade_control_plane.yml + vars: + master_config_hook: "v3_4/master_config_upgrade.yml" + +- include: ../post_control_plane.yml diff --git a/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_nodes.yml b/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_nodes.yml new file mode 100644 index 000000000..f397f6015 --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_nodes.yml @@ -0,0 +1,104 @@ +--- +# +# Node Upgrade Playbook +# +# Upgrades nodes only, but requires the control plane to have already been upgraded. 
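As the header says, node upgrades assume an already-upgraded control plane, and the "Verify masters are already upgraded" play further down enforces that. Per node, the shared `../upgrade_nodes.yml` amounts to a serialized drain/upgrade/readmit cycle; the skeleton below is purely illustrative (the commands and the `serial` value are assumptions, not taken from this patch):

```
# Illustrative rolling skeleton: evacuate one node at a time so capacity
# loss stays bounded, upgrade it, then let it schedule pods again.
- name: Upgrade nodes serially (sketch)
  hosts: oo_nodes_to_upgrade
  serial: 1
  tasks:
  - name: Drain the node via the first master (hypothetical invocation)
    command: oc adm drain {{ openshift.node.nodename }} --force --delete-local-data
    delegate_to: "{{ groups.oo_first_master.0 }}"

  # ...node package/image upgrade tasks would run here...

  - name: Mark the node schedulable again (hypothetical invocation)
    command: oc adm manage-node {{ openshift.node.nodename }} --schedulable=true
    delegate_to: "{{ groups.oo_first_master.0 }}"
```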
+# +- include: ../init.yml + tags: + - pre_upgrade + +- name: Configure the upgrade target for the common upgrade tasks + hosts: oo_all_hosts + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_upgrade_target: "{{ '1.4' if deployment_type == 'origin' else '3.4' }}" + openshift_upgrade_min: "{{ '1.3' if deployment_type == 'origin' else '3.3' }}" + +# Pre-upgrade +- include: ../initialize_nodes_to_upgrade.yml + tags: + - pre_upgrade + +- name: Update repos on nodes + hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config + roles: + - openshift_repos + tags: + - pre_upgrade + +- name: Set openshift_no_proxy_internal_hostnames + hosts: oo_masters_to_config:oo_nodes_to_upgrade + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_upgrade'] + | union(groups['oo_masters_to_config']) + | union(groups['oo_etcd_to_config'] | default([]))) + | oo_collect('openshift.common.hostname') | default([]) | join (',') + }}" + when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and + openshift_generate_no_proxy_hosts | default(True) | bool }}" + +- include: ../pre/verify_inventory_vars.yml + tags: + - pre_upgrade + +- include: ../disable_excluder.yml + tags: + - pre_upgrade + +- include: ../../initialize_openshift_version.yml + tags: + - pre_upgrade + vars: + # Request specific openshift_release and let the openshift_version role handle converting this + # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if + # defined, and overriding the normal behavior of protecting the installed version + openshift_release: "{{ openshift_upgrade_target }}" + openshift_protect_installed_version: False + + # We skip the docker role at this point in upgrade to prevent + # unintended package, container, or config upgrades which trigger + # docker restarts. At this early stage of upgrade we can assume + # docker is configured and running. + skip_docker_role: True + +- name: Verify masters are already upgraded + hosts: oo_masters_to_config + tags: + - pre_upgrade + tasks: + - fail: msg="Master running {{ openshift.common.version }} must be upgraded to {{ openshift_version }} before node upgrade can be run." + when: openshift.common.version != openshift_version + +- include: ../pre/verify_control_plane_running.yml + tags: + - pre_upgrade + +- include: ../pre/verify_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/verify_docker_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/gate_checks.yml + tags: + - pre_upgrade + +# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. + +# Separate step so we can execute in parallel and clear out anything unused +# before we get into the serialized upgrade process which will then remove +# remaining images if possible. 
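The cleanup include referenced below is shared by every upgrade variant in this patch, and its contents are not part of this diff. Conceptually it needs little more than one tolerant shell task, for example (a stand-in, not the real `../cleanup_unused_images.yml`):

```
# Hypothetical stand-in for the cleanup include: delete dangling images,
# tolerating failures from layers still held by running containers.
- name: Remove dangling Docker images
  shell: docker images -q -f dangling=true | xargs --no-run-if-empty docker rmi
  failed_when: false
  changed_when: false
```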
+- name: Cleanup unused Docker images + hosts: oo_nodes_to_upgrade + tasks: + - include: ../cleanup_unused_images.yml + +- include: ../upgrade_nodes.yml diff --git a/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade.yml b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade.yml new file mode 100644 index 000000000..7cedfb1ca --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade.yml @@ -0,0 +1,111 @@ +--- +# +# Full Control Plane + Nodes Upgrade +# +- include: ../init.yml + tags: + - pre_upgrade + +- name: Configure the upgrade target for the common upgrade tasks + hosts: oo_all_hosts + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_upgrade_target: "{{ '1.5' if deployment_type == 'origin' else '3.5' }}" + openshift_upgrade_min: "{{ '1.4' if deployment_type == 'origin' else '3.4' }}" + +# Pre-upgrade + +- include: ../initialize_nodes_to_upgrade.yml + tags: + - pre_upgrade + +- name: Update repos and initialize facts on all hosts + hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config + tags: + - pre_upgrade + roles: + - openshift_repos + +- name: Set openshift_no_proxy_internal_hostnames + hosts: oo_masters_to_config:oo_nodes_to_upgrade + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] + | union(groups['oo_masters_to_config']) + | union(groups['oo_etcd_to_config'] | default([]))) + | oo_collect('openshift.common.hostname') | default([]) | join (',') + }}" + when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and + openshift_generate_no_proxy_hosts | default(True) | bool }}" + +- include: ../pre/verify_inventory_vars.yml + tags: + - pre_upgrade + +- include: ../disable_excluder.yml + tags: + - pre_upgrade + +# Note: During upgrade the openshift excluder is not unexcluded inside the initialize_openshift_version.yml play. +# So it is necessary to run the play after running disable_excluder.yml. +- include: ../../initialize_openshift_version.yml + tags: + - pre_upgrade + vars: + # Request specific openshift_release and let the openshift_version role handle converting this + # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if + # defined, and overriding the normal behavior of protecting the installed version + openshift_release: "{{ openshift_upgrade_target }}" + openshift_protect_installed_version: False + + # We skip the docker role at this point in upgrade to prevent + # unintended package, container, or config upgrades which trigger + # docker restarts. At this early stage of upgrade we can assume + # docker is configured and running. + skip_docker_role: True + +- include: ../pre/verify_control_plane_running.yml + tags: + - pre_upgrade + +- include: ../../../openshift-master/validate_restart.yml + tags: + - pre_upgrade + +- include: ../pre/verify_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/verify_docker_upgrade_targets.yml + tags: + - pre_upgrade + +- include: validator.yml + tags: + - pre_upgrade + +- include: ../pre/gate_checks.yml + tags: + - pre_upgrade + +# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. + +# Separate step so we can execute in parallel and clear out anything unused +# before we get into the serialized upgrade process which will then remove +# remaining images if possible. 
+- name: Cleanup unused Docker images + hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config + tasks: + - include: ../cleanup_unused_images.yml + +- include: ../upgrade_control_plane.yml + +- include: ../upgrade_nodes.yml + +- include: ../post_control_plane.yml + +- include: storage_upgrade.yml diff --git a/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml new file mode 100644 index 000000000..0198074ed --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml @@ -0,0 +1,115 @@ +--- +# +# Control Plane Upgrade Playbook +# +# Upgrades masters and Docker (only on standalone etcd hosts) +# +# This upgrade does not include: +# - node service running on masters +# - docker running on masters +# - node service running on dedicated nodes +# +# You can run the upgrade_nodes.yml playbook after this to upgrade these components separately. +# +- include: ../init.yml + tags: + - pre_upgrade + +- name: Configure the upgrade target for the common upgrade tasks + hosts: oo_all_hosts + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_upgrade_target: "{{ '1.5' if deployment_type == 'origin' else '3.5' }}" + openshift_upgrade_min: "{{ '1.4' if deployment_type == 'origin' else '3.4' }}" + +# Pre-upgrade +- include: ../initialize_nodes_to_upgrade.yml + tags: + - pre_upgrade + +- name: Update repos on control plane hosts + hosts: oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config + tags: + - pre_upgrade + roles: + - openshift_repos + +- name: Set openshift_no_proxy_internal_hostnames + hosts: oo_masters_to_config:oo_nodes_to_upgrade + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] + | union(groups['oo_masters_to_config']) + | union(groups['oo_etcd_to_config'] | default([]))) + | oo_collect('openshift.common.hostname') | default([]) | join (',') + }}" + when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and + openshift_generate_no_proxy_hosts | default(True) | bool }}" + +- include: ../pre/verify_inventory_vars.yml + tags: + - pre_upgrade + +- include: ../disable_excluder.yml + tags: + - pre_upgrade + +- include: ../../initialize_openshift_version.yml + tags: + - pre_upgrade + vars: + # Request specific openshift_release and let the openshift_version role handle converting this + # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if + # defined, and overriding the normal behavior of protecting the installed version + openshift_release: "{{ openshift_upgrade_target }}" + openshift_protect_installed_version: False + + # We skip the docker role at this point in upgrade to prevent + # unintended package, container, or config upgrades which trigger + # docker restarts. At this early stage of upgrade we can assume + # docker is configured and running. + skip_docker_role: True + +- include: ../pre/verify_control_plane_running.yml + tags: + - pre_upgrade + +- include: ../../../openshift-master/validate_restart.yml + tags: + - pre_upgrade + +- include: ../pre/verify_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/verify_docker_upgrade_targets.yml + tags: + - pre_upgrade + +- include: validator.yml + tags: + - pre_upgrade + +- include: ../pre/gate_checks.yml + tags: + - pre_upgrade + +# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. 
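One recurring expression is worth unpacking: `openshift_no_proxy_internal_hostnames`, set identically in each of these playbooks, chains two of this repo's custom filters. Trimmed to a single group for readability, the pipeline looks like this:

```
# oo_select_keys(hosts)                    -> hostvars entries for those hosts
# oo_collect('openshift.common.hostname')  -> that nested fact from each entry
# join(',')                                -> one comma-separated no_proxy value
- set_fact:
    openshift_no_proxy_internal_hostnames: "{{ hostvars
                                           | oo_select_keys(groups['oo_masters_to_config'])
                                           | oo_collect('openshift.common.hostname')
                                           | join(',') }}"
```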
+ +# Separate step so we can execute in parallel and clear out anything unused +# before we get into the serialized upgrade process which will then remove +# remaining images if possible. +- name: Cleanup unused Docker images + hosts: oo_masters_to_config:oo_etcd_to_config + tasks: + - include: ../cleanup_unused_images.yml + +- include: ../upgrade_control_plane.yml + +- include: ../post_control_plane.yml + +- include: storage_upgrade.yml diff --git a/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_nodes.yml b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_nodes.yml new file mode 100644 index 000000000..2b16875f4 --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_nodes.yml @@ -0,0 +1,104 @@ +--- +# +# Node Upgrade Playbook +# +# Upgrades nodes only, but requires the control plane to have already been upgraded. +# +- include: ../init.yml + tags: + - pre_upgrade + +- name: Configure the upgrade target for the common upgrade tasks + hosts: oo_all_hosts + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_upgrade_target: "{{ '1.5' if deployment_type == 'origin' else '3.5' }}" + openshift_upgrade_min: "{{ '1.4' if deployment_type == 'origin' else '3.4' }}" + +# Pre-upgrade +- include: ../initialize_nodes_to_upgrade.yml + tags: + - pre_upgrade + +- name: Update repos on nodes + hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config + roles: + - openshift_repos + tags: + - pre_upgrade + +- name: Set openshift_no_proxy_internal_hostnames + hosts: oo_masters_to_config:oo_nodes_to_upgrade + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_upgrade'] + | union(groups['oo_masters_to_config']) + | union(groups['oo_etcd_to_config'] | default([]))) + | oo_collect('openshift.common.hostname') | default([]) | join (',') + }}" + when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and + openshift_generate_no_proxy_hosts | default(True) | bool }}" + +- include: ../pre/verify_inventory_vars.yml + tags: + - pre_upgrade + +- include: ../disable_excluder.yml + tags: + - pre_upgrade + +- include: ../../initialize_openshift_version.yml + tags: + - pre_upgrade + vars: + # Request specific openshift_release and let the openshift_version role handle converting this + # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if + # defined, and overriding the normal behavior of protecting the installed version + openshift_release: "{{ openshift_upgrade_target }}" + openshift_protect_installed_version: False + + # We skip the docker role at this point in upgrade to prevent + # unintended package, container, or config upgrades which trigger + # docker restarts. At this early stage of upgrade we can assume + # docker is configured and running. + skip_docker_role: True + +- name: Verify masters are already upgraded + hosts: oo_masters_to_config + tags: + - pre_upgrade + tasks: + - fail: msg="Master running {{ openshift.common.version }} must be upgraded to {{ openshift_version }} before node upgrade can be run." 
+ when: openshift.common.version != openshift_version + +- include: ../pre/verify_control_plane_running.yml + tags: + - pre_upgrade + +- include: ../pre/verify_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/verify_docker_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/gate_checks.yml + tags: + - pre_upgrade + +# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. + +# Separate step so we can execute in parallel and clear out anything unused +# before we get into the serialized upgrade process which will then remove +# remaining images if possible. +- name: Cleanup unused Docker images + hosts: oo_nodes_to_upgrade + tasks: + - include: ../cleanup_unused_images.yml + +- include: ../upgrade_nodes.yml diff --git a/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade.yml b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade.yml new file mode 100644 index 000000000..4604bdc8b --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade.yml @@ -0,0 +1,111 @@ +--- +# +# Full Control Plane + Nodes Upgrade +# +- include: ../init.yml + tags: + - pre_upgrade + +- name: Configure the upgrade target for the common upgrade tasks + hosts: oo_all_hosts + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_upgrade_target: '3.6' + openshift_upgrade_min: "{{ '1.5' if deployment_type == 'origin' else '3.5' }}" + +# Pre-upgrade + +- include: ../initialize_nodes_to_upgrade.yml + tags: + - pre_upgrade + +- name: Update repos and initialize facts on all hosts + hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config + tags: + - pre_upgrade + roles: + - openshift_repos + +- name: Set openshift_no_proxy_internal_hostnames + hosts: oo_masters_to_config:oo_nodes_to_upgrade + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] + | union(groups['oo_masters_to_config']) + | union(groups['oo_etcd_to_config'] | default([]))) + | oo_collect('openshift.common.hostname') | default([]) | join (',') + }}" + when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and + openshift_generate_no_proxy_hosts | default(True) | bool }}" + +- include: ../pre/verify_inventory_vars.yml + tags: + - pre_upgrade + +- include: ../disable_excluder.yml + tags: + - pre_upgrade + +# Note: During upgrade the openshift excluder is not unexcluded inside the initialize_openshift_version.yml play. +# So it is necessary to run the play after running disable_excluder.yml. +- include: ../../initialize_openshift_version.yml + tags: + - pre_upgrade + vars: + # Request specific openshift_release and let the openshift_version role handle converting this + # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if + # defined, and overriding the normal behavior of protecting the installed version + openshift_release: "{{ openshift_upgrade_target }}" + openshift_protect_installed_version: False + + # We skip the docker role at this point in upgrade to prevent + # unintended package, container, or config upgrades which trigger + # docker restarts. At this early stage of upgrade we can assume + # docker is configured and running.
+ skip_docker_role: True + +- include: ../pre/verify_control_plane_running.yml + tags: + - pre_upgrade + +- include: ../../../openshift-master/validate_restart.yml + tags: + - pre_upgrade + +- include: ../pre/verify_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/verify_docker_upgrade_targets.yml + tags: + - pre_upgrade + +- include: validator.yml + tags: + - pre_upgrade + +- include: ../pre/gate_checks.yml + tags: + - pre_upgrade + +# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. + +# Separate step so we can execute in parallel and clear out anything unused +# before we get into the serialized upgrade process which will then remove +# remaining images if possible. +- name: Cleanup unused Docker images + hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config + tasks: + - include: ../cleanup_unused_images.yml + +- include: ../upgrade_control_plane.yml + +- include: ../upgrade_nodes.yml + +- include: ../post_control_plane.yml + +- include: storage_upgrade.yml diff --git a/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml new file mode 100644 index 000000000..a09097ed9 --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml @@ -0,0 +1,115 @@ +--- +# +# Control Plane Upgrade Playbook +# +# Upgrades masters and Docker (only on standalone etcd hosts) +# +# This upgrade does not include: +# - node service running on masters +# - docker running on masters +# - node service running on dedicated nodes +# +# You can run the upgrade_nodes.yml playbook after this to upgrade these components separately. +# +- include: ../init.yml + tags: + - pre_upgrade + +- name: Configure the upgrade target for the common upgrade tasks + hosts: oo_all_hosts + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_upgrade_target: '3.6' + openshift_upgrade_min: "{{ '1.5' if deployment_type == 'origin' else '3.5' }}" + +# Pre-upgrade +- include: ../initialize_nodes_to_upgrade.yml + tags: + - pre_upgrade + +- name: Update repos on control plane hosts + hosts: oo_masters_to_config:oo_etcd_to_config:oo_lb_to_config + tags: + - pre_upgrade + roles: + - openshift_repos + +- name: Set openshift_no_proxy_internal_hostnames + hosts: oo_masters_to_config:oo_nodes_to_upgrade + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_config'] + | union(groups['oo_masters_to_config']) + | union(groups['oo_etcd_to_config'] | default([]))) + | oo_collect('openshift.common.hostname') | default([]) | join (',') + }}" + when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and + openshift_generate_no_proxy_hosts | default(True) | bool }}" + +- include: ../pre/verify_inventory_vars.yml + tags: + - pre_upgrade + +- include: ../disable_excluder.yml + tags: + - pre_upgrade + +- include: ../../initialize_openshift_version.yml + tags: + - pre_upgrade + vars: + # Request specific openshift_release and let the openshift_version role handle converting this + # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if + # defined, and overriding the normal behavior of protecting the installed version + openshift_release: "{{ openshift_upgrade_target }}" + openshift_protect_installed_version: False + + # We skip the docker role at this point in upgrade to prevent + # unintended package, container, or config upgrades which 
trigger + # docker restarts. At this early stage of upgrade we can assume + # docker is configured and running. + skip_docker_role: True + +- include: ../pre/verify_control_plane_running.yml + tags: + - pre_upgrade + +- include: ../../../openshift-master/validate_restart.yml + tags: + - pre_upgrade + +- include: ../pre/verify_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/verify_docker_upgrade_targets.yml + tags: + - pre_upgrade + +- include: validator.yml + tags: + - pre_upgrade + +- include: ../pre/gate_checks.yml + tags: + - pre_upgrade + +# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. + +# Separate step so we can execute in parallel and clear out anything unused +# before we get into the serialized upgrade process which will then remove +# remaining images if possible. +- name: Cleanup unused Docker images + hosts: oo_masters_to_config:oo_etcd_to_config + tasks: + - include: ../cleanup_unused_images.yml + +- include: ../upgrade_control_plane.yml + +- include: ../post_control_plane.yml + +- include: storage_upgrade.yml diff --git a/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml new file mode 100644 index 000000000..7640f2116 --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml @@ -0,0 +1,104 @@ +--- +# +# Node Upgrade Playbook +# +# Upgrades nodes only, but requires the control plane to have already been upgraded. +# +- include: ../init.yml + tags: + - pre_upgrade + +- name: Configure the upgrade target for the common upgrade tasks + hosts: oo_all_hosts + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_upgrade_target: '3.6' + openshift_upgrade_min: "{{ '1.5' if deployment_type == 'origin' else '3.5' }}" + +# Pre-upgrade +- include: ../initialize_nodes_to_upgrade.yml + tags: + - pre_upgrade + +- name: Update repos on nodes + hosts: oo_masters_to_config:oo_nodes_to_upgrade:oo_etcd_to_config:oo_lb_to_config + roles: + - openshift_repos + tags: + - pre_upgrade + +- name: Set openshift_no_proxy_internal_hostnames + hosts: oo_masters_to_config:oo_nodes_to_upgrade + tags: + - pre_upgrade + tasks: + - set_fact: + openshift_no_proxy_internal_hostnames: "{{ hostvars | oo_select_keys(groups['oo_nodes_to_upgrade'] + | union(groups['oo_masters_to_config']) + | union(groups['oo_etcd_to_config'] | default([]))) + | oo_collect('openshift.common.hostname') | default([]) | join (',') + }}" + when: "{{ (openshift_http_proxy is defined or openshift_https_proxy is defined) and + openshift_generate_no_proxy_hosts | default(True) | bool }}" + +- include: ../pre/verify_inventory_vars.yml + tags: + - pre_upgrade + +- include: ../disable_excluder.yml + tags: + - pre_upgrade + +- include: ../../initialize_openshift_version.yml + tags: + - pre_upgrade + vars: + # Request specific openshift_release and let the openshift_version role handle converting this + # to a more specific version, respecting openshift_image_tag and openshift_pkg_version if + # defined, and overriding the normal behavior of protecting the installed version + openshift_release: "{{ openshift_upgrade_target }}" + openshift_protect_installed_version: False + + # We skip the docker role at this point in upgrade to prevent + # unintended package, container, or config upgrades which trigger + # docker restarts. At this early stage of upgrade we can assume + # docker is configured and running. 
+ skip_docker_role: True + +- name: Verify masters are already upgraded + hosts: oo_masters_to_config + tags: + - pre_upgrade + tasks: + - fail: msg="Master running {{ openshift.common.version }} must be upgraded to {{ openshift_version }} before node upgrade can be run." + when: openshift.common.version != openshift_version + +- include: ../pre/verify_control_plane_running.yml + tags: + - pre_upgrade + +- include: ../pre/verify_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/verify_docker_upgrade_targets.yml + tags: + - pre_upgrade + +- include: ../pre/gate_checks.yml + tags: + - pre_upgrade + +# Pre-upgrade completed, nothing after this should be tagged pre_upgrade. + +# Separate step so we can execute in parallel and clear out anything unused +# before we get into the serialized upgrade process which will then remove +# remaining images if possible. +- name: Cleanup unused Docker images + hosts: oo_nodes_to_upgrade + tasks: + - include: ../cleanup_unused_images.yml + +- include: ../upgrade_nodes.yml diff --git a/playbooks/common/openshift-master/scaleup.yml b/playbooks/common/openshift-master/scaleup.yml index 92f16dc47..ab0045a39 100644 --- a/playbooks/common/openshift-master/scaleup.yml +++ b/playbooks/common/openshift-master/scaleup.yml @@ -51,7 +51,7 @@ changed_when: false - name: Configure docker hosts - hosts: oo_masters_to-config:oo_nodes_to_config + hosts: oo_masters_to_config:oo_nodes_to_config vars: docker_additional_registries: "{{ lookup('oo_option', 'docker_additional_registries') | oo_split }}" docker_insecure_registries: "{{ lookup('oo_option', 'docker_insecure_registries') | oo_split }}" diff --git a/playbooks/common/openshift-node/network_manager.yml b/playbooks/common/openshift-node/network_manager.yml index be050c12c..0014a5dbd 100644 --- a/playbooks/common/openshift-node/network_manager.yml +++ b/playbooks/common/openshift-node/network_manager.yml @@ -1,6 +1,6 @@ --- - name: Install and configure NetworkManager - hosts: l_oo_all_hosts + hosts: oo_all_hosts become: yes tasks: - name: install NetworkManager diff --git a/playbooks/libvirt/openshift-cluster/tasks/launch_instances.yml b/playbooks/libvirt/openshift-cluster/tasks/launch_instances.yml index 78581fdfe..ccd29be29 100644 --- a/playbooks/libvirt/openshift-cluster/tasks/launch_instances.yml +++ b/playbooks/libvirt/openshift-cluster/tasks/launch_instances.yml @@ -14,7 +14,7 @@ url: '{{ image_url }}' sha256sum: '{{ image_sha256 }}' dest: '{{ libvirt_storage_pool_path }}/{{ [image_name, image_compression] | difference([""]) | join(".") }}' - when: '{{ ( lookup("oo_option", "skip_image_download") | default("no", True) | lower ) in ["false", "no"] }}' + when: ( lookup("oo_option", "skip_image_download") | default("no", True) | lower ) in ["false", "no"] register: downloaded_image - name: Uncompress xz compressed base cloud image diff --git a/requirements.txt b/requirements.txt index d00de5ed4..1996a967d 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,6 +1,7 @@ # Versions are pinned to prevent pypi releases arbitrarily breaking # tests with new APIs/semantics. We want to update versions deliberately. 
ansible==2.2.2.0 +boto==2.45.0 click==6.7 pyOpenSSL==16.2.0 # We need to disable ruamel.yaml for now because of test failures diff --git a/roles/calico/handlers/main.yml b/roles/calico/handlers/main.yml index 65d75cf00..53cecfcc3 100644 --- a/roles/calico/handlers/main.yml +++ b/roles/calico/handlers/main.yml @@ -5,4 +5,6 @@ - name: restart docker become: yes - systemd: name=docker state=restarted + systemd: + name: "{{ openshift.docker.service_name }}" + state: restarted diff --git a/roles/contiv/tasks/netplugin.yml b/roles/contiv/tasks/netplugin.yml index 97b9762df..0847c92bc 100644 --- a/roles/contiv/tasks/netplugin.yml +++ b/roles/contiv/tasks/netplugin.yml @@ -105,7 +105,7 @@ - name: Docker | Restart docker service: - name: docker + name: "{{ openshift.docker.service_name }}" state: restarted when: docker_updated|changed diff --git a/roles/docker/README.md b/roles/docker/README.md index ea06fd41a..f25ca03cd 100644 --- a/roles/docker/README.md +++ b/roles/docker/README.md @@ -1,7 +1,7 @@ Docker ========= -Ensures docker package is installed, and optionally raises timeout for systemd-udevd.service to 5 minutes. +Ensures docker package or system container is installed, and optionally raises timeout for systemd-udevd.service to 5 minutes. Requirements ------------ @@ -11,8 +11,10 @@ Ansible 2.2 Role Variables -------------- -udevw_udevd_dir: location of systemd config for systemd-udevd.service +docker_conf_dir: location of the Docker configuration directory +docker_systemd_dir: location of the systemd directory for Docker docker_udev_workaround: raises udevd timeout to 5 minutes (https://bugzilla.redhat.com/show_bug.cgi?id=1272446) +udevw_udevd_dir: location of systemd config for systemd-udevd.service Dependencies ------------ @@ -26,6 +28,7 @@ Example Playbook roles: - role: docker docker_udev_workaround: "true" + docker_use_system_container: False License ------- diff --git a/roles/docker/handlers/main.yml b/roles/docker/handlers/main.yml index 9ccb306fc..7f91afb37 100644 --- a/roles/docker/handlers/main.yml +++ b/roles/docker/handlers/main.yml @@ -2,7 +2,7 @@ - name: restart docker systemd: - name: docker + name: "{{ openshift.docker.service_name }}" state: restarted when: not docker_service_status_changed | default(false) | bool diff --git a/roles/docker/meta/main.yml b/roles/docker/meta/main.yml index ad28cece9..cd4083572 100644 --- a/roles/docker/meta/main.yml +++ b/roles/docker/meta/main.yml @@ -11,3 +11,4 @@ galaxy_info: - 7 dependencies: - role: os_firewall +- role: lib_openshift diff --git a/roles/docker/tasks/main.yml b/roles/docker/tasks/main.yml index c34700aeb..0c2b16acf 100644 --- a/roles/docker/tasks/main.yml +++ b/roles/docker/tasks/main.yml @@ -1,119 +1,17 @@ --- -- name: Get current installed Docker version - command: "{{ repoquery_cmd }} --installed --qf '%{version}' docker" - when: not openshift.common.is_atomic | bool - register: curr_docker_version - changed_when: false - -- name: Error out if Docker pre-installed but too old - fail: - msg: "Docker {{ curr_docker_version.stdout }} is installed, but >= 1.9.1 is required." - when: not curr_docker_version | skipped and curr_docker_version.stdout != '' and curr_docker_version.stdout | version_compare('1.9.1', '<') and not docker_version is defined - -- name: Error out if requested Docker is too old - fail: - msg: "Docker {{ docker_version }} requested, but >= 1.9.1 is required."
- when: docker_version is defined and docker_version | version_compare('1.9.1', '<') - -# If a docker_version was requested, sanity check that we can install or upgrade to it, and -# no downgrade is required. -- name: Fail if Docker version requested but downgrade is required - fail: - msg: "Docker {{ curr_docker_version.stdout }} is installed, but version {{ docker_version }} was requested." - when: not curr_docker_version | skipped and curr_docker_version.stdout != '' and docker_version is defined and curr_docker_version.stdout | version_compare(docker_version, '>') - -# This involves an extremely slow migration process, users should instead run the -# Docker 1.10 upgrade playbook to accomplish this. -- name: Error out if attempting to upgrade Docker across the 1.10 boundary - fail: - msg: "Cannot upgrade Docker to >= 1.10, please upgrade or remove Docker manually, or use the Docker upgrade playbook if OpenShift is already installed." - when: not curr_docker_version | skipped and curr_docker_version.stdout != '' and curr_docker_version.stdout | version_compare('1.10', '<') and docker_version is defined and docker_version | version_compare('1.10', '>=') - -# Make sure Docker is installed, but does not update a running version. -# Docker upgrades are handled by a separate playbook. -- name: Install Docker - package: name=docker{{ '-' + docker_version if docker_version is defined else '' }} state=present - when: not openshift.common.is_atomic | bool - -- block: - # Extend the default Docker service unit file when using iptables-services - - name: Ensure docker.service.d directory exists - file: - path: "{{ docker_systemd_dir }}" - state: directory - - - name: Configure Docker service unit file - template: - dest: "{{ docker_systemd_dir }}/custom.conf" - src: custom.conf.j2 - when: not os_firewall_use_firewalld | default(True) | bool +# These tasks dispatch to the proper set of docker tasks based on the +# inventory:openshift_docker_use_system_container variable - include: udev_workaround.yml when: docker_udev_workaround | default(False) | bool -- stat: path=/etc/sysconfig/docker - register: docker_check - -- name: Set registry params - lineinfile: - dest: /etc/sysconfig/docker - regexp: '^{{ item.reg_conf_var }}=.*$' - line: "{{ item.reg_conf_var }}='{{ item.reg_fact_val | oo_prepend_strings_in_list(item.reg_flag ~ ' ') | join(' ') }}'" - when: item.reg_fact_val != '' and docker_check.stat.isreg is defined and docker_check.stat.isreg - with_items: - - reg_conf_var: ADD_REGISTRY - reg_fact_val: "{{ docker_additional_registries | default(None, true)}}" - reg_flag: --add-registry - - reg_conf_var: BLOCK_REGISTRY - reg_fact_val: "{{ docker_blocked_registries| default(None, true) }}" - reg_flag: --block-registry - - reg_conf_var: INSECURE_REGISTRY - reg_fact_val: "{{ docker_insecure_registries| default(None, true) }}" - reg_flag: --insecure-registry - notify: - - restart docker - -- name: Set Proxy Settings - lineinfile: - dest: /etc/sysconfig/docker - regexp: '^{{ item.reg_conf_var }}=.*$' - line: "{{ item.reg_conf_var }}='{{ item.reg_fact_val }}'" - state: "{{ 'present' if item.reg_fact_val != '' else 'absent'}}" - with_items: - - reg_conf_var: HTTP_PROXY - reg_fact_val: "{{ docker_http_proxy | default('') }}" - - reg_conf_var: HTTPS_PROXY - reg_fact_val: "{{ docker_https_proxy | default('') }}" - - reg_conf_var: NO_PROXY - reg_fact_val: "{{ docker_no_proxy | default('') }}" - notify: - - restart docker - when: - - docker_check.stat.isreg is defined and docker_check.stat.isreg and 
'"http_proxy" in openshift.common or "https_proxy" in openshift.common' - -- name: Set various Docker options - lineinfile: - dest: /etc/sysconfig/docker - regexp: '^OPTIONS=.*$' - line: "OPTIONS='\ - {% if ansible_selinux.status | default(None) == '''enabled''' and docker_selinux_enabled | default(true) %} --selinux-enabled {% endif %}\ - {% if docker_log_driver is defined %} --log-driver {{ docker_log_driver }}{% endif %}\ - {% if docker_log_options is defined %} {{ docker_log_options | oo_split() | oo_prepend_strings_in_list('--log-opt ') | join(' ')}}{% endif %}\ - {% if docker_options is defined %} {{ docker_options }}{% endif %}\ - {% if docker_disable_push_dockerhub is defined %} --confirm-def-push={{ docker_disable_push_dockerhub | bool }}{% endif %}'" - when: docker_check.stat.isreg is defined and docker_check.stat.isreg - notify: - - restart docker - -- name: Start the Docker service - systemd: - name: docker - enabled: yes - state: started - daemon_reload: yes - register: start_result - - set_fact: - docker_service_status_changed: start_result | changed + l_use_system_container: "{{ openshift.docker.use_system_container | default(False) }}" + +- name: Use Package Docker if Requested + include: package_docker.yml + when: not l_use_system_container -- meta: flush_handlers +- name: Use System Container Docker if Requested + include: systemcontainer_docker.yml + when: l_use_system_container diff --git a/roles/docker/tasks/package_docker.yml b/roles/docker/tasks/package_docker.yml new file mode 100644 index 000000000..10fb5772c --- /dev/null +++ b/roles/docker/tasks/package_docker.yml @@ -0,0 +1,116 @@ +--- +- name: Get current installed Docker version + command: "{{ repoquery_cmd }} --installed --qf '%{version}' docker" + when: not openshift.common.is_atomic | bool + register: curr_docker_version + changed_when: false + +- name: Error out if Docker pre-installed but too old + fail: + msg: "Docker {{ curr_docker_version.stdout }} is installed, but >= 1.9.1 is required." + when: not curr_docker_version | skipped and curr_docker_version.stdout != '' and curr_docker_version.stdout | version_compare('1.9.1', '<') and not docker_version is defined + +- name: Error out if requested Docker is too old + fail: + msg: "Docker {{ docker_version }} requested, but >= 1.9.1 is required." + when: docker_version is defined and docker_version | version_compare('1.9.1', '<') + +# If a docker_version was requested, sanity check that we can install or upgrade to it, and +# no downgrade is required. +- name: Fail if Docker version requested but downgrade is required + fail: + msg: "Docker {{ curr_docker_version.stdout }} is installed, but version {{ docker_version }} was requested." + when: not curr_docker_version | skipped and curr_docker_version.stdout != '' and docker_version is defined and curr_docker_version.stdout | version_compare(docker_version, '>') + +# This involves an extremely slow migration process, users should instead run the +# Docker 1.10 upgrade playbook to accomplish this. +- name: Error out if attempting to upgrade Docker across the 1.10 boundary + fail: + msg: "Cannot upgrade Docker to >= 1.10, please upgrade or remove Docker manually, or use the Docker upgrade playbook if OpenShift is already installed." 
+ when: not curr_docker_version | skipped and curr_docker_version.stdout != '' and curr_docker_version.stdout | version_compare('1.10', '<') and docker_version is defined and docker_version | version_compare('1.10', '>=') + +# Make sure Docker is installed; this does not update a running version. +# Docker upgrades are handled by a separate playbook. +- name: Install Docker + package: name=docker{{ '-' + docker_version if docker_version is defined else '' }} state=present + when: not openshift.common.is_atomic | bool + +- block: + # Extend the default Docker service unit file when using iptables-services + - name: Ensure docker.service.d directory exists + file: + path: "{{ docker_systemd_dir }}" + state: directory + + - name: Configure Docker service unit file + template: + dest: "{{ docker_systemd_dir }}/custom.conf" + src: custom.conf.j2 + when: not os_firewall_use_firewalld | default(True) | bool + +- stat: path=/etc/sysconfig/docker + register: docker_check + +- name: Set registry params + lineinfile: + dest: /etc/sysconfig/docker + regexp: '^{{ item.reg_conf_var }}=.*$' + line: "{{ item.reg_conf_var }}='{{ item.reg_fact_val | oo_prepend_strings_in_list(item.reg_flag ~ ' ') | join(' ') }}'" + when: item.reg_fact_val != '' and docker_check.stat.isreg is defined and docker_check.stat.isreg + with_items: + - reg_conf_var: ADD_REGISTRY + reg_fact_val: "{{ docker_additional_registries | default(None, true)}}" + reg_flag: --add-registry + - reg_conf_var: BLOCK_REGISTRY + reg_fact_val: "{{ docker_blocked_registries| default(None, true) }}" + reg_flag: --block-registry + - reg_conf_var: INSECURE_REGISTRY + reg_fact_val: "{{ docker_insecure_registries| default(None, true) }}" + reg_flag: --insecure-registry + notify: + - restart docker + +- name: Set Proxy Settings + lineinfile: + dest: /etc/sysconfig/docker + regexp: '^{{ item.reg_conf_var }}=.*$' + line: "{{ item.reg_conf_var }}='{{ item.reg_fact_val }}'" + state: "{{ 'present' if item.reg_fact_val != '' else 'absent'}}" + with_items: + - reg_conf_var: HTTP_PROXY + reg_fact_val: "{{ docker_http_proxy | default('') }}" + - reg_conf_var: HTTPS_PROXY + reg_fact_val: "{{ docker_https_proxy | default('') }}" + - reg_conf_var: NO_PROXY + reg_fact_val: "{{ docker_no_proxy | default('') }}" + notify: + - restart docker + when: + - docker_check.stat.isreg is defined and docker_check.stat.isreg and '"http_proxy" in openshift.common or "https_proxy" in openshift.common' + +- name: Set various Docker options + lineinfile: + dest: /etc/sysconfig/docker + regexp: '^OPTIONS=.*$' + line: "OPTIONS='\ + {% if ansible_selinux.status | default(None) == '''enabled''' and docker_selinux_enabled | default(true) %} --selinux-enabled {% endif %}\ + {% if docker_log_driver is defined %} --log-driver {{ docker_log_driver }}{% endif %}\ + {% if docker_log_options is defined %} {{ docker_log_options | oo_split() | oo_prepend_strings_in_list('--log-opt ') | join(' ')}}{% endif %}\ + {% if docker_options is defined %} {{ docker_options }}{% endif %}\ + {% if docker_disable_push_dockerhub is defined %} --confirm-def-push={{ docker_disable_push_dockerhub | bool }}{% endif %}'" + when: docker_check.stat.isreg is defined and docker_check.stat.isreg + notify: + - restart docker + +- name: Start the Docker service + systemd: + name: docker + enabled: yes + state: started + daemon_reload: yes + register: start_result + +- set_fact: + docker_service_status_changed: "{{ start_result | changed }}" + +- meta: flush_handlers diff --git a/roles/docker/tasks/systemcontainer_docker.yml
b/roles/docker/tasks/systemcontainer_docker.yml new file mode 100644 index 000000000..b0d0632b0 --- /dev/null +++ b/roles/docker/tasks/systemcontainer_docker.yml @@ -0,0 +1,135 @@ +--- +# If docker_options are provided we should fail. We should not install docker and ignore +# the user's configuration. NOTE: docker_options == inventory:openshift_docker_options +- name: Fail quickly if openshift_docker_options are set + assert: + that: + - docker_options | default('') == '' + msg: | + Docker via System Container does not allow for the use of the openshift_docker_options + variable. If you want to use openshift_docker_options you will need to use the + traditional docker package install. Otherwise, comment out openshift_docker_options + in your inventory file. + +# Used to pull and install the system container +- name: Ensure atomic is installed + package: + name: atomic + state: present + when: not openshift.common.is_atomic | bool + +# At the time of writing the atomic command requires runc for its own use. This +# task is here in the event that the atomic package ever removes the dependency. +- name: Ensure runc is installed + package: + name: runc + state: present + when: not openshift.common.is_atomic | bool + +# If we are on atomic, set http_proxy and https_proxy in /etc/atomic.conf +- block: + + - name: Add http_proxy to /etc/atomic.conf + lineinfile: + path: /etc/atomic.conf + line: "http_proxy={{ openshift.common.http_proxy | default('') }}" + when: + - openshift.common.http_proxy is defined + - openshift.common.http_proxy != '' + + - name: Add https_proxy to /etc/atomic.conf + lineinfile: + path: /etc/atomic.conf + line: "https_proxy={{ openshift.common.https_proxy | default('') }}" + when: + - openshift.common.https_proxy is defined + - openshift.common.https_proxy != '' + + when: openshift.common.is_atomic | bool + + +- block: + + - name: Set to default prepend + set_fact: + l_docker_image_prepend: "gscrivano" + + - name: Use Red Hat Registry for image when distribution is Red Hat + set_fact: + l_docker_image_prepend: "registry.access.redhat.com/openshift3" + when: ansible_distribution == 'RedHat' + + - name: Use Fedora Registry for image when distribution is Fedora + set_fact: + l_docker_image_prepend: "registry.fedoraproject.org" + when: ansible_distribution == 'Fedora' + + # For https://github.com/openshift/openshift-ansible/pull/4049#discussion_r114478504 + - name: Use a testing registry if requested + set_fact: + l_docker_image_prepend: "{{ openshift_docker_systemcontainer_image_registry_override }}" + when: + - openshift_docker_systemcontainer_image_registry_override is defined + - openshift_docker_systemcontainer_image_registry_override != "" + + - name: Set the full image name + set_fact: + l_docker_image: "{{ l_docker_image_prepend }}/{{ openshift.docker.service_name }}:latest" + +- name: Pre-pull Container Engine System Container image + command: "atomic pull --storage ostree {{ l_docker_image }}" + changed_when: false + +# Make sure docker is disabled. Errors are ignored, as docker may not +# be installed.
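+# As a rough sketch, the tasks below are the declarative equivalent of running
+# the following hypothetical manual commands (shown for illustration only):
+#   systemctl disable docker --now
+#   atomic install --system --name container-engine <image>
+# with the oc_atomic_container module wrapping the atomic CLI for us.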
+- name: Disable Docker + systemd: + name: docker + enabled: no + state: stopped + daemon_reload: yes + ignore_errors: True + +- name: Ensure docker.service.d directory exists + file: + path: "{{ docker_systemd_dir }}" + state: directory + +- name: Ensure /etc/docker directory exists + file: + path: "{{ docker_conf_dir }}" + state: directory + +- name: Install Container Engine System Container + oc_atomic_container: + name: "{{ openshift.docker.service_name }}" + image: "{{ l_docker_image }}" + state: latest + values: + - "system-package=no" + +- name: Configure Container Engine Service File + template: + dest: "{{ docker_systemd_dir }}/custom.conf" + src: systemcontainercustom.conf.j2 + +# Configure container-engine using the daemon.json file +- name: Configure Container Engine + template: + dest: "{{ docker_conf_dir }}/daemon.json" + src: daemon.json + +# Enable and start the container-engine service +- name: Start the Container Engine service + systemd: + name: "{{ openshift.docker.service_name }}" + enabled: yes + state: started + daemon_reload: yes + register: start_result + +- set_fact: + docker_service_status_changed: "{{ start_result | changed }}" + +- meta: flush_handlers diff --git a/roles/docker/templates/daemon.json b/roles/docker/templates/daemon.json new file mode 100644 index 000000000..30a1b30f4 --- /dev/null +++ b/roles/docker/templates/daemon.json @@ -0,0 +1,66 @@ + +{ + "api-cors-header": "", + "authorization-plugins": ["rhel-push-plugin"], + "bip": "", + "bridge": "", + "cgroup-parent": "", + "cluster-store": "", + "cluster-store-opts": {}, + "cluster-advertise": "", + "debug": true, + "default-gateway": "", + "default-gateway-v6": "", + "default-runtime": "oci", + "containerd": "/var/run/containerd.sock", + "default-ulimits": {}, + "disable-legacy-registry": false, + "dns": [], + "dns-opts": [], + "dns-search": [], + "exec-opts": ["native.cgroupdriver=systemd"], + "exec-root": "", + "fixed-cidr": "", + "fixed-cidr-v6": "", + "graph": "", + "group": "", + "hosts": [], + "icc": false, + "insecure-registries": {{ docker_insecure_registries|default([]) }}, + "ip": "0.0.0.0", + "iptables": false, + "ipv6": false, + "ip-forward": false, + "ip-masq": false, + "labels": [], + "live-restore": true, +{% if docker_log_driver is defined %} + "log-driver": "{{ docker_log_driver }}", +{% endif %} + "log-level": "", + "log-opts": {{ docker_log_options|default({}) }}, + "max-concurrent-downloads": 3, + "max-concurrent-uploads": 5, + "mtu": 0, + "oom-score-adjust": -500, + "pidfile": "", + "raw-logs": false, + "registry-mirrors": [], + "runtimes": { + "oci": { + "path": "/usr/libexec/docker/docker-runc-current" + } + }, + "selinux-enabled": {{ docker_selinux_enabled|default(true) }}, + "storage-driver": "", + "storage-opts": [], + "tls": true, + "tlscacert": "", + "tlscert": "", + "tlskey": "", + "tlsverify": true, + "userns-remap": "", + "add-registry": {{ docker_additional_registries|default([]) }}, + "blocked-registries": {{ docker_blocked_registries|default([]) }}, + "userland-proxy-path": "/usr/libexec/docker/docker-proxy-current" } diff --git a/roles/docker/templates/systemcontainercustom.conf.j2 b/roles/docker/templates/systemcontainercustom.conf.j2 new file mode 100644 index 000000000..a4fb01d2b --- /dev/null +++ b/roles/docker/templates/systemcontainercustom.conf.j2 @@ -0,0 +1,17 @@ +# {{ ansible_managed }} + +[Service] +{%- if "http_proxy" in openshift.common %} +Environment=HTTP_PROXY={{ docker_http_proxy }} +{%- endif -%} +{%- if "https_proxy" in openshift.common %}
+Environment=HTTPS_PROXY={{ docker_https_proxy }} +{%- endif -%} +{%- if "no_proxy" in openshift.common %} +Environment=NO_PROXY={{ docker_no_proxy }} +{%- endif %} +{%- if os_firewall_use_firewalld|default(true) %} +[Unit] +Wants=iptables.service +After=iptables.service +{%- endif %} diff --git a/roles/docker/vars/main.yml b/roles/docker/vars/main.yml index 5237ed8f2..0082ded1e 100644 --- a/roles/docker/vars/main.yml +++ b/roles/docker/vars/main.yml @@ -1,3 +1,4 @@ --- -udevw_udevd_dir: /etc/systemd/system/systemd-udevd.service.d docker_systemd_dir: /etc/systemd/system/docker.service.d +docker_conf_dir: /etc/docker/ +udevw_udevd_dir: /etc/systemd/system/systemd-udevd.service.d diff --git a/roles/etcd/defaults/main.yaml b/roles/etcd/defaults/main.yaml index 29153f4df..e45f53219 100644 --- a/roles/etcd/defaults/main.yaml +++ b/roles/etcd/defaults/main.yaml @@ -13,5 +13,4 @@ etcd_listen_peer_urls: "{{ etcd_peer_url_scheme }}://{{ etcd_ip }}:{{ etcd_peer_ etcd_advertise_client_urls: "{{ etcd_url_scheme }}://{{ etcd_ip }}:{{ etcd_client_port }}" etcd_listen_client_urls: "{{ etcd_url_scheme }}://{{ etcd_ip }}:{{ etcd_client_port }}" -etcd_data_dir: /var/lib/etcd/ etcd_systemd_dir: "/etc/systemd/system/{{ etcd_service }}.service.d" diff --git a/roles/etcd/files/etcdctl.sh b/roles/etcd/files/etcdctl.sh deleted file mode 100644 index 0e324a8a9..000000000 --- a/roles/etcd/files/etcdctl.sh +++ /dev/null @@ -1,11 +0,0 @@ -#!/bin/bash -# Sets up handy aliases for etcd, need etcdctl2 and etcdctl3 because -# command flags are different between the two. Should work on stand -# alone etcd hosts and master + etcd hosts too because we use the peer keys. -etcdctl2() { - /usr/bin/etcdctl --cert-file /etc/etcd/peer.crt --key-file /etc/etcd/peer.key --ca-file /etc/etcd/ca.crt -C https://`hostname`:2379 ${@} -} - -etcdctl3() { - ETCDCTL_API=3 /usr/bin/etcdctl --cert /etc/etcd/peer.crt --key /etc/etcd/peer.key --cacert /etc/etcd/ca.crt --endpoints https://`hostname`:2379 ${@} -} diff --git a/roles/etcd/meta/main.yml b/roles/etcd/meta/main.yml index e0c70a181..689c07a84 100644 --- a/roles/etcd/meta/main.yml +++ b/roles/etcd/meta/main.yml @@ -24,3 +24,4 @@ dependencies: - service: etcd peering port: "{{ etcd_peer_port }}/tcp" - role: etcd_server_certificates +- role: etcd_common diff --git a/roles/etcd/tasks/main.yml b/roles/etcd/tasks/main.yml index c09da3b61..fa2f44609 100644 --- a/roles/etcd/tasks/main.yml +++ b/roles/etcd/tasks/main.yml @@ -10,51 +10,45 @@ package: name=etcd{{ '-' + etcd_version if etcd_version is defined else '' }} state=present when: not etcd_is_containerized | bool -- name: Pull etcd container - command: docker pull {{ openshift.etcd.etcd_image }} - register: pull_result - changed_when: "'Downloaded newer image' in pull_result.stdout" +- block: + - name: Pull etcd container + command: docker pull {{ openshift.etcd.etcd_image }} + register: pull_result + changed_when: "'Downloaded newer image' in pull_result.stdout" + + - name: Install etcd container service file + template: + dest: "/etc/systemd/system/etcd_container.service" + src: etcd.docker.service when: - etcd_is_containerized | bool - not openshift.common.is_etcd_system_container | bool -- name: Install etcd container service file - template: - dest: "/etc/systemd/system/etcd_container.service" - src: etcd.docker.service - when: - - etcd_is_containerized | bool - - not openshift.common.is_etcd_system_container | bool - - # Start secondary etcd instance for third party integrations # TODO: Determine an alternative to using thirdparty
variable - -- name: Create configuration directory - file: - path: "{{ etcd_conf_dir }}" - state: directory - mode: 0700 - when: etcd_is_thirdparty | bool +- block: + - name: Create configuration directory + file: + path: "{{ etcd_conf_dir }}" + state: directory + mode: 0700 # TODO: retest with symlink to confirm it does or does not function -- name: Copy service file for etcd instance - copy: - src: /usr/lib/systemd/system/etcd.service - dest: "/etc/systemd/system/{{ etcd_service }}.service" - remote_src: True - when: etcd_is_thirdparty | bool - -- name: Create third party etcd service.d directory exists - file: - path: "{{ etcd_systemd_dir }}" - state: directory - when: etcd_is_thirdparty | bool - -- name: Configure third part etcd service unit file - template: - dest: "{{ etcd_systemd_dir }}/custom.conf" - src: custom.conf.j2 + - name: Copy service file for etcd instance + copy: + src: /usr/lib/systemd/system/etcd.service + dest: "/etc/systemd/system/{{ etcd_service }}.service" + remote_src: True + + - name: Ensure third party etcd service.d directory exists + file: + path: "{{ etcd_systemd_dir }}" + state: directory + + - name: Configure third party etcd service unit file + template: + dest: "{{ etcd_systemd_dir }}/custom.conf" + src: custom.conf.j2 when: etcd_is_thirdparty # TODO: this task may not be needed with Validate permissions @@ -80,28 +74,28 @@ command: systemctl daemon-reload when: etcd_is_thirdparty | bool -- name: Disable system etcd when containerized - systemd: - name: etcd - state: stopped - enabled: no - masked: yes - daemon_reload: yes - when: - - etcd_is_containerized | bool - - not openshift.common.is_etcd_system_container | bool - register: task_result - failed_when: "task_result|failed and 'could not' not in task_result.msg|lower" - -- name: Install etcd container service file - template: - dest: "/etc/systemd/system/etcd_container.service" - src: etcd.docker.service - when: etcd_is_containerized | bool and not openshift.common.is_etcd_system_container | bool - -- name: Install Etcd system container - include: system_container.yml - when: etcd_is_containerized | bool and openshift.common.is_etcd_system_container | bool +- block: + - name: Disable system etcd when containerized + systemd: + name: etcd + state: stopped + enabled: no + masked: yes + daemon_reload: yes + when: not openshift.common.is_etcd_system_container | bool + register: task_result + failed_when: task_result|failed and 'could not' not in task_result.msg|lower + + - name: Install etcd container service file + template: + dest: "/etc/systemd/system/etcd_container.service" + src: etcd.docker.service + when: not openshift.common.is_etcd_system_container | bool + + - name: Install Etcd system container + include: system_container.yml + when: openshift.common.is_etcd_system_container | bool + when: etcd_is_containerized | bool - name: Validate permissions on the config dir file: @@ -126,7 +120,9 @@ enabled: yes register: start_result -- include: etcdctl.yml +- include_role: + name: etcd_common + tasks_from: etcdctl.yml when: openshift_etcd_etcdctl_profile | default(true) | bool - name: Set fact etcd_service_status_changed diff --git a/roles/etcd/templates/etcd.docker.service b/roles/etcd/templates/etcd.docker.service index ae059b549..c8ceaa6ba 100644 --- a/roles/etcd/templates/etcd.docker.service +++ b/roles/etcd/templates/etcd.docker.service @@ -5,9 +5,9 @@ Requires=docker.service PartOf=docker.service [Service] -EnvironmentFile=/etc/etcd/etcd.conf +EnvironmentFile={{ etcd_conf_file }}
ExecStartPre=-/usr/bin/docker rm -f {{ etcd_service }} -ExecStart=/usr/bin/docker run --name {{ etcd_service }} --rm -v /var/lib/etcd:/var/lib/etcd:z -v /etc/etcd:/etc/etcd:ro --env-file=/etc/etcd/etcd.conf --net=host --entrypoint=/usr/bin/etcd {{ openshift.etcd.etcd_image }} +ExecStart=/usr/bin/docker run --name {{ etcd_service }} --rm -v {{ etcd_data_dir }}:{{ etcd_data_dir }}:z -v {{ etcd_conf_dir }}:{{ etcd_conf_dir }}:ro --env-file={{ etcd_conf_file }} --net=host --entrypoint=/usr/bin/etcd {{ openshift.etcd.etcd_image }} ExecStop=/usr/bin/docker stop {{ etcd_service }} SyslogIdentifier=etcd_container Restart=always diff --git a/roles/etcd_common/defaults/main.yml b/roles/etcd_common/defaults/main.yml index c5efb0a0c..d12e6a07f 100644 --- a/roles/etcd_common/defaults/main.yml +++ b/roles/etcd_common/defaults/main.yml @@ -35,3 +35,6 @@ etcd_ip: "{{ ansible_default_ipv4.address }}" etcd_is_atomic: False etcd_is_containerized: False etcd_is_thirdparty: False + +# etcd dir vars +etcd_data_dir: /var/lib/etcd/ diff --git a/roles/etcd/tasks/etcdctl.yml b/roles/etcd_common/tasks/etcdctl.yml index 649ad23c1..6cb456677 100644 --- a/roles/etcd/tasks/etcdctl.yml +++ b/roles/etcd_common/tasks/etcdctl.yml @@ -4,9 +4,9 @@ when: not openshift.common.is_atomic | bool - name: Configure etcd profile.d alises - copy: - src: etcdctl.sh - dest: /etc/profile.d/etcdctl.sh + template: + dest: "/etc/profile.d/etcdctl.sh" + src: etcdctl.sh.j2 mode: 0755 owner: root group: root diff --git a/roles/etcd_common/templates/etcdctl.sh.j2 b/roles/etcd_common/templates/etcdctl.sh.j2 new file mode 100644 index 000000000..ac7d9c72f --- /dev/null +++ b/roles/etcd_common/templates/etcdctl.sh.j2 @@ -0,0 +1,12 @@ +#!/bin/bash +# Sets up handy aliases for etcd; we need etcdctl2 and etcdctl3 because +# command flags differ between the two. Should work on standalone +# etcd hosts and on master + etcd hosts too, because we use the peer keys.
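+# Example usage once this renders to /etc/profile.d/etcdctl.sh and is sourced
+# by a login shell (illustrative commands only; cluster-health is a v2 command,
+# endpoint status is v3):
+#   etcdctl2 cluster-health
+#   etcdctl3 endpoint status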
+etcdctl2() { + /usr/bin/etcdctl --cert-file {{ etcd_peer_cert_file }} --key-file {{ etcd_peer_key_file }} --ca-file {{ etcd_peer_ca_file }} -C https://`hostname`:2379 ${@} +} + +etcdctl3() { + ETCDCTL_API=3 /usr/bin/etcdctl --cert {{ etcd_peer_cert_file }} --key {{ etcd_peer_key_file }} --cacert {{ etcd_peer_ca_file }} --endpoints https://`hostname`:2379 ${@} +} diff --git a/roles/etcd_server_certificates/meta/main.yml b/roles/etcd_server_certificates/meta/main.yml index 98c913dba..b453f2bd8 100644 --- a/roles/etcd_server_certificates/meta/main.yml +++ b/roles/etcd_server_certificates/meta/main.yml @@ -13,4 +13,4 @@ galaxy_info: - cloud - system dependencies: -- role: openshift_etcd_ca +- role: etcd_ca diff --git a/roles/flannel/handlers/main.yml b/roles/flannel/handlers/main.yml index 94d1d18fb..c60c2115a 100644 --- a/roles/flannel/handlers/main.yml +++ b/roles/flannel/handlers/main.yml @@ -5,4 +5,6 @@ - name: restart docker become: yes - systemd: name=docker state=restarted + systemd: + name: "{{ openshift.docker.service_name }}" + state: restarted diff --git a/roles/lib_openshift/library/oc_adm_ca_server_cert.py b/roles/lib_openshift/library/oc_adm_ca_server_cert.py index 8a311cd0f..7039a0cec 100644 --- a/roles/lib_openshift/library/oc_adm_ca_server_cert.py +++ b/roles/lib_openshift/library/oc_adm_ca_server_cert.py @@ -1080,7 +1080,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_adm_manage_node.py b/roles/lib_openshift/library/oc_adm_manage_node.py index 0930faadb..ae5806137 100644 --- a/roles/lib_openshift/library/oc_adm_manage_node.py +++ b/roles/lib_openshift/library/oc_adm_manage_node.py @@ -1066,7 +1066,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_adm_policy_group.py b/roles/lib_openshift/library/oc_adm_policy_group.py index 6a7be65d0..36eb294a8 100644 --- a/roles/lib_openshift/library/oc_adm_policy_group.py +++ b/roles/lib_openshift/library/oc_adm_policy_group.py @@ -1052,7 +1052,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_adm_policy_user.py b/roles/lib_openshift/library/oc_adm_policy_user.py index 44923ecd2..bedd45922 100644 --- a/roles/lib_openshift/library/oc_adm_policy_user.py +++ b/roles/lib_openshift/library/oc_adm_policy_user.py @@ -1052,7 +1052,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def
openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_adm_registry.py b/roles/lib_openshift/library/oc_adm_registry.py index 0604f48bb..c6fa85f90 100644 --- a/roles/lib_openshift/library/oc_adm_registry.py +++ b/roles/lib_openshift/library/oc_adm_registry.py @@ -1170,7 +1170,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): @@ -2538,25 +2538,34 @@ class Registry(OpenShiftCLI): def run_ansible(params, check_mode): '''run idempotent ansible code''' + registry_options = {'images': {'value': params['images'], 'include': True}, + 'latest_images': {'value': params['latest_images'], 'include': True}, + 'labels': {'value': params['labels'], 'include': True}, + 'ports': {'value': ','.join(params['ports']), 'include': True}, + 'replicas': {'value': params['replicas'], 'include': True}, + 'selector': {'value': params['selector'], 'include': True}, + 'service_account': {'value': params['service_account'], 'include': True}, + 'mount_host': {'value': params['mount_host'], 'include': True}, + 'env_vars': {'value': params['env_vars'], 'include': False}, + 'volume_mounts': {'value': params['volume_mounts'], 'include': False}, + 'edits': {'value': params['edits'], 'include': False}, + 'tls_key': {'value': params['tls_key'], 'include': True}, + 'tls_certificate': {'value': params['tls_certificate'], 'include': True}, + } + + # Do not always pass the daemonset and enforce-quota parameters, because they are not understood + # by old versions of oc. + # Their default value is false, so it is safe not to pass an explicit false to oc versions that + # do understand these parameters.
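+ # For example, with a hypothetical params of daemonset=True and enforce_quota=False, only
+ # '--daemonset=True' is added to the oc invocation; '--enforce-quota' is omitted entirely,
+ # so older oc clients never see a flag they do not understand.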
+ if params['daemonset']: + registry_options['daemonset'] = {'value': params['daemonset'], 'include': True} + if params['enforce_quota']: + registry_options['enforce_quota'] = {'value': params['enforce_quota'], 'include': True} + rconfig = RegistryConfig(params['name'], params['namespace'], params['kubeconfig'], - {'images': {'value': params['images'], 'include': True}, - 'latest_images': {'value': params['latest_images'], 'include': True}, - 'labels': {'value': params['labels'], 'include': True}, - 'ports': {'value': ','.join(params['ports']), 'include': True}, - 'replicas': {'value': params['replicas'], 'include': True}, - 'selector': {'value': params['selector'], 'include': True}, - 'service_account': {'value': params['service_account'], 'include': True}, - 'mount_host': {'value': params['mount_host'], 'include': True}, - 'env_vars': {'value': params['env_vars'], 'include': False}, - 'volume_mounts': {'value': params['volume_mounts'], 'include': False}, - 'edits': {'value': params['edits'], 'include': False}, - 'enforce_quota': {'value': params['enforce_quota'], 'include': True}, - 'daemonset': {'value': params['daemonset'], 'include': True}, - 'tls_key': {'value': params['tls_key'], 'include': True}, - 'tls_certificate': {'value': params['tls_certificate'], 'include': True}, - }) + registry_options) ocregistry = Registry(rconfig, params['debug']) diff --git a/roles/lib_openshift/library/oc_adm_router.py b/roles/lib_openshift/library/oc_adm_router.py index bdcf94a58..8a4f93372 100644 --- a/roles/lib_openshift/library/oc_adm_router.py +++ b/roles/lib_openshift/library/oc_adm_router.py @@ -1195,7 +1195,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_clusterrole.py b/roles/lib_openshift/library/oc_clusterrole.py index af48ce636..d81c29784 100644 --- a/roles/lib_openshift/library/oc_clusterrole.py +++ b/roles/lib_openshift/library/oc_clusterrole.py @@ -1044,7 +1044,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_configmap.py b/roles/lib_openshift/library/oc_configmap.py index 385ed888b..bdcb3f278 100644 --- a/roles/lib_openshift/library/oc_configmap.py +++ b/roles/lib_openshift/library/oc_configmap.py @@ -1050,7 +1050,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_edit.py b/roles/lib_openshift/library/oc_edit.py index 649de547e..be1b3a01e 100644 --- a/roles/lib_openshift/library/oc_edit.py +++ b/roles/lib_openshift/library/oc_edit.py @@ -1094,7 +1094,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return 
proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_env.py b/roles/lib_openshift/library/oc_env.py index 74bf63353..4ac6e4aeb 100644 --- a/roles/lib_openshift/library/oc_env.py +++ b/roles/lib_openshift/library/oc_env.py @@ -1061,7 +1061,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_group.py b/roles/lib_openshift/library/oc_group.py index 2dd3d28ec..b6f058340 100644 --- a/roles/lib_openshift/library/oc_group.py +++ b/roles/lib_openshift/library/oc_group.py @@ -1034,7 +1034,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_image.py b/roles/lib_openshift/library/oc_image.py index bb7f97689..c094c9472 100644 --- a/roles/lib_openshift/library/oc_image.py +++ b/roles/lib_openshift/library/oc_image.py @@ -1053,7 +1053,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_label.py b/roles/lib_openshift/library/oc_label.py index ec9abcda7..a76dd44c4 100644 --- a/roles/lib_openshift/library/oc_label.py +++ b/roles/lib_openshift/library/oc_label.py @@ -1070,7 +1070,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_obj.py b/roles/lib_openshift/library/oc_obj.py index 3abd50a2e..e12137b51 100644 --- a/roles/lib_openshift/library/oc_obj.py +++ b/roles/lib_openshift/library/oc_obj.py @@ -1073,7 +1073,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_objectvalidator.py b/roles/lib_openshift/library/oc_objectvalidator.py index bc5245216..aeb4e5686 100644 --- a/roles/lib_openshift/library/oc_objectvalidator.py +++ b/roles/lib_openshift/library/oc_objectvalidator.py @@ -1005,7 +1005,7 @@ class OpenShiftCLI(object): 
stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_process.py b/roles/lib_openshift/library/oc_process.py index de5426c51..f7aa8c0d2 100644 --- a/roles/lib_openshift/library/oc_process.py +++ b/roles/lib_openshift/library/oc_process.py @@ -1062,7 +1062,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_project.py b/roles/lib_openshift/library/oc_project.py index 02cd810ce..b044a47ce 100644 --- a/roles/lib_openshift/library/oc_project.py +++ b/roles/lib_openshift/library/oc_project.py @@ -1059,7 +1059,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_pvc.py b/roles/lib_openshift/library/oc_pvc.py index a9103ebf6..8604cc2f3 100644 --- a/roles/lib_openshift/library/oc_pvc.py +++ b/roles/lib_openshift/library/oc_pvc.py @@ -1054,7 +1054,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_route.py b/roles/lib_openshift/library/oc_route.py index f005adffc..fef48daf0 100644 --- a/roles/lib_openshift/library/oc_route.py +++ b/roles/lib_openshift/library/oc_route.py @@ -1104,7 +1104,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_scale.py b/roles/lib_openshift/library/oc_scale.py index 9dcb38216..384df0ee3 100644 --- a/roles/lib_openshift/library/oc_scale.py +++ b/roles/lib_openshift/library/oc_scale.py @@ -1048,7 +1048,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_secret.py b/roles/lib_openshift/library/oc_secret.py index 2ac0abcec..443750c5d 100644 --- a/roles/lib_openshift/library/oc_secret.py +++ b/roles/lib_openshift/library/oc_secret.py @@ -1094,7 
+1094,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_service.py b/roles/lib_openshift/library/oc_service.py index 0af695e08..7537bdb5b 100644 --- a/roles/lib_openshift/library/oc_service.py +++ b/roles/lib_openshift/library/oc_service.py @@ -1100,7 +1100,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_serviceaccount.py b/roles/lib_openshift/library/oc_serviceaccount.py index ba8a1fdac..03a4dd3b9 100644 --- a/roles/lib_openshift/library/oc_serviceaccount.py +++ b/roles/lib_openshift/library/oc_serviceaccount.py @@ -1046,7 +1046,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_serviceaccount_secret.py b/roles/lib_openshift/library/oc_serviceaccount_secret.py index 5bff7621c..db1010694 100644 --- a/roles/lib_openshift/library/oc_serviceaccount_secret.py +++ b/roles/lib_openshift/library/oc_serviceaccount_secret.py @@ -1046,7 +1046,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_user.py b/roles/lib_openshift/library/oc_user.py index 450a30f57..c3885c1ac 100644 --- a/roles/lib_openshift/library/oc_user.py +++ b/roles/lib_openshift/library/oc_user.py @@ -1106,7 +1106,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_version.py b/roles/lib_openshift/library/oc_version.py index 0937df5a1..5c4596c09 100644 --- a/roles/lib_openshift/library/oc_version.py +++ b/roles/lib_openshift/library/oc_version.py @@ -1018,7 +1018,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/library/oc_volume.py 
b/roles/lib_openshift/library/oc_volume.py index d0e7e77e1..5a507348c 100644 --- a/roles/lib_openshift/library/oc_volume.py +++ b/roles/lib_openshift/library/oc_volume.py @@ -1083,7 +1083,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/src/class/oc_adm_registry.py b/roles/lib_openshift/src/class/oc_adm_registry.py index 720b44cdc..3c130fe28 100644 --- a/roles/lib_openshift/src/class/oc_adm_registry.py +++ b/roles/lib_openshift/src/class/oc_adm_registry.py @@ -331,25 +331,34 @@ class Registry(OpenShiftCLI): def run_ansible(params, check_mode): '''run idempotent ansible code''' + registry_options = {'images': {'value': params['images'], 'include': True}, + 'latest_images': {'value': params['latest_images'], 'include': True}, + 'labels': {'value': params['labels'], 'include': True}, + 'ports': {'value': ','.join(params['ports']), 'include': True}, + 'replicas': {'value': params['replicas'], 'include': True}, + 'selector': {'value': params['selector'], 'include': True}, + 'service_account': {'value': params['service_account'], 'include': True}, + 'mount_host': {'value': params['mount_host'], 'include': True}, + 'env_vars': {'value': params['env_vars'], 'include': False}, + 'volume_mounts': {'value': params['volume_mounts'], 'include': False}, + 'edits': {'value': params['edits'], 'include': False}, + 'tls_key': {'value': params['tls_key'], 'include': True}, + 'tls_certificate': {'value': params['tls_certificate'], 'include': True}, + } + + # Do not always pass the daemonset and enforce-quota parameters, because they are not understood + # by old versions of oc. + # Their default value is false, so it is safe not to pass an explicit false to oc versions that + # do understand these parameters.
+ if params['daemonset']: + registry_options['daemonset'] = {'value': params['daemonset'], 'include': True} + if params['enforce_quota']: + registry_options['enforce_quota'] = {'value': params['enforce_quota'], 'include': True} + rconfig = RegistryConfig(params['name'], params['namespace'], params['kubeconfig'], - {'images': {'value': params['images'], 'include': True}, - 'latest_images': {'value': params['latest_images'], 'include': True}, - 'labels': {'value': params['labels'], 'include': True}, - 'ports': {'value': ','.join(params['ports']), 'include': True}, - 'replicas': {'value': params['replicas'], 'include': True}, - 'selector': {'value': params['selector'], 'include': True}, - 'service_account': {'value': params['service_account'], 'include': True}, - 'mount_host': {'value': params['mount_host'], 'include': True}, - 'env_vars': {'value': params['env_vars'], 'include': False}, - 'volume_mounts': {'value': params['volume_mounts'], 'include': False}, - 'edits': {'value': params['edits'], 'include': False}, - 'enforce_quota': {'value': params['enforce_quota'], 'include': True}, - 'daemonset': {'value': params['daemonset'], 'include': True}, - 'tls_key': {'value': params['tls_key'], 'include': True}, - 'tls_certificate': {'value': params['tls_certificate'], 'include': True}, - }) + registry_options) ocregistry = Registry(rconfig, params['debug']) diff --git a/roles/lib_openshift/src/lib/base.py b/roles/lib_openshift/src/lib/base.py index fc1b6f1ec..2bf795e25 100644 --- a/roles/lib_openshift/src/lib/base.py +++ b/roles/lib_openshift/src/lib/base.py @@ -256,7 +256,7 @@ class OpenShiftCLI(object): stdout, stderr = proc.communicate(input_data) - return proc.returncode, stdout.decode(), stderr.decode() + return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8') # pylint: disable=too-many-arguments,too-many-branches def openshift_cmd(self, cmd, oadm=False, output=False, output_type='json', input_data=None): diff --git a/roles/lib_openshift/src/test/integration/oc_label.yml b/roles/lib_openshift/src/test/integration/oc_label.yml index b4e721407..22cf687c5 100755 --- a/roles/lib_openshift/src/test/integration/oc_label.yml +++ b/roles/lib_openshift/src/test/integration/oc_label.yml @@ -15,7 +15,7 @@ - name: ensure needed vars are defined fail: msg: "{{ item }} not defined" - when: "{{ item }} is not defined" + when: item is not defined with_items: - cli_master_test # ansible inventory instance to run playbook against diff --git a/roles/lib_openshift/src/test/integration/oc_user.yml b/roles/lib_openshift/src/test/integration/oc_user.yml index ad1f9d188..9b4290052 100755 --- a/roles/lib_openshift/src/test/integration/oc_user.yml +++ b/roles/lib_openshift/src/test/integration/oc_user.yml @@ -14,7 +14,7 @@ - name: ensure needed vars are defined fail: msg: "{{ item }} no defined" - when: "{{ item}} is not defined" + when: item is not defined with_items: - cli_master_test # ansible inventory instance to run playbook against diff --git a/roles/lib_openshift/src/test/unit/test_oc_adm_registry.py b/roles/lib_openshift/src/test/unit/test_oc_adm_registry.py index 30e13ce4b..97cf86170 100755 --- a/roles/lib_openshift/src/test/unit/test_oc_adm_registry.py +++ b/roles/lib_openshift/src/test/unit/test_oc_adm_registry.py @@ -254,7 +254,7 @@ class RegistryTest(unittest.TestCase): mock_cmd.assert_has_calls([ mock.call(['oc', 'get', 'dc', 'docker-registry', '-o', 'json', '-n', 'default'], None), mock.call(['oc', 'get', 'svc', 'docker-registry', '-o', 'json', '-n', 'default'], None), - 
mock.call(['oc', 'adm', 'registry', '--daemonset=False', '--enforce-quota=False', + mock.call(['oc', 'adm', 'registry', '--ports=5000', '--replicas=1', '--selector=type=infra', '--service-account=registry', '--dry-run=True', '-o', 'json', '-n', 'default'], None), mock.call(['oc', 'create', '-f', mock.ANY, '-n', 'default'], None), diff --git a/roles/openshift_ca/tasks/main.yml b/roles/openshift_ca/tasks/main.yml index 3b17d9ed6..c7b906949 100644 --- a/roles/openshift_ca/tasks/main.yml +++ b/roles/openshift_ca/tasks/main.yml @@ -95,7 +95,7 @@ {% for legacy_ca_certificate in g_master_legacy_ca_result.files | default([]) | oo_collect('path') %} --certificate-authority {{ legacy_ca_certificate }} {% endfor %} - --hostnames={{ openshift.common.all_hostnames | join(',') }} + --hostnames={{ hostvars[openshift_ca_host].openshift.common.all_hostnames | join(',') }} --master={{ openshift.master.api_url }} --public-master={{ openshift.master.public_api_url }} --cert-dir={{ openshift_ca_config_dir }} diff --git a/roles/openshift_certificate_expiry/filter_plugins/oo_cert_expiry.py b/roles/openshift_certificate_expiry/filter_plugins/oo_cert_expiry.py index 5f102e960..577a14b9a 100644 --- a/roles/openshift_certificate_expiry/filter_plugins/oo_cert_expiry.py +++ b/roles/openshift_certificate_expiry/filter_plugins/oo_cert_expiry.py @@ -35,7 +35,7 @@ Example playbook usage: become: no run_once: yes delegate_to: localhost - when: "{{ openshift_certificate_expiry_save_json_results|bool }}" + when: openshift_certificate_expiry_save_json_results|bool copy: content: "{{ hostvars|oo_cert_expiry_results_to_json() }}" dest: "{{ openshift_certificate_expiry_json_results_path }}" diff --git a/roles/openshift_certificate_expiry/library/openshift_cert_expiry.py b/roles/openshift_certificate_expiry/library/openshift_cert_expiry.py index c204b5341..0242f5b43 100644 --- a/roles/openshift_certificate_expiry/library/openshift_cert_expiry.py +++ b/roles/openshift_certificate_expiry/library/openshift_cert_expiry.py @@ -135,7 +135,7 @@ platforms missing the Python OpenSSL library. 
continue elif l.startswith('Subject:'): - # O=system:nodes, CN=system:node:m01.example.com + # O = system:nodes, CN = system:node:m01.example.com self.subject = FakeOpenSSLCertificateSubjects(l.partition(': ')[-1]) def get_serial_number(self): @@ -202,7 +202,7 @@ object""" """ self.subjects = [] for s in subject_string.split(', '): - name, _, value = s.partition('=') + name, _, value = s.partition(' = ') self.subjects.append((name, value)) def get_components(self): diff --git a/roles/openshift_certificate_expiry/tasks/main.yml b/roles/openshift_certificate_expiry/tasks/main.yml index 139d5de6e..b5234bd1e 100644 --- a/roles/openshift_certificate_expiry/tasks/main.yml +++ b/roles/openshift_certificate_expiry/tasks/main.yml @@ -13,12 +13,12 @@ src: cert-expiry-table.html.j2 dest: "{{ openshift_certificate_expiry_html_report_path }}" delegate_to: localhost - when: "{{ openshift_certificate_expiry_generate_html_report|bool }}" + when: openshift_certificate_expiry_generate_html_report|bool - name: Generate the result JSON string run_once: yes set_fact: json_result_string="{{ hostvars|oo_cert_expiry_results_to_json(play_hosts) }}" - when: "{{ openshift_certificate_expiry_save_json_results|bool }}" + when: openshift_certificate_expiry_save_json_results|bool - name: Generate results JSON file become: no @@ -27,4 +27,4 @@ src: save_json_results.j2 dest: "{{ openshift_certificate_expiry_json_results_path }}" delegate_to: localhost - when: "{{ openshift_certificate_expiry_save_json_results|bool }}" + when: openshift_certificate_expiry_save_json_results|bool diff --git a/roles/openshift_certificate_expiry/test/test_fakeopensslclasses.py b/roles/openshift_certificate_expiry/test/test_fakeopensslclasses.py index ccdd48fa8..8a521a765 100644 --- a/roles/openshift_certificate_expiry/test/test_fakeopensslclasses.py +++ b/roles/openshift_certificate_expiry/test/test_fakeopensslclasses.py @@ -17,7 +17,8 @@ from openshift_cert_expiry import FakeOpenSSLCertificate # noqa: E402 @pytest.fixture(scope='module') def fake_valid_cert(valid_cert): - cmd = ['openssl', 'x509', '-in', str(valid_cert['cert_file']), '-text'] + cmd = ['openssl', 'x509', '-in', str(valid_cert['cert_file']), '-text', + '-nameopt', 'oneline'] cert = subprocess.check_output(cmd) return FakeOpenSSLCertificate(cert.decode('utf8')) diff --git a/roles/openshift_cloud_provider/tasks/openstack.yml b/roles/openshift_cloud_provider/tasks/openstack.yml index f22dd4520..5788e6d74 100644 --- a/roles/openshift_cloud_provider/tasks/openstack.yml +++ b/roles/openshift_cloud_provider/tasks/openstack.yml @@ -7,4 +7,4 @@ template: dest: "{{ openshift.common.config_base }}/cloudprovider/openstack.conf" src: openstack.conf.j2 - when: "openshift_cloudprovider_openstack_auth_url is defined and openshift_cloudprovider_openstack_username is defined and openshift_cloudprovider_openstack_password is defined and (openshift_cloudprovider_openstack_tenant_id is defined or openshift_cloudprovider_openstack_tenant_name is defined)" + when: openshift_cloudprovider_openstack_auth_url is defined and openshift_cloudprovider_openstack_username is defined and openshift_cloudprovider_openstack_password is defined and (openshift_cloudprovider_openstack_tenant_id is defined or openshift_cloudprovider_openstack_tenant_name is defined) diff --git a/roles/openshift_docker_facts/tasks/main.yml b/roles/openshift_docker_facts/tasks/main.yml index 049ceffe0..350512452 100644 --- a/roles/openshift_docker_facts/tasks/main.yml +++ b/roles/openshift_docker_facts/tasks/main.yml @@ -16,6 +16,7 @@ 
diff --git a/roles/openshift_docker_facts/tasks/main.yml b/roles/openshift_docker_facts/tasks/main.yml
index 049ceffe0..350512452 100644
--- a/roles/openshift_docker_facts/tasks/main.yml
+++ b/roles/openshift_docker_facts/tasks/main.yml
@@ -16,6 +16,7 @@
       disable_push_dockerhub: "{{ openshift_disable_push_dockerhub | default(None) }}"
       hosted_registry_insecure: "{{ openshift_docker_hosted_registry_insecure | default(openshift.docker.hosted_registry_insecure | default(False)) }}"
       hosted_registry_network: "{{ openshift_docker_hosted_registry_network | default(None) }}"
+      use_system_container: "{{ openshift_docker_use_system_container | default(False) }}"

 - set_fact:
     docker_additional_registries: "{{ openshift.docker.additional_registries
diff --git a/roles/openshift_etcd_ca/tasks/main.yml b/roles/openshift_etcd_ca/tasks/main.yml
deleted file mode 100644
index ed97d539c..000000000
--- a/roles/openshift_etcd_ca/tasks/main.yml
+++ /dev/null
@@ -1 +0,0 @@
----
diff --git a/playbooks/common/openshift-cluster/upgrades/pre/validate_excluder.yml b/roles/openshift_excluder/tasks/verify_excluder.yml
index 6de1ed061..24a05d56e 100644
--- a/playbooks/common/openshift-cluster/upgrades/pre/validate_excluder.yml
+++ b/roles/openshift_excluder/tasks/verify_excluder.yml
@@ -11,7 +11,7 @@
       failed_when: false
       changed_when: false

-    - name: Docker excluder version detected
+    - name: "{{ excluder }} version detected"
       debug:
         msg: "{{ excluder }}: {{ excluder_version.stdout }}"
diff --git a/roles/openshift_excluder/tasks/verify_upgrade.yml b/roles/openshift_excluder/tasks/verify_upgrade.yml
new file mode 100644
index 000000000..6ea2130ac
--- /dev/null
+++ b/roles/openshift_excluder/tasks/verify_upgrade.yml
@@ -0,0 +1,15 @@
+---
+# input variables
+# - repoquery_cmd
+# - openshift_upgrade_target
+- include: init.yml
+
+- include: verify_excluder.yml
+  vars:
+    excluder: "{{ openshift.common.service_type }}-docker-excluder"
+  when: docker_excluder_on
+
+- include: verify_excluder.yml
+  vars:
+    excluder: "{{ openshift.common.service_type }}-excluder"
+  when: openshift_excluder_on
diff --git a/roles/openshift_expand_partition/tasks/main.yml b/roles/openshift_expand_partition/tasks/main.yml
index 00603f4fa..4cb5418c6 100644
--- a/roles/openshift_expand_partition/tasks/main.yml
+++ b/roles/openshift_expand_partition/tasks/main.yml
@@ -6,7 +6,7 @@
 - name: Determine if growpart is installed
   command: "rpm -q cloud-utils-growpart"
   register: has_growpart
-  failed_when: "has_growpart.cr != 0 and 'package cloud-utils-growpart is not installed' not in has_growpart.stdout"
+  failed_when: has_growpart.rc != 0 and 'package cloud-utils-growpart is not installed' not in has_growpart.stdout
   changed_when: false
  when: openshift.common.is_containerized | bool
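The new `verify_upgrade.yml` illustrates the include-with-vars pattern: one task file (`verify_excluder.yml`) is reused for both excluders by passing the `excluder` name in `vars`. A minimal standalone sketch of the same technique (file and service names here are illustrative, not from the repo):

```yaml
# check_service.yml -- reusable task file; expects `svc` to be passed in
- name: "{{ svc }} status detected"
  command: systemctl is-active {{ svc }}
  register: svc_state
  failed_when: false
  changed_when: false

# caller -- include the same file once per service
- include: check_service.yml
  vars:
    svc: origin-master-api

- include: check_service.yml
  vars:
    svc: origin-node
```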
diff --git a/roles/openshift_facts/library/openshift_facts.py b/roles/openshift_facts/library/openshift_facts.py
index ca0279426..5ea902e2b 100755
--- a/roles/openshift_facts/library/openshift_facts.py
+++ b/roles/openshift_facts/library/openshift_facts.py
@@ -1792,6 +1792,12 @@ def set_container_facts_if_unset(facts):
         deployer_image = 'openshift/origin-deployer'

     facts['common']['is_atomic'] = os.path.isfile('/run/ostree-booted')
+    # If openshift_docker_use_system_container is set and is True ....
+    if 'use_system_container' in list(facts['docker'].keys()):
+        if facts['docker']['use_system_container']:
+            # ... set the service name to container-engine
+            facts['docker']['service_name'] = 'container-engine'
+
     if 'is_containerized' not in facts['common']:
         facts['common']['is_containerized'] = facts['common']['is_atomic']
     if 'cli_image' not in facts['common']:
@@ -1911,14 +1917,16 @@ class OpenShiftFacts(object):
             )
         self.role = role

+        # Collect system facts and preface each fact with 'ansible_'.
         try:
-            # ansible-2.1
             # pylint: disable=too-many-function-args,invalid-name
             self.system_facts = ansible_facts(module, ['hardware', 'network', 'virtual', 'facter'])  # noqa: F405
+            additional_facts = {}
             for (k, v) in self.system_facts.items():
-                self.system_facts["ansible_%s" % k.replace('-', '_')] = v
+                additional_facts["ansible_%s" % k.replace('-', '_')] = v
+            self.system_facts.update(additional_facts)
         except UnboundLocalError:
-            # ansible-2.2
+            # ansible-2.2,2.3
             self.system_facts = get_all_facts(module)['ansible_facts']  # noqa: F405

         self.facts = self.generate_facts(local_facts,
@@ -2072,6 +2080,7 @@ class OpenShiftFacts(object):
             hosted_registry_insecure = get_hosted_registry_insecure()
             if hosted_registry_insecure is not None:
                 docker['hosted_registry_insecure'] = hosted_registry_insecure
+            docker['service_name'] = 'docker'
             defaults['docker'] = docker

         if 'clock' in roles:
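With `docker.service_name` now defaulting to `docker` and flipping to `container-engine` when system containers are in use, downstream tasks can target the runtime by fact instead of a hard-coded unit name. A sketch of such a consumer (this exact task is illustrative; the patch applies the same idea in the node restart and CA-trust handlers further down):

```yaml
- name: Restart the container runtime, whichever unit is in use
  systemd:
    name: "{{ openshift.docker.service_name | default('docker') }}"
    state: restarted
```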
diff --git a/roles/openshift_hosted_logging/tasks/deploy_logging.yaml b/roles/openshift_hosted_logging/tasks/deploy_logging.yaml
index afd82766f..78b624109 100644
--- a/roles/openshift_hosted_logging/tasks/deploy_logging.yaml
+++ b/roles/openshift_hosted_logging/tasks/deploy_logging.yaml
@@ -36,7 +36,7 @@
   command: >
     {{ openshift.common.client_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig secrets new logging-deployer {{ openshift_hosted_logging_secret_vars | default('nothing=/dev/null') }}
   register: secret_output
-  failed_when: "secret_output.rc == 1 and 'exists' not in secret_output.stderr"
+  failed_when: secret_output.rc == 1 and 'exists' not in secret_output.stderr

 - name: "Create templates for logging accounts and the deployer"
   command: >
@@ -60,21 +60,21 @@
     {{ openshift.common.client_binary }} adm --config={{ mktemp.stdout }}/admin.kubeconfig policy add-cluster-role-to-user oauth-editor system:serviceaccount:logging:logging-deployer
   register: permiss_output
-  failed_when: "permiss_output.rc == 1 and 'exists' not in permiss_output.stderr"
+  failed_when: permiss_output.rc == 1 and 'exists' not in permiss_output.stderr

 - name: "Set permissions for fluentd"
   command: >
     {{ openshift.common.client_binary }} adm --config={{ mktemp.stdout }}/admin.kubeconfig policy add-scc-to-user privileged system:serviceaccount:logging:aggregated-logging-fluentd
   register: fluentd_output
-  failed_when: "fluentd_output.rc == 1 and 'exists' not in fluentd_output.stderr"
+  failed_when: fluentd_output.rc == 1 and 'exists' not in fluentd_output.stderr

 - name: "Set additional permissions for fluentd"
   command: >
     {{ openshift.common.client_binary }} adm policy --config={{ mktemp.stdout }}/admin.kubeconfig add-cluster-role-to-user cluster-reader system:serviceaccount:logging:aggregated-logging-fluentd
   register: fluentd2_output
-  failed_when: "fluentd2_output.rc == 1 and 'exists' not in fluentd2_output.stderr"
+  failed_when: fluentd2_output.rc == 1 and 'exists' not in fluentd2_output.stderr

 - name: "Add rolebinding-reader to aggregated-logging-elasticsearch"
   command: >
@@ -82,13 +82,13 @@
     policy add-cluster-role-to-user rolebinding-reader \
     system:serviceaccount:logging:aggregated-logging-elasticsearch
   register: rolebinding_reader_output
-  failed_when: "rolebinding_reader_output == 1 and 'exists' not in rolebinding_reader_output.stderr"
+  failed_when: rolebinding_reader_output.rc == 1 and 'exists' not in rolebinding_reader_output.stderr

 - name: "Create ConfigMap for deployer parameters"
   command: >
     {{ openshift.common.client_binary}} --config={{ mktemp.stdout }}/admin.kubeconfig create configmap logging-deployer {{ deployer_cmap_params }}
   register: deployer_configmap_output
-  failed_when: "deployer_configmap_output.rc == 1 and 'exists' not in deployer_configmap_output.stderr"
+  failed_when: deployer_configmap_output.rc == 1 and 'exists' not in deployer_configmap_output.stderr

 - name: "Process the deployer template"
   shell: "{{ openshift.common.client_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig new-app logging-deployer-template {{ oc_new_app_values }}"
diff --git a/roles/openshift_hosted_metrics/tasks/install.yml b/roles/openshift_hosted_metrics/tasks/install.yml
index 6a442cefc..15dd1bd54 100644
--- a/roles/openshift_hosted_metrics/tasks/install.yml
+++ b/roles/openshift_hosted_metrics/tasks/install.yml
@@ -81,7 +81,7 @@
     secrets new metrics-deployer nothing=/dev/null
   register: metrics_deployer_secret
   changed_when: metrics_deployer_secret.rc == 0
-  failed_when: "metrics_deployer_secret.rc == 1 and 'already exists' not in metrics_deployer_secret.stderr"
+  failed_when: metrics_deployer_secret.rc == 1 and 'already exists' not in metrics_deployer_secret.stderr

 # TODO: extend this to allow user passed in certs or generating cert with
 # OpenShift CA
diff --git a/roles/openshift_logging/README.md b/roles/openshift_logging/README.md
index 42f4fc72e..cba0f2de8 100644
--- a/roles/openshift_logging/README.md
+++ b/roles/openshift_logging/README.md
@@ -91,8 +91,6 @@ same as above for their non-ops counterparts, but apply to the OPS cluster insta
 - `openshift_logging_es_ops_pvc_prefix`: logging-es-ops
 - `openshift_logging_es_ops_recover_after_time`: 5m
 - `openshift_logging_es_ops_storage_group`: 65534
-- `openshift_logging_es_ops_number_of_shards`: The number of primary shards for every new index created in ES. Defaults to '1'.
-- `openshift_logging_es_ops_number_of_replicas`: The number of replica shards per primary shard for every new index. Defaults to '0'.
 - `openshift_logging_kibana_ops_hostname`: The Operations Kibana hostname. Defaults to 'kibana-ops.example.com'.
 - `openshift_logging_kibana_ops_cpu_limit`: The amount of CPU to allocate to Kibana or unset if not specified.
 - `openshift_logging_kibana_ops_memory_limit`: The amount of memory to allocate to Kibana or unset if not specified.
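These `failed_when` tweaks all encode the same idempotency idiom: treat "already exists" as success so a re-run of the playbook does not fail on resources created by an earlier run. A minimal standalone sketch of the idiom (resource names are illustrative):

```yaml
- name: Create a secret, tolerating an existing one (illustrative)
  command: >
    oc create secret generic my-secret --from-literal=key=value
  register: create_out
  changed_when: create_out.rc == 0
  failed_when: create_out.rc != 0 and 'already exists' not in create_out.stderr
```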
diff --git a/roles/openshift_logging/defaults/main.yml b/roles/openshift_logging/defaults/main.yml
index 5ee8d1e2a..c05cc5f98 100644
--- a/roles/openshift_logging/defaults/main.yml
+++ b/roles/openshift_logging/defaults/main.yml
@@ -3,6 +3,10 @@ openshift_logging_use_ops: "{{ openshift_hosted_logging_enable_ops_cluster | def
 openshift_logging_master_url: "https://kubernetes.default.svc.{{ openshift.common.dns_domain }}"
 openshift_logging_master_public_url: "{{ openshift_hosted_logging_master_public_url | default('https://' + openshift.common.public_hostname + ':' ~ (openshift_master_api_port | default('8443', true))) }}"
 openshift_logging_namespace: logging
+openshift_logging_nodeselector: null
+openshift_logging_labels: {}
+openshift_logging_label_key: ""
+openshift_logging_label_value: ""

 openshift_logging_install_logging: True
 openshift_logging_image_pull_secret: "{{ openshift_hosted_logging_image_pull_secret | default('') }}"
@@ -113,8 +117,6 @@ openshift_logging_es_ops_pvc_prefix: "{{ openshift_hosted_logging_elasticsearch_
 openshift_logging_es_ops_recover_after_time: 5m
 openshift_logging_es_ops_storage_group: "{{ openshift_hosted_logging_elasticsearch_storage_group | default('65534') }}"
 openshift_logging_es_ops_nodeselector: "{{ openshift_hosted_logging_elasticsearch_ops_nodeselector | default('') | map_from_pairs }}"
-openshift_logging_es_ops_number_of_shards: 1
-openshift_logging_es_ops_number_of_replicas: 0

 # storage related defaults
 openshift_logging_storage_access_modes: "{{ openshift_hosted_logging_storage_access_modes | default(['ReadWriteOnce']) }}"
diff --git a/roles/openshift_logging/tasks/generate_configmaps.yaml b/roles/openshift_logging/tasks/generate_configmaps.yaml
index 44bd0058a..b047eb35a 100644
--- a/roles/openshift_logging/tasks/generate_configmaps.yaml
+++ b/roles/openshift_logging/tasks/generate_configmaps.yaml
@@ -21,6 +21,8 @@
         dest="{{local_tmp.stdout}}/elasticsearch-gen-template.yml"
       vars:
         - allow_cluster_reader: "{{openshift_logging_es_ops_allow_cluster_reader | lower | default('false')}}"
+        - es_number_of_shards: "{{ openshift_logging_es_number_of_shards | default(1) }}"
+        - es_number_of_replicas: "{{ openshift_logging_es_number_of_replicas | default(0) }}"
       when: es_config_contents is undefined
       changed_when: no
diff --git a/roles/openshift_logging/tasks/generate_routes.yaml b/roles/openshift_logging/tasks/generate_routes.yaml
index e77da7a24..f76bb3a0a 100644
--- a/roles/openshift_logging/tasks/generate_routes.yaml
+++ b/roles/openshift_logging/tasks/generate_routes.yaml
@@ -1,14 +1,14 @@
 ---
 - set_fact: kibana_key={{ lookup('file', openshift_logging_kibana_key) | b64encode }}
-  when: "{{ openshift_logging_kibana_key | trim | length > 0 }}"
+  when: openshift_logging_kibana_key | trim | length > 0
   changed_when: false

 - set_fact: kibana_cert={{ lookup('file', openshift_logging_kibana_cert)| b64encode }}
-  when: "{{openshift_logging_kibana_cert | trim | length > 0}}"
+  when: openshift_logging_kibana_cert | trim | length > 0
   changed_when: false

 - set_fact: kibana_ca={{ lookup('file', openshift_logging_kibana_ca)| b64encode }}
-  when: "{{openshift_logging_kibana_ca | trim | length > 0}}"
+  when: openshift_logging_kibana_ca | trim | length > 0
   changed_when: false

 - set_fact: kibana_ca={{key_pairs | entry_from_named_pair('ca_file') }}
diff --git a/roles/openshift_logging/tasks/install_elasticsearch.yaml b/roles/openshift_logging/tasks/install_elasticsearch.yaml
index add2f77cf..a981e7f7f 100644
--- a/roles/openshift_logging/tasks/install_elasticsearch.yaml
+++ b/roles/openshift_logging/tasks/install_elasticsearch.yaml
@@ -3,7 +3,7 @@
   set_fact: openshift_logging_current_es_size={{ openshift_logging_facts.elasticsearch.deploymentconfigs.keys() | length }}

 - set_fact: openshift_logging_es_pvc_prefix="logging-es"
-  when: "not openshift_logging_es_pvc_prefix or openshift_logging_es_pvc_prefix == ''"
+  when: not openshift_logging_es_pvc_prefix or openshift_logging_es_pvc_prefix == ''

 - set_fact: es_indices={{ es_indices | default([]) + [item | int - 1] }}
   with_sequence: count={{ openshift_logging_facts.elasticsearch.deploymentconfigs.keys() | count }}
@@ -24,8 +24,6 @@
     es_pv_selector: "{{ openshift_logging_es_pv_selector }}"
     es_cpu_limit: "{{ openshift_logging_es_cpu_limit }}"
     es_memory_limit: "{{ openshift_logging_es_memory_limit }}"
-    es_number_of_shards: "{{ openshift_logging_es_number_of_shards }}"
-    es_number_of_replicas: "{{ openshift_logging_es_number_of_replicas }}"
   with_together:
   - "{{ openshift_logging_facts.elasticsearch.deploymentconfigs.keys() }}"
   - "{{ openshift_logging_facts.elasticsearch.deploymentconfigs.values() }}"
@@ -49,8 +47,6 @@
     es_pv_selector: "{{ openshift_logging_es_pv_selector }}"
     es_cpu_limit: "{{ openshift_logging_es_cpu_limit }}"
     es_memory_limit: "{{ openshift_logging_es_memory_limit }}"
-    es_number_of_shards: "{{ openshift_logging_es_number_of_shards }}"
-    es_number_of_replicas: "{{ openshift_logging_es_number_of_replicas }}"
   with_sequence: count={{ openshift_logging_es_cluster_size | int - openshift_logging_facts.elasticsearch.deploymentconfigs | count }}

 # --------- Tasks for Operation clusters ---------
@@ -71,7 +67,7 @@
   check_mode: no

 - set_fact: openshift_logging_es_ops_pvc_prefix="logging-es-ops"
-  when: "not openshift_logging_es_ops_pvc_prefix or openshift_logging_es_ops_pvc_prefix == ''"
+  when: not openshift_logging_es_ops_pvc_prefix or openshift_logging_es_ops_pvc_prefix == ''

 - set_fact: es_ops_indices={{ es_ops_indices | default([]) + [item | int - 1] }}
   with_sequence: count={{ openshift_logging_facts.elasticsearch_ops.deploymentconfigs.keys() | count }}
@@ -92,8 +88,6 @@
     es_pv_selector: "{{ openshift_logging_es_ops_pv_selector }}"
     es_cpu_limit: "{{ openshift_logging_es_ops_cpu_limit }}"
     es_memory_limit: "{{ openshift_logging_es_ops_memory_limit }}"
-    es_number_of_shards: "{{ openshift_logging_es_ops_number_of_shards }}"
-    es_number_of_replicas: "{{ openshift_logging_es_ops_number_of_replicas }}"
   with_together:
   - "{{ openshift_logging_facts.elasticsearch_ops.deploymentconfigs.keys() }}"
   - "{{ openshift_logging_facts.elasticsearch_ops.deploymentconfigs.values() }}"
@@ -119,8 +113,6 @@
     es_pv_selector: "{{ openshift_logging_es_ops_pv_selector }}"
     es_cpu_limit: "{{ openshift_logging_es_ops_cpu_limit }}"
     es_memory_limit: "{{ openshift_logging_es_ops_memory_limit }}"
-    es_number_of_shards: "{{ openshift_logging_es_ops_number_of_shards }}"
-    es_number_of_replicas: "{{ openshift_logging_es_ops_number_of_replicas }}"
   with_sequence: count={{ openshift_logging_es_ops_cluster_size | int - openshift_logging_facts.elasticsearch_ops.deploymentconfigs | count }}
   when:
   - openshift_logging_use_ops | bool
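Taken together, these logging hunks stop threading `es_number_of_shards`/`es_number_of_replicas` through every caller and instead resolve the values once, with `default()` fallbacks, where the Elasticsearch config template is actually rendered. A sketch of the technique (paths and variable names shortened for illustration):

```yaml
# Resolve optional tuning knobs once, at the point of use,
# instead of passing them through every intermediate task file.
- name: Generate Elasticsearch config (illustrative)
  template:
    src: elasticsearch.yml.j2
    dest: /tmp/elasticsearch.yml
  vars:
    es_number_of_shards: "{{ openshift_logging_es_number_of_shards | default(1) }}"
    es_number_of_replicas: "{{ openshift_logging_es_number_of_replicas | default(0) }}"
```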
diff --git a/roles/openshift_logging/tasks/install_fluentd.yaml b/roles/openshift_logging/tasks/install_fluentd.yaml
index 35273829c..6bc405819 100644
--- a/roles/openshift_logging/tasks/install_fluentd.yaml
+++ b/roles/openshift_logging/tasks/install_fluentd.yaml
@@ -32,7 +32,7 @@
     {{ openshift.common.admin_binary}} --config={{ mktemp.stdout }}/admin.kubeconfig policy add-scc-to-user privileged system:serviceaccount:{{openshift_logging_namespace}}:aggregated-logging-fluentd
   register: fluentd_output
-  failed_when: "fluentd_output.rc == 1 and 'exists' not in fluentd_output.stderr"
+  failed_when: fluentd_output.rc == 1 and 'exists' not in fluentd_output.stderr
   check_mode: no
   when: fluentd_privileged.stdout.find("system:serviceaccount:{{openshift_logging_namespace}}:aggregated-logging-fluentd") == -1
@@ -49,6 +49,6 @@
     {{ openshift.common.admin_binary}} --config={{ mktemp.stdout }}/admin.kubeconfig policy add-cluster-role-to-user cluster-reader system:serviceaccount:{{openshift_logging_namespace}}:aggregated-logging-fluentd
   register: fluentd2_output
-  failed_when: "fluentd2_output.rc == 1 and 'exists' not in fluentd2_output.stderr"
+  failed_when: fluentd2_output.rc == 1 and 'exists' not in fluentd2_output.stderr
   check_mode: no
   when: fluentd_cluster_reader.stdout.find("system:serviceaccount:{{openshift_logging_namespace}}:aggregated-logging-fluentd") == -1
diff --git a/roles/openshift_logging/tasks/install_mux.yaml b/roles/openshift_logging/tasks/install_mux.yaml
index 296da626f..91eeb95a1 100644
--- a/roles/openshift_logging/tasks/install_mux.yaml
+++ b/roles/openshift_logging/tasks/install_mux.yaml
@@ -45,7 +45,7 @@
     {{ openshift.common.admin_binary}} --config={{ mktemp.stdout }}/admin.kubeconfig policy add-scc-to-user hostmount-anyuid system:serviceaccount:{{openshift_logging_namespace}}:aggregated-logging-fluentd
   register: mux_output
-  failed_when: "mux_output.rc == 1 and 'exists' not in mux_output.stderr"
+  failed_when: mux_output.rc == 1 and 'exists' not in mux_output.stderr
   check_mode: no
   when: mux_hostmount_anyuid.stdout.find("system:serviceaccount:{{openshift_logging_namespace}}:aggregated-logging-fluentd") == -1
@@ -62,6 +62,6 @@
     {{ openshift.common.admin_binary}} --config={{ mktemp.stdout }}/admin.kubeconfig policy add-cluster-role-to-user cluster-reader system:serviceaccount:{{openshift_logging_namespace}}:aggregated-logging-fluentd
   register: mux2_output
-  failed_when: "mux2_output.rc == 1 and 'exists' not in mux2_output.stderr"
+  failed_when: mux2_output.rc == 1 and 'exists' not in mux2_output.stderr
   check_mode: no
   when: mux_cluster_reader.stdout.find("system:serviceaccount:{{openshift_logging_namespace}}:aggregated-logging-fluentd") == -1
diff --git a/roles/openshift_logging/tasks/install_support.yaml b/roles/openshift_logging/tasks/install_support.yaml
index da0bbb627..877ce3149 100644
--- a/roles/openshift_logging/tasks/install_support.yaml
+++ b/roles/openshift_logging/tasks/install_support.yaml
@@ -1,17 +1,36 @@
 ---
 # This is the base configuration for installing the other components
-- name: Check for logging project already exists
-  command: >
-    {{ openshift.common.client_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig get project {{openshift_logging_namespace}} --no-headers
-  register: logging_project_result
-  ignore_errors: yes
-  when: not ansible_check_mode
-  changed_when: no
+- name: Set logging project
+  oc_project:
+    state: present
+    name: "{{ openshift_logging_namespace }}"
+    node_selector: "{{ openshift_logging_nodeselector | default(null) }}"
+
+- name: Labelling logging project
+  oc_label:
+    state: present
+    kind: namespace
+    name: "{{ openshift_logging_namespace }}"
+    labels:
+      - key: "{{ item.key }}"
+        value: "{{ item.value }}"
+  with_dict: "{{ openshift_logging_labels | default({}) }}"
+  when:
+    - openshift_logging_labels is defined
+    - openshift_logging_labels is dict

-- name: "Create logging project"
-  command: >
-    {{ openshift.common.admin_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig new-project {{openshift_logging_namespace}}
-  when: not ansible_check_mode and "not found" in logging_project_result.stderr
+- name: Labelling logging project
+  oc_label:
+    state: present
+    kind: namespace
+    name: "{{ openshift_logging_namespace }}"
+    labels:
+      - key: "{{ openshift_logging_label_key }}"
+        value: "{{ openshift_logging_label_value }}"
+  when:
+    - openshift_logging_label_key is defined
+    - openshift_logging_label_key != ""
+    - openshift_logging_label_value is defined

 - name: Create logging cert directory
   file: path={{openshift.common.config_base}}/logging state=directory mode=0755
diff --git a/roles/openshift_logging/tasks/main.yaml b/roles/openshift_logging/tasks/main.yaml
index c7f4a2f93..387da618d 100644
--- a/roles/openshift_logging/tasks/main.yaml
+++ b/roles/openshift_logging/tasks/main.yaml
@@ -1,7 +1,7 @@
 ---
 - fail:
     msg: Only one Fluentd nodeselector key pair should be provided
-  when: "{{ openshift_logging_fluentd_nodeselector.keys() | count }} > 1"
+  when: openshift_logging_fluentd_nodeselector.keys() | count > 1

 - name: Set default image variables based on deployment_type
   include_vars: "{{ item }}"
diff --git a/roles/openshift_logging/tasks/set_es_storage.yaml b/roles/openshift_logging/tasks/set_es_storage.yaml
index b2961aed4..4afe4e641 100644
--- a/roles/openshift_logging/tasks/set_es_storage.yaml
+++ b/roles/openshift_logging/tasks/set_es_storage.yaml
@@ -76,7 +76,5 @@
     es_memory_limit: "{{ es_memory_limit }}"
     es_node_selector: "{{ es_node_selector }}"
     es_storage: "{{ openshift_logging_facts | es_storage( es_name, es_storage_claim ) }}"
-    es_number_of_shards: "{{ es_number_of_shards }}"
-    es_number_of_replicas: "{{ es_number_of_replicas }}"
   check_mode: no
   changed_when: no
diff --git a/roles/openshift_logging/tasks/start_cluster.yaml b/roles/openshift_logging/tasks/start_cluster.yaml
index 1042b3daa..c1592b830 100644
--- a/roles/openshift_logging/tasks/start_cluster.yaml
+++ b/roles/openshift_logging/tasks/start_cluster.yaml
@@ -36,10 +36,13 @@
     name: "{{ object }}"
     namespace: "{{openshift_logging_namespace}}"
     replicas: "{{ openshift_logging_mux_replica_count | default (1) }}"
-  with_items: "{{ mux_dc.results.results[0]['items'] | map(attribute='metadata.name') | list }}"
+  with_items: "{{ mux_dc.results.results[0]['items'] | map(attribute='metadata.name') | list if 'results' in mux_dc else [] }}"
   loop_control:
     loop_var: object
-  when: openshift_logging_use_mux
+  when:
+  - mux_dc.results is defined
+  - mux_dc.results.results is defined
+  - openshift_logging_use_mux

 - name: Retrieve elasticsearch
   oc_obj:
diff --git a/roles/openshift_logging/tasks/stop_cluster.yaml b/roles/openshift_logging/tasks/stop_cluster.yaml
index d20c57cc1..f4b419d84 100644
--- a/roles/openshift_logging/tasks/stop_cluster.yaml
+++ b/roles/openshift_logging/tasks/stop_cluster.yaml
@@ -36,7 +36,7 @@
     name: "{{ object }}"
     namespace: "{{openshift_logging_namespace}}"
     replicas: 0
-  with_items: "{{ mux_dc.results.results[0]['items'] | map(attribute='metadata.name') | list }}"
+  with_items: "{{ mux_dc.results.results[0]['items'] | map(attribute='metadata.name') | list if 'results' in mux_dc else [] }}"
   loop_control:
     loop_var: object
   when: openshift_logging_use_mux
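The `mux_dc` changes guard both the loop source and the conditions, so the task skips cleanly when the earlier lookup registered nothing instead of failing on a missing key. A minimal standalone sketch of the same defensive pattern (the lookup and its result structure are illustrative):

```yaml
- name: Scale deployments found by an earlier lookup (illustrative)
  command: oc scale dc/{{ object }} --replicas=0
  with_items: "{{ (lookup_result.results | default([])) | map(attribute='metadata.name') | list }}"
  loop_control:
    loop_var: object
  when:
  - lookup_result is defined
  - lookup_result.results is defined
```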
diff --git a/roles/openshift_master/tasks/main.yml b/roles/openshift_master/tasks/main.yml
index 98e0da1a2..5522fef26 100644
--- a/roles/openshift_master/tasks/main.yml
+++ b/roles/openshift_master/tasks/main.yml
@@ -194,7 +194,7 @@
     state: stopped
   when: openshift_master_ha | bool
   register: task_result
-  failed_when: "task_result|failed and 'could not' not in task_result.msg|lower"
+  failed_when: task_result|failed and 'could not' not in task_result.msg|lower

 - set_fact:
     master_service_status_changed: "{{ start_result | changed }}"
diff --git a/roles/openshift_master/tasks/systemd_units.yml b/roles/openshift_master/tasks/systemd_units.yml
index 506c8b129..58fabddeb 100644
--- a/roles/openshift_master/tasks/systemd_units.yml
+++ b/roles/openshift_master/tasks/systemd_units.yml
@@ -90,6 +90,7 @@
     dest: /etc/sysconfig/{{ openshift.common.service_type }}-master-api
     line: "{{ item }}"
   with_items: "{{ master_api_aws.stdout_lines | default([]) }}"
+  no_log: True

 - name: Preserve Master Controllers Proxy Config options
   command: grep PROXY /etc/sysconfig/{{ openshift.common.service_type }}-master-controllers
diff --git a/roles/openshift_master_certificates/tasks/main.yml b/roles/openshift_master_certificates/tasks/main.yml
index d4c9a96ca..33a0af07f 100644
--- a/roles/openshift_master_certificates/tasks/main.yml
+++ b/roles/openshift_master_certificates/tasks/main.yml
@@ -64,10 +64,10 @@
     --signer-key={{ openshift_ca_key }}
     --signer-serial={{ openshift_ca_serial }}
     --overwrite=false
+  when: inventory_hostname != openshift_ca_host
   with_items: "{{ hostvars
                   | oo_select_keys(groups['oo_masters_to_config'])
-                  | oo_collect(attribute='inventory_hostname', filters={'master_certs_missing':True})
-                  | difference([openshift_ca_host])}}"
+                  | oo_collect(attribute='inventory_hostname', filters={'master_certs_missing':True}) }}"
   delegate_to: "{{ openshift_ca_host }}"
   run_once: true
@@ -94,8 +94,8 @@
     creates: "{{ openshift_generated_configs_dir }}/master-{{ hostvars[item].openshift.common.hostname }}/openshift-master.kubeconfig"
   with_items: "{{ hostvars
                   | oo_select_keys(groups['oo_masters_to_config'])
-                  | oo_collect(attribute='inventory_hostname', filters={'master_certs_missing':True})
-                  | difference([openshift_ca_host])}}"
+                  | oo_collect(attribute='inventory_hostname', filters={'master_certs_missing':True}) }}"
+  when: inventory_hostname != openshift_ca_host
   delegate_to: "{{ openshift_ca_host }}"
   run_once: true
diff --git a/roles/openshift_master_facts/tasks/main.yml b/roles/openshift_master_facts/tasks/main.yml
index 6f8f09b22..f048e0aef 100644
--- a/roles/openshift_master_facts/tasks/main.yml
+++ b/roles/openshift_master_facts/tasks/main.yml
@@ -128,10 +128,10 @@
 - name: Test if scheduler config is readable
   fail:
     msg: "Unknown scheduler config apiVersion {{ openshift_master_scheduler_config.apiVersion }}"
-  when: "{{ openshift_master_scheduler_current_config.apiVersion | default(None) != 'v1' }}"
+  when: openshift_master_scheduler_current_config.apiVersion | default(None) != 'v1'

 - name: Set current scheduler predicates and priorities
   set_fact:
     openshift_master_scheduler_current_predicates: "{{ openshift_master_scheduler_current_config.predicates }}"
     openshift_master_scheduler_current_priorities: "{{ openshift_master_scheduler_current_config.priorities }}"
-  when: "{{ scheduler_config_stat.stat.exists }}"
+  when: scheduler_config_stat.stat.exists
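The master-certificates hunks swap list surgery (`difference([openshift_ca_host])`) for a per-host `when:` guard, which keeps the loop source simple and makes the exclusion visible on the task itself. A sketch of the shape, under the assumption of a simple inventory (group and variable names are illustrative):

```yaml
- name: Generate certs for every master except the signing host (illustrative)
  command: generate-cert --host={{ item }}
  with_items: "{{ groups['masters'] }}"
  when: inventory_hostname != ca_host
  delegate_to: "{{ ca_host }}"
```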
diff --git a/roles/openshift_metrics/tasks/generate_hawkular_certificates.yaml b/roles/openshift_metrics/tasks/generate_hawkular_certificates.yaml
index 07b7eca33..fb4fe2f03 100644
--- a/roles/openshift_metrics/tasks/generate_hawkular_certificates.yaml
+++ b/roles/openshift_metrics/tasks/generate_hawkular_certificates.yaml
@@ -14,20 +14,22 @@
   changed_when: no

 - name: generate password for hawkular metrics
-  local_action: copy dest="{{ local_tmp.stdout}}/{{ item }}.pwd" content="{{ 15 | oo_random_word }}"
+  local_action: copy dest="{{ local_tmp.stdout }}/{{ item }}.pwd" content="{{ 15 | oo_random_word }}"
   with_items:
   - hawkular-metrics

+- local_action: slurp src="{{ local_tmp.stdout }}/hawkular-metrics.pwd"
+  register: hawkular_metrics_pwd
+  no_log: true
+
 - name: generate htpasswd file for hawkular metrics
-  local_action: >
-    shell htpasswd -ci
-    '{{ local_tmp.stdout }}/hawkular-metrics.htpasswd' hawkular
-    < '{{ local_tmp.stdout }}/hawkular-metrics.pwd'
+  local_action: htpasswd path="{{ local_tmp.stdout }}/hawkular-metrics.htpasswd" name=hawkular password="{{ hawkular_metrics_pwd.content | b64decode }}"
+  no_log: true

 - name: copy local generated passwords to target
   copy:
-    src: "{{local_tmp.stdout}}/{{item}}"
-    dest: "{{mktemp.stdout}}/{{item}}"
+    src: "{{ local_tmp.stdout }}/{{ item }}"
+    dest: "{{ mktemp.stdout }}/{{ item }}"
   with_items:
   - hawkular-metrics.pwd
   - hawkular-metrics.htpasswd
diff --git a/roles/openshift_metrics/tasks/install_cassandra.yaml b/roles/openshift_metrics/tasks/install_cassandra.yaml
index a467c1a51..3b4e8560f 100644
--- a/roles/openshift_metrics/tasks/install_cassandra.yaml
+++ b/roles/openshift_metrics/tasks/install_cassandra.yaml
@@ -23,7 +23,7 @@
   changed_when: false

 - set_fact: openshift_metrics_cassandra_pvc_prefix="hawkular-metrics"
-  when: "not openshift_metrics_cassandra_pvc_prefix or openshift_metrics_cassandra_pvc_prefix == ''"
+  when: not openshift_metrics_cassandra_pvc_prefix or openshift_metrics_cassandra_pvc_prefix == ''

 - name: generate hawkular-cassandra persistent volume claims
   template:
diff --git a/roles/openshift_metrics/tasks/install_heapster.yaml b/roles/openshift_metrics/tasks/install_heapster.yaml
index d13b96be1..0eb852d91 100644
--- a/roles/openshift_metrics/tasks/install_heapster.yaml
+++ b/roles/openshift_metrics/tasks/install_heapster.yaml
@@ -22,7 +22,7 @@
   with_items:
   - hawkular-metrics-certs
  - hawkular-metrics-account
-  when: "not {{ openshift_metrics_heapster_standalone | bool }}"
+  when: not openshift_metrics_heapster_standalone | bool

 - name: Generating serviceaccount for heapster
   template: src=serviceaccount.j2 dest={{mktemp.stdout}}/templates/metrics-{{obj_name}}-sa.yaml
diff --git a/roles/openshift_metrics/tasks/install_metrics.yaml b/roles/openshift_metrics/tasks/install_metrics.yaml
index ffe6f63a2..74eb56713 100644
--- a/roles/openshift_metrics/tasks/install_metrics.yaml
+++ b/roles/openshift_metrics/tasks/install_metrics.yaml
@@ -10,11 +10,11 @@
   - cassandra
   loop_control:
     loop_var: include_file
-  when: "not {{ openshift_metrics_heapster_standalone | bool }}"
+  when: not openshift_metrics_heapster_standalone | bool

 - name: Install Heapster Standalone
   include: install_heapster.yaml
-  when: "{{ openshift_metrics_heapster_standalone | bool }}"
+  when: openshift_metrics_heapster_standalone | bool

 - find: paths={{ mktemp.stdout }}/templates patterns=*.yaml
   register: object_def_files
@@ -48,7 +48,7 @@
 - name: Scaling down cluster to recognize changes
   include: stop_metrics.yaml
-  when: "{{ existing_metrics_rc.stdout_lines | length > 0 }}"
+  when: existing_metrics_rc.stdout_lines | length > 0

 - name: Scaling up cluster
   include: start_metrics.yaml
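Replacing the `shell htpasswd -ci ... <` pipeline with the `htpasswd` module plus a `slurp` of the generated password file avoids a shell redirect and lets `no_log` keep the secret out of task output. A minimal standalone sketch of the module call (paths and names illustrative; the module requires passlib on the controller):

```yaml
- name: Write an htpasswd entry without shelling out (illustrative)
  htpasswd:
    path: /tmp/metrics.htpasswd
    name: hawkular
    password: "{{ generated_password }}"
  delegate_to: localhost
  no_log: true
```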
diff --git a/roles/openshift_metrics/tasks/main.yaml b/roles/openshift_metrics/tasks/main.yaml
index c8d222c60..e8b7bea5c 100644
--- a/roles/openshift_metrics/tasks/main.yaml
+++ b/roles/openshift_metrics/tasks/main.yaml
@@ -19,7 +19,7 @@
 - name: Create temp directory for all our templates
   file: path={{mktemp.stdout}}/templates state=directory mode=0755
   changed_when: False
-  when: "{{ openshift_metrics_install_metrics | bool }}"
+  when: openshift_metrics_install_metrics | bool

 - name: Create temp directory local on control node
   local_action: command mktemp -d
diff --git a/roles/openshift_metrics/tasks/start_metrics.yaml b/roles/openshift_metrics/tasks/start_metrics.yaml
index b5a1c8f06..2037e8dc3 100644
--- a/roles/openshift_metrics/tasks/start_metrics.yaml
+++ b/roles/openshift_metrics/tasks/start_metrics.yaml
@@ -20,7 +20,7 @@
   loop_control:
     loop_var: object
   when: metrics_cassandra_rc is defined
-  changed_when: "{{metrics_cassandra_rc | length > 0 }}"
+  changed_when: metrics_cassandra_rc | length > 0

 - command: >
     {{openshift.common.client_binary}}
@@ -42,7 +42,7 @@
   with_items: "{{metrics_metrics_rc.stdout_lines}}"
   loop_control:
     loop_var: object
-  changed_when: "{{metrics_metrics_rc | length > 0 }}"
+  changed_when: metrics_metrics_rc | length > 0

 - command: >
     {{openshift.common.client_binary}}
diff --git a/roles/openshift_metrics/tasks/stop_metrics.yaml b/roles/openshift_metrics/tasks/stop_metrics.yaml
index f69bb0f11..9a2ce9267 100644
--- a/roles/openshift_metrics/tasks/stop_metrics.yaml
+++ b/roles/openshift_metrics/tasks/stop_metrics.yaml
@@ -41,7 +41,7 @@
   with_items: "{{metrics_hawkular_rc.stdout_lines}}"
   loop_control:
     loop_var: object
-  changed_when: "{{metrics_hawkular_rc | length > 0 }}"
+  changed_when: metrics_hawkular_rc | length > 0

 - command: >
     {{openshift.common.client_binary}} --config={{mktemp.stdout}}/admin.kubeconfig
@@ -63,4 +63,4 @@
   loop_control:
     loop_var: object
   when: metrics_cassandra_rc is defined
-  changed_when: "{{metrics_cassandra_rc | length > 0 }}"
+  changed_when: metrics_cassandra_rc | length > 0
diff --git a/roles/openshift_metrics/tasks/uninstall_metrics.yaml b/roles/openshift_metrics/tasks/uninstall_metrics.yaml
index 8a6be6237..9a5d52eb6 100644
--- a/roles/openshift_metrics/tasks/uninstall_metrics.yaml
+++ b/roles/openshift_metrics/tasks/uninstall_metrics.yaml
@@ -8,7 +8,7 @@
     delete --ignore-not-found --selector=metrics-infra
     all,sa,secrets,templates,routes,pvc,rolebindings,clusterrolebindings
   register: delete_metrics
-  changed_when: "delete_metrics.stdout != 'No resources found'"
+  changed_when: delete_metrics.stdout != 'No resources found'

 - name: remove rolebindings
   command: >
@@ -16,4 +16,4 @@
     delete --ignore-not-found rolebinding/hawkular-view
     clusterrolebinding/heapster-cluster-reader
-  changed_when: "delete_metrics.stdout != 'No resources found'"
+  changed_when: delete_metrics.stdout != 'No resources found'
diff --git a/roles/openshift_node/tasks/main.yml b/roles/openshift_node/tasks/main.yml
index 98139cac2..656874f56 100644
--- a/roles/openshift_node/tasks/main.yml
+++ b/roles/openshift_node/tasks/main.yml
@@ -63,7 +63,7 @@
   when:
   - swap_result.stdout_lines | length > 0
-  - openshift_disable_swap | default(true)
+  - openshift_disable_swap | default(true) | bool
 # End Disable Swap Block

 # We have to add tuned-profiles in the same transaction otherwise we run into depsolving
@@ -147,7 +147,7 @@
   - regex: '^AWS_SECRET_ACCESS_KEY='
     line: "AWS_SECRET_ACCESS_KEY={{ openshift_cloudprovider_aws_secret_key | default('') }}"
   no_log: True
-  when: "openshift_cloudprovider_kind is defined and openshift_cloudprovider_kind == 'aws' and openshift_cloudprovider_aws_access_key is defined and openshift_cloudprovider_aws_secret_key is defined"
+  when: openshift_cloudprovider_kind is defined and openshift_cloudprovider_kind == 'aws' and openshift_cloudprovider_aws_access_key is defined and openshift_cloudprovider_aws_secret_key is defined
   notify:
   - restart node
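The added `| bool` casts matter because inventory values often arrive as strings, and any non-empty string is truthy in Jinja2, so `openshift_disable_swap="false"` would still disable swap without the cast. An illustrative sketch of the difference:

```yaml
# With openshift_disable_swap="false" set as a string in the inventory:
- debug: msg="would disable swap"
  when: openshift_disable_swap | default(true)          # truthy string -> runs (wrong)

- debug: msg="would disable swap"
  when: openshift_disable_swap | default(true) | bool   # cast to boolean -> skipped (right)
```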
diff --git a/roles/openshift_node_certificates/handlers/main.yml b/roles/openshift_node_certificates/handlers/main.yml
index 1aa826c09..502f80434 100644
--- a/roles/openshift_node_certificates/handlers/main.yml
+++ b/roles/openshift_node_certificates/handlers/main.yml
@@ -6,6 +6,6 @@
 - name: restart docker after updating ca trust
   systemd:
-    name: docker
+    name: "{{ openshift.docker.service_name }}"
     state: restarted
   when: not openshift_certificates_redeploy | default(false) | bool
diff --git a/roles/openshift_node_upgrade/tasks/main.yml b/roles/openshift_node_upgrade/tasks/main.yml
index e725f4a5d..94c97d0a5 100644
--- a/roles/openshift_node_upgrade/tasks/main.yml
+++ b/roles/openshift_node_upgrade/tasks/main.yml
@@ -124,7 +124,7 @@
   when:
   - swap_result.stdout_lines | length > 0
-  - openshift_disable_swap | default(true)
+  - openshift_disable_swap | default(true) | bool
 # End Disable Swap Block

 # Restart all services
diff --git a/roles/openshift_node_upgrade/tasks/restart.yml b/roles/openshift_node_upgrade/tasks/restart.yml
index a9fab74e1..e576228ba 100644
--- a/roles/openshift_node_upgrade/tasks/restart.yml
+++ b/roles/openshift_node_upgrade/tasks/restart.yml
@@ -6,7 +6,9 @@
 # - openshift.master.api_port

 - name: Restart docker
-  service: name=docker state=restarted
+  service:
+    name: "{{ openshift.docker.service_name }}"
+    state: restarted

 - name: Update docker facts
   openshift_facts:
diff --git a/roles/openshift_provisioners/tasks/install_efs.yaml b/roles/openshift_provisioners/tasks/install_efs.yaml
index 57279c665..b53b6afa1 100644
--- a/roles/openshift_provisioners/tasks/install_efs.yaml
+++ b/roles/openshift_provisioners/tasks/install_efs.yaml
@@ -65,6 +65,6 @@
     {{ openshift.common.admin_binary}} --config={{ mktemp.stdout }}/admin.kubeconfig policy add-scc-to-user anyuid system:serviceaccount:{{openshift_provisioners_project}}:provisioners-efs
   register: efs_output
-  failed_when: "efs_output.rc == 1 and 'exists' not in efs_output.stderr"
+  failed_when: efs_output.rc == 1 and 'exists' not in efs_output.stderr
   check_mode: no
   when: efs_anyuid.stdout.find("system:serviceaccount:{{openshift_provisioners_project}}:provisioners-efs") == -1
diff --git a/roles/openshift_repos/tasks/main.yaml b/roles/openshift_repos/tasks/main.yaml
index 84a0905cc..9a9436fcb 100644
--- a/roles/openshift_repos/tasks/main.yaml
+++ b/roles/openshift_repos/tasks/main.yaml
@@ -40,4 +40,21 @@
     - openshift_deployment_type == 'origin'
     - openshift_enable_origin_repo | default(true) | bool

+  # Singleton block
+  - when: r_osr_first_run | default(true)
+    block:
+    - name: Ensure clean repo cache in the event repos have been changed manually
+      debug:
+        msg: "First run of openshift_repos"
+      changed_when: true
+      notify: refresh cache
+
+    - name: Set fact r_osr_first_run false
+      set_fact:
+        r_osr_first_run: false
+
+  # Force running ALL handlers now, because we expect repo cache to be cleared
+  # if changes have been made.
+  - meta: flush_handlers
   when: not ostree_booted.stat.exists
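The new `openshift_repos` block is a run-once guard: on the first pass it forces a notify and flips a fact, and the immediate `meta: flush_handlers` makes the cache refresh happen right away rather than at the end of the play. A standalone sketch of the same singleton-plus-flush shape (handler and fact names are illustrative):

```yaml
- when: first_run | default(true)
  block:
  - name: Trigger a one-time cache refresh
    debug:
      msg: "first run"
    changed_when: true
    notify: refresh cache

  - name: Remember that the first run happened
    set_fact:
      first_run: false

# Run the notified handler now, not at the end of the play.
- meta: flush_handlers
```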
diff --git a/roles/openshift_storage_glusterfs/tasks/glusterfs_registry.yml b/roles/openshift_storage_glusterfs/tasks/glusterfs_registry.yml
index 9f092d5d5..6d02d2090 100644
--- a/roles/openshift_storage_glusterfs/tasks/glusterfs_registry.yml
+++ b/roles/openshift_storage_glusterfs/tasks/glusterfs_registry.yml
@@ -45,4 +45,4 @@
 - name: Create GlusterFS registry volume
   command: "heketi-cli -s http://{{ openshift_storage_glusterfs_heketi_url }} --user admin --secret '{{ openshift_storage_glusterfs_heketi_admin_key }}' volume create --size={{ openshift.hosted.registry.storage.volume.size | replace('Gi','') }} --name={{ openshift.hosted.registry.storage.glusterfs.path }}"
-  when: "'{{ openshift.hosted.registry.storage.glusterfs.path }}' not in registry_volume.stdout"
+  when: openshift.hosted.registry.storage.glusterfs.path not in registry_volume.stdout
diff --git a/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part2.yml b/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part2.yml
index 84b85e95d..778b5a673 100644
--- a/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part2.yml
+++ b/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part2.yml
@@ -14,7 +14,7 @@
 # Need `command` here because heketi-storage.json contains multiple objects.
 - name: Copy heketi DB to GlusterFS volume
   command: "{{ openshift.common.client_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig create -f {{ mktemp.stdout }}/heketi-storage.json -n {{ openshift_storage_glusterfs_namespace }}"
-  when: "setup_storage.rc == 0"
+  when: setup_storage.rc == 0

 - name: Wait for copy job to finish
   oc_obj:
@@ -34,7 +34,7 @@
     - "heketi_job.results.results | count > 0"
     # Fail when pod's 'Failed' status is True
     - "heketi_job.results.results | oo_collect(attribute='status.conditions') | oo_collect(attribute='status', filters={'type': 'Failed'}) | map('bool') | select | list | count == 1"
-  when: "setup_storage.rc == 0"
+  when: setup_storage.rc == 0

 - name: Delete deploy resources
   oc_obj:
diff --git a/roles/openshift_storage_glusterfs/tasks/main.yml b/roles/openshift_storage_glusterfs/tasks/main.yml
index 265a3cc6e..71c4a2732 100644
--- a/roles/openshift_storage_glusterfs/tasks/main.yml
+++ b/roles/openshift_storage_glusterfs/tasks/main.yml
@@ -163,7 +163,7 @@
 - name: Load heketi topology
   command: "heketi-cli -s http://{{ openshift_storage_glusterfs_heketi_url }} --user admin --secret '{{ openshift_storage_glusterfs_heketi_admin_key }}' topology load --json={{ mktemp.stdout }}/topology.json 2>&1"
   register: topology_load
-  failed_when: "topology_load.rc != 0 or 'Unable' in topology_load.stdout"
+  failed_when: topology_load.rc != 0 or 'Unable' in topology_load.stdout
   when:
   - openshift_storage_glusterfs_is_native
   - openshift_storage_glusterfs_heketi_topology_load
@@ -172,7 +172,7 @@
   when: openshift_storage_glusterfs_heketi_is_native and openshift_storage_glusterfs_heketi_is_missing

 - include: glusterfs_registry.yml
-  when: "openshift.hosted.registry.storage.kind == 'glusterfs'"
+  when: openshift.hosted.registry.storage.kind == 'glusterfs'

 - name: Delete temp directory
   file:
diff --git a/roles/os_firewall/tasks/firewall/firewalld.yml b/roles/os_firewall/tasks/firewall/firewalld.yml
index 4b2979887..509655b0c 100644
--- a/roles/os_firewall/tasks/firewall/firewalld.yml
+++ b/roles/os_firewall/tasks/firewall/firewalld.yml
@@ -14,7 +14,7 @@
   - iptables
   - ip6tables
   register: task_result
-  failed_when: "task_result|failed and 'could not' not in task_result.msg|lower"
+  failed_when: task_result|failed and 'could not' not in task_result.msg|lower

 - name: Wait 10 seconds after disabling iptables
   pause:
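One pitfall in the glusterfs_registry condition deserves a note: since `when:` is already a Jinja2 expression, the correct cleanup of a templated condition is to reference the variable bare. Dropping only the `{{ }}` while keeping the quotes turns the check into a comparison against the literal variable name, which is never in the command output, so the task would run unconditionally. Illustrative sketch:

```yaml
# Wrong: compares the literal text 'registry_path' against stdout
- command: heketi-cli volume create --name={{ registry_path }}
  when: "'registry_path' not in volume_list.stdout"

# Right: compares the variable's value
- command: heketi-cli volume create --name={{ registry_path }}
  when: registry_path not in volume_list.stdout
```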
diff --git a/roles/os_firewall/tasks/firewall/iptables.yml b/roles/os_firewall/tasks/firewall/iptables.yml
index 38ea2477c..55f2fc471 100644
--- a/roles/os_firewall/tasks/firewall/iptables.yml
+++ b/roles/os_firewall/tasks/firewall/iptables.yml
@@ -7,7 +7,7 @@
     enabled: no
     masked: yes
   register: task_result
-  failed_when: "task_result|failed and 'could not' not in task_result.msg|lower"
+  failed_when: task_result|failed and 'could not' not in task_result.msg|lower

 - name: Wait 10 seconds after disabling firewalld
   pause:
diff --git a/utils/src/ooinstall/variants.py b/utils/src/ooinstall/variants.py
index f25266f29..1574d447a 100644
--- a/utils/src/ooinstall/variants.py
+++ b/utils/src/ooinstall/variants.py
@@ -39,18 +39,19 @@ class Variant(object):
 # WARNING: Keep the versions ordered, most recent first:

 OSE = Variant('openshift-enterprise', 'OpenShift Container Platform', [
-    Version('3.5', 'openshift-enterprise'),
+    Version('3.6', 'openshift-enterprise'),
 ])

 REG = Variant('openshift-enterprise', 'Registry', [
-    Version('3.4', 'openshift-enterprise', 'registry'),
+    Version('3.6', 'openshift-enterprise', 'registry'),
 ])

 origin = Variant('origin', 'OpenShift Origin', [
-    Version('1.4', 'origin'),
+    Version('3.6', 'origin'),
 ])

 LEGACY = Variant('openshift-enterprise', 'OpenShift Container Platform', [
+    Version('3.5', 'openshift-enterprise'),
     Version('3.4', 'openshift-enterprise'),
     Version('3.3', 'openshift-enterprise'),
     Version('3.2', 'openshift-enterprise'),