| Commit message | Author | Age | Files | Lines |
Fix HA etcd upgrade when facts cache has been deleted.
The simplest way to reproduce this issue is to attempt an upgrade after
removing /etc/ansible/facts.d/openshift.fact. The actual cause in the field
is not entirely known, but critically it is possible for embedded_etcd to
default to true, causing the etcd fact lookup to check the wrong file
and fail silently, resulting in no etcd_data_dir fact being set.
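
A minimal sketch of the kind of fact defaulting involved; the task, the
embedded_etcd lookup, and the data-dir paths shown are illustrative
assumptions, not this commit's actual code:

    # Illustrative only: pick an etcd data dir based on whether etcd is embedded.
    - name: Default etcd_data_dir when the facts cache is missing
      set_fact:
        etcd_data_dir: "{{ (openshift.master.embedded_etcd | default(false) | bool) | ternary('/var/lib/origin/openshift.local.etcd', '/var/lib/etcd/') }}"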
Revert openshift.node.nodename changes
This reverts commit aaaf82ba6032d0b1e9c36a39a7eda25b8c5f4b84.
This reverts commit 1f2276fff1e41c1d9440ee8b589042ee249b95d7.
Prior to RHEL 7.2, curl did not properly negotiate the TLS protocol version
upward, so force it to use TLSv1.2.
Fixes bug 1390869
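
A hedged sketch of forcing the protocol in an Ansible health-check task; the
URL, CA path, and task shape are illustrative, not the commit's exact change:

    # Illustrative only: force curl to speak TLSv1.2 when probing the API.
    - name: Check the master API with TLSv1.2 forced
      command: >
        curl --silent --tlsv1.2
        --cacert /etc/origin/master/ca.crt
        https://master.example.com:8443/healthz/ready
      register: api_health
      changed_when: false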
Bug 1388016 - The insecure-registry address was removed during upgrade
Preserve the insecure-registry settings from the existing /etc/sysconfig/docker.
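
A hedged sketch of keeping such a setting intact; INSECURE_REGISTRY is the
stock docker sysconfig key, but the registry CIDR and the task itself are
illustrative assumptions, not this commit's code:

    # Illustrative only: keep the insecure-registry option rather than dropping
    # it when the sysconfig file is rewritten during upgrade.
    - name: Keep the insecure-registry entry in /etc/sysconfig/docker
      lineinfile:
        dest: /etc/sysconfig/docker
        regexp: '^INSECURE_REGISTRY='
        line: "INSECURE_REGISTRY='--insecure-registry=172.30.0.0/16'"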
Update link to latest versions upgrade README
Add support for 3.4 upgrade.
This is a direct copy of the 3.3 upgrade playbooks, with 3.3-specific hooks
removed and version numbers adjusted appropriately.
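
For illustration, the version pinning that changes between the copied
playbooks might look like the following; the variable and group names follow
the upgrade playbooks' general conventions but are not verified against this
commit:

    # Illustrative only: bump the upgrade target from 3.3 to 3.4.
    - name: Set upgrade target versions
      hosts: oo_all_hosts
      tasks:
      - set_fact:
          openshift_upgrade_target: '3.4'
          openshift_upgrade_min: "{{ '1.3' if deployment_type == 'origin' else '3.3' }}"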
Fix and reorder control plane service restart.
This was missed in the standalone upgrade control plane playbook. However, it
also looks to be out of order: we should restart before reconciling and
upgrading nodes. As such, the restart was moved directly into the control
plane upgrade common code and placed before reconciliation.
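
A sketch of the ordering described; the include file names are illustrative
placeholders, not the repository's actual paths:

    # Illustrative ordering only, for the control plane upgrade playbook.
    - include: upgrade_masters.yml

    # Restart master services before reconciling, so reconciliation runs
    # against the upgraded API.
    - include: restart_masters.yml

    - include: reconcile_roles_and_policy.yml

    - include: upgrade_nodes.yml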
This file was removed and no longer used
Fix typos
Drop pacemaker restart logic.
Pacemaker clusters are no longer supported, and in some cases bugs here
were causing upgrade failures.
Switch from "oadm" to "oc adm" and fix bug in binary sync.
Found a bug syncing binaries to containerized hosts: if a symlink was
pre-existing but pointed to the wrong destination, it would not be corrected.
Also switched to using oc adm instead of oadm.
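
A hedged sketch of correcting a pre-existing symlink that points at the wrong
target; the paths are illustrative, only the file-module options carry the
point:

    # Illustrative only: force: yes makes sure an existing entry at the
    # destination is replaced so the link ends up pointing at the right target.
    - name: Ensure the oc symlink points at the synced binary
      file:
        src: /usr/local/bin/openshift
        dest: /usr/local/bin/oc
        state: link
        force: yes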
Template with_items for upstream ansible-2.2 compat.
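
Ansible 2.2 deprecates bare variable references in with_items, so loops need
explicit templating; a minimal before/after sketch with an illustrative
variable name:

    # Deprecated bare form under ansible 2.2:
    #   with_items: openshift_node_packages
    # Templated form:
    - name: Install node packages
      package:
        name: "{{ item }}"
        state: present
      with_items: "{{ openshift_node_packages }}"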
[logging] Use inventory variables rather than facts
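
As an illustrative sketch of the direction only (the variable names are
assumptions, not the logging role's actual defaults), preferring an inventory
variable over a cached fact looks like:

    # Illustrative only: read the namespace from inventory, with a default,
    # instead of relying on a previously-set openshift fact.
    - name: Resolve the logging namespace from inventory
      set_fact:
        openshift_logging_namespace: "{{ openshift_logging_namespace | default('logging') }}"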
Error in commit 245fef16573757b6e691c448075d8564f5d569f4.
As it turns out, this is the only place an rpm-based node can be restarted
during upgrade. Restoring the restart, but making it conditional to avoid the
two issues reported with out-of-sync node restarts.
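
A hedged sketch of a conditional node restart of this kind; the fact names
follow the repository's general conventions but the exact condition is an
assumption:

    # Illustrative only: restart the node service for rpm installs, skip it
    # for containerized hosts.
    - name: Conditionally restart the node service
      service:
        name: "{{ openshift.common.service_type }}-node"
        state: restarted
      when: not openshift.common.is_containerized | bool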
update handling of use_dnsmasq
Fix standalone docker upgrade playbook skipping nodes.
The transition to being able to specify which nodes to upgrade caused
standalone nodes to be skipped in this playbook.
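
A sketch of the kind of host-selection fix involved; the group names are
illustrative of the repository's conventions and not verified against this
commit:

    # Illustrative only: make sure standalone nodes land in the group the
    # docker upgrade play targets, rather than relying on a group that is
    # only populated when masters are also being upgraded.
    - name: Evaluate nodes to upgrade
      hosts: localhost
      connection: local
      tasks:
      - add_host:
          name: "{{ item }}"
          groups: oo_nodes_to_upgrade
        with_items: "{{ groups.nodes | default([]) }}"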
This looks to be causing a customer issue where some HA upgrades fail due to
a missing EgressNetworkPolicy API. We update the master rpms but don't
restart services yet, and then restart the node service, which tries to talk
to an API that does not yet exist (pending the restart).
Restarting the node here is very out of place and appears not to be required.
Changes for Nuage HA
frontends/backends.
3.4 Upgrade Improvements
It is invalid Ansible to use a 'when' on an include that contains plays, as
the condition cannot be applied to plays. An issue was filed upstream for a
better error, or to get it working.
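
The failing pattern, sketched with illustrative file names; the condition on
a playbook-level include cannot take effect because the included file
contains plays rather than tasks:

    # Invalid: the included file holds whole plays, so the 'when' cannot apply.
    #   - include: upgrade_nodes.yml
    #     when: upgrade_nodes | default(true) | bool
    #
    # Workaround sketch: include unconditionally and push the condition down
    # onto the tasks inside the included plays.
    - include: upgrade_nodes.yml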
This can fail with a transient "object has been modified" error asking
you to re-try your changes on the latest version of the object.
Allow up to three retries to see if we can get the change to take
effect.
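
A hedged sketch of the retry pattern; the reconcile command shown is only an
example of the kind of call that hits this error, not necessarily the one in
this commit:

    # Illustrative only: retry a transient "object has been modified" failure.
    - name: Reconcile cluster roles
      command: oc adm policy reconcile-cluster-roles --confirm
      register: reconcile_result
      until: reconcile_result.rc == 0
      retries: 3
      delay: 5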
This improves the situation further and prevents configuration changes from
accidentally triggering docker restarts before we've evacuated nodes. In two
places we now skip the role entirely, instead of the previous implementation,
which only skipped upgrading the installed version (and so did not catch
config issues).
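
A sketch of skipping the role entirely rather than only skipping the version
bump; the role and variable names are illustrative of the convention, not
taken from this commit:

    # Illustrative only: do not apply the docker role at all during the
    # pre-evacuation phase of the upgrade.
    - hosts: oo_nodes_to_config
      roles:
      - role: docker
        when: not (skip_docker_role | default(false) | bool)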