author | Peter Portante <peter.portante@redhat.com> | 2017-06-15 14:08:45 -0400 |
---|---|---|
committer | Peter Portante <peter.portante@redhat.com> | 2017-06-16 10:37:32 -0400 |
commit | fd165fe201abb5fbd76306a16febaf1cb3c8ad0b (patch) | |
tree | dad357c31f092177aa5265a4b574158d2a685e7c | |
parent | 0862f7b6f1448d6ea1fe6c836b3ba1de0afb4485 (diff) | |
Ensure only one ES pod per PV
Fixes [BZ #1460564](https://bugzilla.redhat.com/show_bug.cgi?id=1460564).
Unfortunately, the defaults for Elasticsearch prior to v5 allow more
than one "node" to access the same configured storage volume(s).
This change forces the value to 1 so that an ES pod starting up cannot
access a volume while another ES pod is still shutting down during a
redeploy. Otherwise, "1" directories can be created in
`/elasticsearch/persistent/${CLUSTER_NAME}/data/${CLUSTER_NAME}/nodes/`;
by default ES uses a "0" directory there when only one node is accessing
the volume.
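After the change, the relevant stanza of the rendered `elasticsearch.yml` should look roughly like the following (a sketch; surrounding keys are abbreviated, and the `${...}` placeholders are the container environment substitutions used by the template):

```yaml
node:
  master: ${IS_MASTER}
  data: ${HAS_DATA}
  # Refuse to share this data path with any other ES instance, so a
  # starting pod cannot claim the volume while an old pod shuts down.
  max_local_storage_nodes: 1
```

With `max_local_storage_nodes: 1`, a second ES process pointed at the same data path fails to start instead of silently creating a `nodes/1` directory alongside `nodes/0`.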
-rw-r--r-- | roles/openshift_logging_elasticsearch/templates/elasticsearch.yml.j2 | 1 |
1 file changed, 1 insertion(+), 0 deletions(-)
```diff
diff --git a/roles/openshift_logging_elasticsearch/templates/elasticsearch.yml.j2 b/roles/openshift_logging_elasticsearch/templates/elasticsearch.yml.j2
index 58c325c8a..409e564c2 100644
--- a/roles/openshift_logging_elasticsearch/templates/elasticsearch.yml.j2
+++ b/roles/openshift_logging_elasticsearch/templates/elasticsearch.yml.j2
@@ -16,6 +16,7 @@ index:
 node:
   master: ${IS_MASTER}
   data: ${HAS_DATA}
+  max_local_storage_nodes: 1
 network:
   host: 0.0.0.0
```