Mirror of https://bitbucket.org/atlassian/dc-deployments-automation.git (synced 2025-12-14 00:43:06 -06:00)
Merge branch 'master' into bugfix/ITOPSENG-101-3-bugfix-for-clones
README.md
@@ -1,15 +1,16 @@
-# Atlassian Data Center Installation Automation
+# Atlassian Data-Center Installation Automation
 
 ## Introduction
 
 This repository is a suite of Ansible roles, playbooks and support scripts to
 automate the installation and maintenance of Atlassian Data Center products in
 cloud environments.
 
-## Usage
+On this page:
 
-### Cloud DC-node deployment playbooks
+[TOC]
+
+# Usage
+
+## Configuring Data Center nodes on cloud deployments
 
 The usual scenario for usage as part of a cloud deployment is to invoke the
 script as part of post-creation actions invoked while a new product node is
@@ -23,16 +24,15 @@ In practice, the Ansible roles require some information about the infrastructure
 that was deployed (e.g. RDS endpoint/password). The way this is currently
 achieved (on AWS) is to have the CloudFormation template dump this information
 into the file `/etc/atl` as `RESOURCE_VAR=<resource>` lines. This can be then
-sourced as environment variables to be retrieved at runtime . See the
+sourced as environment variables to be retrieved at runtime. See the
 helper-script `bin/ansible-with-atl-env` and the corresponding
 `groups_vars/aws_node_local.yml` var-file.
 
-#### Overriding parameters
+## Customizing your deployment
 
-If you want to customise the playbook behaviour the simplest method is to fork
-this repository and add your own. However, for some one-off tasks you can also
-override the default and calculated settings with special values. To do this, provide
-command-line overrides to
+To customise playbook behaviour, you can fork this repository and edit it as
+needed. However, for one-off tasks you can also override the default and
+calculated settings with special values. To do this, provide command-line overrides to
 [ansible-playbook](https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html).
 
 The most likely use-case for this is to download a custom product distribution
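As a concrete illustration of the `/etc/atl` mechanism described above (the values here are made up; the real file is written by the CloudFormation template), the `RESOURCE_VAR=<resource>` lines can be sourced directly by a POSIX shell:

```shell
# Write an example /etc/atl-style file (illustrative values only; the real
# /etc/atl is produced by the CloudFormation template at node creation).
cat > /tmp/atl.example <<'EOF'
ATL_DB_HOST=mydb.example.internal
ATL_DB_PORT=5432
EOF

# 'set -a' exports every variable assigned while the file is sourced, so the
# values become environment variables visible to ansible-playbook.
set -a
. /tmp/atl.example
set +a

echo "$ATL_DB_HOST:$ATL_DB_PORT"
```

This is the same pattern `bin/ansible-with-atl-env` relies on: once sourced, the `lookup('env', ...)` calls in `group_vars/aws_node_local.yml` can pick the values up at runtime.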
@@ -47,11 +47,11 @@ directly, the command for this would look like the following:
     -i inv/aws_node_local aws_jira_dc_node.yml
 
 You can also do this on a CloudFormation template where the stack details are in `/etc/atl`.
-On such templates, the variable `ATL_ANSIBLE_EXTRA_PARAMS` is added to the
+On such templates, the variable `ATL_DEPLOYMENT_REPOSITORY_CUSTOM_PARAMS` is added to the
 `ansible-playbook` parameters in `bin/ansible-with-atl-env`. In this case you
 need to set it to:
 
-    ATL_ANSIBLE_EXTRA_PARAMS="-e atl_product_download_url=http://s3.amazon.com/atlassian/jira-9.0.0-PRE-TEST.tar.gz -e atl_use_system_jdk=true -e atl_download_format=tarball"
+    ATL_DEPLOYMENT_REPOSITORY_CUSTOM_PARAMS="-e atl_product_download_url=http://s3.amazon.com/atlassian/jira-9.0.0-PRE-TEST.tar.gz -e atl_use_system_jdk=true -e atl_download_format=tarball"
 
 To set the same parameters in the AWS Quick Starts for
 [Jira Data Center](https://aws.amazon.com/quickstart/architecture/jira/),
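A hypothetical sketch of how a wrapper like `bin/ansible-with-atl-env` could splice such a variable into the `ansible-playbook` invocation (the real script may differ; the variable is deliberately left unquoted so each `-e key=value` pair becomes a separate argument):

```shell
# Illustrative override values; any '-e key=value' pairs can go here.
ATL_DEPLOYMENT_REPOSITORY_CUSTOM_PARAMS="-e atl_use_system_jdk=true -e atl_download_format=tarball"

# Unquoted expansion word-splits the variable into individual arguments.
# 'echo' stands in for the real invocation in this sketch.
echo ansible-playbook $ATL_DEPLOYMENT_REPOSITORY_CUSTOM_PARAMS \
    -i inv/aws_node_local aws_jira_dc_node.yml
```

Because command-line `-e` extra-vars have the highest precedence in Ansible, these values win over anything set in role defaults or group vars.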
@@ -61,53 +61,65 @@ them in the `Custom command-line parameters for Ansible` field:
 
     -e atl_product_download_url=http://s3.amazon.com/atlassian/jira-9.0.0-PRE-TEST.tar.gz -e atl_use_system_jdk=true -e atl_download_format=tarball
 
-## Reporting issues
+### Other customizable parameters
 
-If you find any bugs in this repository, or have feature requests or use cases
-for us, please raise them in our [public Jira project](https://jira.atlassian.com/projects/SCALE/summary).
+For more deployment customization options, consult the following files for parameters you can
+override:
 
-## Development
+- [`/roles/product_install/defaults/main.yml`](roles/product_install/defaults/main.yml)
+- [`/group_vars/aws_node_local.yml`](group_vars/aws_node_local.yml)
 
-### Development philosophy
+More customizable parameters are defined in specific roles -- specifically, in the
+role's `defaults/main.yml` file. Most of these parameters use the `atl_` prefix. You can
+use the following [Bitbucket code search query](https://confluence.atlassian.com/bitbucket/search-873876782.html)
+to find them:
 
-The suite is intended to consist of a number of small, composable roles that can
-be combined together into playbooks. Wherever possible the roles should be
-platform-agnostic as possible, with platform-specific functionality broken out
-into more specific roles.
+    repo:dc-deployments-automation path:*/defaults/main.yml atl
 
-Where possible the roles are also product-agnostic (e.g. downloads), with more
-specific functionality added in later product-specific roles.
+# Development and testing
 
-Roles should be reasonably self-contained, with sensible defaults configured in
-`<role>/defaults/main.yml` and overridden by the playbook at runtime. Roles may
-implicitly depend on variables being defined elsewhere where they cannot define
-them natively (e.g. the `jira_config` role depends on the `atl_cluster_node_id`
-var being defined; on AWS this is provided by the `aws_common` role, which
-should be run first).
-
-### Development and testing
-
 See [Development](DEVELOPMENT.md) for details on setting up a development
 environment and running tests.
 
-## Ansible layout
+# Roles philosophy
+
+This suite is intended to consist of many small, composable roles that can
+be combined together into playbooks. Wherever possible, roles should be product-agnostic
+(e.g. downloads) and platform-agnostic. Functions that are product-specific or
+platform-specific are split off into separate roles.
+
+Roles should be reasonably self-contained, with sensible defaults configured in
+`/roles/<role>/defaults/main.yml`. Like all playbook parameters, you can override
+them at runtime.
+
+Some roles implicitly depend on other variables being defined elsewhere.
+For example, the `jira_config` role depends on the `atl_cluster_node_id`
+var being defined; on AWS this is provided by the `aws_common` role, which
+should be run first.
+
+# Ansible layout
 
 * Helper scripts are in `bin/`. In particular the `bin/ansible-with-atl-env`
-  wrapper is of use during AWS node initialisation. See _Usage_ above for more
-  information.
+  wrapper is of use during AWS node initialisation. Refer to the [Usage](#markdown-header-usage) section for
+  more information.
 * Inventory files are under `inv/`. For AWS `cfn-init` the inventory
   `inv/aws_node_local` inventory is probably what you want.
-    * Note that this expects the environment to be setup with infrastructure
-      information; see _Usage_ above.
+    * Note that this expects the environment to be setup with infrastructure information.
+      Refer to the [Usage](#markdown-header-usage) section for more information.
 * Global group vars loaded automatically from `group_vars/<group>.yml`. In
   particular note `group_vars/aws_node_local.yml` which loads infrastructure
   information from the environment.
-* Roles are under `roles/`
-* Platform specific roles start with `<platform-shortname>_...`,
-  e.g. `roles/aws_common/`.
+* Roles are defined in `roles/`
+* Platform specific roles start with `<platform-shortname>_...`, e.g. `roles/aws_common/`.
 * Similarly, product-specific roles should start with `<product>_...`.
 
-## License
+# Reporting issues
+
+If you find any bugs in this repository, or have feature requests or use cases
+for us, please raise them in our [public Jira project](https://jira.atlassian.com/projects/SCALE/summary).
+
+# License
 
 Copyright © 2019 Atlassian Corporation Pty Ltd.
 Licensed under the Apache License, Version 2.0.
@@ -31,5 +31,6 @@
     - role: product_common
     - role: product_install
    - role: database_init
+    - role: restore_backups
    - role: bitbucket_config
    - role: product_startup
@@ -18,6 +18,7 @@
     - role: product_common
     - role: product_install
     - role: database_init
+    - role: restore_backups
     - role: confluence_common
     - role: confluence_config
     - role: product_startup
@@ -14,7 +14,7 @@
       - "EnvironmentFile=/etc/atl.synchrony"
       - "WorkingDirectory={{ atl_product_installation_current }}/logs/"
     atl_startup_exec_options: []
-    atl_startup_exec_path: "{{ atl_installation_base }}/bin/start-synchrony"
+    atl_startup_exec_path: "{{ atl_product_installation_current }}/bin/start-synchrony"
     atl_systemd_service_name: "synchrony.service"
 
   roles:
@@ -20,7 +20,8 @@ fi
 export PATH=/usr/local/bin:$PATH
 
 pip3 install pipenv
-pipenv sync
+echo "Installing ansible and dependencies..."
+PIPENV_NOSPIN=1 PIPENV_HIDE_EMOJIS=1 pipenv sync 2>&1 | iconv -c -f utf-8 -t ascii
 
 if [[ $1 == "--dev" ]]; then
     pipenv sync --dev
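The `iconv` filter added above exists because pipenv's progress output can contain non-ASCII spinner/check-mark characters that garble CI logs; `iconv -c` silently drops anything that cannot be represented in the target charset. A small self-contained demonstration (the check-mark string is an invented example):

```shell
# U+2714 (heavy check mark) is written here as its UTF-8 octal bytes.
# 'iconv -c -f utf-8 -t ascii' drops the character instead of failing.
printf 'Installing packages \342\234\224 done\n' | iconv -c -f utf-8 -t ascii
```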
@@ -1,4 +1,8 @@
 ---
+# This file was generated; to regenerate, `cd` to `pipeline_generator`
+# and run:
+#
+#     make > ../bitbucket-pipelines.yml
 
 image: debian:buster
 options:
@@ -14,7 +18,7 @@ pipelines:
     - step:
         name: Pre Parallelization stage
         script:
-          - echo "Running tests in 32 batches"
+          - echo "Running tests in 33 batches"
     - step:
         name: Check if number of batches match actual number of scenarios
         script:
@@ -283,4 +287,10 @@ pipelines:
           - apt-get update && ./bin/install-ansible --dev
           - ./bin/run-tests-in-batches --batch 32
+    - step:
+        name: Molecule Test Batch - 33
+        services:
+          - docker
+        script:
+          - apt-get update && ./bin/install-ansible --dev
+          - ./bin/run-tests-in-batches --batch 33
@@ -66,6 +66,7 @@ atl_aws_enable_cloudwatch_logs: "{{ lookup('env', 'ATL_AWS_ENABLE_CLOUDWATCH_LOG
 atl_db_engine: "{{ lookup('env', 'ATL_DB_ENGINE') }}"
 atl_db_host: "{{ lookup('env', 'ATL_DB_HOST') }}"
 atl_db_port: "{{ lookup('env', 'ATL_DB_PORT') or '5432' }}"
+atl_db_root_db_name: "{{ lookup('env', 'ATL_DB_ROOT_DB_NAME') or 'postgres' }}"
 atl_db_root_user: "{{ lookup('env', 'ATL_DB_ROOT_USER') or 'postgres' }}"
 atl_db_root_password: "{{ lookup('env', 'ATL_DB_ROOT_PASSWORD') }}"
 atl_db_driver: "{{ lookup('env', 'ATL_DB_DRIVER') or 'org.postgresql.Driver' }}"
@@ -1,4 +1,8 @@
 ---
+# This file was generated; to regenerate, `cd` to `pipeline_generator`
+# and run:
+#
+#     make > ../bitbucket-pipelines.yml
 
 image: debian:buster
 options:
@@ -28,9 +28,9 @@ atl_catalina_opts_extra: >-
   -XX:+PrintGCDetails
   -XX:+PrintTenuringDistribution
   -Dsynchrony.proxy.enabled=false
-  -Dsynchrony.service.url={{ atl_synchrony_service_url }}
   -Dconfluence.cluster.node.name={{ atl_local_ipv4 }}
   -Dconfluence.cluster.hazelcast.max.no.heartbeat.seconds=60
+  {% if atl_synchrony_service_url|string|length %}-Dsynchrony.service.url={{ atl_synchrony_service_url }}{% endif %}
 
 atl_tomcat_port: "8080"
 atl_tomcat_mgmt_port: "8005"
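The Jinja guard added above only emits the system property when the variable renders to a non-empty string, so unclustered or Synchrony-less deployments don't get a dangling `-Dsynchrony.service.url=`. A rough shell analogue of the same truthiness test (function name and URL are illustrative):

```shell
# Mirrors {% if atl_synchrony_service_url|string|length %} ... {% endif %}:
# emit the flag only for a non-empty value.
emit_synchrony_flag() {
    if [ -n "$1" ]; then
        echo "-Dsynchrony.service.url=$1"
    fi
}

emit_synchrony_flag "http://synchrony.example:8091/v1"
emit_synchrony_flag ""   # empty value: nothing is emitted
```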
@@ -14,6 +14,9 @@
     atl_cluster_node_id: 'FAKEID'
     atl_autologin_cookie_age: "COOKIEAGE"
     atl_local_ipv4: "1.1.1.1"
+    atl_tomcat_scheme: "http"
+    atl_proxy_name: "localhost"
+    atl_proxy_port: "80"
 
   roles:
     - role: linux_common
@@ -16,6 +16,16 @@ def test_conf_init_file(host):
     assert f.exists
     assert f.contains('confluence.home = /var/atlassian/application-data/confluence')
 
+def test_conf_attachment_symlinks(host):
+    assert host.file('/var/atlassian/application-data/confluence').is_directory
+    assert host.file('/media/atl/confluence/shared-home/attachments/').is_directory
+
+    f = host.file('/var/atlassian/application-data/confluence/attachments')
+    assert f.is_symlink and f.linked_to == '/media/atl/confluence/shared-home/attachments'
+
+    f = host.file('/var/atlassian/application-data/confluence/shared-home')
+    assert f.is_symlink and f.linked_to == '/media/atl/confluence/shared-home'
+
 def test_setenv_file(host):
     f = host.file('/opt/atlassian/confluence/current/bin/setenv.sh')
     assert f.exists
@@ -38,8 +48,8 @@ def test_server_file(host):
     assert f.contains('acceptCount="10"')
     assert f.contains('secure="false"')
     assert f.contains('scheme="http"')
-    assert not f.contains('proxyName=')
-    assert not f.contains('proxyPort=')
+    assert f.contains('proxyName=')
+    assert f.contains('proxyPort=')
 
 def test_install_permissions(host):
     assert host.file('/opt/atlassian/confluence/current/conf/server.xml').user == 'root'
@@ -10,9 +10,27 @@
   with_items:
     - "{{ atl_product_home }}"
     - "{{ atl_product_home_shared }}"
+    - "{{ atl_product_home_shared }}/attachments"
     - "{{ atl_product_shared_plugins }}"
   changed_when: false  # For Molecule idempotence check
 
+# Create symlink to force single (unclustered) Confluence to store
+# shared-data and attachments in the shared drive.
+- name: Symlink local attachments to shared storage
+  file:
+    src: "{{ item.0 }}"
+    dest: "{{ item.1 }}"
+    force: false
+    state: link
+    mode: 0750
+    owner: "{{ atl_product_user }}"
+    group: "{{ atl_product_user }}"
+  vars:
+    - links:
+        - ["{{ atl_product_home_shared }}/", "{{ atl_product_home }}/shared-home"]
+        - ["{{ atl_product_home_shared }}/attachments/", "{{ atl_product_home }}/attachments"]
+  with_nested:
+    - "{{ links }}"
+
 - name: Create Tomcat server config
   template:
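A filesystem-level sketch of the layout the symlink task above produces, using `/tmp` paths instead of the real Confluence home directories so it can be run anywhere:

```shell
# Stand-ins for the shared drive and the node-local home (paths invented).
shared=/tmp/demo/shared-home
home=/tmp/demo/local-home
mkdir -p "$shared/attachments" "$home"

# Local home entries point at the shared drive, as in the Ansible task.
ln -s "$shared" "$home/shared-home"
ln -s "$shared/attachments" "$home/attachments"

# A file written under the local home now lands on the shared drive.
echo hello > "$home/attachments/a.txt"
cat "$shared/attachments/a.txt"
```

This is what lets a single (unclustered) Confluence transparently keep attachments and shared data on the shared mount.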
@@ -52,7 +70,6 @@
     owner: "{{ atl_product_user }}"
     group: "{{ atl_product_user }}"
 
-
 - name: Limit permissions on the installation directory
   file:
     path: "{{ atl_product_installation_versioned }}"
@@ -79,3 +96,20 @@
     - "{{ atl_product_installation_versioned }}/temp"
     - "{{ atl_product_installation_versioned }}/work"
   changed_when: false  # For Molecule idempotence check
+
+- name: Assert baseurl to same as atl_proxy_name
+  postgresql_query:
+    login_host: "{{ atl_db_host }}"
+    login_user: "{{ atl_jdbc_user }}"
+    login_password: "{{ atl_jdbc_password }}"
+    db: "{{ atl_jdbc_db_name }}"
+    query: >
+      update bandana set bandanavalue=regexp_replace(bandanavalue, %s, %s)
+      where bandanacontext = '_GLOBAL' and bandanakey = 'atlassian.confluence.settings';
+    positional_args:
+      - "<baseUrl>.*</baseUrl>"
+      - "<baseUrl>{{ atl_tomcat_scheme }}://{{ atl_proxy_name }}</baseUrl>"
+  when:
+    - atl_proxy_name is defined
+    - atl_tomcat_scheme is defined
+  ignore_errors: yes  # For Molecule as it has no db test framework included
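The task above rewrites the `<baseUrl>` element inside the settings blob Confluence stores in its `bandana` table, using PostgreSQL's `regexp_replace`. A shell sketch of the same substitution (the XML snippet and hostname are invented examples):

```shell
# Stand-in for the bandanavalue column contents (illustrative only).
blob='<settings><baseUrl>http://10.0.0.5:8080</baseUrl></settings>'
new_base='https://confluence.example.com'

# Same pattern/replacement pair the task passes as positional_args.
echo "$blob" | sed -E "s#<baseUrl>.*</baseUrl>#<baseUrl>${new_base}</baseUrl>#"
```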
@@ -17,7 +17,7 @@
         <param-value>seraph.confluence</param-value>
     </init-param>
 
-{% if atl_autologin_cookie_age is defined and atl_autologin_cookie_age|length %}
+{% if atl_autologin_cookie_age is defined and atl_autologin_cookie_age is not none %}
     <init-param>
         <param-name>autologin.cookie.age</param-name>
         <param-value>{{ atl_autologin_cookie_age }}</param-value>
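The behavioural difference in the guard change above: the old `|length` test suppressed the `init-param` block for an empty string, while the new `is not none` test only suppresses it when the value is genuinely absent. A rough shell analogue (an unset variable plays the role of Jinja's `none`):

```shell
# v is set but empty -- the interesting case.
v=""

old_guard() { [ -n "$v" ]; }           # like: atl_autologin_cookie_age|length
new_guard() { [ "${v+set}" = set ]; }  # like: atl_autologin_cookie_age is not none

old_guard && echo "old: emit" || echo "old: skip"
new_guard && echo "new: emit" || echo "new: skip"
```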
@@ -1,6 +1,7 @@
 ---
 
 atl_db_port: '5432'
+atl_db_root_db_name: 'postgres'
 atl_db_root_user: 'postgres'
 atl_jdbc_encoding: 'UTF-8'
 atl_jdbc_collation: 'C'
@@ -1,16 +1,25 @@
 ---
 
-- block:
+- name: Create application DB user
+  postgresql_user:
+    login_host: "{{ atl_db_host }}"
+    login_user: "{{ atl_db_root_user }}"
+    login_password: "{{ atl_db_root_password }}"
+    port: "{{ atl_db_port }}"
+    name: "{{ atl_jdbc_user }}"
+    password: "{{ atl_jdbc_password }}"
+    expires: 'infinity'
 
-    - name: Create application DB user
-      postgresql_user:
-        login_host: "{{ atl_db_host }}"
-        login_user: "{{ atl_db_root_user }}"
-        login_password: "{{ atl_db_root_password }}"
-        port: "{{ atl_db_port }}"
-        name: "{{ atl_jdbc_user }}"
-        password: "{{ atl_jdbc_password }}"
-        expires: 'infinity'
+- name: Collect dbcluster db_names
+  postgresql_query:
+    login_host: "{{ atl_db_host }}"
+    login_user: "{{ atl_db_root_user }}"
+    login_password: "{{ atl_db_root_password }}"
+    db: "{{ atl_db_root_db_name }}"
+    query: "SELECT datname FROM pg_database;"
+  register: dbcluster_db_names
+
+- block:
 
     - name: Update root privs for new user
       postgresql_privs:
@@ -22,6 +31,7 @@
         objs: "{{ atl_jdbc_user }}"
         type: group
 
+    # RDS does not allow changing the collation of an existing DB; collation can only be set when the DB is created. If the DB already exists, the "Create new application database" task must be skipped: idempotence cannot be relied upon, as we can't be certain of the existing DB's collation.
     - name: Create new application database
      postgresql_db:
        login_host: "{{ atl_db_host }}"
@@ -35,6 +45,31 @@
         lc_ctype: "{{ atl_jdbc_ctype }}"
         template: "{{ atl_jdbc_template }}"
       register: db_created
+      when: "atl_jdbc_db_name not in (dbcluster_db_names.query_result | map(attribute='datname'))"
+
       tags:
         - new_only
+
+    - name: Assert ownership of public schema
+      postgresql_query:
+        login_host: "{{ atl_db_host }}"
+        login_user: "{{ atl_db_root_user }}"
+        login_password: "{{ atl_db_root_password }}"
+        db: "{{ atl_jdbc_db_name }}"
+        query: "ALTER SCHEMA public OWNER to {{ atl_db_root_user }};"
+
+    - name: Grant privs to root user on public schema
+      postgresql_query:
+        login_host: "{{ atl_db_host }}"
+        login_user: "{{ atl_db_root_user }}"
+        login_password: "{{ atl_db_root_password }}"
+        db: "{{ atl_jdbc_db_name }}"
+        query: "GRANT ALL ON SCHEMA public TO {{ atl_db_root_user }};"
+
+    - name: Grant privs to application user on public schema
+      postgresql_query:
+        login_host: "{{ atl_db_host }}"
+        login_user: "{{ atl_db_root_user }}"
+        login_password: "{{ atl_db_root_password }}"
+        db: "{{ atl_jdbc_db_name }}"
+        query: "GRANT ALL ON SCHEMA public TO {{ atl_jdbc_user }};"
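The `when:` guard added to "Create new application database" above skips creation whenever the target name already appears in the cluster's `pg_database` listing collected earlier. A shell sketch of that membership check (the database names below are illustrative):

```shell
# Stand-in for dbcluster_db_names.query_result mapped to datname values.
existing_dbs='postgres
confluence'
atl_jdbc_db_name='jira'

# grep -qx matches a whole line exactly, like the Jinja 'not in' test.
if echo "$existing_dbs" | grep -qx "$atl_jdbc_db_name"; then
    echo "skip create"
else
    echo "create database $atl_jdbc_db_name"
fi
```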
@@ -6,3 +6,4 @@
 - shadow-utils
 - libxml2
 - git-{{ git_version }}
+- dejavu-sans-fonts
@@ -6,3 +6,4 @@
 - python3-psycopg2
 - libxml2-utils
 - git
+- fontconfig
@@ -1,11 +0,0 @@
----
-- name: Converge
-  hosts: all
-  vars:
-    atl_backup_manifest_url: 's3://dcd-slingshot-test/dummy_manifest.json'
-    atl_product_user: 'jira'
-    atl_backup_home_restore_canary_path: '/tmp/canary.tmp'
-
-  tasks:
-    - name: Install distro-specific restore support packages
-      include_tasks: "../../tasks/{{ ansible_distribution|lower }}.yml"
@@ -1,20 +0,0 @@
-import os
-import pytest
-
-import testinfra.utils.ansible_runner
-
-testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
-    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')
-
-
-@pytest.mark.parametrize('exe', [
-    '/usr/bin/pg_dump',
-    '/usr/bin/pg_restore',
-    '/usr/bin/psql'
-])
-def test_postgresql_amazon_linux_extras_exes(host, exe):
-    assert host.file(exe).exists
-
-def test_postgresql_version(host):
-    pg_dump_version_output = host.check_output('pg_dump --version')
-    assert '(PostgreSQL) 9.6' in pg_dump_version_output
@@ -0,0 +1,74 @@
+---
+- name: Converge
+  hosts: all
+  vars:
+    atl_backup_home_dest: "{{ test_archive }}"
+    atl_backup_id: 'test-backup'
+    atl_backup_manifest_url: 'fake_manifest'
+    atl_backup_home_is_server: 'true'
+
+    atl_product_home_shared: '/media/atl/confluence/shared-home'
+    atl_backup_home_restore_canary_path: "{{ atl_product_home_shared }}/canary.tmp"
+    atl_product_edition: 'confluence'
+    atl_product_user: 'confluence'
+    atl_product_user_uid: '2001'
+    atl_product_version_cache: "{{ atl_product_home_shared }}/{{ atl_product_edition }}.version"
+
+    test_archive: '/tmp/hello.tar.gz'
+    test_archive_file: 'hello.txt'
+    test_archive_source: '/tmp/hello'
+
+    test_pre_step_prefix: '[PRE-TEST]'
+    test_product_version_file: "/tmp/{{ atl_product_edition }}.version"
+
+  pre_tasks:
+    - name: "{{ test_pre_step_prefix }} Install tar and useradd/groupadd binaries"
+      package:
+        state: present
+        name:
+          - tar
+          - shadow-utils
+
+    - name: "{{ test_pre_step_prefix }} Create application group"
+      group:
+        name: "{{ atl_product_user }}"
+        gid: "{{ atl_product_user_uid }}"
+
+    - name: "{{ test_pre_step_prefix }} Create application user"
+      user:
+        name: "{{ atl_product_user }}"
+        uid: "{{ atl_product_user_uid }}"
+        group: "{{ atl_product_user }}"
+
+    - name: "{{ test_pre_step_prefix }} Create a Conf server home directory structure"
+      file:
+        path: "{{ item }}"
+        state: directory
+        mode: 0755
+      with_items:
+        - "{{ test_archive_source }}"
+        - "{{ test_archive_source }}/attachments"
+        - "{{ test_archive_source }}/shared-home"
+
+    - name: "{{ test_pre_step_prefix }} Create files"
+      copy:
+        dest: "{{ item }}"
+        content: "content"
+      with_items:
+        - "{{ test_archive_source }}/unwanted.txt"
+        - "{{ test_archive_source }}/attachments/image.jpg"
+        - "{{ test_archive_source }}/shared-home/shared-content.txt"
+
+    - name: "{{ test_pre_step_prefix }} Archive the shared home"
+      archive:
+        path:
+          - "{{ test_archive_source }}/*"
+        dest: "{{ test_archive }}"
+        owner: "{{ atl_product_user }}"
+
+  tasks:
+    - name: Install distro-specific restore support packages
+      include_tasks: "../../tasks/{{ ansible_distribution|lower }}.yml"
+
+    - name: Restore shared home
+      include_tasks: "../../tasks/home_restore.yml"
@@ -0,0 +1,15 @@
+import os
+import pytest
+
+import testinfra.utils.ansible_runner
+
+testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
+    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')
+
+def test_conf_server_converted(host):
+    assert host.file('/media/atl/confluence/shared-home').is_directory
+    assert host.file('/media/atl/confluence/shared-home/shared-content.txt').is_file
+    assert host.file('/media/atl/confluence/shared-home/attachments').is_directory
+    assert host.file('/media/atl/confluence/shared-home/attachments/image.jpg').is_file
+
+    assert not host.file('/media/atl/confluence/shared-home/unwanted.txt').is_file
@@ -0,0 +1,14 @@
# Molecule managed

{% if item.registry is defined %}
FROM {{ item.registry.url }}/{{ item.image }}
{% else %}
FROM {{ item.image }}
{% endif %}

RUN if [ $(command -v apt-get) ]; then apt-get update && apt-get install -y python sudo bash ca-certificates && apt-get clean; \
    elif [ $(command -v dnf) ]; then dnf makecache && dnf --assumeyes install python sudo python-devel python*-dnf bash && dnf clean all; \
    elif [ $(command -v yum) ]; then yum makecache fast && yum install -y python sudo yum-plugin-ovl bash && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
    elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python sudo bash python-xml && zypper clean -a; \
    elif [ $(command -v apk) ]; then apk update && apk add --no-cache python sudo bash ca-certificates; \
    elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python sudo bash ca-certificates && xbps-remove -O; fi
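The Dockerfile template above selects a package manager by probing for each tool with `command -v`, which prints the path of the first matching executable and returns non-zero when none exists. A minimal standalone sketch of that probe-and-dispatch pattern (the function name is illustrative, not part of the repository):

```shell
#!/bin/sh
# Sketch of the probe-and-dispatch pattern from the Dockerfile above.
# `command -v NAME` emits the resolved path (or nothing), so the
# non-empty-string test `[ "$(...)" ]` selects the first available tool.
detect_pkg_manager() {
    if [ "$(command -v apt-get)" ]; then echo apt-get
    elif [ "$(command -v dnf)" ]; then echo dnf
    elif [ "$(command -v yum)" ]; then echo yum
    elif [ "$(command -v zypper)" ]; then echo zypper
    elif [ "$(command -v apk)" ]; then echo apk
    else echo unknown
    fi
}

detect_pkg_manager
```

Because `command -v` is specified by POSIX, this works in any of the base images the template targets, unlike probing distribution release files.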
@@ -0,0 +1,30 @@
---
dependency:
  name: galaxy
driver:
  name: docker
lint:
  name: yamllint
platforms:
  - name: amazon_linux2
    image: amazonlinux:2
    groups:
      - aws_node_local
    ulimits:
      - nofile:262144:262144
provisioner:
  name: ansible
  options:
    skip-tags: runtime_pkg
  lint:
    name: ansible-lint
    options:
      x: ["701"]
  inventory:
    links:
      group_vars: ../../../../group_vars/
verifier:
  name: testinfra
  lint:
    name: flake8
    enabled: false
@@ -0,0 +1,76 @@
---
- name: Converge
  hosts: all
  vars:
    atl_backup_home_dest: "{{ test_archive }}"
    atl_backup_home_restore_canary_path: '/tmp/canary.tmp'
    atl_backup_id: 'test-backup'
    atl_backup_manifest_url: 'fake_manifest'
    atl_backup_home_is_server: 'false'

    atl_product_edition: 'jira-software'
    atl_product_home_shared: '/media/atl/jira/shared'
    atl_product_user: 'jira'
    atl_product_user_uid: '2001'
    atl_product_version_cache: "{{ atl_product_home_shared }}/{{ atl_product_edition }}.version"

    test_archive: '/tmp/hello.tar.gz'
    test_archive_file: 'hello.txt'
    test_archive_source: '/tmp/hello'
    test_pre_step_prefix: '[PRE-TEST]'
    test_product_version_file: "/tmp/{{ atl_product_edition }}.version"

  pre_tasks:
    - name: "{{ test_pre_step_prefix }} Install tar"
      package:
        state: present
        name: tar

    - name: "{{ test_pre_step_prefix }} Install useradd and groupadd binaries"
      package:
        state: present
        name: shadow-utils

    - name: "{{ test_pre_step_prefix }} Create application group"
      group:
        name: "{{ atl_product_user }}"
        gid: "{{ atl_product_user_uid }}"

    - name: "{{ test_pre_step_prefix }} Create application user"
      user:
        name: "{{ atl_product_user }}"
        uid: "{{ atl_product_user_uid }}"
        group: "{{ atl_product_user }}"

    - block:
        - name: "{{ test_pre_step_prefix }} Create a directory for the shared home archive"
          file:
            path: "{{ test_archive_source }}"
            state: directory
            mode: 0755

        - name: "{{ test_pre_step_prefix }} Create a file in the shared home"
          lineinfile:
            create: yes
            line: 'Hello, world!'
            path: "{{ test_archive_source }}/{{ test_archive_file }}"
            mode: 0640

        - name: "{{ test_pre_step_prefix }} Create the version file in the shared home"
          lineinfile:
            create: yes
            line: '8.5'
            path: "{{ test_product_version_file }}"
            mode: 0640

        - name: "{{ test_pre_step_prefix }} Archive the shared home"
          archive:
            path:
              - "{{ test_archive_source }}"
              - "{{ test_product_version_file }}"
            dest: "{{ test_archive }}"
            owner: "{{ atl_product_user }}"

  tasks:
    - name: Install distro-specific restore support packages
      include_tasks: "../../tasks/{{ ansible_distribution|lower }}.yml"

    - name: Restore shared home
      include_tasks: "../../tasks/home_restore.yml"
@@ -0,0 +1,39 @@
import os
import pytest

import testinfra.utils.ansible_runner

testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')


@pytest.mark.parametrize('exe', [
    '/usr/bin/pg_dump',
    '/usr/bin/pg_restore',
    '/usr/bin/psql'
])
def test_postgresql_amazon_linux_extras_exes(host, exe):
    assert host.file(exe).exists


def test_postgresql_version(host):
    pg_dump_version_output = host.check_output('pg_dump --version')
    assert '(PostgreSQL) 9.6' in pg_dump_version_output


@pytest.mark.parametrize('file', [
    '/media/atl/jira/shared',
    '/media/atl/jira/shared/hello',
    '/media/atl/jira/shared/hello/hello.txt'
])
def test_shared_home_owner(host, file):
    assert host.file(file).exists
    assert host.file(file).user == 'jira'
    assert host.file(file).group == 'jira'


def test_file_modes(host):
    assert host.file('/media/atl/jira/shared/hello').mode == 0o755
    assert host.file('/media/atl/jira/shared/hello/hello.txt').mode == 0o640


def test_version_file_owned_by_root(host):
    assert host.file('/media/atl/jira/shared/jira-software.version').exists
    assert host.file('/media/atl/jira/shared/jira-software.version').user == 'root'
    assert host.file('/media/atl/jira/shared/jira-software.version').group == 'root'
roles/restore_backups/tasks/home_restore.yml (new file, 75 lines)
@@ -0,0 +1,75 @@
---
- name: Check for the restore canary file
  stat:
    path: "{{ atl_backup_home_restore_canary_path }}"
  register: restore_canary

- block:
    - name: Create shared home if necessary
      file:
        path: "{{ atl_product_home_shared }}"
        state: directory
        mode: 0750
        owner: "{{ atl_product_user }}"
        group: "{{ atl_product_user }}"

    # We also need to use `tar` here, as `unarchive` runs `tar` three times
    # doing idempotence checks, which we can skip.
    - name: Restore the shared-home backup
      command:
        argv:
          - "tar"
          - "--extract"
          - "--file"
          - "{{ atl_backup_home_dest }}"
          - "--directory"
          - "{{ atl_product_home_shared }}"
        warn: false
      when: atl_backup_home_is_server is not defined or not atl_backup_home_is_server|bool

    # Use a tar transform to convert the Confluence Server (unclustered)
    # layout to the shared-home layout. What occurs is:
    #
    # * --transform runs first, moving attachments into the shared home.
    # * --strip-components removes the top-level directory.
    #
    # NOTE: Also see the `confluence_config` role, which uses
    # symlinks to support server and clustered layouts
    # concurrently.
    - name: Restore a Confluence Server home to the shared-home layout
      command:
        argv:
          - "tar"
          - "--extract"
          - "--transform=s,^attachments,shared-home/attachments,"
          - "--strip-components=1"
          - "--file"
          - "{{ atl_backup_home_dest }}"
          - "--directory"
          - "{{ atl_product_home_shared }}"
        warn: false
      when: atl_backup_home_is_server is defined and atl_backup_home_is_server|bool

    - name: Set shared home owner and group to application user
      file:
        path: "{{ atl_product_home_shared }}"
        recurse: yes
        group: "{{ atl_product_user }}"
        owner: "{{ atl_product_user }}"
        state: directory

    - name: Set version file owner and group to root
      file:
        path: "{{ atl_product_version_cache }}"
        group: root
        owner: root
        state: file
      # Ignore the error in case there is no product version file in the backup
      ignore_errors: yes

    - name: Create restore-canary if necessary
      copy:
        dest: "{{ atl_backup_home_restore_canary_path }}"
        content: "{{ atl_backup_id }}"

  when: not restore_canary.stat.exists
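The `--transform` used in the Confluence Server restore above is a sed-style rename applied to member names on extraction, so attachments land under `shared-home/` without a second copy step. A minimal standalone sketch of that behaviour (GNU tar only; all paths and file names here are made up for the demo, not taken from a real backup):

```shell
#!/bin/sh
# Demo of extracting with a tar transform, as the restore task above does.
# Requires GNU tar; every path below is illustrative.
set -e
work=$(mktemp -d)

# Build a fake backup whose members start with "attachments/".
mkdir -p "$work/src/attachments"
echo hi > "$work/src/attachments/image.jpg"
tar --create --file "$work/backup.tar" --directory "$work/src" attachments

# On extraction, rewrite "attachments/..." to "shared-home/attachments/...".
mkdir -p "$work/out"
tar --extract \
    --transform='s,^attachments,shared-home/attachments,' \
    --file "$work/backup.tar" \
    --directory "$work/out"

ls "$work/out/shared-home/attachments"
```

The real task additionally combines this with `--strip-components=1` to drop the backup's top-level directory, which is why order of the two options matters in the role's comment.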
@@ -58,6 +58,7 @@
 atl_backup_id: "{{ atl_backup_manifest.name }}"
 atl_backup_db_dest: "{{ atl_installer_temp }}/{{ atl_backup_manifest.artifacts.db.location.location | basename }}"
 atl_backup_home_dest: "{{ atl_installer_temp }}/{{ atl_backup_manifest.artifacts.sharedHome.location.location | basename }}"
+atl_backup_home_is_server: "{{ atl_backup_manifest.artifacts.sharedHome.serverHome | default(false, true) | bool }}"

 # FIXME: Here we fetch the backups. However we may wish to stream
 # these directly from S3 to the target DB/FS to avoid requiring
@@ -84,6 +85,8 @@
   include_tasks: "{{ ansible_distribution|lower }}.yml"


+# Restores the application database. If a var with name `atl_force_db_restore` is set to true, the database will be restored even when the database has not been created in the same playbook run.
+# This is done to accommodate running the restore role independent of the database_init role.
 - name: Restore application database
   postgresql_db:
     login_host: "{{ atl_db_host }}"
@@ -105,37 +108,11 @@
     failed_when:
       - result.rc != 0
       - '"COMMENT ON EXTENSION" not in result.msg'
-  when: db_created.changed and atl_backup_db_dest is defined
+  # default('false', true) filter makes the default filter return the specified default value for python False-y values (like an empty string)
+  when: atl_backup_db_dest is defined and (db_created.changed or (atl_force_db_restore | default('false', true) | bool))

-- name: Check for the restore canary file
-  stat:
-    path: "{{ atl_backup_home_restore_canary_path }}"
-  register: restore_canary
-
-- block:
-    - name: Create shared home if necessary
-      file:
-        path: "{{ atl_product_home_shared }}"
-        state: directory
-        mode: 0750
-        owner: "{{ atl_product_user }}"
-        group: "{{ atl_product_user }}"
-
-    - name: Restore the shared-home backup
-      unarchive:
-        src: "{{ atl_backup_home_dest }}"
-        dest: "{{ atl_product_home_shared }}"
-        owner: "{{ atl_product_user }}"
-        group: "{{ atl_product_user }}"
-
-    - name: Create restore-canary if necessary
-      copy:
-        dest: "{{ atl_backup_home_restore_canary_path }}"
-        content: "{{ atl_backup_id }}"
-
-  when: not restore_canary.stat.exists
+- name: Restore shared home
+  include_tasks: "home_restore.yml"
   when: atl_restore_required
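The extracted `home_restore.yml` tasks keep the canary-file guard shown in the removed lines: the restore only runs when the canary is absent, and writing the canary afterwards makes re-runs no-ops. A minimal sketch of that idempotence pattern (the function and variable names are illustrative, not from the role):

```shell
#!/bin/sh
# Sketch of the restore-canary guard: an expensive step runs once,
# then a marker file short-circuits later invocations.
set -e
canary=$(mktemp -u)   # a path that does not exist yet
restore_count=0

maybe_restore() {
    if [ ! -f "$canary" ]; then
        restore_count=$((restore_count + 1))  # stand-in for the real restore
        echo "test-backup" > "$canary"        # record which backup was applied
    fi
}

maybe_restore
maybe_restore
echo "restores run: $restore_count"
```

Storing the backup id as the canary's content, as the role does with `atl_backup_id`, also leaves an audit trail of which backup a node was restored from.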
@@ -3,7 +3,7 @@
 - name: Install the startup wrapper script
   copy:
     src: start-synchrony
-    dest: "{{ atl_installation_base }}/bin/start-synchrony"
+    dest: "{{ atl_product_installation_current }}/bin/start-synchrony"
     group: "{{ atl_product_user }}"
     mode: "0750"