Installation/Amazon EC2/Maintenance
Backups
Full backups and cloned instances
The Openbravo AMIs (Appliance and Ubuntu) use EBS-backed storage. This means you can create full machine snapshots or launch a cloned instance in a matter of minutes and with a few clicks.
To create a new AMI from a running instance, go to the AWS Management Console, click Sign in to the AWS Console and log in.
Next, click Instances in the left navigation panel. Right-click on your instance and select Create Image (EBS AMI). Choose a name and a description and click Create image. This process creates a new private AMI that you can use to start a new instance, which will be a clone of the original instance.
WARNING: It is highly recommended to stop PostgreSQL and Tomcat before taking a snapshot. Otherwise you might face database corruption issues.
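As a reference only, a minimal sketch of stopping the services before creating the image and starting them again afterwards; the exact service names and init commands are assumptions here and may differ between appliance versions:
su -
service tomcat stop        # service name is an assumption; adjust to your appliance
service postgresql stop    # service name is an assumption; adjust to your appliance
# ... create the image in the AWS Management Console ...
service postgresql start
service tomcat start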
Data backups and storing them in S3
Note: This section only applies when using the Openbravo Appliance, as it reuses the backup mechanism built into the web-based management console, which is only available in this appliance.
The following describes how to use the built-in backup feature of the Openbravo Appliance together with the Amazon S3 service.
Together, these two systems allow saving a data backup of the Openbravo installation to an S3 storage area, which can be separate from the region in which the appliance itself is running.
This keeps the stored backups safe both in case of problems with a running instance and in case of loss of data stored in Amazon snapshots, which is not very likely but does happen.
It also provides easy access to the backup files, so they can be downloaded to a local system for extra safety.
Configure backups to a local temporary location
The first step is to configure scheduled backups of the Openbravo installation into a local folder; the next chapter configures this folder to be synchronized with an S3 bucket.
Log in to the appliance via SSH and execute the following commands to change to the root user, assign a label to the filesystem and create a folder to store the backups in:
su -
e2label /dev/sda1 root
mkdir /backups
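As an optional check not part of the original procedure, running e2label with only the device argument prints the current label and should now return 'root':
e2label /dev/sda1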
Next, log in to the web-based administration console, which can be accessed at:
https://<your-ip>:8003
There, navigate to the following location to configure the backup settings:
Maintenance -> Backup & Restore -> Backup Settings
Configure the following settings to do a nightly backup into the local folder created above.
- Enable Backups: yes
- Enable backup schedule: yes
- Backup schedule: daily
- Daily options: 2 am (Note: this exact time is only an example)
- Number of backups to keep: 7 (Note: this exact value is only an example)
- Backup type: Mountable File System (Label)
- Disk label: root
- Connection path: backups
Save the changes; the backup settings should now reflect the values above. Then switch to the Backup and Restore tab.
Use the Backup now button to launch one backup job immediately.
The backup should start immediately and run for a while. When it finishes, a 'Completed successfully' message should appear; confirm it with OK.
Now check that the backup file was created in the '/backups' folder created earlier:
ls -lh /backups
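The output should look along these lines; the file name, size and timestamp shown here are only hypothetical examples and will differ on your system:
-rw-r--r-- 1 root root 1.1G Jan 25 02:15 backup-20120125-0200-UTC.tar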
More information on configuring these backups can be found on the Appliance Administration page.
Upload those backups to Amazon S3
This chapter explains how to configure an automated upload of the backups created previously to the Amazon S3 storage service.
The basic steps to configure this will be:
- Download & configure the s3cmd tool to upload data to Amazon S3
- Create a new bucket in S3 to hold the backup files
- Test the upload once manually
- Configure the system to upload the files every night
The upload will be done with a tool called s3cmd.
The first step is to download and install this tool inside the appliance.
Open this URL in your browser and copy the location of the direct link shown in the top area of the page.
After this, log in to the appliance via SSH and execute the following commands:
su -
cd /usr/local
wget -O s3cmd-1.0.1.tar.gz "<paste here the URL copied above, between quotes>"
tar xzvf s3cmd-1.0.1.tar.gz
/usr/local/s3cmd-1.0.1/s3cmd --version
After running the last command the output should be:
s3cmd version 1.0.1
which verifies that the download and install steps were successful.
The next step is to configure the tool with the credentials of your Amazon S3 account.
To do this, open this link and log in to your Amazon AWS account.
There you will find an Access Key which was created when the AWS account was initially set up. Note the values of the Access Key ID and the Secret Access Key.
Now run the following command to configure the s3cmd program:
/usr/local/s3cmd-1.0.1/s3cmd --configure
Enter the following details:
- Access Key: <the access key from your account>
- Secret Key: <the secret key from your account>
- Encryption Password: <enter, no value>
- Path to GPG program: <enter, no value>
- Use HTTPS protocol: yes
- Test access with the supplied credentials: Y
- Save settings? Y
The output of the test step should report that the test was successful.
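Note (an addition to the original procedure): s3cmd saves these credentials in its configuration file, by default ~/.s3cfg, so /root/.s3cfg when configured as root. Since it contains the secret key in plain text, it is worth confirming that it is readable only by root:
ls -l /root/.s3cfg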
The next step is to create an S3 bucket, which is the Amazon name for a folder inside your S3 account.
The name of this bucket must be globally unique among all users of S3 worldwide, so a name should be chosen that identifies the customer or its domain.
Example: example.org-backups-server1
To create the bucket with that example name use the following command:
/usr/local/s3cmd-1.0.1/s3cmd --bucket-location=eu-west-1 mb s3://example.org-backups-server1
The --bucket-location parameter here is used to specify in which AWS region the files should be stored. In this example we specify the eu-west-1 region located in Ireland/Europe.
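As an optional check not included in the original procedure, listing all buckets of the account should now show the newly created bucket:
/usr/local/s3cmd-1.0.1/s3cmd ls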
After this preparation we are now ready to test-upload the backup created earlier.
To be more precise, we will synchronize the local /backups directory with the S3 bucket. This means that new files will be uploaded to S3 and files which have been removed locally will also be deleted from S3.
Together with the configuration done in the previous chapter (keep 7 backups), this means only 7 backups are kept in S3 as well, which keeps the S3 storage from growing without limit and helps keep storage costs under control.
To run this synchronization manually, and to verify that files are only uploaded the first time, run the following command twice.
/usr/local/s3cmd-1.0.1/s3cmd sync --delete-removed /backups s3://example.org-backups-server1
/usr/local/s3cmd-1.0.1/s3cmd sync --delete-removed /backups s3://example.org-backups-server1
The first command will upload the previously created backup file to S3. The second run of the same command will not upload anything to S3, as no local files have changed.
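To preview what a synchronization would transfer or delete without actually changing anything, s3cmd also offers a dry-run mode (this extra step is not part of the original procedure):
/usr/local/s3cmd-1.0.1/s3cmd sync --dry-run --delete-removed /backups s3://example.org-backups-server1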
The last step is to configure this command to also run nightly. To do this, execute the following:
su -
crontab -e
This opens an editor in which one line must be created. By default the 'vi' editor will be opened.
Execute the following sequence of keystrokes to insert this line and save the changes:
Important note: In the line below, two values are only examples and need to be adjusted:
- 5 means that the job will run every morning at 5 am. This time needs to be chosen sufficiently after the backup job configured above (in that example 2 am was used), to ensure that the backup job completes before this upload is started.
- s3://example.org-backups-server1 is the upload bucket URL, which needs to be the same as the bucket created earlier.
i
0 5 * * * /usr/local/s3cmd-1.0.1/s3cmd sync --delete-removed /backups s3://example.org-backups-server1
<Escape>
:wq
<Enter>
After this, the editor closes again and the changes are saved.
To verify if this was done correctly use the following command:
crontab -l
The output should be:
0 5 * * * /usr/local/s3cmd-1.0.1/s3cmd sync --delete-removed /backups s3://example.org-backups-server1
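If you prefer not to use the interactive vi session, the same cron entry can also be appended non-interactively (a sketch using the same example values; adjust the time and bucket as described above):
su -
(crontab -l 2>/dev/null; echo "0 5 * * * /usr/local/s3cmd-1.0.1/s3cmd sync --delete-removed /backups s3://example.org-backups-server1") | crontab -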
This completes the configuration for a nightly upload of the backup files to Amazon S3.
The following chapters explain how to easily download the files to a local system using a browser, and how to restore the files from S3 to a new appliance to use them for recovery.
Retrieval of the files from S3
Download to a local system
To check on the uploaded files or download any of them to a local system, visit this URL to open the S3 tab of the AWS Management Console.
On the left you will find the bucket created previously.
Selecting it shows the 'backups' folder on the right. Inside that folder all the uploaded backup files can be found and easily downloaded by selecting a file entry and choosing Download from the context menu.
This makes it easy to download the backup files and keep a safe copy locally.
Retrieval into an appliance
This part assumes that the s3cmd tool is installed and configured as described above.
To list all backup files available in the S3 bucket the following command can be used:
/usr/local/s3cmd-1.0.1/s3cmd ls s3://example.org-backups-server1/backups/
This will show a list of files in this bucket and, for each file, a URL starting with s3://.
To download a file and place it again in the /backups folder, use the following commands, changing the example S3 URL to that of the file which should be downloaded:
su -
cd /backups
/usr/local/s3cmd-1.0.1/s3cmd get s3://example.org-backups-server1/backups/backup-20120125-1448-UTC.tar
After this, run a Scan for Available Backups in the Appliance Web-based Administration console to let the system find the downloaded file.
Alternatively, the administration console also allows a backup file to be uploaded directly using the browser.
Static IPs - Elastic IPs
Elastic IP addresses are static IP addresses that you can dynamically assign to the instance you choose. This is very useful for fast instance recovery. Example scenario:
- Your instance has a problem, so you launch a new instance from a backup.
- Your new instance will have a different IP address. With an Elastic IP you can assign your old instance's IP address to this new one, in a matter of seconds.
To allocate a new static IP, click on the Elastic IPs item in the left navigation menu, and then click Allocate new Address.
You can then associate this IP address with the instance you choose, by right-clicking on that IP address.
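For reference, the same steps can roughly be done from the command line with the AWS CLI (a sketch only; this assumes the AWS CLI is installed and configured, which this guide does not cover; the instance ID and IP address are placeholders, and instances running inside a VPC use an allocation ID instead of the public IP):
aws ec2 allocate-address
aws ec2 associate-address --instance-id i-xxxxxxxx --public-ip 203.0.113.10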
WARNING: using Elastic IPs is free, as long as the IPs are associated with an instance. Otherwise you'll be penalized with an hourly fee.