Module:Integration With Azure Blob Storage
This module provides an alternative attachment method to core attachments that stores attachments in the Azure Blob Storage service.
It is possible to add, update and remove attachments by handling them as objects stored in Azure Blob Storage.
The integration with Azure Storage is configured in the same way as any other attachment method: through the Attachment Configuration window. This configuration is done at Client level.
To configure the new attachment method, the Files stored in Azure Storage platform option must be selected as the attachment method and the Active checkbox must be checked.
When this option is selected, a new field is shown and must be filled in:
- Azure Storage Configuration: The Azure Storage configuration that defines the container where the attachments should be stored, together with the required connection settings.
To create an Azure Storage Configuration, a new record must be created in the Azure Storage Configuration window, which is accessible from the top menu.
In this window there are some common fields and others that depend on the chosen authentication method:
- Container: Azure Blob Storage stores data as objects within resources called containers, which are very similar to Amazon S3 buckets. Objects within a container can be written, read and deleted. This field sets the name of the container where the attachments will be stored. Container naming has some restrictions, which can be found here.
- Azure account name: The Azure account name, needed for endpoint generation and for authentication.
- Authentication Method: This sets the authentication method to be used; for now, two authentication methods are available:
- Shared Key Authentication: For this type of authentication, an Azure account access key must be provided. This key is used by several services, and should be kept safe.
- SAS Authentication: This type of authentication needs an Azure SAS token. This authentication method is more secure than Shared Key because SAS tokens can be created with permissions scoped to different parts of an Azure account and to different containers, both for reading and writing on those containers. SAS tokens can also be set to expire, providing another layer of security in case of need. This authentication method is highly recommended.
- Azure account access key: This field is shown when the Shared Key Authentication method has been selected. A valid Azure account shared access key must be provided in this field.
- Azure SAS Token: This field is shown when the SAS Authentication method has been selected. A valid SAS token must be provided in this field.
- Azure endpoint: Azure handles every request on a given endpoint through a REST API. This field must contain the domain URL of the endpoint where Azure Blob Storage is available. By default it is set to "https://%s.blob.core.windows.net", which should be enough for almost all configurations, but if a custom domain is needed it can be changed to that domain's URL.
- Active: Indicates whether the record is active. If unchecked, the configuration is not shown in the Attachment Configuration window.
- Verify configuration: After successfully saving the configuration, a process button is shown on the top bar of the Azure Storage Configuration window. This button verifies that the given configuration is correct and that it is possible to connect to Azure Blob Storage using it. If the configuration is correct, the validation usually completes very quickly; if not, it may take a while because the connection to Azure is retried.
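The container naming restrictions and the "%s" endpoint template mentioned above can be illustrated with a short sketch. This is plain Python with no Azure SDK involved, and the helper names (is_valid_container_name, endpoint_for) are illustrative, not part of the module:

```python
import re

# Default endpoint template from the configuration window; "%s" is
# replaced with the Azure account name.
DEFAULT_ENDPOINT = "https://%s.blob.core.windows.net"

def is_valid_container_name(name: str) -> bool:
    """Check a container name against Azure's published rules:
    3-63 characters; only lowercase letters, digits and hyphens;
    must start and end with a letter or digit; no consecutive
    hyphens."""
    if not 3 <= len(name) <= 63:
        return False
    if "--" in name:
        return False
    return re.fullmatch(r"[a-z0-9](?:[a-z0-9-]*[a-z0-9])?", name) is not None

def endpoint_for(account_name: str, template: str = DEFAULT_ENDPOINT) -> str:
    """Build the blob service endpoint URL for a given account name."""
    return template % account_name

print(is_valid_container_name("attachments"))   # True
print(is_valid_container_name("My_Container"))  # False: uppercase/underscore
print(endpoint_for("mycompany"))                # https://mycompany.blob.core.windows.net
```

Validating the container name locally in this way gives faster feedback than waiting for the Verify configuration process to fail against Azure.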
There is a new process that migrates attachments from core attachments to Azure Storage attachments.
A new field, Azure Migration Process Max. Time (minutes), has been added to set the maximum time, in minutes, that a migration process can take.
By default the maximum time is set to 1 hour. If set to 0 there is no maximum time, so the process keeps migrating until it finishes or is stopped manually or by a server timeout/error.
If a migration takes longer than the maximum time set, it stops at that limit and can be rescheduled to continue from where it left off. The same applies in case of a timeout or error in the migration process: no core attachments are lost, because they are not deleted until they have been uploaded to Azure Blob Storage.
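The time budget and the upload-before-delete safety property can be sketched as follows. This is a minimal illustration with in-memory dictionaries standing in for the core attachment store and the Azure container; the function and its signature are assumptions made for the sketch, not the module's actual implementation:

```python
import time

def migrate_attachments(core_store: dict, azure_store: dict,
                        max_minutes: float, clock=time.monotonic) -> bool:
    """Migrate attachments from core_store to azure_store.

    Each attachment is uploaded first and only then deleted from the
    core store, so a timeout or error never loses data. A value of 0
    for max_minutes means no time limit. Returns True when every
    attachment was migrated, False when the time budget ran out (the
    process can simply be re-run to continue where it left off).
    """
    deadline = None if max_minutes == 0 else clock() + max_minutes * 60
    for key in list(core_store):            # snapshot: dict shrinks as we go
        if deadline is not None and clock() >= deadline:
            return False                    # budget exhausted; rest stays in core
        azure_store[key] = core_store[key]  # upload first...
        del core_store[key]                 # ...delete only after success
    return True

# Deterministic demo: each clock() call advances 30 "seconds", so a
# 1-minute budget only lets one attachment through before the deadline.
ticks = iter(range(0, 10_000, 30))
core = {"doc1": b"a", "doc2": b"b", "doc3": b"c"}
azure = {}
first_run = migrate_attachments(core, azure, max_minutes=1,
                                clock=lambda: next(ticks))
second_run = migrate_attachments(core, azure, max_minutes=0)  # no limit
print(first_run, second_run, sorted(azure), sorted(core))
# prints: False True ['doc1', 'doc2', 'doc3'] []
```

The key design point the sketch reproduces is ordering: because deletion only happens after a successful upload, interrupting the loop at any point leaves every attachment in at least one of the two stores, which is what makes rescheduling after a timeout safe.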