moodle-tool_objectfs

A remote object storage file system for Moodle. Intended to provide a plugin that can be installed and configured to work with any supported remote object storage solution.

Use cases

There are a number of different ways you can use this plugin. See Recommended use case settings for the recommended configuration for each one.

Offloading large and old files to save money

Disk can be expensive, so a simple use case is moving some of the largest and oldest files off local disk to somewhere cheaper, while keeping the convenience and performance of having the majority of files local. This matters especially if you are hosting on-prem, where the latency or bandwidth to the remote filesystem may not be great.
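As a rough illustration, the offloading thresholds can be forced from config.php. This is a hedged sketch only: the setting names ('enabletasks', 'sizethreshold', 'minimumage', 'deletelocal') mirror the plugin's admin settings and should be verified against the version you install, and the values are examples.

// config.php excerpt (sketch only, verify setting names for your version)
$CFG->forced_plugin_settings['tool_objectfs'] = [
    'enabletasks'   => 1,                  // let the scheduled tasks move objects
    'sizethreshold' => 10 * 1024 * 1024,   // only offload files over 10 MB
    'minimumage'    => 7 * 24 * 60 * 60,   // and not touched for 7 days
    'deletelocal'   => 1,                  // drop the local copy after upload
];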

Sharing files across moodles to save disk

Many of our clients have multiple Moodle instances, and there is much duplicated content across instances. By pointing multiple Moodles at the same remote filesystem and not allowing deletes, large amounts of content can be de-duplicated.

Sharing files across environments to save time

Some of our clients' Moodles are truly massive. We also have multiple environments for various types of testing, and often have ad hoc environments created on demand. Not only do we not want to store duplicated files, we also want refreshing data to new environments to be as fast as possible.
Using this plugin, we can configure production to have full read-write access to the remote filesystem and store the vast bulk of content remotely. In this setup the latency and bandwidth are not an issue, as production and the remote filesystem are colocated. The local filedir on disk would only consist of small or fast-churning files such as course backups. A refresh of the production data back to a staging environment can be much quicker now, as we skip the sitedir clone completely and staging is simply configured with read-only access to the production filesystem. Any files staging creates are only written to its local filesystem, which can then be discarded when next refreshed.
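As a hedged sketch of that setup (the bucket name is hypothetical, and 'enabletasks' and the s3_* setting names should be verified against your plugin version), the two environments might differ only in credentials and task settings:

// production config.php (sketch): full read-write, tasks enabled
$CFG->alternative_file_system_class = '\tool_objectfs\s3_file_system';
$CFG->forced_plugin_settings['tool_objectfs'] = [
    'enabletasks' => 1,                 // production actively pushes objects remotely
    's3_bucket'   => 'prod-filedir',    // hypothetical shared bucket
];

// staging config.php (sketch): same bucket, read-only IAM credentials,
// tasks disabled so staging never writes to the shared store.
$CFG->alternative_file_system_class = '\tool_objectfs\s3_file_system';
$CFG->forced_plugin_settings['tool_objectfs'] = [
    'enabletasks' => 0,
    's3_bucket'   => 'prod-filedir',
];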

Sharing files with data washed environments

Often you want a sanitised version of the data to give to developers or other third parties, with sensitive content removed or obfuscated. This plugin is designed to work in this scenario too: the third party gets a 'cleaned' DB and can still point at the production remote filesystem with read-only credentials. As they cannot query the filesystem directly and must know the content hash of any content in order to access a file, there is very low risk of them accessing sensitive content. See https://github.com/catalyst/moodle-local_datacleaner for a data cleaning tool.
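The "must know the content hash" property comes from Moodle's content-addressed storage: files are stored under the SHA1 hash of their contents (with objectfs similarly keying remote objects by contenthash), so without the hash, which normally lives in the DB, there is nothing to request. A small illustrative sketch:

// Moodle's local filedir layout (sketch for illustration only).
$contenthash = sha1($filecontents);            // e.g. '2fd4e1c6...'
$path = substr($contenthash, 0, 2) . '/'       // '2f/'
      . substr($contenthash, 2, 2) . '/'       // 'd4/'
      . $contenthash;                          // '2f/d4/2fd4e1c6...'
// A washed DB that no longer holds the hash rows for sensitive files
// leaves those objects effectively unreachable.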

GDPR

This plugin is GDPR compliant if you enable the deletion of remote objects.

Branches

| Moodle version   | Totara version           | Branch            | PHP  |
| ---------------- | ------------------------ | ----------------- | ---- |
| Moodle 3.10+     |                          | MOODLE_310_STABLE | 7.2+ |
| Moodle 3.3 – 3.9 | Totara 12                | MOODLE_33_STABLE  | 7.1+ |
| Moodle 2.7 – 3.2 | Totara 2.7 – 2.9, 9 – 11 | 27-32-STABLE      | 5.5+ |

Installation

  1. If not on Moodle 3.3, backport the file system API. See Backporting
  2. Setup your remote object storage. See Remote object storage setup
  3. Clone this repository into admin/tool/objectfs
  4. Install one of the required SDK libraries for the storage file system that you will be using
    1. Clone moodle-local_aws into local/aws for S3 or DigitalOcean Spaces, or
    2. Clone moodle-local_azure_storage into local/azure_storage for Azure Blob Storage, or
    3. Clone moodle-local_openstack into local/openstack for OpenStack (Swift) storage
  5. Install the plugins through the Moodle GUI.
  6. Configure the plugin. See Moodle configuration
  7. Place one of the following lines inside your Moodle config.php:
  • Amazon S3
$CFG->alternative_file_system_class = '\tool_objectfs\s3_file_system';
  • Azure Blob Storage
$CFG->alternative_file_system_class = '\tool_objectfs\azure_file_system';
  • DigitalOcean Spaces
$CFG->alternative_file_system_class = '\tool_objectfs\digitalocean_file_system';
  • Openstack Object Storage (swift)
$CFG->alternative_file_system_class = '\tool_objectfs\swift_file_system';
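To confirm the class line has taken effect, you can dump the setting from a throwaway CLI script (a hypothetical helper, shown only as a sketch):

<?php
// check_objectfs.php (hypothetical helper, placed in the Moodle root)
define('CLI_SCRIPT', true);
require(__DIR__ . '/config.php');   // adjust the path to your Moodle root

// Prints the configured alternative file system class, if any.
var_dump($CFG->alternative_file_system_class ?? null);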

Currently supported object stores

Roadmap

Support for more object stores is planned.

Amazon S3

Amazon S3 bucket setup

  • Create an Amazon S3 bucket.
  • The AWS user's access policy should mirror the policy listed below.
  • Replace 'bucketname' with the name of your S3 bucket.
  • If you intend to allow deletion of objects in S3, add 's3:DeleteObject' to the actions below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::bucketname"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": ["arn:aws:s3:::bucketname/*"]
    }
  ]
}
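Credentials for the IAM user attached to that policy then go into the plugin settings, either through the GUI or forced from config.php. A hedged sketch (the s3_* setting names mirror the plugin's admin settings; verify them for your version, and the key and region values are placeholders):

$CFG->alternative_file_system_class = '\tool_objectfs\s3_file_system';
$CFG->forced_plugin_settings['tool_objectfs'] = [
    's3_key'    => 'AKIA...',          // the IAM user's access key
    's3_secret' => '...',              // and secret key
    's3_bucket' => 'bucketname',       // matches the policy above
    's3_region' => 'ap-southeast-2',   // example region
];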

Google GCS

Google GCS setup

  • Create a GCS bucket.
  • Go to the Storage page, then Settings, then Interoperability, and select "Create a key for a service account":
    • Choose "Create new account" to create a service account.
    • Choose your new service account and press "Create key".
      Use these for your secret and key options.
  • Replace 'bucketname' with the name of your GCS bucket.
  • Add your service account as a member under the Permissions tab for your new bucket, with the Storage Object Admin role.
  • Set the bucket to use fine-grained access control.
  • You will need to set 'base_url' to https://storage.googleapis.com in your config.
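The notes above imply GCS is accessed through the S3 client with the endpoint overridden. A hedged config sketch (the s3_* names are assumptions carried over from the S3 section, and the 'base_url' key follows the note above; verify both for your plugin version):

$CFG->alternative_file_system_class = '\tool_objectfs\s3_file_system';
$CFG->forced_plugin_settings['tool_objectfs'] = [
    's3_key'    => 'GOOG...',                        // interoperability key
    's3_secret' => '...',                            // interoperability secret
    's3_bucket' => 'bucketname',
    'base_url'  => 'https://storage.googleapis.com', // per the note above
];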

Azure Blob Storage

Azure Storage container guide with the CLI
It is possible to install the Azure CLI locally to administer the storage account. The Azure CLI can be obtained from Microsoft's official installation instructions.
Visit the Online Azure Portal or use the Azure CLI to obtain the storage account keys. These keys are used to setup the container, configure an access policy and acquire a Shared Access Signature that has Read and Write capabilities on the container.
It will be assumed at this point that a resource group and blob storage account exists.

  • Obtain the account keys.
az login

az storage account keys list \
  --resource-group <resource_group_name> \
  --account-name <storage_account_name>
  • Create a private container in a storage account.
az storage container create \
    --name <container_name> \
    --account-name <storage_account_name> \
    --account-key <storage_account_key> \
    --public-access off \
    --fail-on-exist
  • Create a stored access policy on the container.
az storage container policy create \
    --account-name <storage_account_name> \
    --account-key <storage_account_key> \
    --container-name <container_name> \
    --name <policy_name> \
    --start <YYYY-MM-DD> \
    --expiry <YYYY-MM-DD> \
    --permissions rw

# Start and Expiry are optional arguments.
  • Generate a shared access signature for the container. This is associated with a policy.
az storage container generate-sas \
    --account-name <storage_account_name> \
    --account-key <storage_account_key> \
    --name <container_name> \
    --policy-name <policy_name> \
    --output tsv
  • If you wish to revoke access to the container, remove the policy, which will invalidate the SAS.
az storage container policy delete \
    --account-name <storage_account_name> \
    --account-key <storage_account_key> \
    --container-name <container_name> \
    --name <policy_name>
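The SAS token printed by generate-sas then feeds the plugin's Azure settings. A hedged config.php sketch ('azure_accountname', 'azure_container' and 'azure_sastoken' are assumed setting names mirroring the plugin's admin settings; verify against your version):

$CFG->alternative_file_system_class = '\tool_objectfs\azure_file_system';
$CFG->forced_plugin_settings['tool_objectfs'] = [
    'azure_accountname' => '<storage_account_name>',
    'azure_container'   => '<container_name>',
    'azure_sastoken'    => '<sas_token_from_generate-sas>',
];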

DigitalOcean Spaces

DigitalOcean Spaces bucket setup

  • Create a DigitalOcean Space.
  • Currently DigitalOcean does not provide an ACL mechanism for their Spaces offering.
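Spaces is S3-compatible, and the class name above shows the plugin ships a dedicated DigitalOcean client. As a heavily hedged sketch (the do_* setting names and the region are assumptions to verify against your plugin version):

$CFG->alternative_file_system_class = '\tool_objectfs\digitalocean_file_system';
$CFG->forced_plugin_settings['tool_objectfs'] = [
    'do_key'    => '...',            // Spaces access key
    'do_secret' => '...',            // Spaces secret key
    'do_space'  => 'spacename',      // hypothetical Space name
    'do_region' => 'nyc3',           // example region
];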

Openstack Object Storage

Openstack object storage container setup
Create a dedicated user that does…