Data storage guidelines for DNS cases

From KBwiki
Revision as of 09:33, 8 September 2020 by Mike (talk | contribs)

Storing high fidelity CFD data

High-fidelity CFD data is generally too large to be stored directly on the wiki, so a dedicated cloud-based storage service has been created. It is based on the AWS S3 service, and contributors can request access to upload data to it when creating a new DNS/LES case in the wiki. To do this, contact the QNET Editorial team. For more information on the templates for high-fidelity data, see Article Templates.

How to upload the data

The QNET Editorial team will supply you with an access key, a secret key and a path. Once you have these, you can use any S3-compatible tool to upload your data. The instructions below are for the AWS CLI tool.

Install AWS CLI

The installation instructions for the AWS CLI can be found in the official AWS documentation.
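As a sketch, one common installation route is via pip, assuming a Python environment is already available (the bundled installer described in the AWS documentation is an alternative):

```shell
# Install the AWS CLI into the current user's Python environment.
pip install --user awscli

# Confirm the installation and report the installed version.
aws --version
```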

Configure AWS CLI

Next, configure the AWS CLI with the details supplied by the QNET team. For example:

>aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: eu-west-2
Default output format [None]: json
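After running aws configure, the CLI stores these values in two plain-text files in your home directory. A sketch of the resulting files, assuming the default profile and the example values above, looks like this:

```ini
; ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

; ~/.aws/config
[default]
region = eu-west-2
output = json
```

Editing these files directly is equivalent to re-running aws configure, which can be convenient if you work with credentials for more than one project.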

Upload data

The AWS CLI provides several commands to list, upload and download data. For copying an entire directory to the wiki data store, we recommend the s3 sync command. The QNET team will provide a path for you to upload to. In this example we upload from a local "~/data" directory to the path "DNS-23-example" provided by the QNET team. We do a dry run first, to check that the command will upload the data we expect, before performing the actual copy.

> cd ~/data
> aws s3 sync . s3://kbwiki-data/DNS-23-example/ --dryrun
> aws s3 sync . s3://kbwiki-data/DNS-23-example/
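The sync command also accepts filter options, which can be useful for skipping scratch or backup files. The sketch below reuses the hypothetical "DNS-23-example" path from above; the --exclude patterns are purely illustrative:

```shell
# Upload the directory, but skip editor backups and restart files.
aws s3 sync . s3://kbwiki-data/DNS-23-example/ --exclude "*.bak" --exclude "restart*"

# Afterwards, list what actually landed in the data store,
# with human-readable sizes and a summary line.
aws s3 ls s3://kbwiki-data/DNS-23-example/ --recursive --human-readable --summarize
```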

Add links to your article

Once your data has been uploaded, you can create links within your article to the S3 data. These will be direct links to the objects in the data store.

Please note that high-fidelity CFD data files may be extremely large, so we encourage you to include the file size next to each link, to avoid users unintentionally downloading large datasets when they are not required.
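A quick way to collect those sizes is to list your local files with human-readable sizes before (or after) uploading. This sketch uses a hypothetical dummy file under /tmp standing in for a real dataset, rather than the "~/data" directory above:

```shell
# Create a demo directory with a 1 MiB dummy file standing in for a real dataset.
mkdir -p /tmp/demo-data
head -c 1048576 /dev/zero > /tmp/demo-data/vel_field.dat

# Print each file name with its human-readable size,
# ready to quote next to the corresponding link in the article.
for f in /tmp/demo-data/*; do
  printf '%s\t%s\n' "$(basename "$f")" "$(du -h --apparent-size "$f" | cut -f1)"
done
```

The same sizes can also be read back from the data store itself with aws s3 ls --human-readable.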