[Tutorial] How to use ChainSafe Storage

Introduction

ChainSafe Storage is a suite of APIs and services that allows users to store and retrieve data from IPFS and Filecoin.

Overview

API spec for ChainSafe Storage.

You can find out more about ChainSafe Storage at https://storage.chainsafe.io.

Buckets

The following API spec operates with the concept of Buckets. It is conceptually similar to features in other storage services that use the same term. A Bucket is a file-grouping mechanism that allows all the files belonging to the Bucket to be part of a metadata hierarchy. This is almost like a file system, but without per-file access control; instead, access is controlled at the per-bucket level.

All of the files uploaded to IPFS are represented in a flat structure. Storing them like this would not be sufficient, so the obvious idea would be to use the IPFS file system. However, such a file system is open to everybody and might reveal file or folder names and their relative positioning, which is something a user may never want to reveal. As a solution to this problem, we store the file hierarchy in a way that won't be revealed to anybody on the public network; it is essentially stored as an IPFS object.

Unlike the flat structure of IPFS, our file system preserves original file names, relative paths, content types and sizes, and most importantly creates a mapping between this metadata and the real IPFS CIDv0.

As mentioned earlier, we do not perform access-control checks on files (this would not be possible given the nature of IPFS) but rather on the file system and the Bucket. Only users with the proper access rights can manage data in a particular Bucket and, as a consequence, make changes to the underlying file system and discover the mappings between metadata and the CIDs of the uploaded files.

It would not be fair for us to restrict usage only to the internal file system, so it is also possible to create a Bucket that has an IPFS file system associated with it.

Summing up everything we have just described, a Bucket is a structure that holds:

  • Filesystem (file hierarchy) type: one of chainsafe or ipfs
  • Lists of Owners, Writers, and Readers
  • Size of all the data associated with it

A more formal definition can be found in the full API spec.
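
As a rough, informal illustration (the Go field names below are hypothetical and not taken from the API spec), a Bucket could be modeled like this:

    // Bucket is an illustrative sketch of the metadata listed above,
    // not the actual ChainSafe Storage schema.
    type Bucket struct {
        Name           string   // human-readable bucket name
        FileSystemType string   // file hierarchy type: "chainsafe" or "ipfs"
        Owners         []string // users with full control over the bucket
        Writers        []string // users allowed to add or modify files
        Readers        []string // users allowed to read files
        Size           int64    // total size of all data in the bucket, in bytes
    }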

S3 Compatibility

Chainsafe Storage provides an S3 compatibility layer on top of distributed storage using IPFS. The Storage S3 API can easily integrate with your services using any available S3 client.

If you aren’t currently a Chainsafe Storage user you can create an account today and get 20 GB of free storage.

Authentication

Chainsafe storage S3 compatible APIs only support v4 signatures for authentication. They do not currently support v2 signatures.

Create Access Key ID and Secret Access Key

  • Issue a key pair by clicking on ‘Add S3 Key’:

NOTE: Please make sure you save the secret since it will NOT be shown again.

Create Access key through user API

POST /api/v1/user/keys HTTP/1.1
Host: https://api.chainsafe.io
Authorization: Bearer <AUTH_TOKEN>
Content-Type: application/json

{
    "type": "storage"
}

Response: (Content-Type: application/json)

{
    "id": "YQIGFGKQAHMJCTPEEHXJ",
    "created_at": "2022-09-05T17:48:53.290381647Z",
    "status": "active",
    "type": "s3",
    "secret": "zOdKnLzZQ9gzaCFTbxiomgZbMJi6I1pTIuJ81PEK"
}

You can create storage or s3 type API keys.
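
If you prefer to script key creation, here is a minimal Go sketch of the same request; the bearer token is read from a hypothetical AUTH_TOKEN environment variable:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "strings"
    )

    func main() {
        // AUTH_TOKEN is assumed to hold the bearer token issued by ChainSafe Storage.
        token := os.Getenv("AUTH_TOKEN")

        // Same request body as the example above.
        body := strings.NewReader(`{"type": "storage"}`)
        req, err := http.NewRequest(http.MethodPost, "https://api.chainsafe.io/api/v1/user/keys", body)
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("Authorization", "Bearer "+token)
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        // The response contains the key id and the secret; remember the secret is shown only once.
        out, _ := io.ReadAll(resp.Body)
        fmt.Println(string(out))
    }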

SDKs

Chainsafe Storage S3 compatible APIs can be used with existing AWS SDKs. See the AWS S3 CLI and Golang guides below for configuration examples.
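
As one example, a minimal sketch using the AWS SDK for Go (v1) might look like the following, assuming the endpoint and region described in the notes further below; the path-style setting is an assumption and may not be needed in your setup:

    package main

    import (
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        // accessKey and secretKey are the S3 Key ID and Secret created above.
        accessKey, secretKey := "", ""

        sess, err := session.NewSession(&aws.Config{
            Region:      aws.String("us-east-1"),
            Endpoint:    aws.String("https://buckets.chainsafe.io"),
            Credentials: credentials.NewStaticCredentials(accessKey, secretKey, ""),
            // Path-style addressing is an assumption here; drop it if buckets
            // resolve via subdomains in your setup.
            S3ForcePathStyle: aws.Bool(true),
        })
        if err != nil {
            log.Fatal(err)
        }

        // List buckets as a quick smoke test of the configuration.
        svc := s3.New(sess)
        out, err := svc.ListBuckets(&s3.ListBucketsInput{})
        if err != nil {
            log.Fatal(err)
        }
        for _, b := range out.Buckets {
            log.Printf("bucket: %s", aws.StringValue(b.Name))
        }
    }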

Supported Features

The Storage S3 compatible APIs return responses in the same way AWS S3 does. Here are the features supported by the Storage S3 compatible APIs:

  • Create Bucket
  • Copy Object
  • Delete Bucket
  • Delete Object
  • Get Object
  • Head Bucket
  • Head Object
  • List Bucket
  • List Objects V2
  • Put Object

AWS S3 CLI Guide

The AWS S3 CLI is the easiest way to interact with the object storage. It can be configured to take advantage of the Chainsafe Storage S3 compatible API.

To configure the AWS S3 CLI:

  1. Create a new profile for Chainsafe Storage in the AWS credentials file:
   $ vi ~/.aws/credentials
   [storage]
   aws_access_key_id = xxx
   aws_secret_access_key = xxx
  2. Now interact with the Chainsafe Storage S3 compatible API using the storage profile created above, providing the Chainsafe Storage custom endpoint to the AWS S3 CLI:
   $ aws s3 mb s3://storage-s3-test --endpoint-url https://buckets.chainsafe.io --profile storage
   make_bucket: storage-s3-test

   $ aws s3 ls --endpoint-url https://buckets.chainsafe.io --profile storage
   2022-05-11 14:36:46 storage-s3-test

Golang Guide

This guide shows you how to integrate the S3 compatible storage layer using Go.

Before moving ahead please make sure you have the storage S3 keys mentioned above.

We will be using the minio S3 client for Go.
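
Each operation below is shown as a standalone snippet with its own func main. They all assume a preamble along these lines (minio-go v7 import paths), and the snippets after step 1 reuse the client variable initialized in step 1:

    // Preamble assumed by the snippets below (one operation per snippet).
    // The client created in step 1 (Make Bucket) is reused by the later snippets.
    package main

    import (
        "context"
        "fmt"
        "log"
        "os"
        "path/filepath"

        "github.com/minio/minio-go/v7"
        "github.com/minio/minio-go/v7/pkg/credentials"
    )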

Operations:

  1. Make Bucket

Let's create a bucket to start.

  func main() {
     serviceAddress := "buckets.storage.io"

     // pass in your S3 key id and secret here.
     accessKey, secKey := "", ""

     // initiate the minio client
     client, err := minio.New(serviceAddress, &minio.Options{
         Creds:  credentials.NewStaticV4(accessKey, secKey, ""),
         Region: "us-east-1", // make sure to set it
         Secure: true,
     })
     if err != nil {
        log.Panic(err)
     }

     // bucket name to be created
     bucketName := "test-bucket"

     // create bucket
     ctx := context.Background()
     err = client.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{})
     if err != nil {
        log.Panicf("error creating bucket \"%s\": %s", bucketName, err.Error())
     }
  }
  2. List Buckets

We will now list the buckets, including the one we created in the previous step.

   func main() {
        ctx := context.Background()

        // list buckets
        buckets, err := client.ListBuckets(ctx)
        if err != nil {
            log.Panicf("error list buckets: %s", err.Error())
        }

        for _, b := range buckets {
            log.Printf("i have \"%s\" bucket\n", b.Name)
        }
    }
  3. Upload Objects

We will upload objects to the bucket that we created above.

  func main() {
     ctx := context.Background()

     // bucket name that we created
     bucketName := "test-bucket"

     // present working directory
     pwd, err := os.Getwd()
     if err != nil {
         log.Panicf("can't get PWD: %s", err)
     }

     // document paths
     myDocument1 := filepath.Join(pwd, "document1.pdf")

     // S3 path keys
     document1Key := "letter1/document.pdf"

     // upload objects to the bucket
     _, err = client.FPutObject(
         ctx, bucketName, document1Key, myDocument1, minio.PutObjectOptions{
             DisableMultipart: true,
         })
     if err != nil {
         log.Panicf("error putting object \"%s\" to bucket: %s", myDocument1, err.Error())
     }
 }
  4. Get Object

We will fetch the object that we added to the bucket in the previous step.

   func main() {
     ctx := context.Background()

     // bucket name that we created
     bucketName := "test-bucket"

     // S3 path key
     document1Key := "letter1/document.pdf"

      // get the object (the returned object is read lazily; some errors only surface on Read)
      _, err := client.GetObject(ctx, bucketName, document1Key, minio.GetObjectOptions{})
     if err != nil {
         log.Panicf("error getting the object itself: %s", err.Error())
     }
   }
  5. List Objects

We will now list all the files in the bucket, using different options.

   func main() {
     ctx := context.Background()

     // bucket name that we created
     bucketName := "test-bucket"

     // list files by full paths
     objListOptions := minio.ListObjectsOptions{Prefix: "/", Recursive: true}
     for object := range client.ListObjects(ctx, bucketName, objListOptions) {
         log.Printf("recursive on root object: %s\n", object.Key)
     }

     // list root directories
     objListOptions = minio.ListObjectsOptions{Prefix: "/", Recursive: false}
     for object := range client.ListObjects(ctx, bucketName, objListOptions) {
         log.Printf("non-recursive on root object: %s\n", object.Key)
     }

     // list files in folder `/letter1`
     objListOptions = minio.ListObjectsOptions{Prefix: "/letter1", Recursive: true}
     for object := range client.ListObjects(ctx, bucketName, objListOptions) {
         log.Printf("recursive on subfolder object: %s\n", object.Key)
     }
   }
  6. Check If Bucket Exists

  func main() {
     ctx := context.Background()

     // bucket name that we created
     bucketName := "test-bucket"

     exists, err := client.BucketExists(ctx, bucketName)
     if err != nil {
       log.Panicf("error checking bucket existense \"%s\": %s", bucketName, err.Error())
     }
     if !exists {
       log.Panicf("bucket %s must exist", bucketName)
     }
  }
  7. Remove Bucket

Remove the bucket that we created.

  func main() {
     ctx := context.Background()

     // bucket name that we created
     bucketName := "test-bucket"

     err := client.RemoveBucket(ctx, bucketName)
     if err != nil {
       log.Panicf("error remeving bucket \"%s\": %s", bucketName, err.Error())
     }
  }
  8. Delete Object

   func main() {
     ctx := context.Background()

     // bucket name that we created
     bucketName := "test-bucket"
     document1Key := "letter1/document.pdf"
      err := client.RemoveObject(ctx, bucketName, document1Key, minio.RemoveObjectOptions{})
     if err != nil {
       fmt.Println("error deleting the bucket object: ", err.Error())
     }
  }

Notes

  • We only support a limited set of S3 functionalities as of now but we intend to expand these functionalities in the coming months. If there is a specific functionality that you would like to request, please email [email protected].
  • You need to set the endpoint to buckets.chainsafe.io:443 in your S3 library of choice.
  • You need to set the region for the S3 client library to us-east-1.
  • In the example above, accessKey is the Key ID and secKey is the Secret.

NFT Metadata Storage

Non-Fungible Tokens (NFTs) are unique, decentralized assets. One of the most compelling use cases for ChainSafe Storage is the storage of off-chain data associated with NFTs. With Storage, users can rest easy knowing that the associated off-chain data for their NFTs will always be just as available and decentralized as the asset itself!

Prerequisites

  1. Sign up on Storage
  2. Create an API key
  3. Create a bucket

Sign up on Storage

Go to https://app.storage.chainsafe.io if you haven’t created an account already.

Create an API key

  1. Go to settings and click on “Add API key”

  2. This will generate a key and a secret. Store your secret somewhere safe like you would back up a private key for a crypto wallet. It won’t be displayed again in the API Key List of the settings page.

  3. In all of the following steps, we will use this secret as <API_SECRET>

Create a bucket

To store an object in Storage, you need to create a bucket. A bucket is a container for data. When creating a bucket, provide the following two params as a request body:

  1. name - The name of the bucket. Must be unique across your buckets
  2. type - For buckets in Storage, this value is always fps

Now let's create a bucket with the following HTTP request:

POST /api/v1/buckets HTTP/1.1
Host: https://api.chainsafe.io
Authorization: Bearer <API_SECRET>
Content-Type: application/json

{
    "name": "Test Bucket",
    "type": "fps"
}

You will get a JSON response containing an ID. We will refer to this ID as BUCKET_ID in the following steps.

Storing NFT data

To store NFT data you must provide the following fields:

  1. BUCKET_ID - provided as a path param
  2. path - The path to upload the file data to. You can provide a non-existing directory path; in that case, directories along the path will be created as well.
  3. file - The file that you are uploading

Now, let's upload a file (example_nft) to the path /my_data:

POST /api/v1/bucket/<BUCKET_ID>/upload HTTP/1.1
Host: https://api.chainsafe.io
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW
Authorization: Bearer <API_SECRET>

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="file"; filename="example_nft"
Content-Type: application/json

(data)
------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="path"

/my_data
------WebKitFormBoundary7MA4YWxkTrZu0gW--
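
If you are scripting uploads, a minimal Go sketch of this multipart request could look like the following; the local file name and the API_SECRET and BUCKET_ID environment variables are assumptions for illustration:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "log"
        "mime/multipart"
        "net/http"
        "os"
    )

    func main() {
        secret := os.Getenv("API_SECRET")  // the API secret created earlier
        bucketID := os.Getenv("BUCKET_ID") // the bucket ID returned when creating the bucket

        // Build the multipart body with the "file" and "path" fields shown above.
        var buf bytes.Buffer
        w := multipart.NewWriter(&buf)

        f, err := os.Open("example_nft")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        part, err := w.CreateFormFile("file", "example_nft")
        if err != nil {
            log.Fatal(err)
        }
        if _, err := io.Copy(part, f); err != nil {
            log.Fatal(err)
        }
        if err := w.WriteField("path", "/my_data"); err != nil {
            log.Fatal(err)
        }
        w.Close()

        url := fmt.Sprintf("https://api.chainsafe.io/api/v1/bucket/%s/upload", bucketID)
        req, err := http.NewRequest(http.MethodPost, url, &buf)
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("Authorization", "Bearer "+secret)
        req.Header.Set("Content-Type", w.FormDataContentType())

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        out, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(out))
    }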

Retrieving NFT data

NFT data stored on ChainSafe Storage can be accessed from the storage download API or from any public IPFS network from any peer that has the content.

Retrieving via download endpoint

Simply provide the file path in the JSON request body:

POST /api/v1/bucket/<BUCKET_ID>/download HTTP/1.1
Host: https://api.chainsafe.io
Authorization: Bearer <API_SECRET>
Content-Type: application/json

{
    "path": "/my_data/example_nft"
}

Retrieving via ipfs gateway

First, get the CID of the uploaded file by performing the following request, which returns the file details in JSON format:

POST /api/v1/bucket/<BUCKET_ID>/file HTTP/1.1
Host: https://api.chainsafe.io
Authorization: Bearer <API_SECRET>
Content-Type: application/json

{
    "path": "/my_data/example_nft"
}

Response:

{
  "content": {
    "name": "file1.pdf",
    "cid": "QmfPaBnVAR48UbcjF8crcX7TtJKiV8g3DJkTUsBB6pXb7e",
    "size": 10121,
    ...
  },
  ...
}

Copy the CID. Using the CID, data can be fetched directly from any public IPFS gateway. You can also use ChainSafe's IPFS gateway (https://ipfs.chainsafe.io).

The URL should be in this format:

https://{gateway URL}/ipfs/{content ID}/{optional path to resource}

If we want to get our uploaded file through Chainsafe’s IPFS gateway, the URL will be https://ipfs.chainsafe.io/ipfs/QmfPaBnVAR48UbcjF8crcX7TtJKiV8g3DJkTUsBB6pXb7e
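
As a rough sketch, the CID lookup and gateway URL construction can be combined in Go like this; only the content.cid field of the example response is decoded, and the API_SECRET and BUCKET_ID environment variables are assumptions for illustration:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
        "os"
        "strings"
    )

    func main() {
        secret := os.Getenv("API_SECRET")  // the API secret created earlier
        bucketID := os.Getenv("BUCKET_ID") // the bucket ID returned when creating the bucket

        // Ask the file-details endpoint for metadata about the uploaded file.
        body := strings.NewReader(`{"path": "/my_data/example_nft"}`)
        url := fmt.Sprintf("https://api.chainsafe.io/api/v1/bucket/%s/file", bucketID)
        req, err := http.NewRequest(http.MethodPost, url, body)
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("Authorization", "Bearer "+secret)
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        // Decode just the CID from the response shown above.
        var details struct {
            Content struct {
                CID string `json:"cid"`
            } `json:"content"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&details); err != nil {
            log.Fatal(err)
        }

        // Build the public gateway URL in the format described above.
        fmt.Printf("https://ipfs.chainsafe.io/ipfs/%s\n", details.Content.CID)
    }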

And there you have it. You now have everything you need to upload, store and download your NFT’s off-chain data using ChainSafe Storage.

Authentication

bearerAuth

The token can be issued via the 'Settings' menu in the Storage WebUI and used as a standard bearer token:

  • Authorization: Bearer <access-token>
  • Security Scheme Type: HTTP
  • HTTP Authorization Scheme: bearer
  • Bearer format: "JWT"