Using the NodeChef object storage service with the AWS S3 SDK for Node.js

This document provides examples of working with the NodeChef S3 compatible object storage service. You can adapt the examples for your own use case as you see fit. The examples in this document assume you are using the Express.js framework.

Init the S3 SDK

var AWS = require('aws-sdk');

var s3 = new AWS.S3({
  accessKeyId: process.env.OSS_ACCESS_KEY,
  secretAccessKey: process.env.OSS_SECRET_KEY,
  endpoint: process.env.OSS_ENDPOINT
});
After you create the object storage service from the dashboard, every application you launch will automatically have the environment variables OSS_ACCESS_KEY, OSS_SECRET_KEY and OSS_ENDPOINT set. These variables are used to initialize the AWS S3 SDK. You could also hardcode these values if you choose to.
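
If you want to confirm the credentials and endpoint were picked up correctly, an optional sanity check is to list the buckets on the service. The sketch below simply reuses the s3 client initialized above.

// Optional sanity check: list buckets to confirm the SDK is configured correctly.
s3.listBuckets(function (err, data) {
  if (err) {
    console.error('Could not reach the object storage service:', err);
  } else {
    console.log('Buckets:', data.Buckets.map(function (b) { return b.Name; }));
  }
});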


Express framework REST POST request example

Note that the example below assumes the client made a POST request to your server and you want to upload the body of the request to the object storage service.

var express = require('express');
var AWS = require('aws-sdk');

var s3 = new AWS.S3({
  accessKeyId: process.env.OSS_ACCESS_KEY,
  secretAccessKey: process.env.OSS_SECRET_KEY,
  endpoint: process.env.OSS_ENDPOINT
});

function ProcessMyFileUploadRequest(req, res) {
  var length = parseInt(req.headers['content-length'], 10); // validate the content length if you care..

  // The request object is itself a readable stream, so it can be handed
  // to the SDK directly without buffering the body in memory.
  var stream = req;
  stream.length = length;

  var params = {
    Key: 'NewFile', // in your web app you will have a way to generate unique file Ids.
    Body: stream,
    Bucket: 'webPics'
  };

  s3.putObject(params, function (err, data) {
    if (err) {
      res.status(500).send({ ok: 0, err: err });
    } else {
      var file_url = 'https://webPics.' + process.env.OSS_ENDPOINT + '/NewFile';
      res.status(200).send({ ok: 1, url: file_url });
    }
  });
}
For performance, note that in the above example the request body is not buffered; the request stream is passed straight through to the object storage service. Buffering is problematic under concurrent uploads because you can easily run out of memory, leading to broken uploads.
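
The handler above still needs to be wired into your Express application. The sketch below shows one way to do that; the /upload path and port are only placeholders. Take care not to apply body-parsing middleware to this route, since the handler reads the raw request stream itself.

var app = express();

// '/upload' is an example route; use whatever path your client posts to.
app.post('/upload', ProcessMyFileUploadRequest);

app.listen(process.env.PORT || 3000);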

In the above example, "NewFile" is hardcoded as the name of the file and webPics is also hard coded as the name of your bucket. In your application, you will most likely want to change this as you will typically generate unique Ids for files uploaded to your server.

Also, the file_url we generated above is the public URL of the file and will only work if anonymous access is enabled on the bucket. You can enable this from the NodeChef dashboard.
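
If you prefer to keep the bucket private, you can hand out a time-limited pre-signed URL instead of the public URL. The sketch below uses the SDK's getSignedUrl method with the same s3 client; it assumes the NodeChef service honors pre-signed requests, as S3-compatible services generally do.

// Generate a URL valid for one hour instead of relying on anonymous access.
var signedUrl = s3.getSignedUrl('getObject', {
  Bucket: 'webPics',
  Key: 'NewFile',
  Expires: 3600 // seconds
});

res.status(200).send({ ok: 1, url: signedUrl });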



Deleting files from your buckets

var AWS = require('aws-sdk');

var s3 = new AWS.S3({
  accessKeyId: process.env.OSS_ACCESS_KEY,
  secretAccessKey: process.env.OSS_SECRET_KEY,
  endpoint: process.env.OSS_ENDPOINT
});

var params = {
  Key: 'NewFile',
  Bucket: 'webPics'
};

s3.deleteObject(params, function (err, data) {
  if (err !== null) {
    console.error(err);
  }
});
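
If you expose deletion through your own API, the same call can be wrapped in an Express handler. The route and parameter name below are only illustrative and assume an app object like the one created earlier.

// Illustrative route: DELETE /files/:id removes the corresponding object.
app.delete('/files/:id', function (req, res) {
  var params = {
    Key: req.params.id,
    Bucket: 'webPics'
  };

  s3.deleteObject(params, function (err, data) {
    if (err) {
      res.status(500).send({ ok: 0, err: err });
    } else {
      res.status(200).send({ ok: 1 });
    }
  });
});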