
Is there a way to retrieve progress for a .addFile request? #28

Open
MeanwhileMedia opened this issue Dec 21, 2011 · 6 comments

Comments

@MeanwhileMedia

I thoroughly searched the object returned from a .addFile request, but cannot determine if it is possible to retrieve progress while a file is being sent. Any help would be much appreciated. Happy holidays!

@bmeck

bmeck commented Dec 21, 2011

What are you looking for exactly? The amount of data sent?

@MeanwhileMedia
Author

Yes sir. I thought maybe I could track it by listening for 'data' events on 'readStream', the original read stream that I passed to the .addFile request (since it uses a .pipe). However, readStream doesn't get paused; the system immediately reads right through it. That's when I started looking at the object returned by the .addFile request (reqStream), but I can't see how to use it to track the amount of data sent.

var readStream = fs.createReadStream(path+'.'+extension, streamopts);
var upOpts = {
    headers: {
        'content-type': 'video/'+extension,
        'content-length': totalBytes
    },
    remote: CDNfilename, 
    stream: readStream
};

var reqStream = cloudClient.addFile(Container.name, upOpts, function (err, uploaded) {
    if (err) { console.log(err); }
});

@bmeck

bmeck commented Dec 21, 2011

Ok, so exposing the stream on the result is one thing, so that we reach compatibility with raw filepaths. However, I am unsure what you need. Right now you can do something similar to the following (this is just a simple example without cloudfiles, but the implementation is the same):

var http = require('http'),
    fs = require('fs'),
    request = require('request');

var file = 'bigFile.tgz';

// A local server to receive the upload (this was missing from the snippet)
var server = http.createServer(function (req, res) {
    req.resume();                                  // drain the incoming body
    req.on('end', function () { res.end(); });
});

server.listen(8008, function () {
    var req = request({
        method: 'POST',
        url: 'http://127.0.0.1:8008'
    });
    var stream = fs.createReadStream(file);
    var total = fs.statSync(file).size;
    var sent = 0;
    stream.on('data', function (data) {
        sent += data.length;
        console.log('sending', data.length, 'progress', Math.floor(sent / total * 100) + '%');
    });
    stream.pipe(req);
});

Is this not doable for you?

@MeanwhileMedia
Author

So, if I'm understanding correctly, this would be the same as accessing the 'data' listener on my stream 'readStream', the stream that is being piped to the 'request' module. This is what I tried before, but for some reason all the data events for a 5 MB stream finish in about 15 ms (obviously not how long it actually takes to send to cloudfiles). However, if I listen for the 'end' event on the same stream, it does accurately represent the completion time for the upload (although that doesn't really help me in tracking progress).

I assume that readStream is getting paused, since it's part of a .pipe; however, that doesn't seem to affect how quickly the system reads through the file. Could this have something to do with my buffer? I have it set at 64 KB.

@bmeck

bmeck commented Dec 21, 2011

This may be due to bufferSize on net.Socket. Since socket.write always works, it means that bufferSize can increase drastically. As of right now, I know of no method to track the changes in bufferSize outside of the drain event.


@MeanwhileMedia
Author

Now that I can send the file in chunks (thanks to you), I won't even need to fetch progress reports in my application. Thanks!
