Running out of memory copying large files #164
As another test I commented out the
The backpressure isn't that perfect in StreamSaver.

Now, you will likely not use transferable streams, since they are only behind a flag in the browsers that have them at the moment. So when you write a chunk and use `await`, all you are really doing is sending that chunk with a postMessage, and then it's done. There is nothing the promise waits for, like "is it done writing to the disk?" All it really does is roughly this:

```js
async function write (chunk) {
  messageChannel.port1.postMessage(chunk)
}
```

...with some abstraction on top. The postMessage could be improved by transferring the chunk to the service worker instead of using the default structured-clone algorithm:

```js
async function write (chunk) {
  // transfer the chunk's underlying ArrayBuffer instead of cloning it
  messageChannel.port1.postMessage(chunk, [chunk.buffer])
}
```

but I haven't implemented that. I have no idea what developers do with the chunk afterwards (they might reuse it; it could be an option, though). The other issue is that the service worker doesn't poll (ask for more data when it's ready to accept more) or send back any kind of message saying that you are filling the stream bucket too fast. All of these problems go away if you have support for transferable streams, so I haven't put too much effort into making a solid "main thread to service worker" pipeline.

You are not experiencing a memory leak; you are simply flooding the stream bucket faster than it can be written to disk.
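For illustration, a minimal sketch of what the transferable-streams approach could look like, reusing the `messageChannel` from the snippets above. This is not how StreamSaver currently works; it assumes the browser supports transferring streams over postMessage:

```js
// Identity TransformStream: the readable half is transferred to the
// service worker, while the page keeps the writable half.
const { readable, writable } = new TransformStream()
messageChannel.port1.postMessage({ readable }, [readable])

const writer = writable.getWriter()

async function write (chunk) {
  // With a transferred stream these awaits provide real backpressure:
  // they resolve only when the service worker side is ready for more data.
  await writer.ready
  return writer.write(chunk)
}
```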
Going to close this since it's pretty much the same as #145.
If you want to be fancy, you could just do:

```js
function copyFile () {
  const [ file ] = document.querySelector('#fileInput').files
  const writeStream = streamSaver.createWriteStream(file.name)

  file.stream().pipeTo(writeStream)
  // or
  new Response(file).body.pipeTo(writeStream)
}
```

but that has lower browser support, unfortunately.
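Where `pipeTo()` or `Blob#stream()` isn't available, a manual pump over a reader and a writer is a possible fallback. A sketch only, assuming the same hypothetical `#fileInput` element as above:

```js
// Fallback sketch: manually pump the file into the write stream,
// one chunk at a time.
async function copyFileManually () {
  const [ file ] = document.querySelector('#fileInput').files
  const writer = streamSaver.createWriteStream(file.name).getWriter()
  const reader = new Response(file).body.getReader()

  let result
  while (!(result = await reader.read()).done) {
    // Read the next chunk only after the previous write() resolves,
    // so the page-side queue stays small.
    await writer.write(result.value)
  }
  await writer.close()
}
```

As noted above, true disk-level backpressure still depends on transferable streams; awaiting each write only throttles how fast chunks are handed off from the page.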
Thank you!
Hi,
I am doing a PoC with StreamSaver.js in which the user supplies a (very large) file that I read sequentially and then save to the downloads folder via StreamSaver.js. The issue is that with a 64 GB file the browser runs out of memory (around 60 GB of RAM) and the download fails (the process seems to be terminated). Am I using StreamSaver.js wrong? Here's the code:
The HTML around it is: