How to implement "multithreaded" downloading and resumable downloads of large files in JS

When I store a large file in the backend, I split it into n blocks and store them in MongoDB. Now I want to use JS in the front end to achieve a "multithreaded" download. The current idea is to send n Ajax requests from the front end, use Promise.all to wait for all n responses, splice the returned data together, and then trigger the download by setting the download attribute on an a tag (a minimal sketch of this approach is shown at the end of the question). At present, there are two problems:
1. Because I wait for all the data to come back before splicing and downloading it, the whole file sits in memory first, which is clearly unreasonable for a large file such as 1 GB or 2 GB.
2. As for resuming an interrupted download, the idea is that when the user clicks to download, the front end first reads the partially downloaded file, checks for each block i (0 < i < n) whether it has already been downloaded, and then fetches only the missing blocks. The problem is that JS cannot write to a local file directly.
Do you have any good solutions? A friend suggested looking into the browser's "proxy": every request from the browser would go through this proxy, and the various file operations would be implemented inside it. But he did not say what this proxy actually is or how to implement it, so it has been hard to search for material. Any advice would be appreciated.
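For reference, here is a minimal sketch of the approach described above. The endpoint `/file/<fileId>/block/<i>` is a hypothetical name for illustration only; the actual API and block layout depend on your backend.

```js
// Sketch of the current Promise.all approach. The endpoint name and
// parameters are assumptions, not a real API.
async function downloadInBlocks(fileId, n, fileName) {
  // Fire n requests in parallel and wait for all of them.
  const requests = [];
  for (let i = 0; i < n; i++) {
    requests.push(
      fetch(`/file/${fileId}/block/${i}`).then((res) => res.blob())
    );
  }
  const blocks = await Promise.all(requests);

  // Splice the blocks into one Blob; for a 1-2 GB file this keeps
  // everything in memory, which is exactly problem 1 above.
  const whole = new Blob(blocks);

  // Trigger the download via an <a download> element.
  const url = URL.createObjectURL(whole);
  const a = document.createElement('a');
  a.href = url;
  a.download = fileName;
  a.click();
  URL.revokeObjectURL(url);
}
```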

Mar.04,2021

You can store it in separate blocks, but why download it in separate blocks? Can't the backend splice the blocks together and let the front end download the whole file in one request?


The HTTP request headers include a Range field, which tells the server which byte range of the file to return. Ali OSS uses this field for resumable downloads (see its "Range request example" section). I think you can try the same approach: make use of standardized mechanisms and don't reinvent the wheel. A rough sketch of the idea follows below.
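A hedged sketch of a Range-based download, assuming the server supports Range requests (responds with 206 Partial Content) and exposes Content-Length. The URL, chunk size, and persistence strategy are placeholders; like the original approach, this version still buffers the whole file in memory before saving.

```js
// Download a file in byte ranges. To resume after an interruption, persist
// the last completed offset (e.g. in IndexedDB) and restart the loop there.
async function downloadWithRanges(url, fileName, chunkSize = 8 * 1024 * 1024) {
  // Ask only for the headers to learn the total size.
  const head = await fetch(url, { method: 'HEAD' });
  const total = Number(head.headers.get('Content-Length'));

  const chunks = [];
  for (let start = 0; start < total; start += chunkSize) {
    const end = Math.min(start + chunkSize, total) - 1;
    // The Range header tells the server which byte range to return.
    const res = await fetch(url, {
      headers: { Range: `bytes=${start}-${end}` },
    });
    if (res.status !== 206) {
      throw new Error('Server did not honor the Range header');
    }
    chunks.push(await res.blob());
    // Record `end + 1` somewhere persistent here to support resuming.
  }

  const whole = new Blob(chunks);
  const objectUrl = URL.createObjectURL(whole);
  const a = document.createElement('a');
  a.href = objectUrl;
  a.download = fileName;
  a.click();
  URL.revokeObjectURL(objectUrl);
}
```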
