With tens or even hundreds of megabytes of data to fetch remotely via HttpClient, how do you ensure the data is sent and received completely?

There are two microservices deployed in two different cities. Service A exposes an API that provides data, and service B calls this interface with HttpClient from time to time to retrieve it. Tens or even hundreds of megabytes of data need to be transmitted at a time. My questions are:

1. An HTTP response body surely can't be tens or hundreds of megabytes, so the data needs to be split into many segments and sent one segment at a time. How large should each segment be?

2. How do I ensure reliable transmission, and how do I set up a retransmission (retry) mechanism for HttpClient?

Nov.12,2021

  1. That is possible. Also, the HTTP protocol itself supports transferring data in segments (chunked transfer encoding and range requests).
  2. Because HTTP runs over TCP, and TCP is itself a reliable transport, HTTP does not need to guarantee reliability at that level.
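To illustrate the first point (as a hedged sketch; the class name, local port, and 5 MB payload are my own choices for the demo, not from the thread): with Java's `java.net.http.HttpClient` you do not have to segment the data yourself. Streaming the response body straight to a file keeps memory use roughly constant no matter how large the payload is. The example below stands up a throwaway local server so it is self-contained:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamDownload {
    static final int SIZE = 5 * 1024 * 1024; // 5 MB for the demo

    /** Starts a throwaway local server (standing in for service A),
     *  streams the body to a temp file, and returns the bytes written. */
    static long run() throws Exception {
        byte[] payload = new byte[SIZE];
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/data", exchange -> {
            exchange.sendResponseHeaders(200, payload.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(payload);
            }
        });
        server.start();
        try {
            int port = server.getAddress().getPort();
            Path out = Files.createTempFile("download", ".bin");
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://127.0.0.1:" + port + "/data")).build();
            // Stream straight to disk; the whole body is never held in memory.
            client.send(request, HttpResponse.BodyHandlers.ofFile(out));
            return Files.size(out);
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("downloaded " + run() + " bytes");
    }
}
```

The key piece is `HttpResponse.BodyHandlers.ofFile`, which writes the body to disk as it arrives instead of buffering it.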

Hello, since the requirement is tens to hundreds of megabytes, I personally think it is impractical to handle this with HttpClient, and if there is a bug it will be a weird one. I'd recommend a shell script instead: something as simple as `wget -O xxx.xxx http://127.0.0.1:8080/nihao`. Downloading it this way basically never goes wrong.


Use middleware instead; Redis, MySQL, or Kafka would all work. Sending such a large amount of data over HTTP sounds unreliable.


If the requirement cannot be changed, look into resumable (chunked) transfers in the HTTP protocol, i.e. range requests with the `Range: bytes=<offset>-` header.
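To make the resumable-transfer idea concrete (everything here is a hypothetical sketch, not code from the thread): the client tracks how many bytes it already has, and on each retry asks only for the remainder, the way a real client would send `Range: bytes=<offset>-`. The flaky transport is simulated in memory so the resumption logic is visible and runnable without a network:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;

public class ResumableDownload {
    /** Carries whatever bytes arrived before the connection dropped. */
    static class PartialTransfer extends IOException {
        final byte[] received;
        PartialTransfer(byte[] received) { this.received = received; }
    }

    /** Stand-in for one HTTP range request: returns source[offset..],
     *  but the first `failures` calls drop after sending only half. */
    static class FlakyServer {
        final byte[] source;
        int failuresLeft;
        FlakyServer(byte[] source, int failures) {
            this.source = source;
            this.failuresLeft = failures;
        }
        byte[] get(int offset) throws IOException {
            byte[] rest = Arrays.copyOfRange(source, offset, source.length);
            if (failuresLeft > 0) {
                failuresLeft--;
                // Connection drops halfway through this segment.
                throw new PartialTransfer(Arrays.copyOf(rest, rest.length / 2));
            }
            return rest;
        }
    }

    /** Retry loop: each attempt resumes from the bytes already on hand
     *  instead of restarting the whole download from zero. */
    static byte[] download(FlakyServer server, int maxAttempts) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                buf.write(server.get(buf.size())); // "Range: bytes=<buf.size()>-"
                return buf.toByteArray();          // complete
            } catch (PartialTransfer e) {
                buf.write(e.received);             // keep what we got, retry
            }
        }
        throw new IOException("gave up after " + maxAttempts + " attempts");
    }

    public static void main(String[] args) throws IOException {
        byte[] original = new byte[100_000];
        for (int i = 0; i < original.length; i++) original[i] = (byte) i;
        byte[] got = download(new FlakyServer(original, 2), 5);
        System.out.println("match=" + Arrays.equals(original, got));
    }
}
```

Against a real server, the server must advertise `Accept-Ranges: bytes` and answer range requests with `206 Partial Content`; the client-side bookkeeping is exactly the loop above.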
