How does a Java server perceive client disconnection?

I have two questions:

1. How does the Java server perceive that the client has disconnected?
2. What is the optimal way to implement a server that supports many client connections?

Details:

For question 1: in the code below, when the client disconnects, the while (true) loop keeps running and the output is null forever.

For question 2: my current implementation opens a new thread for every client connection, which does not feel reasonable: when thousands of clients connect, creating one thread per connection cannot scale. I am looking for a better approach.

Any advice would be appreciated, thank you. The Java server code is as follows. It starts a server, accepts client connections, and prints the messages sent by each client:

package MyTCP;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class Server
{
    public static void main(String[] args)
    {
        
        StartListen();
    }
    
    /**
     * Starts the server on the default port.
     */
    public static void StartListen()
    {
        int port = 10001;
        createTCP(port);
    }
    
    /**
     * Accepts connections in a loop, one handler thread per client.
     * @param port the TCP port to listen on
     */
    public static void createTCP(int port)
    {
        try
        {
            ServerSocket serverSocket = new ServerSocket(port);
            System.out.println("server listening on 127.0.0.1:" + port);
            
            while (true)
            {
                Socket socket = serverSocket.accept();
                // one new thread per connection (the subject of question 2)
                new Thread(new ChildThread(socket)).start();
            }
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}

class ChildThread implements Runnable
{
    BufferedReader bufrea = null;
    Socket socket = null;

    public ChildThread(Socket socket)
    {
        this.socket = socket;
    }

    @Override
    public void run()
    {
        try
        {
            bufrea = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            
            // question 1 happens here: after the client disconnects,
            // readLine() returns null and this loop prints " : null" forever
            while (true)
            {
                System.out.println(" : " + bufrea.readLine());
            }
            
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
        finally 
        {
            try
            {
                if (bufrea != null)
                {
                    bufrea.close();
                }
                socket.close();
            }
            catch (Exception e) 
            {
                System.out.println(e.getMessage());
            }
        }
    }
}
Feb.21,2022

  • For the first question: a client disconnect is either normal or abnormal. The endless loop the asker describes happens because the client has closed its sending channel, so there is no more data for the server to read. The program should check the value returned by each read and break out of the loop when it reads null, send whatever still needs to be sent, and then close the channel. That covers the normal case. There are two abnormal cases. One is that the client program crashes or exits abnormally: when the server next reads, it gets a "connection reset by peer" exception, and you only need to catch the exception and close the channel. The other is a network outage or a client power failure, which requires a heartbeat mechanism to detect.
  • For the second question, it is more complicated. The asker is using the blocking I/O (BIO) model. You can refer to the processing mechanism of Tomcat (whose BIO connector is now outdated).
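The normal-close detection described in the first answer can be sketched as a small self-contained demo. The class name ReadUntilClosed, the ephemeral port, and the two test messages are my own choices, not from the question:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ReadUntilClosed {
    // Reads lines until the peer closes the connection.
    // readLine() returns null on a clean close; an abrupt client crash
    // surfaces as an IOException ("connection reset") instead.
    static int readAll(Socket socket) throws IOException {
        int count = 0;
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {   // null => peer closed
            System.out.println("received: " + line);
            count++;
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        // Self-contained demo on the loopback interface (port 0 = ephemeral).
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("127.0.0.1", port);
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("hello");
                    out.println("bye");
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            client.start();
            try (Socket conn = server.accept()) {
                System.out.println("lines read: " + readAll(conn));
            }
            client.join();
        }
    }
}
```

The key change versus the question's code is that the loop condition is the return value of readLine() itself, so the server falls out of the loop the moment the client closes its side.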


Tomcat first uses LimitLatch to limit the number of connections. When a connection arrives, the Socket is wrapped as a task and put on a queue feeding a thread pool, instead of a thread being created immediately. The task queue acts as a buffer, but because of the blocking I/O model this approach still cannot make full use of server resources: a thread is blocked while waiting for I/O and can do nothing else during that time, which wastes thread resources.
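Tomcat's internals are more involved than this, but the queue-plus-pool idea can be sketched with a plain ExecutorService. The pool size, the class name PooledServer, and the echo behavior are illustrative assumptions, not Tomcat's actual code:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledServer {
    // Accept loop: sockets are handed to a fixed-size pool, so at most
    // poolSize handler threads ever exist; extra connections wait in the
    // pool's internal task queue instead of each spawning a new thread.
    static void serve(ServerSocket server, ExecutorService pool) throws IOException {
        while (true) {
            Socket socket = server.accept();
            pool.execute(() -> handle(socket));   // enqueue, do not spawn
        }
    }

    // Illustrative handler: echoes each line back so the effect is observable.
    static void handle(Socket socket) {
        try (Socket s = socket;
             BufferedReader in = new BufferedReader(
                 new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {   // null => client closed
                out.println("echo: " + line);
            }
        } catch (IOException e) {
            e.printStackTrace();   // e.g. connection reset by peer
        }
    }

    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        serve(new ServerSocket(10001), pool);
    }
}
```

Note that this caps the number of threads but does not remove the underlying problem: each pooled thread is still blocked inside readLine() for the lifetime of its connection.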

The blocking model is not the best choice under high concurrency; I/O multiplexing is, and it is quite mature. The asker can first study the five major I/O models and then come back to this second question.
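As a rough illustration of the multiplexing approach, here is a minimal single-threaded echo server built on java.nio's Selector. The class name, port, and echo behavior are my own choices; a production server would also need per-connection buffers and partial-write handling:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioEchoServer {
    // One thread multiplexes every connection: select() blocks until some
    // channel is ready, so no thread is ever parked waiting on one socket.
    static void serve(int port) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(port));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);
                    if (n == -1) {           // -1 => peer closed: clean up
                        key.cancel();
                        client.close();
                    } else {
                        buf.flip();
                        client.write(buf);   // echo back (small payloads only)
                    }
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        serve(10001);
    }
}
```

Here a disconnect is perceived the same way as in the blocking case, just with a different signal: read() returns -1 instead of readLine() returning null.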


  • Question 1: a heartbeat mechanism is generally used to detect whether a client is still alive.
  • Question 2: each client connection still gets its own thread, but large-scale deployments put a load balancer in front of a server cluster. Users perceive a single server, while behind the balancer each machine only handles the number of users (that is, threads) within its own capacity.
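A minimal form of the heartbeat idea can be sketched with SO_TIMEOUT: the server treats prolonged silence as a dead client. The class name, timeout value, and the convention that clients send periodic lines are illustrative assumptions:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class HeartbeatReader {
    // Treats silence longer than timeoutMs as a dead peer. The client is
    // expected to send a line (real data or a "PING") once per window.
    static String status(Socket socket, int timeoutMs) {
        try {
            socket.setSoTimeout(timeoutMs);   // read throws after timeoutMs of silence
            BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
            String line = in.readLine();
            if (line == null) {
                return "closed";               // clean disconnect
            }
            return "alive: " + line;
        } catch (SocketTimeoutException e) {
            return "dead (no heartbeat)";      // network outage / power loss
        } catch (IOException e) {
            return "error: " + e.getMessage(); // e.g. connection reset
        }
    }
}
```

This covers the case the exception handler cannot: a pulled cable or power loss produces no FIN and no reset, so only the absence of expected traffic reveals the dead client.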