Can a Kafka-dependent Flink job be scaled up dynamically by copying the job?

As of the current version, Flink does not seem to support dynamic scaling: to increase a job's resource allocation, you must first stop the running job. I currently have a Flink job that consumes data from one Kafka topic and sinks it to another topic. So I would like to ask: if I simply deploy a copy of the Flink job (keeping the consumer's group_id the same), can that achieve the original goal of dynamically increasing resources? And if so, can YARN or Kubernetes be used to schedule the resources dynamically?

Mar.18,2022

That doesn't seem right.
Messages whose key was originally key1 were consumed by the original Flink job, which built up keyed state for them. After you add the copy, messages with key = "key1" may be delivered to the new Flink job instead. So what happens to the state sitting in the old Flink job? At the same time, the state in the new Flink job is not computed on top of the state in the old job, so the results will be wrong.
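The partition-splitting behavior behind this answer can be sketched with a toy model. Note the assumptions: `partitionFor` is a simplified stand-in for Kafka's key-to-partition hashing, and `assign` is a round-robin stand-in for the group coordinator's partition assignment; neither is Kafka's actual algorithm, but the consequence is the same: once a second job joins the same consumer group, each job owns only some partitions, so a key that used to reach the old job may start arriving at the new one, which has none of the old job's state for it.

```java
import java.util.*;

public class GroupRebalanceSketch {
    // Toy stand-in for Kafka's key -> partition mapping (not the real murmur2 hash).
    public static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode()) % numPartitions;
    }

    // Round-robin stand-in for the group coordinator's partition assignment:
    // maps each partition to the consumer (job) that owns it.
    public static Map<Integer, String> assign(List<String> consumers, int numPartitions) {
        Map<Integer, String> owner = new HashMap<>();
        for (int p = 0; p < numPartitions; p++) {
            owner.put(p, consumers.get(p % consumers.size()));
        }
        return owner;
    }

    public static void main(String[] args) {
        int numPartitions = 4;
        String key = "key1";
        int p = partitionFor(key, numPartitions);

        // Before: one job in the group owns every partition, so it sees every key
        // and accumulates all the keyed state.
        Map<Integer, String> before = assign(List.of("old-job"), numPartitions);
        System.out.println("before rebalance: " + key + " -> " + before.get(p));

        // After copying the job with the same group id: the partitions are split
        // between the two jobs. If the owner of key1's partition changed, the new
        // job receives key1 but has no state for it, and the old job's state for
        // key1 goes stale.
        Map<Integer, String> after = assign(List.of("old-job", "new-job"), numPartitions);
        System.out.println("after rebalance:  " + key + " -> " + after.get(p));
    }
}
```

For what it's worth, the supported way to increase parallelism is to take a savepoint, stop the job, and resubmit it with a higher parallelism, which lets Flink redistribute the keyed state consistently instead of splitting it between two independent jobs.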
