Handling these complicated data pipelines is called
data plumbing. It takes the lion's share of a data engineer's time and effort, and dependency on data engineers naturally becomes very high. Sadly, no matter how much effort is invested, it remains extremely difficult to make data distribution flexible unless the fundamental structural issue of N-to-N connections is resolved.

A data hub is a way to turn the N-to-N structure into an N-to-1-to-N structure by adding an intermediate layer between data producers and data consumers. Producers publish data to the hub, and consumers subscribe to the topics stored in the hub that they are interested in. By connecting through the hub, data engineers can manage all data pipelines in a single view.
In an N-to-N structure, data engineers have to configure both ends of each pipeline separately. It is not hard to imagine their workload ballooning as the number of producers or consumers grows (up to N x N pipelines). Considering that both producers and consumers consist of heterogeneous systems, the complexity rises even higher.
In an N-to-1-to-N structure, on the other hand, data engineers only manage producer-to-hub and hub-to-consumer pipelines regardless of the number of producers and consumers, which reduces the quantitative workload significantly (at most N + N pipelines). For example, with 10 producers and 10 consumers, a point-to-point design can require up to 100 pipelines, while a hub needs at most 20. This also lowers the hurdle qualitatively, because engineers no longer have to wire two different systems directly to each other. Plus, the data hub provides a unified view over all data pipelines.
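To make the decoupling concrete, here is a minimal sketch of the hub idea in Python. It is a toy in-memory model, not a real data hub: the DataHub class, the topic name, and the callbacks are all illustrative. The point is that producers publish to named topics without knowing who consumes, and consumers subscribe without knowing who produces.

```python
from collections import defaultdict

class DataHub:
    """A toy in-memory hub: producers publish to named topics,
    and consumers subscribe only to the topics they care about."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # A consumer registers interest in a topic; it never needs
        # to know which producer feeds that topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, record):
        # A producer pushes a record to the hub; it never needs
        # to know who consumes it downstream.
        for callback in self._subscribers[topic]:
            callback(record)

# Hypothetical usage: two consumers, one producer, one shared topic.
hub = DataHub()
hub.subscribe("orders", lambda r: print("warehouse got:", r))
hub.subscribe("orders", lambda r: print("analytics got:", r))
hub.publish("orders", {"id": 1, "item": "book"})
```

Adding a new consumer here means one more subscribe call against the hub, not a new connection to every producer, which is exactly the N + N effect described above.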

Kafka, originally developed at LinkedIn, is a widely known open-source platform that implements this idea. Informatica Data Integration Hub is a commercial product built around the same concept.
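For reference, a minimal producer/consumer round trip with the kafka-python client might look like the sketch below. It assumes a broker reachable at localhost:9092 and a topic named "clicks"; the broker address, topic name, and payload are illustrative assumptions, not details from the text above.

```python
# Minimal Kafka round trip using the kafka-python client.
# Assumptions (illustrative only): a broker at localhost:9092
# and a topic named "clicks".
from kafka import KafkaProducer, KafkaConsumer

# Producer side: publish a record to the hub.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("clicks", b'{"user": 42, "page": "/home"}')
producer.flush()  # make sure the record actually leaves the client

# Consumer side: subscribe to the topic of interest.
consumer = KafkaConsumer(
    "clicks",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",  # read from the beginning of the topic
    consumer_timeout_ms=5000,      # stop iterating after 5s of silence
)
for message in consumer:
    print(message.value)
```

Note how neither side references the other: both talk only to the broker, which is what lets pipelines be added or removed independently on each side of the hub.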