Does DDS have a Broker?
I've been trying to read up on the DDS standard, and OpenSplice in particular, and I'm left wondering about the architecture.

Does DDS require that a broker be running, or any particular daemon to manage message exchange and coordination between different parties? If I just launch a single process publishing data for a topic, and launch another process subscribing for the same topic, is this sufficient? Is there any reason one might need another process running?

Alternatively, does it use UDP multicast to provide some sort of automatic discovery between publishers and subscribers?

In general, I'm trying to contrast this to traditional queue architectures such as MQ Series or EMS.

I'd really appreciate it if anybody could help shed some light on this.

Thanks,

Faheem

Bier answered 29/12, 2011 at 20:39 Comment(0)

DDS doesn't have a central broker; it uses a multicast-based discovery protocol. OpenSplice has a model with a service on each node, but that is an implementation detail. If you look at RTI DDS, for example, it doesn't have one.
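To illustrate why no broker is needed, here is a toy sketch of the discovery pattern described above. The names (`Participant`, `Announcement`, `multicast`) are illustrative, not the real DDS API, and the in-memory `multicast` function stands in for the UDP multicast channel: each participant announces its endpoints to all peers, and every peer matches readers to writers in its own local cache, so there is no central process to crash.

```python
# Simplified sketch of brokerless, DDS-style discovery.
# Illustrative names only -- this is NOT the real DDS API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Announcement:
    participant: str   # who is announcing
    kind: str          # "writer" or "reader"
    topic: str         # topic name

class Participant:
    def __init__(self, name):
        self.name = name
        self.endpoints = []   # (kind, topic) pairs this participant owns
        self.matches = []     # (topic, remote participant) pairs

    def announce(self, kind, topic):
        return Announcement(self.name, kind, topic)

    def on_announcement(self, ann):
        # Peer-to-peer matching: pair a local reader with a remote
        # writer (and vice versa) on the same topic -- no broker asked.
        for kind, topic in self.endpoints:
            if topic == ann.topic and kind != ann.kind:
                self.matches.append((topic, ann.participant))

def multicast(participants, ann):
    # Stand-in for the UDP multicast channel: deliver to all peers.
    for p in participants:
        if p.name != ann.participant:
            p.on_announcement(ann)

pub = Participant("pub"); pub.endpoints.append(("writer", "SensorData"))
sub = Participant("sub"); sub.endpoints.append(("reader", "SensorData"))
peers = [pub, sub]

multicast(peers, pub.announce("writer", "SensorData"))
multicast(peers, sub.announce("reader", "SensorData"))

print(sub.matches)  # -> [('SensorData', 'pub')]
```

In real DDS (the DDSI-RTPS wire protocol), this role is played by the participant and endpoint discovery phases running over well-known multicast addresses, but the principle is the same: every peer builds its own view, so losing any one process never takes discovery down for everyone else.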

Womb answered 30/12, 2011 at 21:41 Comment(2)
Could you please tell me what a central broker is?Jackfish
By a central broker I mean one central process that administrates everything in the system; if that process crashes, then nobody can find each other. With DDS this is not the case: it is fully distributed.Womb

I think it's indeed good to differentiate between a 'centralized broker' architecture (where that broker could be, or become, a single point of failure) and a service/daemon on each machine that manages the traffic flows based on DDS QoS policies such as importance (DDS: transport-priority) and urgency (DDS: latency-budget).

It's interesting to note that most people consider it absolutely necessary to have a (real-time) process scheduler on a machine that manages the CPU as a critical shared resource (based on time-slicing, priority classes, etc.), yet when it comes to DDS, which is all about distributing information (rather than processing application code), people often find it 'strange' that a 'network scheduler' would come in handy: a service that manages the network (interface) as a shared resource and schedules traffic based on QoS-policy-driven 'packing' and the utilization of multiple traffic-shaped priority lanes.

And this is exactly what OpenSplice does in its (optional) federated-architecture mode: multiple applications running on a single machine share data through a shared-memory segment, and a networking service (daemon) for each physical network interface schedules the inbound and outbound traffic based on the actual QoS policies for urgency and importance. Because such a service has access to all of the node's data, it can also combine samples from different topics and different applications into (potentially large) UDP frames, perhaps exploiting some of the available latency budget for this 'packing', and thus properly balance efficiency (throughput) against determinism (latency/jitter). End-to-end determinism is further facilitated by scheduling the traffic over pre-configured, traffic-shaped priority lanes with private Rx/Tx threads and DiffServ settings.
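The latency-budget-driven packing described above can be sketched as a simple batching policy. This is not OpenSplice code, just a toy model under stated assumptions: each sample arrives with a timestamp and a latency budget, and the scheduler holds samples back, at most until the oldest pending sample's budget would expire or the frame is full, so that several small samples can travel in one UDP frame.

```python
# Toy sketch of latency-budget driven packing (NOT OpenSplice code):
# delay each sample for at most its latency budget, hoping to batch
# it with later samples into one larger frame -- trading a bounded,
# known delay for throughput.

def pack(samples, capacity=3):
    """samples: list of (t_arrival, payload, budget), sorted by time.
    Returns the list of frames, each a list of payloads."""
    frames, pending, flush_by = [], [], None
    for t, payload, budget in samples:
        # Flush if holding the pending frame any longer would blow
        # the oldest pending sample's latency budget.
        if pending and t > flush_by:
            frames.append(pending)
            pending = []
        pending.append(payload)
        deadline = t + budget
        flush_by = deadline if len(pending) == 1 else min(flush_by, deadline)
        if len(pending) == capacity:  # frame full: send immediately
            frames.append(pending)
            pending = []
    if pending:
        frames.append(pending)
    return frames

# Samples arriving close together share a frame; a late sample
# forces the earlier ones out before their budget (5) expires.
print(pack([(0, "a", 5), (1, "b", 5), (10, "c", 5)]))
# -> [['a', 'b'], ['c']]
```

A zero latency budget degenerates to one sample per frame (pure determinism), while a generous budget lets the service fill frames to capacity (pure throughput), which is exactly the trade-off the QoS policy exposes.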

So having a network-scheduling daemon per node certainly has advantages, also because it decouples the network from faulty applications that could be either 'over-productive' (blowing up the system) or 'under-reactive' (causing system-wide retransmissions). That aspect is often forgotten when people argue that a network-scheduling daemon could be viewed as a single point of failure; the other view is that without any arbitration, any standalone application that talks directly to the wire could be viewed as a potential system threat when it starts misbehaving, as described above, for ANY reason.

Anyhow, it's always a controversial discussion; that's why OpenSplice DDS (as of v6) supports both deployment modes: federated and non-federated (also called 'standalone' or 'single process').

Hope this is somewhat helpful.

Bombardier answered 23/7, 2013 at 11:41 Comment(0)

The DDS specification is designed so that implementations are not required to have any central daemons. But of course, it's a choice of implementation.

Implementations like RTI DDS, MilSOFT DDS and CoreDX DDS have decentralized, peer-to-peer architectures that do not need any daemons (discovery is done with multicast on LANs). This design has many advantages, such as fault tolerance, low latency and good scalability. It also makes the middleware really easy to use, since there are no daemons to administer: you just run the publishers and subscribers, and the rest is handled automatically by DDS.

OpenSplice DDS used to require daemon services running on each node, but as of v6 daemons are no longer needed. (The daemon option is still supported.)

OpenDDS is also peer-to-peer, but as far as I know it needs a central daemon running for discovery.

Seraglio answered 15/2, 2012 at 21:25 Comment(1)
OpenDDS does have a central daemon that is mandatory in some configurations, but it now also supports the RTPS wire protocol, which doesn't require this central daemon.Womb
