Major History Cache/DDS Endpoint Redesign

It was decided to connect the DDS Endpoint directly to the RTPS
Endpoint. The history_cache Entity will be converted to a generic
History Cache according to the RTPS Specification.
Because of consistency requirements, the implementation was changed to a
single-process, single-port RAM design.
This should fully (blindly) implement the RTPS Reader side of the DDS
Entity.
Greek 2021-01-21 12:51:39 +01:00
parent f2826ddd20
commit bdb397ae7d
3 changed files with 1877 additions and 3 deletions


@@ -439,7 +439,7 @@ INSTANCE MEMORY
+-------------------------------------------------------------+
05| STATUS_INFO | {A/B}
+-------------------------------------------------------------+
-06| SAMPLE_COUNT | {A/B} [only MAX_SAMPLES_PER_INSTANCE/HISTORY]
+06| SAMPLE_COUNT | {A/B}
+-------------------------------------------------------------+
07| DISPOSED_GENERATION_COUNT | {A} [only GENERATION_COUNTERS]
+-------------------------------------------------------------+
@@ -576,6 +576,30 @@ Instead of blocking on the unresponsive Reader, the Writer should be allowed to
and proceed in updating its queue. The Writer should determine the inactivity of a Reader by using a
mechanism based on the rate and number of ACKNACKs received.
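The rate/count heuristic described above can be sketched as a small behavioral model. This is an illustrative Python sketch, not code from this repository; the class name, thresholds, and the "same gap re-requested too often" criterion are our assumptions:

```python
import time

class ReaderActivityTracker:
    """Sketch: flag a reliable Reader as inactive when it keeps NACKing
    the same sequence numbers without making progress. Thresholds and
    the heuristic itself are illustrative assumptions."""

    def __init__(self, max_repeats=4, window_s=5.0):
        self.max_repeats = max_repeats  # identical ACKNACKs tolerated
        self.window_s = window_s        # time window for the count
        self.last_bitmap = None
        self.repeats = 0
        self.first_seen = None

    def on_acknack(self, missing_sns):
        """Return True when the Reader should be considered inactive."""
        now = time.monotonic()
        bitmap = frozenset(missing_sns)
        if bitmap == self.last_bitmap:
            self.repeats += 1
        else:
            self.last_bitmap = bitmap
            self.repeats = 1
            self.first_seen = now
        # Inactive: the same gap was re-requested too often, too fast.
        return (self.repeats >= self.max_repeats
                and now - self.first_seen <= self.window_s)
```

Once a Reader is flagged inactive, the Writer can release the blocked queue slots and proceed; the Reader is re-activated as soon as a different ACKNACK bitmap arrives.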
https://community.rti.com/content/forum-topic/instance-resources-dispose-and-unregister
Note that this means that with the default QoS settings RTI Connext DDS DataWriters do not release resources
of instances that have been "disposed" but are still registered. The reason is that there are various
scenarios under which "forgetting disposed instances" could lead to inconsistent or erroneous outcomes.
For example:
Scenario 1: With OWNERSHIP Qos Policy EXCLUSIVE and DURABILITY Qos Policy TRANSIENT_LOCAL, removing all
the DataWriter state associated with disposed (but still registered) instances would prevent the
DataWriter from maintaining Ownership of the instance in the presence of late-joining DataReaders.
Scenario 2: With OWNERSHIP Qos Policy SHARED, DURABILITY Qos Policy TRANSIENT_LOCAL, and DESTINATION_ORDER
Qos Policy BY_SOURCE_TIMESTAMP, removing all the DataWriter state associated with disposed (but still
registered) instances could lead to situations in which a late-joiner DataReader does not get notified
about the most recent state of an existing instance.
https://community.rti.com/content/forum-topic/instance-resources-dispose-and-unregister
The reason RTI DDS DataReaders do not release resources of instances in the NOT_ALIVE_DISPOSED state is
that there are various scenarios under which this could lead to inconsistent or erroneous outcomes.
For example:
Scenario 1: With EXCLUSIVE ownership, removing the resources associated with the instance would forget
which DataWriter "owns" the instance, and if a new DataWriter with lower strength wrote the instance,
the update would be incorrectly accepted.
Scenario 2: With SHARED ownership and destination order by SOURCE timestamp, removing the resources
associated with the instance would forget the source timestamp of the deletion, and if a different
DataWriter were to write the instance with an earlier timestamp, the update would be incorrectly accepted.
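Both reader-side scenarios boil down to per-instance state that must survive a dispose. A minimal Python model (names, fields, and the GUID-as-string simplification are ours, not from the DDS spec or this repository):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InstanceState:
    """Per-instance state a DataReader keeps even in NOT_ALIVE_DISPOSED
    (illustrative sketch)."""
    owner: Optional[str] = None   # GUID of the owning DataWriter
    owner_strength: int = -1
    last_source_ts: float = 0.0

def accept_sample(state, writer, strength, source_ts,
                  exclusive_ownership, by_source_timestamp):
    """Return True if the incoming sample should be applied."""
    if exclusive_ownership:
        # Scenario 1: a weaker DataWriter must not take over a
        # disposed-but-registered instance from its current owner.
        if (state.owner is not None and writer != state.owner
                and strength <= state.owner_strength):
            return False
        state.owner, state.owner_strength = writer, strength
    if by_source_timestamp:
        # Scenario 2: samples older than the dispose's source
        # timestamp must be rejected, so that timestamp is kept.
        if source_ts < state.last_source_ts:
            return False
        state.last_source_ts = source_ts
    return True
```

Dropping `InstanceState` on dispose would make both checks impossible, which is exactly why the resources are retained.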
INVALIDATION
============


@@ -69,7 +69,8 @@
* Currently an RTPS Writer with DURABILITY TRANSIENT_LOCAL sends historical data to all matched readers, regardless of whether they are VOLATILE or TRANSIENT_LOCAL.
* Assert Heartbeat period > Heartbeat Suppression Period
* Can I request (NACK) SNs that were NOT announced by the writer (> last_sn in Heartbeat)?
* As it currently works, if a new sample is received and the QoS is not KEEP_ALL/RELIABLE, the oldest sample is removed.
In case of DESTINATION_ORDER = BY_SOURCE_TIMESTAMP it could happen that we effectively remove a sample whose source timestamp is later than that of the one we received.
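The hazard noted in that bullet can be avoided by evicting on source timestamp instead of reception order. An illustrative Python sketch (function name and the dict layout are ours, not the project's actual code):

```python
def insert_keep_last(history, new_sample, depth):
    """KEEP_LAST insert under DESTINATION_ORDER BY_SOURCE_TIMESTAMP
    (sketch): when the history is full, evict the sample with the
    oldest SOURCE timestamp, which is not necessarily the oldest
    received one, and may even be the incoming sample itself."""
    if len(history) < depth:
        history.append(new_sample)
        return None
    # Naive reception-order eviction would pop history[0]; with
    # BY_SOURCE_TIMESTAMP that can discard a newer sample.
    victim = min(history + [new_sample], key=lambda s: s["source_ts"])
    if victim is not new_sample:
        history.remove(victim)
        history.append(new_sample)
    return victim  # the sample that was effectively dropped
```

Note that when the incoming sample itself carries the oldest source timestamp, it is the one dropped and the history stays unchanged.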
* Fast-RTPS does not follow the DDSI-RTPS Specification
- Open Github Issue
@@ -185,7 +186,7 @@ DESIGN DECISIONS
and is used for the insert operation to find a new empty slot. This in effect means that all frame sizes
have to be a multiple of 2 (all frame addresses have to be aligned to 2).
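The alignment constraint above amounts to rounding every frame size up to the next even value so that frame addresses stay 2-aligned. A one-line sketch (the function name is ours):

```python
def align2(frame_size):
    """Round a frame size up to the next multiple of 2 so that every
    frame address stays 2-aligned (sketch of the stated constraint)."""
    return (frame_size + 1) & ~1
```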
-* The History Cache (HC) is the interface between RTPS and DDS. The History Cache contains the Sample Info
+* !REJECTED! The History Cache (HC) is the interface between RTPS and DDS. The History Cache contains the Sample Info
and Payload memories. The HC has two input "sides", one is connected to the DDS and one to the RTPS entity.
Housing the memories inside the HC entity and abstracting the direct memory address via opcode requests
allows the memory interface to be replaced in future (e.g. AXI Lite).
@@ -199,6 +200,18 @@ DESIGN DECISIONS
Because of this, it was decided against concurrent input handling, in light of the fact that the history
cache will most commonly be quite large, and iterating through all of its entries takes correspondingly long.
* Since most of the DDS QoS policies need information that is directly available to the History Cache (HC),
it makes sense to integrate most of the DDS functionality directly into the HC to save space
and improve performance. Furthermore, the information a DDS Entity needs to store differs enough
from the generic HC defined in the RTPS Specification to warrant a separate entity for each.
The DDS Entity will directly connect to the RTPS Endpoint. A separate generic HC, which
follows the RTPS Specification, will be implemented.
The RTPS Endpoint will have to output multiple versions of Changes, depending on the connected
Entity, in order to facilitate this design decision.
* Since the "reading" side needs to have consistent state during its processing, it does not make
sense to implement dual-port RAMs for the History Cache.
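The single-port decision can be modeled behaviorally: while the reading side holds the port, writes are deferred rather than applied, so the reader always iterates over a consistent snapshot. An illustrative Python sketch (class and method names are ours; the real design is RTL, not software):

```python
class SinglePortHC:
    """Sketch of single-port arbitration for the History Cache:
    writes arriving during a read pass are queued, not applied,
    so the reading side sees consistent state (illustrative)."""

    def __init__(self):
        self.ram = {}
        self.reading = False
        self.pending = []

    def write(self, addr, value):
        if self.reading:
            self.pending.append((addr, value))  # stall the writer
        else:
            self.ram[addr] = value

    def begin_read(self):
        self.reading = True

    def end_read(self):
        self.reading = False
        for addr, value in self.pending:  # drain deferred writes
            self.ram[addr] = value
        self.pending.clear()
```

With a dual-port RAM the write on the second port could mutate an entry mid-iteration, which is exactly the inconsistency the single-port design rules out.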
PROTOCOL NONCOMPLIANCE
======================

src/dds_endpoint.vhd: new file, 1837 lines (diff suppressed because it is too large).