From 38e753aa93eb2e1d18ca0ab7a898048c68a31208 Mon Sep 17 00:00:00 2001 From: Greek64 Date: Sun, 11 Jun 2023 13:13:31 +0200 Subject: [PATCH] tmp --- src/REF.txt | 97 ++++++++++++++++------------ src/TODO.txt | 54 +++++++++++++--- src/Tests/Level_2/L2_Type1_test1.vhd | 2 +- src/Tests/Level_2/L2_Type1_test2.vhd | 4 +- 4 files changed, 105 insertions(+), 52 deletions(-) diff --git a/src/REF.txt b/src/REF.txt index 9626085..851f5a8 100644 --- a/src/REF.txt +++ b/src/REF.txt @@ -791,6 +791,14 @@ of the Reader. It is also not able to drop out-of-order samples on the Reader side as this requires keeping track of the largest sequence number received from each remote Writer. +8.4.15.6 Reclaiming Finite Resources from Unresponsive Readers (DDSI-RTPS) +For a Writer, reclaiming queue resources should happen when all Readers have acknowledged a sample in the +queue and resources limits dictate that the old sample entry is to be used for a new sample. +There may be scenarios where an alive Reader becomes unresponsive and will never acknowledge the Writer. +Instead of blocking on the unresponsive Reader, the Writer should be allowed to deem the Reader as ‘Inactive’ +and proceed in updating its queue. The Writer should determine the inactivity of a Reader by using a +mechanism based on the rate and number of ACKNACKs received. + 8.5.4.2 (DDSI-RTPS) For the purpose of interoperability, it is sufficient that an implementation provides the required built-in Endpoints and reliable communication that satisfies the general requirements listed in 8.4.2. @@ -803,6 +811,21 @@ If any information is not present, the implementation can assume the default val In order to implement the DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS policy, implementations must include an InfoTimestamp Submessage with every update from a Writer. +8.7.5 Group Ordered Access (DDSI-RTPS) + +When a Publisher is configured with access scope GROUP, all Data submessages and the first DataFrag +submessage from any Writer within the Publisher are accompanied with a GROUP sequence number sent as +part of the in-line QoS. The GROUP sequence number is a strictly monotonically increasing sequence number +originating from the Publisher. Each time that a DataWriter attached to a Publisher makes a CacheChange +(i.e., increments its own Writer sequence number), the GROUP sequence number is incremented. + +A DataReader attached to a Subscriber configured with access scope GROUP first orders the samples from a +remote Writer as it would in the cases where access scope GROUP is not set. Once a sample is ready to be +committed to the DDS DataReader, it will not commit it. Instead, it will hand it off to a HistoryCache of the +Subscriber where ordering across remote DataWriters belonging to the same Publisher occurs. A sample with +GROUP sequence number GSN can be committed to the DDS DataReader from the Subscriber’s history cache +if any of the following conditions apply + 9.4.2.11 (DDSI-RTPS) The ParameterList may contain multiple Parameters with the same value for the parameterId. This is used to provide a collection of values for that kind of Parameter. @@ -811,6 +834,39 @@ This is used to provide a collection of values for that kind of Parameter. For alignment purposes, the CDR stream is logically reset for each parameter value (i.e., no initial padding is required) after the parameterId and length are serialized. 
+2.2.2.4.2.22 assert_liveliness (DDS) +NOTE: Writing data via the write operation on a DataWriter asserts liveliness on the DataWriter itself +and its DomainParticipant. Consequently the use of assert_liveliness is only needed if the application +is not writing data regularly. + +2.2.2.4.2.11 write (DDS) +If (RESOURCE_LIMITS max_samples < RESOURCE_LIMITS max_instances * HISTORY depth), then in the situation +where the max_samples resource limit is exhausted the Service is allowed to discard samples of some other +instance as long as at least one sample remains for such an instance. If it is still not possible to make +space available to store the modification, the writer is allowed to block. + +2.2.2.4.2.7 unregister_instance (DDS) +If after that, the application wants to modify (write or dispose) the instance, it has to register it again, +or else use the special handle value HANDLE_NIL. + +2.2.2.5.3.8 read (DDS) +Samples that contain no data do not count towards the limits imposed by the RESOURCE_LIMITS QoS policy. + +2.2.2.5.3.8 read (DDS) +The act of reading a sample sets its sample_state to READ. If the sample belongs to the most recent +generation of the instance, it will also set the view_state of the instance to NOT_NEW. It will not +affect the instance_state of the instance. + +2.2.2.5.5 SampleInfo CLass (DDS) +The publication_handle that identifies locally the DataWriter that modified the instance. The publication_handle is the +same InstanceHandle_t that is returned by the operation get_matched_publications on the DataReader and can also +be used as a parameter to the DataReader operation get_matched_publication_data. + +2.2.3.16 LIFESPAN (DDS) +This QoS relies on the sender and receiving applications having their clocks sufficiently synchronized. If +this is not the case and the Service can detect it, the DataReader is allowed to use the reception timestamp +instead of the source timestamp in its computation of the ‘expiration time.’ + 7.4.3.5.2 Encoding of Optional Members (DDS-XTYPES) PLAIN_CDR serializes optional members by prepending either a ShortMemberHeader or a 12 byte LongMemberHeader. See Clause 7.4.1.1.5.2. The associated size is set to zero if the @@ -853,19 +909,6 @@ is the time when all previous DDS samples has been received—the time at which If DDS samples are all received in order, the committed time will be same as reception time. However, if DDS samples are lost on the wire, then the committed time will be later than the initial reception time. -2.2.3.16 LIFESPAN (DDS) -This QoS relies on the sender and receiving applications having their clocks sufficiently synchronized. If -this is not the case and the Service can detect it, the DataReader is allowed to use the reception timestamp -instead of the source timestamp in its computation of the ‘expiration time.’ - -8.4.15.6 Reclaiming Finite Resources from Unresponsive Readers (DDSI-RTPS) -For a Writer, reclaiming queue resources should happen when all Readers have acknowledged a sample in the -queue and resources limits dictate that the old sample entry is to be used for a new sample. -There may be scenarios where an alive Reader becomes unresponsive and will never acknowledge the Writer. -Instead of blocking on the unresponsive Reader, the Writer should be allowed to deem the Reader as ‘Inactive’ -and proceed in updating its queue. The Writer should determine the inactivity of a Reader by using a -mechanism based on the rate and number of ACKNACKs received. 
- https://community.rti.com/content/forum-topic/instance-resources-dispose-and-unregister
Note that this means that with the default QoS settings RTI Connext DDS DataWriters do not release resources
of instances that have been "disposed" but are still registered. The reason is that there are various
@@ -897,40 +940,12 @@ the system (so they are no longer matched with the DataReader). Note that the in
NOT_ALIVE_NO_WRITERS instance_state or a NOT_ALIVE_DISPOSED, depending on whether the instance was disposed
prior to losing all the DataWriters.

-2.2.2.5.3.8 read (DDS)
-Samples that contain no data do not count towards the limits imposed by the RESOURCE_LIMITS QoS policy.
-
-2.2.2.5.3.8 read (DDS)
-The act of reading a sample sets its sample_state to READ. If the sample belongs to the most recent
-generation of the instance, it will also set the view_state of the instance to NOT_NEW. It will not
-affect the instance_state of the instance.
-
https://community.rti.com/static/documentation/connext-dds/5.2.0/doc/manuals/connext_dds/html_files/RTI_ConnextDDS_CoreLibraries_UsersManual/Content/UsersManual/DESTINATION_ORDER_QosPolicy.htm#sending_2410472787_644578
Data will be delivered by a DataReader in the order in which it was sent. If data arrives on the network
with a source timestamp earlier than the source timestamp of the last data delivered, the new data will be
dropped. This ordering therefore works best when system clocks are relatively synchronized among writing
machines.

-2.2.2.4.2.22 assert_liveliness (DDS)
-NOTE: Writing data via the write operation on a DataWriter asserts liveliness on the DataWriter itself
-and its DomainParticipant. Consequently the use of assert_liveliness is only needed if the application
-is not writing data regularly.
-
-2.2.2.4.2.11 write (DDS)
-If (RESOURCE_LIMITS max_samples < RESOURCE_LIMITS max_instances * HISTORY depth), then in the situation
-where the max_samples resource limit is exhausted the Service is allowed to discard samples of some other
-instance as long as at least one sample remains for such an instance. If it is still not possible to make
-space available to store the modification, the writer is allowed to block.
-
-2.2.2.4.2.7 unregister_instance (DDS)
-If after that, the application wants to modify (write or dispose) the instance, it has to register it again,
-or else use the special handle value HANDLE_NIL.
-
-2.2.2.5.5 SampleInfo CLass (DDS)
-The publication_handle that identifies locally the DataWriter that modified the instance. The publication_handle is the
-same InstanceHandle_t that is returned by the operation get_matched_publications on the DataReader and can also
-be used as a parameter to the DataReader operation get_matched_publication_data.
-
INVALIDATION
============

diff --git a/src/TODO.txt b/src/TODO.txt
index e54c596..2db42e1 100644
--- a/src/TODO.txt
+++ b/src/TODO.txt
@@ -112,6 +112,13 @@ We have to change the RTPS Reader to request the last SN, if the RTPS Writer
did not publish for a minimum_separation period.
* [8.4.7.1 RTPS Writer, DDSI-RTPS 2.3] states: "nackSuppressionDuration = ignore requests for data from
negative acknowledgments that arrive ‘too soon’ after the corresponding change is sent."
+* According to [Table 8.9, 8.2.6 The RTPS Endpoint, DDSI-RTPS 2.3], topicKind "indicates whether the Endpoint supports instance lifecycle management operations", while at the same time "indicates whether the Endpoint is associated with a DataType that has defined some fields as containing the DDS Key".
+ This implies that key-less Topics DO NOT have instance lifecycle management operations (i.e. no register/unregister/dispose operations)
+* [8.7 Implementing DDS QoS and advanced DDS features using RTPS] states:
+ "This sub clause forms a normative part of the specification for the purpose of interoperability."
+ Which means that it is part of the Specification and NOT optional.
+ Hence why 8.4.2.2.5 requires writers to send Writer Group Information for the purposes of interoperability.
+
* Fast-RTPS does not follow DDSI-RTPS Specification
- Open Github Issue
@@ -132,24 +139,44 @@ interpret the Reader entityIds appearing in the Submessages that follow it.'
But state is changed as follows 'Receiver.destGuidPrefix = InfoDestination.guidPrefix'.
Isn't Reader -> Writer also valid? Does it have a specific direction?
- - 9.4.5.3 Data Submessage
- writerSN is incorrectly shown as only 32 bits in width
- 8.2.3 The RTPS CacheChange
Add IDL Specification for CacheChange_t
- 8.3.4 The RTPS Message Receiver, Table 8.16
- Initial State of the Receiver Port of UnicastReplyLocatorList should be initialized to Source Port.
+ - 8.3.4.1 Rules Followed by the Message Receiver
+ 'Submessage and when it should be considered invalid.'
+ This belongs to the previous sentence.
- 8.3.7
"Contains information regarding the value of an application Date-object."
Shoulbe be Data-object
- 8.3.7.4.3 Validity
gapList.Base >= gapStart
+ - 8.3.7.4.5 Logical Interpretation
+ 'See section 8.7.6 for how DDS uses this feature.'
+ Wrong reference. 8.7.5 correct
+ - 8.3.7.5.5 Logical Interpretation
+ 'These fields provide relate the CacheChanges of Writers belonging to a Writer Group.'
+ Remove provide
+ 'See 8.7.6 for how DDS uses this feature.'
+ Wrong reference. 8.7.5 correct
- 8.3.7.10.3 Validity
'This Submessage is invalid when the following is true: submessageLength in the Submessage header is too small'
But if InvalidateFlag is set, Length can be Zero. Since the length is unsigned, there cannot be an invalid length.
- 8.3.7.11.1
- "Given the size of a SequenceNumberSet is limited to 256, an AckNack Submessage is limited to NACKing only those samples whose sequence number does not not exceed that of the first missing sample by more than 256."
+ 'Given the size of a SequenceNumberSet is limited to 256, an AckNack Submessage is limited to NACKing only those samples whose sequence number does not not exceed that of the first missing sample by more than 256.'
Remove one not
+ - 8.4.2.2.5 Sending Heartbeats and Gaps with Writer Group Information
+ This rule seems like a last-minute addition and does not follow the format used until now.
+
+ Maybe rewrite as "Writers must send Heartbeats and Gaps with Writer Group Information"?
+
+ 'A Writer belonging to a Group shall send HEARTBEAT or GAP Submessages to its matched Readers even if the Reader has acknowledged all of that Writer’s samples.'
+ This sentence has nothing in common with the actual requirement/rule. Usually the first sentence following the actual requirement explains the requirement in more detail.
+ Is this sentence to be understood in addition to, or instead of, the actual requirement/rule?
+
+ 'The exception to this rule is when the Writer has sent DATA or DATA_FRAG Submessages that contain the same information.'
+ Link section 8.7.6 which states how this information is sent
- 8.4.7 RTPS Writer Reference Implementation
According to 8.2.2 the History Cache (HC) is the interface between RTPS nad DDS, and can be invoked by both RTPS and DDS Entities. 
@@ -171,9 +198,17 @@ MANUAL_BY_PARTICIPANT Liveliness. - 8.7.3.2 Indicating to a Reader that a Sample has been filtered Text refs 8.3.7.2.2 for DataFlag, but shoudl also ref 8.7.4 for FilteredFlag + - 9.2.2 + Add newline to IDL definition after "OctetArray3 entityKey;" + - 9.3.1.2 Mapping of the EntityId_t + Add newline to IDL definition after "typedef octet OctetArray3[3];" + - 9.3.2.4 GroupDigest_t + Missing "EntityId_t" struct type name on the second struct IDL definition. - 9.4.5.1.2 Flags Clarify from where the endianness begins. One might think it would begin after the Submessage Header, but the length is also endian dependent. + - 9.4.5.3 Data Submessage + writerSN is incorrectly shown as only 32 bits in width - 9.4.5.3.1 Data Flags "D=1 and K=1 is an invalid combination in this version of the protocol." Does this invalidate the Submessage? Does 8.3.4.1 apply (Invalidate rest of Message)? @@ -279,8 +314,8 @@ DESIGN DECISIONS exception of the Highest/Last received (since it only keeps the SN in order and does only need to request from the last stored SN on), the writer does need to keep track of the requested SN (and possibly also the acknowledgements). - This could be solved by either storing the SN in a bitmap in the endpoint data, or be storing the - requester bitmap (endpoint data address) in the change data. + This could be solved by either storing the SN in a bitmap in the endpoint data, or by storing the + requester bitmap (endpoint data address) in the cache change data. But since the writer might drop SN in any order, the highest and lowest SN inside the cache history is unbounded. We can thus only reference to still available SN, and not to GAPs. In order to acoomodate for that, we could store the lowest (and possibly highest) SN of a requested @@ -397,7 +432,7 @@ DESIGN DECISIONS * !REJECTED! DATA WRITER: Once an instance is unregistered, it is eligible for deletion except if it is Re-registered, or a write operation occurs on that instance. Disposal of an unregistered Instance does not re-register the instance (State remains NOT_ALIVE) and is still eligible for deletion. - NOTE: The statement above is incorrect, as a writer wanting to dispose an Intsnace has to re-register + NOTE: The statement above is incorrect, as a writer wanting to dispose an Instance has to re-register the Instance. Hence, it is re-registered (and the disposing writer is again active), the state Instance remains howerer in a NOT_ALIVE state. @@ -534,12 +569,15 @@ BRAINSTORMING * Add all Participant specific configuration into a generic array (maybe array of record?) and modify the discovery module to be centric to ALL participants. That means that the Participant Memory will - contain ALL remortely matched participants (even if they are matched only be 1 local participant). + contain ALL remotely matched participants (even if they are matched only by 1 local participant). The discovery module will also need to differentiate between the local participants for replies - (Parse RTPPS GUID and set local array index). + (Parse RTPS GUID and set local array index). The port interface of the discovery module will not change, meaning that ALL the endpoints of all the local participants will be flattened into one array for communication purposes (Maybe define "static" demangle package function?). + +* Since Publisher and Subscriber Groups are static, we can also generate the GroupDigests statically + and avoid having to use a HW MD5 calculator. 
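+ A possible route (a rough sketch only, not part of the current sources): precompute the digest offline and paste the
+ result into the static configuration as a constant. This assumes GroupDigest_t is the first 4 octets of the MD5 over
+ the CDR big-endian serialization of the group's sequence of EntityId_t (see 9.3.2.4); the exact serialization rules
+ and the entityKey/entityKind byte layout used below are assumptions and should be checked against the spec.
+ Hypothetical offline helper (Python, names illustrative):
+
+     import hashlib
+     import struct
+
+     def group_digest(entity_ids):
+         # entity_ids: 4-byte EntityId_t values (3-byte entityKey followed by 1-byte entityKind).
+         # CDR big-endian encoding of sequence<EntityId_t>: 4-byte element count, then the fixed-size elements.
+         cdr = struct.pack(">I", len(entity_ids)) + b"".join(bytes(e) for e in entity_ids)
+         # Assumed: GroupDigest_t = leading 4 octets of the MD5 over that serialization.
+         return hashlib.md5(cdr).digest()[:4]
+
+     # Illustrative only: a Publisher group with two keyed user-defined writers (entityKind 0x02).
+     writers = [b"\x00\x00\x01\x02", b"\x00\x00\x02\x02"]
+     # Emit a VHDL constant that can be pasted into the configuration package.
+     print('constant GROUP_DIGEST : std_logic_vector(31 downto 0) := x"%s";'
+           % group_digest(writers).hex().upper())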
PROTOCOL UNCOMPLIANCE ===================== diff --git a/src/Tests/Level_2/L2_Type1_test1.vhd b/src/Tests/Level_2/L2_Type1_test1.vhd index 70f558a..74e9d27 100644 --- a/src/Tests/Level_2/L2_Type1_test1.vhd +++ b/src/Tests/Level_2/L2_Type1_test1.vhd @@ -14,7 +14,7 @@ use work.rtps_config_package.all; use work.rtps_test_package.all; use work.Type1_package.all; --- This testbench tests the general system operation by interconnecting a complete system with a single writer with a complete sysetm with a single reader. +-- This testbench tests the general system operation by interconnecting a complete system with a single writer with a complete system with a single reader. -- Libraries are used to allow to use systems with different configurations. Testbench_Lib2 contains a single endpoint writer (Type1), and Testbench_Lib3 contains a single endpoint reader (Type1). -- Both Libraries have compatible settings for matching. -- The testbench first registers 2 instances, and writes 2 samples for each instance (once using the instance handle, and onc using HANDLE_NIL). This initial 4 writes are done before diff --git a/src/Tests/Level_2/L2_Type1_test2.vhd b/src/Tests/Level_2/L2_Type1_test2.vhd index aca9bdb..2fa45a9 100644 --- a/src/Tests/Level_2/L2_Type1_test2.vhd +++ b/src/Tests/Level_2/L2_Type1_test2.vhd @@ -16,9 +16,9 @@ use work.Type1_package.all; -- This testbench tests the general system operation by interconnecting two complete systems with a reader and a writer respectively, and perforing a loopback operation. -- Libraries are used to allow to use systems with different configurations. --- The testbench is interfacing with the readr and writer of Testbench_Lib5, and the loopback entity (test_loopback) is interfacing with the reader and writer of Testbench_Lib4. +-- The testbench is interfacing with the reader and writer of Testbench_Lib5, and the loopback entity (test_loopback) is interfacing with the reader and writer of Testbench_Lib4. -- The testbench->test_loopback communication uses a Type1 Instance with id=1, and the test_loopback->testbench communication uses a Type1 Instance with id=2. --- The testbench performs a REGISTER_INSTANCE operation to get the Instance Handle of the response channel. The it sends 5 samples with content 1,2,3,4, and 5, respectively. +-- The testbench performs a REGISTER_INSTANCE operation to get the Instance Handle of the response channel. Then it sends 5 samples with content 1,2,3,4, and 5, respectively. -- The test_loopback reads these Samples and responds with a x+1000 calculation on the Sample Contents (i.e. 1001,1002,1003,1004,1005). -- The testbench reads only Samples with the correct instance and checks for the expected content.