The server could reset at any time and sever the user's connection to the
result memory, which could leave the user waiting indefinitely on a dropped
memory read request.
The server now only severs the connection when the result memory has no
pending requests.
L1_Fibanacci_ros_ation_server_test1 and L1_Fibanacci_ros_ation_server_test2
are extended to test the fix.
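
A minimal sketch of the guard, assuming hypothetical entity and signal names
(the actual design's names may differ): the sever request is simply held off
while the result memory still has outstanding read requests.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity result_mem_guard is
      port (
        clk, rst          : in  std_logic;
        sever_request     : in  std_logic;
        pending_requests  : in  unsigned(3 downto 0);  -- outstanding reads
        connection_active : out std_logic
      );
    end entity;

    architecture rtl of result_mem_guard is
      signal active : std_logic := '1';
    begin
      connection_active <= active;
      process (clk)
      begin
        if rising_edge(clk) then
          if rst = '1' then
            active <= '1';  -- connection (re)established
          elsif sever_request = '1' and pending_requests = 0 then
            -- sever only once every outstanding memory read has completed,
            -- so no user is left waiting on a dropped request
            active <= '0';
          end if;
        end if;
      end process;
    end architecture;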
For nested collections in a WRITER interface, the length of the collection
is latched into memories, but the latched value was not made available to
the user. Since the signal is latched either way, also exposing the stored
value gives the user more flexibility.
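
A sketch of the change, with illustrative names and an assumed 16-bit length
field: the value is latched exactly as before for the memories, and the
stored register is now additionally driven onto a user-visible output port.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity collection_length_latch is
      port (
        clk        : in  std_logic;
        latch_en   : in  std_logic;
        length_in  : in  unsigned(15 downto 0);
        length_out : out unsigned(15 downto 0)  -- newly exposed to the user
      );
    end entity;

    architecture rtl of collection_length_latch is
      signal length_q : unsigned(15 downto 0) := (others => '0');
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if latch_en = '1' then
            length_q <= length_in;  -- latched either way for the memories
          end if;
        end if;
      end process;
      -- exposing the already-stored value costs no extra storage
      length_out <= length_q;
    end architecture;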
Even with the previous commits, a race condition remains: if the Cyclone DDS
implementation receives the initial HEARTBEAT after the message has been
sent, it is silently dropped (Volatile behaviour).
Since the Cyclone DDS implementation ignores HEARTBEATs from endpoints it
has not yet matched (which is what happens to our initial HEARTBEAT),
the best way to counter this is to wait, after the reception of the
first Cyclone DDS message (which signifies that it has matched all our
endpoints), until the HEARTBEAT timeout has also sent the respective
initial HEARTBEATs.
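
A sketch of the resulting hold-off logic, with hypothetical signal names:
the session is considered ready only once both the first Cyclone DDS message
has arrived (its side has matched our endpoints) and our heartbeat timer has
fired, so the initial HEARTBEATs can no longer be dropped as coming from
unmatched endpoints.

    library ieee;
    use ieee.std_logic_1164.all;

    entity hb_holdoff is
      port (
        clk, rst          : in  std_logic;
        cyclone_msg_valid : in  std_logic;  -- first Cyclone DDS message seen
        init_hb_fired     : in  std_logic;  -- initial HEARTBEATs were sent
        session_ready     : out std_logic
      );
    end entity;

    architecture rtl of hb_holdoff is
      signal msg_seen, hb_sent : std_logic := '0';
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if rst = '1' then
            msg_seen <= '0';
            hb_sent  <= '0';
          else
            if cyclone_msg_valid = '1' then msg_seen <= '1'; end if;
            if init_hb_fired = '1'     then hb_sent  <= '1'; end if;
          end if;
        end if;
      end process;
      -- proceed only when both conditions have been observed
      session_ready <= msg_seen and hb_sent;
    end architecture;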
The sequential logic of the main FSM in dds_reader was too big to pass the
timing requirement at 50 MHz.
All the DDS READ/TAKE-related states were removed from the main FSM and
moved into a separate read FSM. This reduces the number of states and the
state-transition logic of the main FSM, allowing it to meet the timing
requirements.
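
The shape of the split, sketched with made-up state names: the main FSM
hands off to a small dedicated read FSM and only waits for a done flag, so
its own state count and transition logic shrink.

    library ieee;
    use ieee.std_logic_1164.all;

    entity dds_read_fsm is
      port (
        clk, rst   : in  std_logic;
        start_read : in  std_logic;  -- handoff from the main FSM
        read_done  : out std_logic   -- main FSM just waits for this flag
      );
    end entity;

    architecture rtl of dds_read_fsm is
      -- READ/TAKE states now live here instead of in the main FSM
      type read_state_t is (R_IDLE, R_FETCH, R_TAKE, R_DONE);
      signal state : read_state_t := R_IDLE;
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if rst = '1' then
            state     <= R_IDLE;
            read_done <= '0';
          else
            read_done <= '0';
            case state is
              when R_IDLE  => if start_read = '1' then state <= R_FETCH; end if;
              when R_FETCH => state <= R_TAKE;
              when R_TAKE  => state <= R_DONE;
              when R_DONE  => read_done <= '1';
                              state     <= R_IDLE;
            end case;
          end if;
        end if;
      end process;
    end architecture;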
rtps_writer can now be configured to simulate multiple endpoints. All
testbenches were modified to reflect and test this change.
Packages were extended with array definitions.
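
One plausible shape for this configuration, with hypothetical names (the
actual generics and types may differ): a package-level array type carries
per-endpoint values, and a generic on the writer selects how many endpoints
are simulated.

    library ieee;
    use ieee.std_logic_1164.all;

    package rtps_sim_pkg is
      -- array definition of the kind added to the packages
      type entity_id_array_t is
        array (natural range <>) of std_logic_vector(31 downto 0);
    end package;

    library ieee;
    use ieee.std_logic_1164.all;
    use work.rtps_sim_pkg.all;

    entity rtps_writer_stub is
      generic (
        NUM_ENDPOINTS : positive := 4  -- number of simulated endpoints
      );
      port (
        clk        : in std_logic;
        entity_ids : in entity_id_array_t(0 to NUM_ENDPOINTS - 1)
      );
    end entity;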
An internal signal was initialized incorrectly, and due to various other
issues (a testbench bug and a to_integer conversion bug) the error was not
caught by the testbenches.
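
As a hypothetical illustration of how such a bug can slip through (not the
actual signal or check): numeric_std's to_integer maps metavalues ('U', 'X')
to 0 and only emits a simulation warning, so a testbench check against 0
passes even if the signal was never initialized.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity tb_init_pitfall is
    end entity;

    architecture sim of tb_init_pitfall is
      signal count : unsigned(7 downto 0);  -- no init: starts as "UUUUUUUU"
    begin
      process
      begin
        wait for 1 ns;
        -- passes despite count being all 'U': to_integer returns 0
        -- on metavalues (with only a warning in the simulator log)
        assert to_integer(count) = 0
          report "count not zero" severity error;
        wait;
      end process;
    end architecture;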