I have N data-driven test cases of the type Request, Process, Validate.
The processing part, which is the thing to be tested, takes a long time, but can be done in parallel.
Running RPV sequentially, one test case at a time, takes N times P (where P is the processing time). If instead all requests were sent up front and all validations done afterwards, the run time would be roughly 1 times P.
Example with 3 test cases:
Sequential: R1 P1 V1, R2 P2 V2, R3 P3 V3.
Parallel: R1 R2 R3, [P1 P2 P3], V1 V2 V3.
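A back-of-the-envelope sketch of the two schedules, using made-up timings (the figures below are illustrative; in practice processing dominates, as described above):

```python
# Hypothetical per-phase timings in seconds; processing (P) dominates.
R, P, V = 1, 60, 1
N = 3  # number of test cases

# Sequential R-P-V per test case: each processing window is paid in full.
sequential = N * (R + P + V)

# Pipelined: all requests sent first, processing overlaps on the server,
# then all validations; only one processing window is paid.
pipelined = N * R + P + N * V

print(sequential)  # 186
print(pipelined)   # 66
```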
One solution could be to mark specific test cases as failed during Suite Teardown. The actual test cases would then execute only the Request parts (failing immediately if the request itself fails) and register each request together with its test case ID. The Suite Teardown would then perform the validations and retrospectively fail the test cases whose results did not validate.
Is this possible today? I have been searching the docs without much luck so far.
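To make the workaround concrete, here is a minimal sketch of the registry-plus-teardown idea in plain Python (the names `request_phase`, `suite_teardown`, and the registry are invented for illustration; this is not a Robot Framework API):

```python
registry = {}  # test case ID -> request handle

def request_phase(send_request, test_id, payload):
    """Test body: only submits the request and records the handle."""
    registry[test_id] = send_request(payload)

def suite_teardown(fetch_result, validate):
    """Validates every recorded request; returns the IDs of the
    test cases that should be retrospectively marked as failed."""
    failed = []
    for test_id, handle in registry.items():
        try:
            validate(fetch_result(handle))
        except AssertionError:
            failed.append(test_id)
    return failed
```

The open question is the last step: whether the teardown can actually flip the verdict of an already-passed test case.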
Another, cleaner solution would be to allow 2 (or more) keywords to be run for each data-driven test case/suite. The semantics would be that all data sets are run through the first keyword, then all through the second keyword, and so on. Each keyword can then fail its respective test case.
(We could call it test pipelining?!) It does not require multi-threaded keyword execution, so I presume it should be fairly easy to achieve?
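The proposed pipelined semantics could be sketched roughly like this (a single-threaded illustration in Python; `run_pipeline` and the sample keywords are hypothetical names, not part of any existing framework):

```python
def run_pipeline(keywords, data_sets):
    """Run every data set through keyword 1, then every data set
    through keyword 2, etc. A failure in any phase fails that data
    set's test case; later phases skip already-failed cases."""
    failures = {case_id: None for case_id, _ in data_sets}
    state = {case_id: args for case_id, args in data_sets}
    for keyword in keywords:
        for case_id, _ in data_sets:
            if failures[case_id]:
                continue  # this test case already failed in an earlier phase
            try:
                state[case_id] = keyword(state[case_id])
            except Exception as err:
                failures[case_id] = str(err)
    return failures

# Example "keywords": a request phase and a validate phase.
def request(args):
    if args < 0:
        raise ValueError("request failed")
    return args * 2

def validate(result):
    if result > 4:
        raise AssertionError("unexpected result")
    return result

print(run_pipeline([request, validate], [("TC1", 1), ("TC2", 2), ("TC3", 3)]))
```

Note that even this single-threaded version gets the timing benefit, as long as the request keyword merely submits work that the system under test processes concurrently.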
Have I missed something obvious?