I have more than 300 tests that depend on the previous one.
You have to structure your tests and folders according to your logic, take an E2E approach to your testing project, and add numbers to your files, like 1, 2, etc. In case you have more than 9, add a leading zero, like 01, 02, etc.
Generally it’s not recommended, and if you can live without it that’s a good way to go. For example, if I create an object in the first test, then the next test in the same suite can modify/delete/whatever that object. But if I have dependencies like this, then a failure in the first test automatically fails the other tests in their setup phase.
(…)make an E2E approach to your testing project, add numbers to your files, like 1, 2, etc.
Yeah. It is an E2E approach. Although I believe you gave me a good option to deal with those test names, I decided to keep them based on their names instead. Before, using Robot Framework as my main tool, I used to name the test cases using numbers like SCHEDULE_AN_APPOINTMENT.Test.001, *.002, etc. However, once I had hundreds of tests, it became difficult to explain to each other what each test does.
Have you considered a “passed token” style approach?
You gave an example of a simple 2 step E2E process, though this will work regardless how many steps.
What you need is a place to queue unique IDs or tokens; these are just strings that uniquely identify some information being passed from one test case to the next, in your case an appointment. Examples of this could be:
- an appointment ID, if the system generates one and the “doctor” can open the appointment with it
- a date-time string
- a date + patient name string
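As a minimal sketch of the last format, a token could be built like this (the exact separator and date format are my assumptions; any string both test cases can reconstruct will do):

```python
from datetime import datetime

def make_appointment_token(patient_name: str) -> str:
    """Build a token that uniquely identifies an appointment.

    Combines a date-time string with the patient name (one of the
    formats suggested above). The layout itself is illustrative.
    """
    return f"{datetime.now():%Y-%m-%dT%H:%M:%S}|{patient_name}"
```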
Your initial setup would look like:
First, run test case 1; if it passes, append the appointment token to the queue.
Next, re-run test case 1 another 3-5 times so there are 3-5 appointment tokens in the queue.
Now run test case 2 and have it pick the first appointment token from the queue, remove that token from the queue, and then process the appointment (completing the test).
Now you’re ready to run your tests for each release. If there’s a bug in the release that causes test case 1 to fail, test case 2 can still run because there’s a buffer in the queue; the size you make the queue depends on how many failed releases you’re prepared to tolerate before losing the ability to run test case 2.
FYI - This “passed token” approach is used a lot in performance testing (my speciality)
Now for the technical part, what is the queue?
If all your test cases use the same physical test machine (not a docker container that gets destroyed), then it could be something as simple as a text file on the hard drive. The OperatingSystem library has keywords like Append To File that are useful for this.
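A text-file queue can be sketched like this in Python (the file name is hypothetical; this naive read-and-rewrite pop is fine on a single test machine, but parallel runs would need locking or a shared service):

```python
from pathlib import Path

# Hypothetical queue file on the test machine's hard drive.
QUEUE_FILE = Path("appointment_tokens.txt")

def push_token(token: str) -> None:
    # Append one token per line (same idea as Append To File).
    with QUEUE_FILE.open("a") as f:
        f.write(token + "\n")

def pop_token() -> str:
    # Take the first line and rewrite the file without it,
    # so the token is consumed exactly once.
    lines = QUEUE_FILE.read_text().splitlines()
    first, rest = lines[0], lines[1:]
    QUEUE_FILE.write_text("".join(line + "\n" for line in rest))
    return first
```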
If you are using docker containers or different test machines, then maybe a text file on a shared file server.
Use TestDataTable if you like; it was designed for exactly this purpose in performance testing, so it can automatically remove the token from the queue for you.