We use RF mainly for acceptance testing of a Windows desktop application, an ERP tool.
The problem we have is that when an RF test case fails halfway through execution (because the SUT isn't functioning as it should), the only way to restart the test case FROM that point is to temporarily add a new test case name halfway through the script, so the run starts at that point and continues to the end of the original test case.
I wonder if somebody else has a solution for this, so you can start a test case from a certain point in the script - a 'start from' feature.
Are there any tools that provide this functionality?
So this is what we currently do:
1. first test run attempt of Testcase1
2. keyword3 --> FAILS
3. edit the test case so it can start from keyword3
4. second test run attempt of Testcase1
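One way to avoid editing the script each time is to guard each step with a start index that you override from the command line. This is only a sketch of that idea - `${START_AT}`, `Run Step` and the `Keyword1`/`Keyword2`/`Keyword3` names are placeholders, not anything from your suite:

```robotframework
*** Variables ***
# Default: run everything. Resume from step 3 with:  robot -v START_AT:3 suite.robot
${START_AT}    1

*** Test Cases ***
Testcase1
    Run Step    1    Keyword1
    Run Step    2    Keyword2
    Run Step    3    Keyword3

*** Keywords ***
Run Step
    [Arguments]    ${index}    ${keyword}
    # Skip steps before the requested starting point
    IF    ${index} >= ${START_AT}
        Run Keyword    ${keyword}
    END
```

The downside is that skipped steps may have set up state the later steps depend on, so it only works when each step can stand on its own.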
It would be nice to have something similar to TestComplete's 'Run from selected operation':
Running Keyword Tests | TestComplete Documentation (smartbear.com)
Is there a cause for "fails halfway through execution (because the SUT isn't functioning as it should)"? Is it consistently falling over at the same point?
There are keywords in RF that I'm sure you've seen, for example `Run Keyword And Continue On Failure` - this might be of use for you:
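For instance, wrapping the flaky step lets the rest of the test run to the end while the test is still reported as FAIL. A minimal sketch - `Open Context Menu`, `${menu_locator}` and `Keyword4` are made-up placeholders for your own keywords:

```robotframework
*** Test Cases ***
Testcase1
    # The failure is recorded, but execution continues with the next step
    Run Keyword And Continue On Failure    Open Context Menu    ${menu_locator}
    Keyword4
```

That way one flaky step doesn't force you to rerun the whole case just to see the later steps execute.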
Striter does make a valid point:
it is not good practice. Tests should test, and if they encounter an error, they should fail. Otherwise it is not a test, only a bunch of code!
The reason they sometimes fail is the unpredictability of the UI… now and then.
The SUT is started as a Citrix application in an acceptance environment.
Outside the official testing rounds, the application responds pretty fast, but once more people start testing in the acceptance environment, things get slower and less responsive:
- context menus collapse again after they just have been opened
- filters start to work more slowly
- performance drops and the UI gets less predictable
I'm using RF basically as an assistant. 80% of the time it works perfectly, but sometimes I need to step in and give it a little push from where it got stuck.
I've been working on making the code able to handle situations where the UI isn't responsive or is unpredictable, but there will probably always be some situation I didn't foresee… and then you have to start a test halfway.
Another reason a 'start from' or 'run until' feature would be nice is for debugging purposes.
Thanks for further details.
You could add explicit waits for the elements at the points where it falls over, adding them in as and when you notice them. Apart from that, could this be a load or environment issue? As you mention, with more users in the acceptance environment it becomes less responsive.
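For the waits, `Wait Until Keyword Succeeds` from the BuiltIn library is handy because it retries a whole step instead of failing on the first slow response. A sketch with made-up keyword and argument names (`Click Context Menu Item`, `Filter` are placeholders):

```robotframework
*** Test Cases ***
Open Filter From Context Menu
    # Retry the flaky UI step for up to 30 s, once every 2 s,
    # instead of failing the first time the menu collapses
    Wait Until Keyword Succeeds    30s    2s    Click Context Menu Item    Filter
```

Tuning the retry timeout to the worst-case slowness of the loaded acceptance environment can absorb most of the intermittent failures you describe.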
Sorry I can't be of much more help than that.