That might be a little too slow a ramp-up; with 2 robots, 30 seconds should be fine. It shouldn’t take more than 15 seconds to launch a browser on that hardware (I hope).
The other thing you can do is click the settings button next to the test name on the Plan screen and remove all the excluded libraries; then you should at least get something on the Run screen, though maybe not what you want.
The elapsed time thing is weird; it’s saying that the test has been running for 455 thousand days. I think I’ve seen that before but don’t remember what caused it. Are you using two different machines, one for the manager and a different one for the agent? It could be the system clocks being out of sync.
BTW, which Robot Framework version are you using on your agent machine?
That machine would be a great agent machine and should give you 100–150 SeleniumLibrary robots, but you’ll need to ramp the robots up gradually to see how many you can run before you reach 80% load.
If you are finding it really slow, you have probably overloaded the machine. For testing you should have a dedicated manager machine, and you should keep your agent load below 80%, so measure how many robots your machine can handle and ensure you have enough machines for the load you want to run. This is the same thing you would do with any other load testing tool.
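As a sketch of that capacity planning (the numbers below are purely illustrative, not rfswarm defaults or measurements):

```python
import math

# Hypothetical figures from a trial ramp-up: one agent machine
# reached the 80% load ceiling at 120 robots, and the target
# test needs 500 robots in total.
robots_per_agent = 120
target_robots = 500

# Round up: a partially loaded agent still needs a whole machine.
agents_needed = math.ceil(target_robots / robots_per_agent)
print(f"Agent machines needed: {agents_needed}")  # → 5
```

The measured robots-per-agent figure will differ for every machine and test, which is why a trial ramp-up is worth doing first.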
Hardware requirements vary significantly depending on which Robot Framework libraries you use.
I’ve changed the ramp-up to 30 seconds and the run time to 10 minutes. I removed all the excluded libraries in the settings. I’m using a single machine for both manager and agent, so I don’t think it is clock related.
I’m using Robot Framework 4.1.2 (Python 3.10.0 on win32).
I’ve been debating with myself what should constitute v1.0.
At the moment I have a reporting tool plus some other requested features planned for 0.9.0; this could be considered a version 1, but then version 0.8.0, which was the first version that was not beta (i.e. was production ready), should arguably have been considered version 1 too.
Then for version 0.10.0 I have planned support for language translations, so should that be version 1? I have no idea.
For now I am running with 0.x.0 as a feature release and 0.x.y as a bug fix, with no plan for a version 1.
This tool is exactly what I need.
For now, I’m planning to use it on one machine (CPU: 1.9 GHz, memory: 15.7 GB) with Python 3.6 installed.
Does this tool give me more virtual users than pabot (I can run 3–4 users with it)?
How many virtual users can I run with this configuration on a single PC?
Is the minimum requirement Python 3.6 or 3.7?
With that configuration you should be able to get quite a few robots running, but you need to consider how many CPU cores you have rather than the CPU GHz. Also, the more sleep time you have in your tests, the more robots you’ll be able to run.
Bear in mind, if you don’t have any sleep time in your tests, then the 3–4 users you got with pabot might be all you get here too, as you will be CPU constrained. But if you ramp the robots up slowly you might be able to get more robots running, especially if you are using the Browser or Selenium libraries, as launching the web browser can be very CPU intensive.
Also note this tool will run the same test over and over again, as it’s designed for performance testing; it’s not like pabot, which runs each test only once with several tests running in parallel. RFSwarm and pabot are not competing tools; they serve two different purposes.
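If you’re not sure how many cores that machine has, here’s a quick check (plain Python, nothing rfswarm-specific):

```python
import os

# Logical CPU core count; robot capacity scales more with core
# count than with clock speed (GHz).
cores = os.cpu_count()
print(f"Logical CPU cores: {cores}")
```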
Hi @damies13 ,
My Robot Framework test cases run on a selenoid docker container on my machine, pointing at localhost:8080.
Now I want to execute my tests with rfswarm for performance testing. When I execute the test and click the play button, nothing happens and I stay on the same screen (below).
My first thought was that you might not have an agent. You can switch to the Agent tab (highlighted in orange below) before you start the test to check if you want; make sure there is at least 1 agent in the ready state.
Then on closer look I see that in your screenshot rfswarm is on the Run tab (highlighted with the purple rectangle). rfswarm automatically switches from the Plan tab (highlighted with the green circle) to the Run tab when the test starts.
I also see the elapsed time is 4:24, so the test has started, but you don’t have any results displayed yet, and no robots are running yet.
There could be several reasons for this:
No agents available (though you should have gotten a dialogue warning you about this).
No robots have been assigned yet. If the number of robots on the Plan page is small and the ramp-up is long, this could be valid. As an example, 10 robots with a 30-minute ramp-up is one robot every 3 minutes, so it would be 3 minutes before the first robot is assigned to an agent, then up to another 10 seconds before the agent polls the manager and starts the first robot (once the agent knows the test has started, it polls every 2 seconds). So depending on your settings, your first robot may not have started yet.
Your robots have started on the agent but failed. If this is the case, look in the results folder; in there you’ll find a Logs folder containing a folder for each robot, which holds the XML and HTML files generated by Robot Framework as well as a .log file with the Robot Framework command-line output. Look through these files to see what caused the failure, or whether the test was actually passing.
The keywords for your results might be excluded; you can adjust what gets included/excluded via the additional settings for the test group.
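The ramp-up arithmetic in the second point above can be sketched like this (using the example's numbers, not rfswarm defaults):

```python
# Worked example: 10 robots over a 30-minute ramp-up.
robots = 10
ramp_up_seconds = 30 * 60

interval = ramp_up_seconds / robots  # seconds between robot starts
print(f"One robot every {interval / 60:.0f} minutes")  # → every 3 minutes

# Worst case before the first robot actually runs: one ramp
# interval plus up to 10 s for the agent's next poll of the manager.
worst_case_first_robot_s = interval + 10
print(f"First robot may take up to {worst_case_first_robot_s:.0f} s to start")
```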
That’s what I can think of as the most likely problems; if I could see a screenshot of the Plan tab, that might help.
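For the robots-started-but-failed case, here is a hypothetical way to scan those per-robot log folders for failures (the path "results/Logs" and the "FAIL" marker are illustrative; adjust to your actual results directory and what your logs contain):

```python
from pathlib import Path

# Example path to the Logs folder inside an rfswarm results folder;
# substitute the real location of your results.
logs_dir = Path("results") / "Logs"

if logs_dir.is_dir():
    # Each robot gets its own subfolder with a .log file holding
    # the Robot Framework command-line output.
    for log_file in sorted(logs_dir.glob("*/*.log")):
        text = log_file.read_text(errors="replace")
        if "FAIL" in text:
            print(f"Failures found in: {log_file}")
else:
    print(f"No Logs folder at {logs_dir}")
```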
There was a case where someone else had a similar issue, and it was because the Test field was empty in their test plan (see below).
@damies13 Thanks for the detailed reply.
Yes indeed, my agents are running but show a warning. Also, I have added the script and the test under the Plan tab as shown in your screenshot.
OK, a warning is not good, especially before a test starts, as it means your agent machine is low on resources (most likely CPU or memory), but it won’t stop robots being assigned unless the status is critical.