Can rfswarm read in variables and pass them to the Robot Framework test case being run?

Hello.

I found that I can use rfswarm to execute my Robot Framework test cases to run performance tests. In doing this, I would like to pass in variables from rfswarm to the test cases to be run. In this way, each run of the test case can be made against a different user. We don’t want to simulate the same test case over and over again for one user.

I can change the test cases to read an index file (default of 0), then read a variable file based off the index, process the test cases, then increment the index by 1 and write it back out. However, I would prefer not to do this. I would prefer to leave the test cases as is, which are set up for our function testing.

So, the question is, can I pass in variables from rfswarm to the test cases to be run?

Thank you very much.

Mark

Hi Mark,

This page in the rfswarm documentation, Preparing a test case for performance, should have all the details you'll need for managing data in your performance tests.

In particular, the Index and Robot number variables from the Useful Variables section were included specifically for this situation.

If you need to log in as a different user for each iteration, there is also the Iteration variable you can use.

There is also another option. If you've used LoadRunner before you might be familiar with its VTS tool; the equivalent tool for rfswarm is TestDataTable. It's a separate tool from rfswarm, but it was also built by me to provide this functionality. You can load a list of user IDs into a column and then retrieve them one at a time in your robot script. When a robot requests a value from a TestDataTable column, that value is removed from the column, which guarantees each robot gets a unique user ID.
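For illustration, the destructive-read idea behind TestDataTable can be sketched in plain Python. This is only a conceptual model of the behaviour described above, not the actual TestDataTable API; the class and method names here are hypothetical:

```python
import threading

class DataColumn:
    """Conceptual model of a TestDataTable column: each consumer
    receives a value that is then removed from the column, so no
    two consumers ever get the same value. This is NOT the real
    TestDataTable API, just an illustration of the pattern."""

    def __init__(self, values):
        self._values = list(values)
        self._lock = threading.Lock()

    def take(self):
        # Pop atomically so concurrent robots never share a value
        with self._lock:
            if not self._values:
                raise LookupError("column exhausted")
            return self._values.pop(0)

userids = DataColumn(["user01", "user02", "user03"])
print(userids.take())  # first robot gets user01
print(userids.take())  # second robot gets user02
```

In a real test the robots would request values over the network from the TestDataTable server rather than from an in-process object, but the guarantee is the same: once a value is handed out, no other robot can receive it.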

Hope that answers your question; if not, let me know what I misunderstood.

Dave.

Thank you for the information Dave. This worked out fantastically.

Sorry, I was under pressure and just skimmed that section, thinking I would use that information later. It turns out I needed it now.


Next question, or should I create a new conversation?

I have my settings as follows.

  • robots - 100
  • delay - 2 seconds
  • ramp up - 10 seconds
  • run - 1 minute
  • a script that just creates a list (name only, no data) using API code
  • the test is taking about 5 minutes to run

By my reckoning, this should take the following time:

  • 2 second delay
  • 10 second ramp up
  • 1 minute run
  • 10 second clean up
  • Total = 1 minute and 22 seconds
  • OR: 2 second delay x 100 = 200 seconds (is this what the delay means?)
  • giving a total of 4 minutes and 40 seconds

Hi Mark,

Glad that was helpful, as for a new conversation, I think here is fine for now.

  1. 100 robots in 10 sec is quite a fast ramp up, ~10 robots/second. Even with LoadRunner people don't normally ramp up that fast; 1-2 LoadRunner vusers/sec is typical and 4-5 vusers/sec is considered very fast. Whether rfswarm will actually be able to ramp up that quickly depends on how many agents you have. I just checked the agent code, and 10 robots/sec is about the maximum number of robots 1 agent can start, so if you need a ramp up that fast I'd suggest at least 2+ agent machines: depending on the Robot Framework library you are using, you might be maxing out the agent machine's CPU trying to start robots that quickly.
  2. Delay 2 seconds - I would suggest keeping things simple to start with and leaving out the delay. The main purpose of the delay is to construct more advanced test case models, e.g.:
    • System capacity testing, where you run 100 robots for half an hour, then add another 100 robots for half an hour, etc.
    • Simulating a work day model, with morning and afternoon peaks and a midday "lunch break" slump
  3. Regarding "run - 1 minute" and "is taking about 5 minutes to run": how long does this test case take when you run it manually with robot from a command line?
    • The definition of Run is quoted directly from the Plan documentation below; if your test takes ~4 minutes when run manually, then ~5 minutes for the test would be expected.
Run - The amount of time (HH:MM:SS*) to keep all the virtual users (robots) defined in the Users column running after Ramp Up has finished. If a robot finishes its test steps before the end of this time it will be restarted. After this time has elapsed the robots will finish their test steps and exit normally (Ramp Down).
  • Which library is your script using? If it's Browser Library or SeleniumLibrary, starting a web browser is quite system intensive on your agent machines, so that could be impacting your run times.
  4. Regarding your question about understanding the run time, your first estimate of 1 minute and 22 seconds is closer to the mark. To clarify, I'll give you 2 examples that both use the same scenario design.
  • For our hypothetical test we’ll use the same test design as you used:
    • robots - 100
    • delay - 2 seconds
    • ramp up - 10 seconds
    • run - 1 minute
  • And we have 10 agent machines, all recent-generation i5 CPUs with 8 GB of RAM, so initially this would appear to be overkill hardware-wise.
  • In our first example we have
    • test script uses RequestsLibrary
    • when run manually
      • robot takes about 0.5 sec to initialise
      • robot takes about 5 sec to run the test
      • robot takes about 5.5 sec total
    • when run in our rfswarm scenario
      • 2 second delay
      • 10 second ramp up
      • 1 minute run; as the robots only take ~5.5 sec they will keep restarting until the run time has elapsed (60/5.5 ≈ 11, 70/5.5 ≈ 13), about 11-13 times each
      • up to 5.5 seconds for all robots to finish up and exit cleanly
      • Total = 1 minute and 17.5 seconds
  • In our second example we have
    • test script uses SeleniumLibrary
    • when run manually
      • robot takes about 2 sec to initialise (launch robot, launch the browser, and wait for Selenium and SeleniumLibrary to hook into the browser)
      • robot takes about 3 min to run the test
      • robot takes about 3 min 2 sec total
    • when run in our rfswarm scenario
      • 2 second delay
      • 10 second ramp up
      • 1 minute run; as the robots take longer than 1 min to run, each will only run once
      • at least 2 min 2 sec for all robots to finish up and exit cleanly
      • Total = 3 minutes and 14 seconds
  • In our second scenario it might actually take a bit longer. While the agent machines can easily run 10 browser instances each, launching all 10 in a 10 second period is likely to max out the CPU and disk I/O of the agent machine, so that 2 seconds to initialise may blow out to well over 15 seconds. The first few robots that did initialise in 2 sec will also be slowed down rendering pages, as their browser is fighting for CPU resources with the other browsers that are still initialising. So the run time may blow out to 3.5 or even 4 minutes, extending the total test time to easily 5-7 minutes.
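The arithmetic behind the two totals above can be sketched as a simple model. The function names below are mine, and the model captures only the idealised timings, not the CPU and disk I/O contention just described:

```python
def scenario_total_seconds(delay, rampup, run, iteration):
    """Estimate total scenario duration in seconds.

    After the run window closes, robots finish the iteration
    already in flight: up to one extra iteration when iterations
    are short, or the remainder of a single long iteration for a
    robot started at the end of ramp up.
    """
    if iteration <= run:
        # short iterations: robots restart during the run window,
        # then take up to one more iteration to exit cleanly
        return delay + rampup + run + iteration
    # long iterations: each robot runs one full iteration,
    # the last of which starts at the end of ramp up
    return delay + rampup + iteration

def iterations_per_robot(run, iteration):
    """Rough count of restarts within the run window."""
    return max(1, round(run / iteration))

# Example 1: RequestsLibrary, ~5.5 s per iteration
print(scenario_total_seconds(2, 10, 60, 5.5))  # 77.5 s -> 1 min 17.5 s
print(iterations_per_robot(60, 5.5))           # ~11 iterations each

# Example 2: SeleniumLibrary, ~182 s (3 min 2 s) per iteration
print(scenario_total_seconds(2, 10, 60, 182))  # 194 s -> 3 min 14 s
```

This reproduces the 1 min 17.5 s and 3 min 14 s totals above, and shows why the run time setting is a lower bound rather than an exact test duration.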

I hope that clears things up,

Dave.


Hi Thomas,

Hopefully the links I gave Mark above have helped.

There isn't really a one-size-fits-all solution to data management for performance testing, regardless of the test tool. You need to adjust the approach depending on the application you're testing and on your testing approach and requirements (e.g. randomised data, or exactly the same data at the same point in the test every time).

As I mentioned above, the page in the documentation, Preparing a test case for performance, is a good starting point; it gives a few approaches to dealing with test data.

If you have a specific problem you're trying to solve, post a question here mentioning rfswarm in the subject and I'll be happy to help you find a solution. I check this forum most days, as well as the rfswarm Discord and Slack channels.

Dave.