disturb away, it’s fine really
if you look at the 1st graph (top left) in your screenshot you can see the orange line started around 15:01, and after that the CPU, network and memory all spiked. This is when the agent uploads the log files, so you will have smaller but similar peaks when you get failures. This is the default behaviour: if a test passes, the logs are left on the agent and then uploaded to the manager after the test is finished, but if a test fails, the logs for that failed test are uploaded to the manager immediately so you can review them on the manager if you need to. In the Scenario settings you can change this behaviour to whatever you prefer.
It depends on your application. Some applications have heavy network traffic, and you might need to check that you are not exceeding the bandwidth capacity of your agent machine’s network connection. This is less of an issue nowadays with gigabit Ethernet, but if your agent was simulating a remote office on a 64k ISDN line you would care more; it’s there if you need it.
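As a back-of-the-envelope check (the function and numbers here are made up for illustration, this is not an RFSwarm feature), you can estimate whether your simulated load would saturate the agent’s link:

```python
def link_utilisation(virtual_users, bytes_per_iteration, iteration_secs, link_mbps):
    """Rough fraction of the agent's link the test would consume on average."""
    bits_per_sec = virtual_users * bytes_per_iteration * 8 / iteration_secs
    return bits_per_sec / (link_mbps * 1_000_000)

# 10 virtual users, each transferring ~500 KB per 30 s iteration:
# trivial on gigabit Ethernet, but far beyond a 64 kbit/s ISDN line.
gigabit = link_utilisation(10, 500_000, 30, 1000)   # ~0.0013 (0.13% of the link)
isdn    = link_utilisation(10, 500_000, 30, 0.064)  # ~20.8   (over 20x the link!)
```

If the result is anywhere near 1.0, the network, not your application, becomes the bottleneck and your response times will be misleading.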
If your Python script has opened the SQLite file, you already have the path in Python:
resultdir = os.path.dirname('/path/to/your/sqlitefile.db')
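For example, once you have the path you can open the result database directly with Python’s built-in sqlite3 module. The table names inside the db depend on your RFSwarm version, so the generic `sqlite_master` query below is the safe way to see what’s there (`list_result_tables` is a hypothetical helper name, not part of RFSwarm):

```python
import os
import sqlite3

def list_result_tables(dbfile):
    """Return the results folder and the table names found in a result db."""
    resultdir = os.path.dirname(dbfile)
    con = sqlite3.connect(dbfile)
    tables = [row[0] for row in con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    con.close()
    return resultdir, tables
```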
Actually RFSwarm does everything with relative paths internally, so the full path is not stored in the db (well, the full paths of the robot files are, but only so they can be put in the report; they’re never used by RFSwarm itself).
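A minimal sketch of why storing relative paths keeps a results folder portable (this is just the general idea, not RFSwarm’s actual code; the paths are made up):

```python
import os

# Store paths relative to the results folder...
resultdir = '/home/user/results/run_001'
logfile = '/home/user/results/run_001/agent1/test.log'
stored = os.path.relpath(logfile, resultdir)    # 'agent1/test.log'

# ...so after copying the folder elsewhere, re-joining still works.
new_resultdir = '/mnt/backup/run_001'
resolved = os.path.join(new_resultdir, stored)
```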
It’s perfectly reasonable that you might run the test with RFSwarm Manager on one machine and then copy the results folder to another machine to run the RFSwarm Reporter. The Manager might be Windows or Linux and the Reporter might be macOS, or the other way around; RFSwarm should not care and should still work the same. It’s intended to be OS agnostic.
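One practical wrinkle when moving files between OSes is path separators. Python’s pathlib handles this cleanly; again, this is just an illustration of the idea, not RFSwarm internals:

```python
from pathlib import PureWindowsPath

# A relative path recorded on Windows...
stored = r'agent1\test.log'

# ...can be normalised to forward slashes so any OS can use it.
portable = PureWindowsPath(stored).as_posix()   # 'agent1/test.log'
```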
This is fine, nothing stopping you. I don’t recommend it because you can run into resource contention (especially CPU and memory), but if your runner machine has enough resources to run the test without compromising response times, great, go ahead, understanding the risk. But I’d suggest monitoring the runner machine during the test to make sure it’s not resource constrained.
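A quick way to keep an eye on the runner machine during a run is to sample the system load periodically. Here’s a minimal Unix-only sketch using just the standard library (`sample_load` is a hypothetical helper; tools like psutil, top, or Windows Performance Monitor give much richer data):

```python
import os
import time

def sample_load(samples=5, interval=1.0):
    """Record the 1-minute load average every `interval` seconds (Unix only)."""
    readings = []
    for _ in range(samples):
        readings.append(os.getloadavg()[0])
        time.sleep(interval)
    return readings
```

If the load average climbs well above the machine’s CPU count while the test runs, the response times you measured are suspect.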