For the Windows desktop app there are many libraries to choose from; I would suggest you start with the Robotframework-FlaUI library, as if it works with your app it will be the easiest.
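To give you an idea of what that looks like, here's a minimal sketch with the FlaUI library; the executable path and the XPath locators are placeholders you'd replace with values from your own application:

```robotframework
*** Settings ***
Library    FlaUILibrary

*** Test Cases ***
Click A Button In The Windows App
    # Placeholder path and XPaths - adjust these for your application,
    # e.g. by inspecting it with a tool like FlaUInspect.
    Launch Application    C:${/}path${/}to${/}YourApp.exe
    Click    //Window[@Name="Your App"]//Button[@Name="OK"]
```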
For the Linux-based app, if it’s a Linux desktop app, you really only have two options:

- SikuliLibrary
- ImageHorizonLibrary

Both work by finding the location of an image on the screen; I’ve used both, and both work well.
You could also do everything with just one of the image recognition libraries if you prefer, but a Windows library that can switch windows will make life easier when you want to control whether the RealVNC window or the Windows app is the foreground application.
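For reference, a click with ImageHorizonLibrary looks something like the sketch below; the reference folder and image name are assumptions, and the screenshots are ones you capture yourself beforehand:

```robotframework
*** Settings ***
Library    ImageHorizonLibrary    reference_folder=${CURDIR}${/}reference_images

*** Test Cases ***
Click A Button Shown In The VNC Viewer
    # 'ok_button' refers to ok_button.png in the reference folder;
    # the name is just an example.
    Wait For    ok_button    timeout=30
    Click Image    ok_button
```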
Good luck with your automation, and remember we’re here if you get stuck with something.
If the app on the Linux side is written in Qt, a tool called Squish (previously by Froglogic, but part of Qt now) could be used. It at least used to have some RF bindings… another option would be pyautogui. Both of these would need to be used via Robot Framework’s remote server though…
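On the Robot Framework side, the remote server approach is just an import of the built-in Remote library, roughly as sketched below; the host name, port and keyword are assumptions, since the available keywords are whatever the library you serve on the Linux box (for example a thin pyautogui wrapper run under the Python robotremoteserver package) exposes:

```robotframework
*** Settings ***
# Points at a remote library server running on the Linux box
# (placeholder host name, default robotremoteserver port).
Library    Remote    http://linux-host:8270    WITH NAME    LinuxGui

*** Test Cases ***
Drive The Linux App Remotely
    # 'Click At' is a hypothetical keyword exposed by the remote library.
    LinuxGui.Click At    400    300
```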
This Robot Framework library provides the facilities to automate GUIs based on image recognition similar to Sikuli. This library wraps pyautogui to achieve this.
Thanks for the info about Squish, I didn’t know that (I don’t have any Qt apps to test either). Is there something similar for Tk apps I don’t know about?
Thank you for the suggestions. I’m really new to this so pardon me if I have more general questions.
Do any of these suggestions work if the Linux app in the viewer changes with every click? As in, a new set of buttons appears after a click. Also, does the viewer window have to have a specific size and location during each run?
For the image recognition libraries (SikuliLibrary and ImageHorizonLibrary), you will need to capture images of the elements you want to interact with. It is therefore advisable to always use a fixed screen resolution and color depth. With SikuliLibrary, we usually define a Region of Interest (ROI) with the size of the window we want to scan for the element images. For example, in my case I use Remmina to connect to a Windows server via RDP, and the size is always 1024x768 px.
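As a rough sketch of that ROI idea (the image name and coordinates are assumptions, and you should double-check the ROI keywords against the keyword documentation of your SikuliLibrary version):

```robotframework
*** Settings ***
Library    SikuliLibrary

*** Test Cases ***
Click Inside A Fixed-Size RDP Window
    Add Image Path    ${CURDIR}${/}images
    # Limit matching to the 1024x768 viewer window; the [x, y, w, h]
    # values assume the window sits in the top-left corner of the screen.
    Set Roi    [0, 0, 1024, 768]
    Wait Until Screen Contain    login_button.png    30
    Click    login_button.png
```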
What will impact the ability to match is changes in screen size or screen scaling. Colour depth will also have an impact: if you originally connected with 32-bit colour when you took the screenshots and then dropped down to 24- or 16-bit colour, then it likely won’t match.
I’ll also suggest staying with the same VNC viewer; sometimes there can be differences in how different VNC viewers render the image, and that may impact your tests too.
Hi Dave,
I’m working on a Windows VM with a Qt app.
I use the “doctest library” to compare images and the “robotframework-applicationlibrary” to interact with the application.
I connect to the VM using WinAppDriver.
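For anyone following along, the connection with that stack looks roughly like the sketch below. It assumes the DesktopLibrary half of robotframework-applicationlibrary; the import path, capabilities, address and app path are best-effort placeholders, so check them against the library’s own docs:

```robotframework
*** Settings ***
Library    ApplicationLibrary.DesktopLibrary

*** Test Cases ***
Connect To The App On The VM
    # WinAppDriver must already be listening on the VM; the address is
    # a placeholder and 'app' is the path (or AppId) of the app under test.
    Open Application    http://xxx.xxx.xxx.xxx:4723
    ...    platformName=Windows    deviceName=WindowsPC
    ...    app=C:${/}path${/}to${/}YourApp.exe
```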
Question: how do you click on the buttons in the Qt app remotely? I ask because I can’t get the developers to set the “name” property of the buttons.
Actually, Squish does have Tk support too - I totally forgot that, as I’ve only used it in a Qt context in the past (when it was Froglogic, before Qt bought it). See: Squish for Tk Tutorials | Squish 8.1
Continuing with the topic…
I’m running a Qt app on a Windows VM.
I access the Qt app remotely (winappdriver.exe xxx.xxx.xxx.xxx 4723), but sometimes (usually) it throws the following error:
“Test Case 5 → Exportar como CSV | FAIL |
WebDriverException: Message: The operations timed out. (Exception from HRESULT: 0x80131505)”
I assume it’s because it consumes a lot of resources, since the lighter versions run without giving an error.