Where can I find a screen clicker that simulates a real finger?

Hi All,

Unfortunately, the application I need to automate prevents programmatic testing with tools like Appium: it detects when screen presses are simulated by any programmatic method, so such tools can’t be used.

So I need a physical device that simulates a real finger and performs the actions I want. I have found automatic screen clickers that would otherwise be suitable, but they are operated with their own buttons and can’t be controlled from custom scripts or frameworks. The goal is to integrate the device with Robot Framework and control it automatically, for example over serial port communication, only when needed.

Such a device would be perfect if it could be controlled from a script:
Phone Screen Auto Clicker Mute Psical Linker Touch Screen Artifact Finger Click

Has anyone used this method before, or do you know where I can find a device that can be controlled from a script, for example via a serial port?
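For context, the kind of integration I’m after could look roughly like this sketch of a Robot Framework keyword library around pyserial. The `TAP x y` text protocol here is just invented for illustration; the real device firmware would define its own command set.

```python
# Sketch of a Robot Framework keyword library driving a hypothetical
# serial-controlled clicker. The "TAP x y" line protocol is an assumption
# for illustration only.

def build_tap_command(x: int, y: int) -> bytes:
    """Encode a tap at screen coordinates (x, y) as one protocol line."""
    return f"TAP {x} {y}\n".encode("ascii")


class ClickerLibrary:
    """Minimal Robot Framework library: each public method is a keyword."""

    def __init__(self, port: str = "/dev/ttyUSB0", baudrate: int = 115200):
        import serial  # pyserial; imported lazily so the file loads without hardware
        self._conn = serial.Serial(port, baudrate, timeout=2)

    def tap_screen(self, x, y):
        """Keyword usage in a .robot file:  Tap Screen    100    200"""
        self._conn.write(build_tap_command(int(x), int(y)))
        ack = self._conn.readline()  # wait for the device's reply
        if not ack.startswith(b"OK"):
            raise AssertionError(f"Device did not acknowledge tap: {ack!r}")
```

In a `.robot` file this would be imported with `Library    ClickerLibrary    /dev/ttyUSB0` and used as a normal keyword, so the hardware side stays behind one small library.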

Hi @zqgge,

While I haven’t done this, what I’d suggest in your situation is to get a basic touch screen stylus pen like this one and attach it to a CNC machine (replacing the cutter).

Then you can use robotframework-cnclibrary or PyCNC to move the stylus around the screen and to press and release it.
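As a rough sketch of what a single “tap” would be on a GRBL-style CNC controller (I haven’t tried this; the Z heights and feed rate below are placeholder values that would need calibrating for the real fixture and stylus):

```python
# Sketch: generate a GRBL-style G-code sequence that taps the screen at
# (x, y) millimetres. Z heights and feed rate are placeholder assumptions.

def tap_gcode(x: float, y: float, z_up: float = 5.0,
              z_touch: float = 0.0, feed: int = 300) -> list[str]:
    return [
        "G21",                                 # units: millimetres
        "G90",                                 # absolute positioning
        f"G0 X{x:.2f} Y{y:.2f} Z{z_up:.2f}",   # rapid move above the target
        f"G1 Z{z_touch:.2f} F{feed}",          # lower the stylus onto the screen
        "G4 P0.1",                             # dwell 0.1 s so the tap registers
        f"G1 Z{z_up:.2f} F{feed}",             # lift off
    ]
```

The lines would then be streamed to the controller one at a time over the serial port (e.g. with pyserial), waiting for GRBL’s `ok` response after each line.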

As I mentioned I haven’t done this, but it’s quite a generic solution so hopefully it’ll work for you :crossed_fingers:t2:

Dave.

Hi, thanks for the answer,

The problem with that solution is that the plan is to use this method simultaneously on many devices, maybe even dozens. At the moment I have one Arduino-powered device for this, but I need something reliable, long-lasting and compact that can be used repeatedly and easily across many devices. The real use case involves only a single page, more precisely the login page, which is why a small and simple solution would be suitable.

Still, your suggestion seems interesting and I will definitely look into it! Who knows, maybe I’ll try it. :slight_smile:

Sakari.

Hi @zqgge,

I would imagine that with the solution I suggested, you would remove the base plate from the CNC machine and replace it with your own “base plates” made from plywood/MDF or similar, each with a cut-out for the type of device you are testing. The cut-out should hold the device with the screen at a known relative height (level with the base plate?) and stop it sliding around, perhaps with wedges/chocks/foam pads. Then it would just be a matter of placing a device in the slot, testing, removing it, and replacing it with the next device.

I also imagine the same could be achieved by modifying a 3D printer (either a Cartesian or a Delta).

I also remember a thread a while back where someone was asking about controlling a CNC machine from Robot Framework. If I remember correctly, they were using a modified CNC machine, with the base plate removed, for testing touchscreen monitors on kiosks.

Basically, you need a device that can move the stylus along three axes (X, Y and Z). Rather than developing something from scratch, I’m suggesting adapting an existing mass-produced device that already does this: CNC machines and 3D printers are produced in far higher volumes than any dedicated hardware test tool ever would be, so a dedicated tool would be more expensive due to its low production volume.

Hopefully that explains why my answer is not quite what you were expecting.

Dave.

Hi,

Thanks, this idea is actually growing on me already. Not bad at all, come to think of it. Thank you for the great ideas!

Sakari.


How is the application under test run, and where does it run from? I’m wondering if you could just run it under a guest OS in a virtual machine and use the virtual display on the host to automate the application. That should work unless the application detects a virtualized OS environment and refuses to run under one.

Another approach could be to access the application remotely over VNC, RDP (remote desktop, for Windows), or X11 forwarding (for a Linux-based GUI application). With any of these, you do the UI automation on the remote client side rather than locally on the (server) application’s desktop, so the application has no knowledge that it is being automated. The trade-off is that you can’t use direct UI automation tools like Appium, because those need to run on the application host. You can, however, remotely run tools based on OCR and image recognition, such as Sikuli and others, working against a fixed, controlled desktop/window screen size. Tools of that kind work off the screen image, so it doesn’t matter whether the application is hosted locally or remotely.
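As a sketch of that remote-side flow (vncdotool is just one possible VNC client library here, and the matched region’s position and size would come from whatever image-recognition step you use; none of this is specific to your application):

```python
# Sketch of the remote-side flow over VNC: given a region already located by
# image recognition (Sikuli, OpenCV, ...), click the centre of that region.
# vncdotool is an assumed third-party client; any VNC automation tool works.

def match_centre(top_left: tuple[int, int], size: tuple[int, int]) -> tuple[int, int]:
    """Centre of a matched region from its top-left corner and (w, h) size."""
    x, y = top_left
    w, h = size
    return (x + w // 2, y + h // 2)


def click_template(server: str, top_left, size):
    """Connect over VNC and click the centre of an already-located match."""
    from vncdotool import api  # third-party; needs a reachable VNC server
    client = api.connect(server)  # e.g. "192.168.1.10::5900"
    x, y = match_centre(top_left, size)
    client.mouseMove(x, y)
    client.mousePress(1)          # left button
    client.disconnect()
```

The image-matching step itself is deliberately left out; only its output (region position and size) is needed to drive the click.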

Just some thoughts in case the hardware based approach is tricky to do or expensive.

And for simultaneous use, except for cases such as key presses, taps/clicks or finger movements that must happen on multiple devices at exactly the same time, you could have one big desktop screen remoting into multiple instances of the application (over VNC or RDP) and handle one screen after another sequentially. There may be options to do it in parallel, but you’d have to find additional tooling for that.
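The sequential handling could be as simple as looping over the VNC endpoints, running the same login flow against each in turn (host names and the port here are made-up examples):

```python
# Sketch: drive the same login flow against many devices sequentially.
# Endpoint strings follow the common "host::port" VNC convention.

def login_endpoints(hosts: list[str], port: int = 5900) -> list[str]:
    """Build VNC endpoint strings for a list of device hosts."""
    return [f"{h}::{port}" for h in hosts]


def run_sequentially(hosts: list[str], do_login) -> None:
    # do_login is whatever performs the image-based login on one endpoint
    for endpoint in login_endpoints(hosts):
        do_login(endpoint)
```

True parallelism would then only be a matter of swapping the loop for a thread pool, if the chosen tooling supports concurrent sessions.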
