Forum Replies Created
It would be enough for the client to somehow hint at the local configuration in a way that could be parsed in a desktop start script on the server.
This is what NoMachine normally does. Do the clients have this problem only when reconnecting to an existing virtual desktop? Or even when creating a new virtual desktop?
Can you share the logs from one of the problematic sessions? You can find instructions in https://kb.nomachine.com/DT11R00181. Please also tell us the expected keyboard layout for that client.
NoMachine doesn’t load the keymaps from the system path (/usr/share/X11/xkb), but it ships its own keyboard config files in /usr/NX/share/X11/xkb. You can try to add the layout file there.

June 9, 2022 at 11:02 in reply to: Resizing of NoMachine changes resolution on Ubuntu server? #38830
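For example, a sketch of what that could look like (the layout name ‘xy’ is a placeholder for whichever symbols file is missing; substitute the real name, and check that the destination path matches your installation):

```shell
# Hypothetical example: copy a keymap from the system xkb tree into
# NoMachine's own xkb tree. Replace "xy" with the actual layout file name.
sudo cp /usr/share/X11/xkb/symbols/xy /usr/NX/share/X11/xkb/symbols/xy
```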
NoMachine doesn’t change the refresh rate explicitly, it just chooses a different resolution among the ones made available by the system in order to fit the size of the NoMachine player window. This is only done if you have the ‘Resize remote display’ option set, which I assume is your case, otherwise no change should occur on the remote system.
It would be interesting to know the system configuration at the time the problem is reproduced. Could you try to reproduce again, run the ‘xrandr -q’ command on the server and show us the output?
As we said, we believe that using uncompressed streams over any network, in software like NoMachine, makes no sense, but we will surely explore both options as soon as they become more widespread, as we always do with any new advancements and any new technologies ;-).
In theory yes, in practice unfortunately not. It’s true that the 49 ms needed to transfer a frame on a 1 Gbps network would become 4.9 ms on a 10 Gbps network, but this is only in theory. In practice, moving the data from the network layer to the video RAM is much, much more expensive. We did our experimentation, of course, and the real frame rate that we were able to achieve with uncompressed data, on a 10 Gbps network with a dedicated switch and only the client and server on the physical layer, was close to 20 frames per second. And this only at times, with a sustained rate much lower than that. And this with the code that is inside the production NoMachine software, code that is optimized to be zero-copy (except for the copy from the network layer to the video RAM, of course).

The fact is that data transfers are expensive. Operating systems can use DMA, but doing that in user-level code is basically impossible and, even if it were possible, it would only work under conditions that would greatly reduce the number of systems where the “feature” could be leveraged and where users could make real use of it. The point, in the end, is that even if we did such “uncompressed encoding”, and even if it worked, it would be of so little use to almost the totality of our users that it would just be a quirk, something to mess about with. Different is the approach we have taken: continued work on algorithmically improving the “end quality” of the output, so much that it appears visually lossless.
Let’s do some math considering a 1 Gbps Ethernet connection and a screen resolution of 1280×960, which, you will agree, is pretty low by today’s standards.
For a start, we can calculate how many MBs can be transferred on the network per second:
(1000000000 / 10) / 1024 = 97656.25
1 Gbps ≈ 97656 KB/s (about 95.4 MB/s); dividing by 10 instead of 8 roughly accounts for protocol overhead.
The size of a single 1280×960 frame is given by:
(1280 × 960 × 4) / 1024 = 4800.00
Size of one frame = 4800 KB (about 4.7 MB)
Now we can calculate how many frames can be transferred on the network per second:
97656.25 / 4800.00 = 20.34
It is possible to transfer 20.34 frames per second, which is very far from the 60 fps that would be considered a good frame rate.
We can also calculate how much time is needed to transfer a single frame, that will directly affect the latency:
1000 / 20.34 = 49.16
So it takes 49.16 ms to transfer a single frame over the Ethernet link. This assumes a direct gigabit link between only the two computers; any other computer on the same network could add more latency. And we can easily imagine what would happen with a Full HD (1920×1080) or a 4K resolution.
This explains why what you say makes perfect sense in theory but not really in practice. It is far better to make the CPU and GPU do the work to reduce the transferred size, as they will always be faster than any network.
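The arithmetic above can be packaged into a small sketch. It follows the post’s own assumptions (link speed divided by 10 for protocol overhead, 4 bytes per pixel), and adds the Full HD and 4K cases for comparison; the printed figures may differ from the post’s in the last decimal because of rounding:

```python
# Back-of-the-envelope frame rate for uncompressed RGBA frames on a network link.
# Assumptions from the post: divide bits/s by 10 (bytes plus rough protocol
# overhead), 4 bytes per pixel, and no other traffic on the wire.

def uncompressed_fps(width: int, height: int, link_bps: int = 1_000_000_000) -> float:
    """Frames per second a raw framebuffer stream could achieve on the link."""
    bandwidth_kb_s = (link_bps / 10) / 1024   # usable KB per second
    frame_kb = (width * height * 4) / 1024    # one RGBA frame, in KB
    return bandwidth_kb_s / frame_kb

for w, h in [(1280, 960), (1920, 1080), (3840, 2160)]:
    fps = uncompressed_fps(w, h)
    print(f"{w}x{h}: {fps:.2f} fps, {1000 / fps:.2f} ms per frame")
```

Even Full HD drops to roughly 12 fps on the same gigabit link, before counting any other traffic.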
I’m not sure I understand the scenario. Are you playing a video on the server and watching it on the client through the NoMachine connection? Can you provide some more details?
If you could record a video showing the issue, that would be very useful.
Are the two physical monitors turned on? How are they connected to the server (HDMI, DisplayPort…)?
From the logs it seems you are connecting to the server’s login screen. Do you still get a black screen if the server is logged on to the user’s desktop?
Can you try to disable hardware encoding as shown in https://knowledgebase.nomachine.com/DT11R00180#2.5?
In your case I would not change any of the default settings and would let NoMachine adapt automatically to network conditions and available hardware resources, with one exception: check the ‘Disable client side image post-processing’ option, since post-processing can be a heavy operation for your Pi. Do not choose a specific codec or a specific frame rate; NoMachine will use, at any moment, the best values to optimize performance.
Just for the record, x264 is the software H.264 encoder, which doesn’t make use of the GPU, while you want to leverage the hardware encoding made available by the graphics card, namely NVENC. Hardware encoding support is not available in virtual desktop sessions when X11 vector graphics mode is enabled, as explained here. You can try to disable X11 vector graphics, so that hardware encoding will be used, and compare the results. It will mostly depend on the applications used.
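As a sketch only: on Linux servers this mode is typically toggled in the server-side node.cfg. The key name below is an assumption from memory and may differ in your version, so verify it against the linked article before editing:

```
# /usr/NX/etc/node.cfg (path may vary by installation)
# Assumed key name -- verify against your version's documentation.
AgentX11VectorGraphics 0
```

Restart the server (e.g. ‘/usr/NX/bin/nxserver --restart’) for the change to take effect.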
The graphics card is also used to accelerate the applications running in the virtual desktop, by means of VirtualGL support (https://knowledgebase.nomachine.com/AR05P00982). This would only be useful if you run applications that use OpenGL for rendering.
Are you running a virtual desktop session or connecting to the physical display of the server? Do you confirm that the ‘setxkbmap -print’ command you showed was run in the remote session and not on the client machine? And could you run it also on the other side?
Logs can be useful. Please find how to gather them in https://knowledgebase.nomachine.com/AR10K00697. You can send them to forum[at]nomachine[dot]com.
Finally, please do a test. Instead of creating a new KDE desktop (assuming you are connecting to a virtual desktop, as per my initial question), try to ‘Create a new custom session’, by selecting to run the console ‘in a virtual desktop’. Is the keyboard layout correct there?

March 1, 2022 at 09:31 in reply to: NXFrameBuffer failed to start on headless node after upgrade to version 7.7.4 #37741
Did you check the system logs? They could show some hints. You can use the ‘journalctl -b’ command for that. Feel free to send the output to us so we can check.
We would need client and server logs to investigate. Please collect them by following the instructions in https://knowledgebase.nomachine.com/AR10K00697. You can send them to forum[at]nomachine[dot]com.
The current behaviour is in place since NoMachine version 7.0.
Can you try to measure your network latency, for example with the ping command? You can also send us the logs, in case they provide any hints. You can find instructions in https://knowledgebase.nomachine.com/AR10K00697 and send everything to forum[at]nomachine[dot]com, referencing this thread.