August 30, 2017 at 15:19 #15638
I’m looking for a way to disable VirtualGL in NoMachine on Ubuntu 16.04.3.
A few days ago, I installed NoMachine 5.3.10-6 on Ubuntu 16.04.3 and enabled VirtualGL following these instructions.
However, VirtualGL caused some problems on my Ubuntu server, so I’m looking for a way to disable it.
Could you tell me how?
Client OS: macOS 10.12.6
Client Side NoMachine : NoMachine version 5.3.10(free version)
Server OS: Ubuntu 16.04.3
Server Desktop: Xfce 4.12.2
Server Side NoMachine: NoMachine version 5.3.10-6(free version)
Server GPUs: Quadro K620 & Tesla K80
* This server does not have a display (headless server).
** I’m using the Nouveau display driver for the Quadro K620 and the Nvidia official proprietary driver for the Tesla K80, because Linux kernel 4.10 conflicts with the Nvidia official proprietary display driver for the Quadro K620. So the VirtualGL process runs on the Tesla K80. However, this causes a problem: when I use the Tesla K80 for a heavy GPGPU program, NoMachine becomes very slow.
This is the output of “nvidia-smi”:
| NVIDIA-SMI 384.69 Driver Version: 384.69 |
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| 0 Quadro K620 Off | 00000000:02:00.0 Off | N/A |
| 34% 39C P0 1W / 30W | 11MiB / 1998MiB | 0% Default |
| 1 Tesla K80 Off | 00000000:84:00.0 Off | 0 |
| N/A 41C P0 59W / 149W | 96MiB / 11439MiB | 1% Default |
| 2 Tesla K80 Off | 00000000:85:00.0 Off | 0 |
| N/A 35C P0 71W / 149W | 11MiB / 11439MiB | 0% Default |
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
| 1 3073 C /usr/NX/bin/nxnode.bin 85MiB |
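As a quicker way to see which GPU the nxnode.bin process is attached to, the per-process query can be sketched as below. On the real server the data would come from `nvidia-smi --query-compute-apps=gpu_bus_id,pid,process_name --format=csv,noheader`; here a captured sample line (matching the table above) stands in so the pipeline can be tried without a GPU.

```shell
# Sample line as nvidia-smi would emit it in CSV (no header) mode:
#   gpu_bus_id, pid, process_name
sample='00000000:84:00.0, 3073, /usr/NX/bin/nxnode.bin'

# Keep only nxnode entries and report which bus ID (GPU) they sit on.
result=$(echo "$sample" | awk -F', ' '$3 ~ /nxnode/ { print "nxnode.bin on GPU " $1 }')
echo "$result"
```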
*** I tried reinstalling NoMachine on this server (removed /usr/NX and reinstalled), but the VirtualGL process (/usr/NX/bin/nxnode.bin) still runs in the new installation.
**** I tried to stop VirtualGL via “/usr/NX/scripts/vgl/vglserver_config”. I executed these commands:
1. sudo systemctl stop lightdm
2. sudo rmmod nvidia_uvm nvidia_drm nvidia_modeset nvidia
3. sudo /usr/NX/scripts/vgl/vglserver_config
Then typed “2” to execute “Unconfigure server for use with VirtualGL”.
4. sudo reboot
However, I couldn’t disable VirtualGL.

August 30, 2017 at 17:11 #15642
graywolf (Moderator)
Did you turn off VirtualGL in node.cfg:
#
# Enable or disable loading VirtualGL libraries when starting virtual
# desktops on Linux.
#
# 1: Enabled. This make OpenGL applications able to use server side
#    graphics hardware.
#
# 0: Disabled. VirtualGL libraries are not loaded.
#
#EnableVirtualGLSupport 0
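The edit itself can be sketched as below. The real file is /usr/NX/etc/node.cfg on a default install; a temp copy of the relevant line is used here so the commands can be tried safely anywhere.

```shell
# Work on a temp copy of the relevant node.cfg line, not the real file.
cfg=$(mktemp)
printf '#EnableVirtualGLSupport 1\n' > "$cfg"

# Strip a leading '#' (if any) and force the value to 0 (disabled).
sed -i 's/^#\{0,1\}EnableVirtualGLSupport .*/EnableVirtualGLSupport 0/' "$cfg"

grep '^EnableVirtualGLSupport' "$cfg"
```

A restart of the NoMachine server is still needed for the change to take effect.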
May I ask what kind of trouble you had with VirtualGL?

August 31, 2017 at 07:36 #15645
Thank you for your reply.
> Did you turn off VirtualGL in node.cfg:
Yes, I did. However, VirtualGL is still running.
> May I ask you which kind of troubles you got with VirtualGL?
Currently, VirtualGL is running on the Tesla K80 in my environment because of the Nouveau display driver for the Quadro K620 (and Linux kernel 4.10). However, I want to use the Tesla K80 only for GPGPU computing. I’m using MATLAB with the Tesla K80 to analyze data; each dataset is tens of GB and requires bandpass filtering. So when I use the Tesla K80 for GPGPU, its memory fills up, the VirtualGL process becomes very slow, and as a result NoMachine stops responding.
Therefore, I want to disable VirtualGL.
Of course, if the Linux kernel stops conflicting with the Nvidia official proprietary display driver for the Quadro K620, I’d like to use VirtualGL on the Quadro K620.

August 31, 2017 at 09:39 #15650
graywolf (Moderator)
VirtualGL is turned off. The nxnode.bin process is using the video card for other purposes, likely video encoding.
Try to turn it off. In node.cfg change this key to 0:
#
# Enable or disable use of the hardware encoder.
#
# 1: Enabled. Use the hardware encoder if supported by the graphics
#    card.
#
# 0: Disabled. Don't use the hardware encoder.
#
#EnableHardwareEncoding 1
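The same style of edit works for this key too; as a sketch, again on a temp copy of the relevant line rather than the real /usr/NX/etc/node.cfg:

```shell
# Temp copy of the node.cfg line being edited.
cfg=$(mktemp)
printf '#EnableHardwareEncoding 1\n' > "$cfg"

# Uncomment the key and set it to 0 (no hardware encoder).
sed -i 's/^#\{0,1\}EnableHardwareEncoding .*/EnableHardwareEncoding 0/' "$cfg"

grep '^EnableHardwareEncoding' "$cfg"
```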
Then restart server with
sudo /usr/NX/bin/nxserver --restart

August 31, 2017 at 11:58 #15651
Changing the “EnableHardwareEncoding” key to 0 and executing
sudo /usr/NX/bin/nxserver --restart
stopped the nxnode.bin process on the Tesla K80.
As a result, when I use the Tesla K80 for GPGPU computing, NoMachine works fine!
This topic was marked as solved.