No hardware acceleration on remote desktop w/multiple cards
Posted: Thu Jan 15, 2015 6:41 pm
Dear Arction Support,
One of our customers is running our application over a remote desktop connection to a host machine that has multiple NVIDIA cards. Specifically, it has three cards: two are set to TCC mode, and the third is enabled for display use. The customer reports very slow rendering and poor responsiveness, which we can see is caused by a fallback to software rendering.
Here is the LightningChart rendering info that was pulled when running over remote desktop:
Active graphics card: Software rasterizer
Optimal pure device mode supported: False
Shader model 3 supported: True
Anti-aliasing supported: True
Fast-vertex format supported: False
Hardware vertex processing: False
Index buffer max size: 16777215
Index buffers (32-bit): True
Texture max height: 8192
Texture max width: 8192
Texture sizes not power of two allowed: False
GPU chip vendor ID: 0
Graphics render device created: True
Rendering Tier: 0
Compare this to the output on the same machine from dxdiag.exe over remote desktop:
---------------
Display Devices
---------------
Card name: RDPDD Chained DD
...
D3D9 Overlay: n/a
DXVA-HD: n/a
DDraw Status: Not Available
D3D Status: Enabled
AGP Status: Not Available
The hardware/driver configuration seems to have fooled LightningChart into thinking it needs to use software rendering. On other machines, we have found LightningChart to work very well over remote desktop, using the hardware acceleration available through the RDPDD Chained DD display driver.
Do you think this situation depends on a better remote desktop implementation from Microsoft, or is there something LightningChart can do to find and use the RDPDD Chained DD adapter?
Thanks,
Shawn