#StackBounty: #windows-10 #display #nvidia-graphics-card #gaming Windows 10 graphics kernel crashes randomly when playing games

Bounty: 100

I’ve been having a very odd hardware issue on my PC lately.

  • Windows 10 (ver 1909, build 18363)
  • 2014 vintage Intel i7 4790
  • 16GB RAM
  • "EVGA Nvidia GTX 980 FTW" graphics card
  • Gigabyte "H81M-DS2V" motherboard
  • Samsung 3440×1440 monitor @ 60 Hz via either HDMI or DisplayPort

I’ve been using this PC for casual gaming for over a year now; it plays Doom (2016), for example, very stably and at high graphics settings. Recently I’ve been playing another game (Snowrunner), and every so often the monitor just goes completely black, as if the video cable had been unplugged (the cables are fine and secure). Replugging the HDMI cable does nothing (it does not even trigger the "device detected" sound that Windows usually makes), which leads me to believe that the GPU itself has gone into a failure state of some kind. The card simply goes idle as if it has fallen off the PCIe bus completely (I can’t prove that because, frustratingly, there’s no screen to look at).

However, Windows does not crash and remains responsive (I can still play/pause the media player that usually runs in the background, using hotkeys on my keyboard), and I can still use [WIN+X], [U], [U] to safely power off the PC. Oddly, power-cycling the PC does not automatically bring back the display unless I subsequently replug either the HDMI or DisplayPort cable.

I’ve gone through the process of removing the Nvidia drivers with the DDU tool, physically removing the graphics card, cleaning the PCIe contacts carefully with isopropyl alcohol, and blowing any dust out of the cable sockets with an air duster. Then I reinstalled the latest Nvidia drivers. The problem remains.

The Windows "View Reliability History" tool tends to report the following for a 187 code:

Description
A problem with your hardware caused Windows to stop working correctly.

Problem signature
Problem Event Name: LiveKernelEvent
Code:   187
Parameter 1:    1
Parameter 2:    0
Parameter 3:    0
Parameter 4:    0
OS version: 10_0_18363
Service Pack:   0_0
Product:    256_1
OS Version: 10.0.18363.2.0.0.256.48
Locale ID:  2057

The "Code" number changes each time, sometimes it’s 141 report…

Description
A problem with your hardware caused Windows to stop working correctly.

Problem signature
Problem Event Name: LiveKernelEvent
Code:   141
Parameter 1:    ffff9b0f493c7010
Parameter 2:    fffff8003f567650
Parameter 3:    0
Parameter 4:    51e8
OS version: 10_0_18363
Service Pack:   0_0
Product:    256_1
OS Version: 10.0.18363.2.0.0.256.48
Locale ID:  2057

The other problem is that this is intermittent: I can go all week without this error and then get two in the same day. It does not appear to happen more often when the CPU/GPU is under heavy load (although Snowrunner is a very graphics-intensive game); it can just as easily happen when the system is cold and barely spinning its fans. Then again, Oblivion is not a graphically intense game and it triggers the error too.
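One thing I can check after each black-screen event is whether the crash left a minidump behind and what the raw error-reporting records say. This is only a sketch: the LiveKernelReports path and the "Windows Error Reporting" provider name are my assumptions about a stock Windows 10 install.

```shell
# Sketch: LiveKernelEvent reports normally drop minidumps under
# C:\Windows\LiveKernelReports (display hangs often land in a WATCHDOG
# subfolder), and the Reliability History entries are backed by
# "Windows Error Reporting" records in the Application log.
# Run from Git Bash or WSL on the affected machine; elsewhere it just
# prints a notice.
if command -v wevtutil.exe >/dev/null 2>&1; then
    # Any dump files the graphics kernel left behind:
    cmd.exe /c "dir /s C:\Windows\LiveKernelReports"
    # The five most recent WER records, newest first:
    wevtutil.exe qe Application \
        "/q:*[System[Provider[@Name='Windows Error Reporting']]]" \
        /c:5 /rd:true /f:text
else
    echo "wevtutil not found - run this on the affected Windows machine"
fi
```

If a dump file does exist, opening it in WinDbg with !analyze -v usually names the driver that timed out.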

Can anyone tell what might be causing this and what can be done to fix it?

EDIT: This problem also occurs when playing "TES4 Oblivion" with the same kind of error codes.


Get this bounty!!!

#StackBounty: #windows-xp #resolution #nvidia-graphics-card #console #ntvdm Console and NTVDM resolution upscaling in full-screen

Bounty: 50

When running either console Windows applications (such as Far Manager) or legacy DOS applications via NTVDM in full-screen mode, I observe my Nvidia graphics driver upscaling the display resolution to that of my Windows desktop (1600x1200).

For DOS applications, this can be remedied by switching desktop resolution to the application resolution (provided it is at least 640x480) before running the application.

This doesn’t work for console applications, however, since a 9x16 font at 80x25 character cells yields a non-standard resolution of 720x400, and obviously I can’t set my desktop to that (even though it has been supported by most displays for ages).

The problem is specific to Windows XP (and perhaps to some extent Vista), as Windows 7 and above no longer support full-screen mode for console applications.

How can I entirely disable upscaling for my graphics card driver?



#StackBounty: #drivers #graphics-card #gpu #nvidia-graphics-card #cuda Why peer-to-peer (P2P) access between two Tesla K40c GPUs fails …

Bounty: 50

I would like to run a CUDA C program using two Tesla K40 devices and enable peer-to-peer (P2P) between them as my data will be shared among the devices. My computer has the following deviceQuery summary and NVIDIA-smi results (OS: Windows 10).

deviceQuery:

Device 0: "Tesla K40c"
CUDA Driver Version / Runtime Version          10.2 / 10.2
CUDA Device Driver Mode (TCC or WDDM):         TCC
Device PCI Domain ID / Bus ID / location ID:   0 / 21 / 0

Device 1: "Tesla K40c"
CUDA Driver Version / Runtime Version          10.2 / 10.2
CUDA Device Driver Mode (TCC or WDDM):         TCC
Device PCI Domain ID / Bus ID / location ID:   0 / 45 / 0

Device 2: "Quadro P400"
CUDA Driver Version / Runtime Version          10.2 / 10.2
CUDA Device Driver Mode (TCC or WDDM):         WDDM
Device PCI Domain ID / Bus ID / location ID:   0 / 153 / 0

Nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 441.22       Driver Version: 441.22       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K40c          TCC  | 00000000:15:00.0 Off |                    0 |
| 23%   36C    P8    24W / 235W |    809MiB / 11448MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K40c          TCC  | 00000000:2D:00.0 Off |                  Off |
| 23%   43C    P8    24W / 235W |    809MiB / 12215MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Quadro P400        WDDM  | 00000000:99:00.0  On |                  N/A |
| 34%   35C    P8    N/A /  N/A |    449MiB /  2048MiB |     14%      Default |
+-------------------------------+----------------------+----------------------+

I make the Quadro P400 invisible to my CUDA program via set CUDA_VISIBLE_DEVICES=0,1 and run the SimpleP2P example. The example runs successfully, but the results indicate that there is a P2P issue: the memcpy speed is only about 0.2 GB/s, despite the Tesla devices being connected to two PCIe 3.0 x16 CPU0 slots:

Checking for multiple GPUs...
CUDA-capable device count: 2
Checking GPU(s) for support of peer to peer memory access...
> Peer access from Tesla K40c (GPU0) -> Tesla K40c (GPU1) : Yes
> Peer access from Tesla K40c (GPU1) -> Tesla K40c (GPU0) : Yes
Enabling peer access between GPU0 and GPU1...
Allocating buffers (64MB on GPU0, GPU1 and CPU Host)...
Creating event handles...
cudaMemcpyPeer / cudaMemcpy between GPU0 and GPU1: 0.19GB/s
Preparing host buffer and memcpy to GPU0...
Run kernel on GPU1, taking source data from GPU0 and writing to GPU1...
Run kernel on GPU0, taking source data from GPU1 and writing to GPU0...
Copy data back to host from GPU0 and verify results...
Disabling peer access...
Shutting down...
Test passed

The SimpleP2P example also fails when I modify the code slightly to examine P2P performance more deeply (please see this question for programming details if you find them relevant). The tests I have done, and the comments of a few experts on my post here, indicate that this is a system/platform issue. My motherboard is an HP 81C7 with BIOS v02.47 (up to date as of Apr 11, 2020). I have also reinstalled the Nvidia driver and CUDA several times and have tried CUDA 10.1 as well, but with no luck. Can someone shed some light on how I can dig deeper and find the source of the problem?
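One more data point I can collect is the driver's own view of the path between the two Teslas. A sketch: the topo subcommand exists on drivers of this era, though the exact legend may differ.

```shell
# Sketch: print the GPU interconnect matrix. PIX/PXB indicate a pure PCIe
# path between the cards (good for P2P); PHB/NODE/SYS mean traffic has to
# cross the host bridge or CPU interconnect, which on some platforms
# throttles cudaMemcpyPeer to roughly the 0.2 GB/s measured above.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi topo -m
else
    echo "nvidia-smi not found - run this on the CUDA machine"
fi
```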



#StackBounty: #graphics-card #nvidia-graphics-card #pixels Weird and pixelated image on Nvidia control panel

Bounty: 50

I don’t know if this is the correct place to post this question; if not, please point me to where I should post it to get help.

Suddenly my Nvidia GeForce started to give this weird image on the control panel (screenshot below).

My laptop is Lenovo Ideapad s510p and the GeForce is 820M.

I updated the driver with no luck.

Does anyone know what may be the issue? Is my GeForce dead?

[screenshot of the pixelated control panel]



#StackBounty: #windows-10 #remote-desktop #nvidia-graphics-card #h.264 #remotefx Windows 10 Remote Desktop With RemoteFX and Hardware h…

Bounty: 100

I have Windows 10 Pro as a server and Windows 10 Pro as a client. Server has a GTX 1070 card with the latest driver and is fully NVENC capable of h.264 / AVC 444 hardware encoding. Steam streaming works using the hardware codec on the server side.

I have RemoteFX enabled on the server, and have enabled the following in the Group Policy under Remote Desktop Services / Remote Desktop Session Host / Remote Session Environment:

  • Use hardware graphics adapters for all Remote Desktop Services sessions
  • Use advanced RemoteFX graphics for RemoteApp
  • Prioritize H.264/AVC 444 graphics mode for Remote Desktop Connections
  • Configure H.264/AVC hardware encoding for Remote Desktop Connections
  • Configure compression for RemoteFX data
  • Configure image quality for RemoteFX Adaptive Graphics
  • Enable RemoteFX encoding for RemoteFX clients designed for Windows Server 2008 R2 SP1
  • Enable Remote Desktop Protocol 8.0

When I connect to the server and open Event Viewer under RemoteDesktopServices-RdpCoreTS, I don’t see any events with Event ID 162 or 170, which the documentation says should appear when hardware encoding is used.

What am I doing wrong? Why am I not getting hardware h.264 encoding? Is there another setting that I’m missing that I need to enable?
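To rule out an Event Viewer filtering mistake, the RdpCoreTS log can also be queried directly. A sketch: the log name below is the stock Windows 10 one, and the grep pattern is just a convenience.

```shell
# Sketch: dump the most recent RdpCoreTS events as text and look for the
# hardware-encoding markers (Event IDs 162/170 and any AVC mentions).
# Run from Git Bash or WSL on the RDP server.
if command -v wevtutil.exe >/dev/null 2>&1; then
    wevtutil.exe qe \
        "Microsoft-Windows-RemoteDesktopServices-RdpCoreTS/Operational" \
        /c:50 /rd:true /f:text | grep -iE "Event ID|AVC|hardware" \
        || echo "no matching events in the last 50 records"
else
    echo "wevtutil not found - run this on the RDP server"
fi
```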



#StackBounty: #windows-10 #multiple-monitors #nvidia-graphics-card #screenshot #widescreen 21:9 monitor taking squished screenshots

Bounty: 100

I have dual ultra-wide LG 21:9 monitors.

They mirror each other.

When I hit Print Screen and then paste into Paint, for example, the image is completely squished horizontally, as if the PC thought the display was a regular 1080p one and tried to compensate.

I don’t mean that it only takes a 1920-wide screenshot; the image is actually squished further than that. I’ve been unable to find any fix as of yet; lots of software and searching has produced the same results.

  • RTX 2060
  • NVIDIA drivers detect 2560×1980
  • Windows detects display(s) at 2560×1080

Monitors only have original and auto wide as options.



#StackBounty: #linux #fedora #nvidia-graphics-card On a laptop with a non-optimus dedicated Nvidia GPU, how can I force an application …

Bounty: 50

I have a new laptop with an Nvidia RTX 2060 and an Intel i7-9750H, running the Fedora 31 KDE spin. I have always used Fedora without issue, though never before with a dedicated GPU. I’ve installed the proprietary Nvidia drivers from RPM Fusion and the card appears to be recognized, yet the Intel integrated GPU appears to be handling all of the graphical work, and I’m not sure what to do about it. As far as I know this is not an Optimus graphics card, so I can’t use bumblebee/optirun to switch from integrated to discrete graphics (if there’s a way to selectively use the card like that, though, I’m all ears; that’d be the ideal solution).

All the relevant information I could think of (or be told is relevant by Google) is below. I’m not sure what to do from here. I’ll provide whatever relevant followup information is requested.

Relevant lspci output:

[root@bulbasaur ~]# lspci -v|grep VGA
00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 630 (Mobile) (prog-if 00 [VGA controller])
01:00.0 VGA compatible controller: NVIDIA Corporation TU106M [GeForce RTX 2060 Mobile] (rev a1) (prog-if 00 [VGA controller])

glxinfo output:

[root@bulbasaur ~]# glxinfo |grep render
direct rendering: Yes
    GLX_MESA_multithread_makecurrent, GLX_MESA_query_renderer, 
    GLX_MESA_query_renderer, GLX_MESA_swap_control, GLX_OML_swap_method, 
Extended renderer info (GLX_MESA_query_renderer):
OpenGL renderer string: Mesa DRI Intel(R) UHD Graphics 630 (Coffeelake 3x8 GT2) 
    GL_ARB_compute_shader, GL_ARB_conditional_render_inverted, 
    GL_NV_conditional_render, GL_NV_depth_clamp, 
    GL_ARB_compute_shader, GL_ARB_conditional_render_inverted, 
    GL_NV_conditional_render, GL_NV_depth_clamp, GL_NV_fog_distance, 
    GL_EXT_read_format_bgra, GL_EXT_render_snorm, GL_EXT_robustness, 
    GL_NV_conditional_render, GL_NV_draw_buffers, GL_NV_fbo_color_attachments, 
    GL_OES_element_index_uint, GL_OES_fbo_render_mipmap,

Screenshot of nvidia-settings, showing no X-related items in the left-hand list:

https://i.imgur.com/ddRuFsz.png

glmark2 output:

    glmark2 2017.07
    OpenGL Information
    GL_VENDOR:     Intel Open Source Technology Center
    GL_RENDERER:   Mesa DRI Intel(R) UHD Graphics 630 (Coffeelake 3x8 GT2) 
    GL_VERSION:    3.0 Mesa 19.2.4

One of the solutions I found on Google was to copy nvidia.conf into /etc, which I did as follows:

# cp /usr/share/X11/xorg.conf.d/nvidia.conf /etc/X11/xorg.conf.d/

and to add the line Option "PrimaryGPU" "yes" which I did, to no effect:

        [root@bulbasaur xorg.conf.d]# cat nvidia.conf
        #This file is provided by xorg-x11-drv-nvidia
        #Do not edit

        Section "OutputClass"
                Identifier "nvidia"
                MatchDriver "nvidia-drm"
                Driver "nvidia"
                Option "AllowEmptyInitialConfiguration"
                Option "SLI" "Auto"
                Option "BaseMosaic" "on"
        EndSection

        Section "ServerLayout"
                Option "PrimaryGPU" "yes"
                Identifier "layout"
                Option "AllowNVIDIAGPUScreens"
        EndSection
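Since the ideal solution mentioned above is per-application switching, it may be worth noting that this driver generation supports PRIME render offload (NVIDIA 435+ with a recent Xorg). The following sketch checks whether offload works here; the two environment variables are NVIDIA's documented offload switches, and glxinfo comes from Fedora's glx-utils package.

```shell
# Sketch: run a single command on the discrete GPU while the desktop keeps
# using the Intel iGPU. If offload works, the renderer string should name
# the RTX 2060 instead of Mesa/Intel.
if command -v glxinfo >/dev/null 2>&1; then
    __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia \
        glxinfo | grep "OpenGL renderer"
else
    echo "glxinfo not found - install glx-utils first"
fi
```

The same pair of variables can prefix glmark2 or a game launcher to offload just that one process.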



#StackBounty: #windows-7 #nvidia-graphics-card #waterfox Waterfox contents are shifted with Nvidia cards

Bounty: 50

When I first set up my computer, I had ATI cards in it. Those have since died on me, and I’m using an Nvidia card as a replacement. Since installing the Nvidia drivers, Waterfox, and ONLY Waterfox, has its window contents shifted up slightly. If I uninstall or disable the drivers, the issue goes away. Another reason I know it’s just a graphical issue is that I have to aim my clicks slightly lower than where things are drawn in Waterfox to interact with them properly. If I go fullscreen with anything (the browser window, a video, a browser game), it works fine again. Windows GUI elements seem to be fine; the menu drop-downs work properly, as does the Save As dialog box. Does anyone know how to fix this, or have troubleshooting steps to diagnose what’s going on?

Thanks.

Troubleshooting/Observations:

  1. It’s not a scaling issue as that doesn’t fix the problem.
  2. Not an addon issue since disabling them doesn’t work.
  3. Disabling drivers fixes issues.
  4. Affects multiple versions of graphics cards and drivers. 700 series and 900 series. Only one used at a time.
  5. Only Waterfox has this issue.
  6. Executable profile in the Nvidia settings doesn’t seem to do anything, though I may not have gotten the right settings.
  7. ATI drivers were uninstalled.
  8. Waterfox is up to date.
  9. Latest drivers for Nvidia don’t fix it.
  10. Menus and other dialogs are in the proper places and are visually correct.
  11. History/Downloads/Bookmark window, and the bookmark update window have the same issue, so this happens to all of Waterfox.

Here’s a pic of what is going on:
[screenshot of the shifted Waterfox window]



#StackBounty: #drivers #graphics-card #nvidia-graphics-card #nvidia-geforce #cuda Looking for a way to get PyCUDA with default MacOS NV…

Bounty: 50

I’m getting kernel panics with my CUDA-capable Nvidia card (GeForce GT 750M), so I would like to use the default macOS Nvidia drivers and still be able to install PyCUDA, as I did with Nvidia’s own drivers.

Is it possible? For the moment, this is what I get in the Python shell:

>>> import pycuda.autoinit
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pycuda/autoinit.py", line 5, in <module>
    cuda.init()
pycuda._driver.LogicError: cuInit failed: 
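To tell whether the failure is in the PyCUDA build or in the driver stack underneath it, cuInit can be exercised directly. A sketch: it assumes pycuda is importable by whatever python is on PATH, and that pycuda.driver re-exports the LogicError type shown in the traceback above.

```shell
# Sketch: call cuInit through pycuda and report the outcome instead of
# letting the exception bubble up. If this also fails, the problem is the
# driver, not the PyCUDA installation.
if python -c "import pycuda" >/dev/null 2>&1; then
    python - <<'EOF'
import pycuda.driver as cuda
try:
    cuda.init()
    print("cuInit OK, %d CUDA device(s) visible" % cuda.Device.count())
except cuda.LogicError as e:
    print("cuInit failed: %s" % e)
EOF
else
    echo "pycuda not importable - check the Python environment first"
fi
```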

Any help is welcome; I don’t know how to install PyCUDA with the default macOS Nvidia drivers.

Regards



#StackBounty: #windows-10 #nvidia-graphics-card #lag #sli What are the requirements for using SLI?

Bounty: 50

I’ve just got a second graphics card and I wanted to try SLI.
After I installed the latest driver for my cards (both are the exact same model and chip), I was able to use both of them.

Somehow my system is not happy at all with those cards.

I’m getting hard lags in normal operating mode.
Windows lags hard: I cannot move or resize some windows, and even the sound is laggy.

I’ve read that the RAM has to be SLI-certified and that the CPU might be the issue.

How would I track down the source of the problem?

My current setup:

If any other details are needed, I will add them! (Just let me know in the comments.)

Things I’ve tried so far:

  • disabling HPET (did not work, does not seem to be active)

