Ryouji
Posted June 15, 2022

The other day, I troubleshot with Scara and others some extreme GPU usage while running the demo. Both Scara and I have 3060s, though mine is a laptop model. Less than two minutes into the demo, my GPU was sitting at 70-80% usage and my CPU at 60-70%, with GPU temps reaching between 73 and 80°C. I suspect it has something to do with newer cards and all of their 'features,' as friends with lower-end cards are not experiencing these issues.

I would assume, though I cannot confirm since I'm not a technician, that something about this build might be causing GPU memory leaks. Roughly speaking, I imagine the problem occurs as assets are drawn by the card and kept in GPU memory. Some areas to look at (speculation on my part):

- Draw calls: there might be too many issued too quickly on these particular cards.
- Insufficient memory dumps/culling: the card has to constantly reload assets because memory can't hold any more.
- Conflicting code that causes the cards to try to run 'features' (RTX, DLSS, etc.) unintentionally, even though those features aren't implemented in the demo. (Vague, I'm sorry.)

Ultimately I have a feeling it has something to do with GPU memory itself on higher-end NVIDIA cards (I can only speak for mine, a laptop 3060). I hope this helps; I'm no expert in the field, just drawing on years of (non-professional) QA experience.
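
If anyone wants to check whether VRAM is actually climbing over time (a leak) rather than just being heavily used, something like the sketch below could log usage, memory, and temperature once a second while the demo runs. This is just a rough illustration, not something I've run against the demo myself; it assumes an NVIDIA card and the nvidia-ml-py (pynvml) package.

```python
# Rough sketch: log GPU utilization, VRAM use, and temperature once per second.
# Assumes an NVIDIA card and the nvidia-ml-py package (pip install nvidia-ml-py).
# If the "used MB" number climbs steadily and never drops, that would point at
# assets piling up in GPU memory rather than being released/culled.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; laptops may expose the dGPU at another index

try:
    while True:
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # GPU core utilization in percent
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)          # VRAM usage in bytes
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"GPU {util.gpu:3d}%  VRAM {mem.used / 2**20:6.0f} / {mem.total / 2**20:.0f} MB  {temp}°C")
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```

Running that in a terminal alongside the demo and watching whether the VRAM figure keeps growing would at least tell us if it's a genuine leak or just a heavy but stable workload.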