dotMemory - new user questions
I'm running the trial edition and we're already getting some good information to help find out why our production applications are using so much memory. Any help with the following questions would be appreciated.
We have a standard IIS web hosting setup. I can "Attach" to the running w3wp.exe process and then take a "Sample" snapshot, and I have these questions:
1. While taking a snapshot, which takes about 30 seconds, the attached w3wp.exe process uses 80% or more CPU, while the dotMemory process uses 5-10% CPU. I thought "sample" mode had less impact than "full" mode, so is this normal? We got customer complaints while doing this, so what we can do in production is obviously limited.
2. The target w3wp.exe process stays at around 10% CPU even when no snapshot is being taken, just while attached. To bring the target process's CPU usage back to normal, we have to Detach. Is this correct? I would like to keep the process attached all day, look at it occasionally, and, if we catch a memory spike in action, capture a snapshot.
3. Does the snapshot actually take up as much disk space as the process's memory usage? For example, if a Workspace says "3.7GB" (after capture), is it using that much disk space? I know I can check this myself but haven't had a chance to investigate how dotMemory stores captured data.
4. During the capture I got "High GC pressure" warnings. Could the Sample operation itself use so much memory that the target processes start to run low on memory, pushing them into "High GC Pressure"? Or were we actually seeing the w3wp process's own high GC pressure at the time, by coincidence? (That is the kind of thing we want to see and track.)
5. The server has 32GB of physical memory, is usually at around 80% usage, and has a 12GB fixed paging file. The IIS w3wp.exe processes are usually around 2GB each. When one spiked to 5GB, I tried to capture a snapshot, but got a warning that dotMemory needed more memory and that I should change the paging file size to 32GB or let Windows manage the paging file size. I'm open to doing this, because the server has plenty of free hard drive space (and it's SSD). If we do this, can we more safely leave dotMemory attached all day to capture multiple snapshots of the process? We run a few w3wp.exe processes on the server; each is usually about 2GB but can spike very high, even to 20+GB, which is what we're trying to figure out.
6. I read about all the profiling methods, but it's still not clear how to profile an IIS website. I can attach to a running process, but that has some limits - the documentation says you can't see creation stack traces, for example, which makes sense. But how can we start a production customer website while attached? We only know how to start them from IIS Manager (the "Start" operation on a website starts the associated w3wp.exe automatically). I want to capture memory usage from our users' ongoing activity, not our own test activity, because on a customer's site we can't modify their data with test runs. We can clone the entire customer site and run it on a different server, but it would be very hard to simulate the actual customer user activity.
Thanks!
Edit: fixed my numbering.
dotMemory displays this warning when you "open" a memory snapshot to analyze it, not when you capture it. I don't recommend analyzing data on the production server, whether there are enough resources for that or not; it can slow down the computer significantly. It's much better to move the dotMemory workspace to another computer and analyze it there.
Gathering profiling data and analyzing it are unrelated processes. The warning was related to the analysis part.
dotMemory attached to your process has two impacts:
In "attach" mode, dotMemory cannot show the creation stack trace for every object in the memory snapshot, but it can still show it for some of them, which may be enough to understand the root of the problem. It also shows call stacks for objects collected before the snapshot was taken in the "Memory Allocations" view.
There is a way to do that using the ".NET Process" profile configuration. Don't forget to set up process filters so that you profile only the process you need.
I would also recommend taking a look at the console dotMemory profiler; I think it suits profiling a production server much better than the GUI application. Set it up to take a memory snapshot automatically when memory consumption grows quickly, then analyze the memory allocations in that time period. I think that will answer the question of what is happening.
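For example, a triggered capture might look something like this (a sketch only: the PID, output directory, and threshold here are hypothetical, and the exact flag syntax can vary between dotMemory versions, so check `dotMemory.exe help attach` on your machine):

```shell
REM Attach the console profiler to a running w3wp.exe (PID 4242 is a placeholder)
REM and take a snapshot automatically whenever memory consumption grows quickly.
REM Verify flag names against your version's built-in help before relying on this.
dotMemory.exe attach 4242 --trigger-mem-inc=50% --save-to-dir=C:\Snapshots
```

The resulting workspace files can then be copied to another machine for analysis, keeping the heavy work off the production server.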
Hello,
"Sampling" mode relates to the memory allocation collection type and doesn't have a significant impact on your application, unlike taking a snapshot.
When you take a memory snapshot, the profiler stops all threads in your application to collect the full object graph. This operation cannot be done while your application continues working. In addition, the profiler injects itself into the profiled process and collects snapshot data inside your application, so it consumes resources within the process being profiled - but only while the snapshot is being taken. Once the snapshot data is collected, dotMemory continues processing in its own process.
It may take more disk space than the process's memory usage once all caches are calculated. dotMemory presents snapshot data in different ways and performs a lot of calculations to build the dominator tree, calculate the retention graph, and so on. All of this data is cached so the information can be presented in the user interface without additional calculations.
Allocation sampling collects data on each GC. If GC happens frequently in your application, the profiler really can use ~10% CPU for this. Could you please attach a screenshot of the dotMemory window, taken after attaching to your process while it is using ~10% CPU?
As described above, the profiler may affect CPU usage, but it shouldn't affect this inspection. The profiler's code is native and does not perform managed allocations in the profiled process. We suggest opening the memory allocations for this time interval and checking which objects were allocated. You can read more about this inspection here.
Thank you both for such detailed and helpful replies. I'm going to go ahead and buy the product, it's rare to get service like this nowadays.
My summary of your replies:
1. Taking a snapshot pauses the target process
2. A snapshot may take as much disk space as the target process's RAM, or more.
3. The high GC pressure noted during the snapshot was not caused by the snapshot itself, so we may have been seeing the target process's actual problem.
4. Memory warning is from opening snapshot for viewing, not during capture.
5. Analysis uses lots of resources, so do it on another host, not production.
6. Use the console profiler with something like "--trigger-mem-inc" to take snapshots only under certain conditions (research this more).
The main question left is about #5. Do I just copy the subfolders from:
C:\Users\Administrator\AppData\Local\JetBrains\dotMemory\v231\Workspaces
This contains subfolders with random 7-character names like "Gofetuh".
So do I just copy those subfolders to some other server (also with dotMemory installed), into the same folder location, and the second server will be able to analyze these snapshots from the production server?
If you have a link describing this process, please provide it; otherwise I'll figure it out.
Thank you.
Please ignore my last request - exporting/importing a workspace from one server to another was very simple, and I am proceeding with the analysis and getting good information. I purchased a dotUltimate license, as dotMemory has already shown enough value. Thank you again for your help. I will ask more detailed questions in another thread if things come up.