Can we use the dotMemory profiler for a .NET application hosted in an Azure Kubernetes Service (AKS) cluster?
Hi Team,
I want to use the dotMemory profiler for my .NET application, which is deployed in an AKS Linux cluster.
Is it possible to run the dotMemory profiler against applications in an AKS cluster?
If so, could you please help me with the steps to profile an application deployed in an AKS cluster?
Thanks
Hello,
There isn't an out-of-the-box solution for using dotMemory in AKS clusters. It is possible to run the dotMemory command-line tool in a Docker container, and it will work if you push the container to a container registry on Azure and deploy it to AKS. However, the main issue is accessing the saved workspace file: you'll need a way to download it from the storage used by the service running on the Kubernetes cluster, or set up your Docker container to save data to external storage. This is not a trivial task, and we don't have a simple solution for it.
We can suggest the basic steps for running dotMemory inside Docker:
1. Edit your Dockerfile to download and unzip the dotMemory command-line tools (CLT):
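A sketch of such a Dockerfile fragment, assuming a Debian/Ubuntu-based Linux image; the package version in the URL is an assumption, so substitute the current dotMemory console release:

```shell
# Dockerfile fragment: download and unpack the dotMemory command-line tools.
# The version (2021.1.3) is illustrative; pick the current release.
RUN apt-get update && apt-get install -y wget unzip && \
    wget -O dotMemoryclt.zip \
      https://www.nuget.org/api/v2/package/JetBrains.dotMemory.Console.linux-x64/2021.1.3 && \
    unzip dotMemoryclt.zip -d </path/to/dm>/dotMemoryclt && \
    chmod +x -R </path/to/dm>/dotMemoryclt/*
```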
where </path/to/dm> is a folder path inside your Docker container.
2.1. If you want to run your application under the profiler, add an entry point that starts your app under dotMemory. We don't recommend this approach on a production server, because your application will be stopped when dotMemory finishes its work. In any case, add an entry point at the end of the Dockerfile with the dotMemory command line, e.g.:
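A sketch of such an entry point; the --trigger-timer flag spelling is an assumption, so verify the exact option names with "dotmemory help start-net-core":

```shell
# Dockerfile entry point: start the app under dotMemory, take a snapshot
# on a 20-second timer, stop profiling after a 2-minute timeout, and save
# the workspace under the CLT folder.
ENTRYPOINT </path/to/dm>/dotMemoryclt/tools/dotmemory start-net-core \
    --temp-dir=</path/to/dm>/dotMemoryclt/tmp \
    --log-file=</path/to/dm>/dotMemoryclt/tmp/log.txt \
    --save-to-dir=</path/to/dm>/dotMemoryclt/workspaces \
    --trigger-timer=20s --timeout=2m \
    <path to your app>/<AppName>.dll
```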
The start-net-core command starts your app located at <path to your app>/<AppName>.dll under dotnet.
You can also specify --temp-dir and --log-file to store temp and log files in a specific folder.
--save-to-dir lets you save the workspace to a particular folder.
In this sample, dotMemory starts <AppName>.dll and takes a snapshot on a 20-second trigger; dotMemory finishes after the 2-minute timeout, and the workspace is saved to the </path/to/dm>/dotMemoryclt/workspaces folder.
You can adjust this command line according to your needs. Use "dotmemory help" and "dotmemory help <command>" to see more options.
2.2. If you want to attach the profiler to an already running process, create an additional script and specify it as the ENTRYPOINT: the script runs your application and then attaches by PID. Create a simple script, attach.sh:
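A minimal attach.sh sketch; the exact attach syntax is an assumption, so verify it with "dotmemory help attach":

```shell
#!/bin/bash
# Start the application in the background and remember its PID.
dotnet <path to your app>/<AppName>.dll &
pid=$!
# Give the process time to initialize before attaching.
sleep 10
# Attach dotMemory to all processes named dotnet; to attach by PID instead,
# remove --all and pass $pid. Snapshot every 20 s, stop after 2 minutes.
</path/to/dm>/dotMemoryclt/tools/dotmemory attach --all dotnet \
    --trigger-timer=20s --timeout=2m \
    --save-to-dir=</path/to/dm>/dotMemoryclt/workspaces
# Keep the container alive until the application exits.
wait $pid
```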
This script starts the application, saves the PID of the last started process, sleeps to give the process time to initialize, and then attaches dotMemory to all processes named dotnet (you can also attach by PID; remove the --all argument in that case). "wait $pid" is needed to keep the container from stopping once the script has finished.
Modify the Dockerfile to run this script:
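For example (the /app folder is an assumption; any folder in the image works):

```shell
# Dockerfile fragment: copy the script into the image, make it executable,
# and use it as the entry point.
COPY attach.sh /app/attach.sh
RUN chmod +x /app/attach.sh
ENTRYPOINT ["/app/attach.sh"]
```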
You can copy the attach.sh file to your app folder or any other folder. You should also allow script execution via "chmod +x", then use the script as the entry point.
3. Build and run the container and check whether the workspace is created. You can check locally by running "docker exec -it <Container ID> bash" and looking for the workspace file in the </path/to/dm>/dotMemoryclt/workspaces folder.
--timeout is one way to stop a profiling session. Another way is to send a command via a stdin file, but you'll need to write data to a file located inside the Docker container. This is easy if you run the container locally, but it is not trivial when you run an Azure service, for the reasons described above.
--service-input=</path/to/file>/msg.rtf lets you use a file to send messages via stdin. The msg.rtf file must exist and be empty before dotMemory is started.
Now you'll be able to send commands via this file (don't recreate the file when you write data to it; just append a new line):
##dotMemory["get-snapshot"] - send this command to take a snapshot
##dotMemory["disconnect"] - send this command to stop the profiling session.
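A sketch of working with the stdin file from a shell; the /tmp path here is an assumption standing in for </path/to/file>:

```shell
MSG_FILE=/tmp/msg.rtf
# The file must exist and be empty before dotMemory starts.
: > "$MSG_FILE"
# Later, append commands; never truncate or recreate the file.
echo '##dotMemory["get-snapshot"]' >> "$MSG_FILE"
echo '##dotMemory["disconnect"]'   >> "$MSG_FILE"
```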
"dotmemory help service-messages" will show you more options.
When all the steps are done, you can try deploying and running the modified version on Azure. We checked this scenario, and according to the AKS logs, the workspace was saved successfully while running as a service on the AKS cluster. However, we can't suggest a simple way to download the saved workspace. Perhaps you have your own ideas on this subject; we'd be glad if you shared them with us.
I also want to know the steps to profile an application deployed in an Azure Kubernetes cluster with dotMemory Standalone as well as with the console/command-line profiler. My application targets .NET Core 3.1, is containerized using a Dockerfile, and is deployed in an AKS (Azure Kubernetes Service) cluster.
1. Also, if you can use kubectl, you can get inside the pod with the following command:
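For example (pod name and namespace are placeholders for your own values):

```shell
# Open an interactive shell inside the pod.
kubectl exec -it <pod-name> -n <namespace> -- /bin/bash
```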
2. After that, let's change to the tmp folder so as not to clutter the app folder.
3. Then, as Anna Guseva described above, you can download the dotMemory CLI (here I'm using the latest version) and unpack it inside the container.
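Steps 2-3 inside the container might look like this; the NuGet download URL and version are assumptions, so substitute the current dotMemory console release:

```shell
# Work in /tmp to keep the app folder clean.
cd /tmp
# Download and unpack the dotMemory command-line tools (version illustrative).
wget -O dotMemoryclt.zip \
    https://www.nuget.org/api/v2/package/JetBrains.dotMemory.Console.linux-x64/2021.1.3
unzip dotMemoryclt.zip -d ./dotMemoryclt
chmod +x -R ./dotMemoryclt/*
```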
4. Execute the dotmemory command inside the container to take the snapshot.
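For example, assuming the application runs as PID 1 in the container and the CLT was unpacked to ./dotMemoryclt; verify the exact syntax with "dotmemory help get-snapshot":

```shell
# Take a memory snapshot of the process with PID 1 and save it to /tmp.
./dotMemoryclt/tools/dotmemory get-snapshot 1 --save-to-dir=/tmp
```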
5. After you finish your session, you can copy the files to your machine. Here is an example of copying the snapshot from the pod; run it on your machine:
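A sketch, assuming the workspace was saved to /tmp in the pod (the .dmw file name is a placeholder):

```shell
# Run on your machine: copy the saved workspace out of the pod.
kubectl cp <namespace>/<pod-name>:/tmp/<workspace-name>.dmw ./<workspace-name>.dmw
```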
----------------------------------------------------------
If you attach to the process instead of taking a snapshot, run the following command inside the pod:
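For example, assuming the app runs as PID 1 and the CLT sits in ./dotMemoryclt; verify the attach syntax with "dotmemory help attach":

```shell
# Inside the pod: attach dotMemory to the running process and save
# workspaces under /tmp/workspaces.
./dotMemoryclt/tools/dotmemory attach 1 --save-to-dir=/tmp/workspaces
```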
Then archive what you need, for example the workspaces folder:
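For example, assuming the workspaces were saved under /tmp/workspaces:

```shell
# Inside the pod: archive the workspaces folder.
tar -czf /tmp/workspaces.tar.gz -C /tmp workspaces
```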
To copy it from the pod, run the following command on your machine:
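A sketch, assuming the archive was created at /tmp/workspaces.tar.gz in the pod:

```shell
# On your machine: copy the archive out of the pod.
kubectl cp <namespace>/<pod-name>:/tmp/workspaces.tar.gz ./workspaces.tar.gz
```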
Thanks Anna Guseva for the above information.
Can you please also provide the above commands for a Windows-based container and Dockerfile (.NET Core 3.1)?
Thanks.
I've described my ideas about Windows solution here.