Metrics such as CPU, memory, disk in Windows - fluent-bit

I am using fluent-bit to collect metrics such as CPU, disk, and memory on Windows, but I am unable to collect them. Does anyone know how to collect these metrics using fluent-bit?

As per the docs, the node-exporter plugin, which collects CPU / disk / network metrics, is only supported on Linux-based operating systems.
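Newer Fluent Bit releases appear to ship a Windows counterpart, the windows_exporter_metrics input plugin (modeled on Prometheus's windows_exporter), which collects host metrics such as CPU and memory on Windows. A minimal classic-config sketch, assuming a Fluent Bit build that includes this plugin and the prometheus_exporter output:

    [INPUT]
        name            windows_exporter_metrics
        tag             node_metrics
        scrape_interval 2

    [OUTPUT]
        name            prometheus_exporter
        match           node_metrics
        host            0.0.0.0
        port            2021

With this, the collected metrics should be scrapable at http://127.0.0.1:2021/metrics; swapping the output for stdout is useful while debugging the pipeline.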

Related

CPU usage of an R session from Windows Performance Monitor or Resource Monitor

How can I obtain the CPU usage of an R session from Windows Performance Monitor or Resource Monitor between two points in time as a CSV file?

How does Google Cloud Shell allocate RAM and cores?

I have recently started hosting a Minecraft server on Google Cloud Shell using https://github.com/lordofwizard/mcserver, and it all works well until we check the specs: my friend, checking on his account, has 16 GB of RAM and 2 cores, but I only have 8 GB of RAM and 1 core, and there is no other alt account of mine that got the 16 GB / 2-core machine. Any idea how Google Cloud Shell allocates RAM to its users, and is it possible to change the amount it gives? Note: this is for the non-paid version of Google Cloud Shell.
There is no way to modify the memory or CPU of Cloud Shell. If you need something more, please create a VM using Compute Engine, where you can adjust it to your needs.
You can run free -h to see the size of memory.
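To check what a particular session actually received, you can run both of these from the Cloud Shell terminal (standard commands on its Debian-based image):

    # memory available to the session
    free -h

    # number of CPU cores visible to the session
    nproc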
According to the GitHub page you linked to, it says this:
Each Cloud Shell session will have different specs of your server based on your physical location so you won't always get the best performance of your server but good news being that it's always the range between 8GB to 16GB so you won't have to worry about lag when playing in the server with high processing in your server.

Checking if a GPU is integrated or not

I couldn't find any query for whether a device is integrated/embedded in the CPU (and uses system RAM) or has its own dedicated GDDR memory. I could benchmark mapping/unmapping versus reading/writing to reach a conclusion, but the device could be under load at that time and behave misleadingly, and it would add complexity to the already complex load-balancing algorithm I'm using.
Is there a simple way to check if a GPU uses the same memory as the CPU, so I can choose mapping/unmapping directly instead of reading/writing?
Edit: there is CL_DEVICE_LOCAL_MEM_TYPE, which is either CL_GLOBAL or CL_LOCAL. Is this an indication of integratedness?
OpenCL 1.x has the device query CL_DEVICE_HOST_UNIFIED_MEMORY:
Is CL_TRUE if the device and the host have a unified memory subsystem and is CL_FALSE otherwise.
This query is deprecated as of OpenCL 2.0, but should probably still work on OpenCL 2.x platforms for now. Otherwise, you may be able to produce a heuristic from the result of CL_DEVICE_SVM_CAPABILITIES instead.
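A minimal sketch of that query, assuming a valid cl_device_id obtained elsewhere (error handling reduced to the essentials):

    #include <CL/cl.h>

    /* Returns 1 if the device reports unified (host-shared) memory,
     * 0 if it reports its own dedicated memory, -1 if the query fails. */
    int device_has_unified_memory(cl_device_id device)
    {
        cl_bool unified = CL_FALSE;
        cl_int err = clGetDeviceInfo(device, CL_DEVICE_HOST_UNIFIED_MEMORY,
                                     sizeof(unified), &unified, NULL);
        if (err != CL_SUCCESS)
            return -1;
        return unified == CL_TRUE ? 1 : 0;
    }

On devices reporting CL_TRUE (typical for integrated GPUs), mapping/unmapping is usually the cheaper path, while CL_FALSE suggests explicit reads/writes across the bus.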

How to find the root cause of high CPU usage in the app tier (WCF)

My current application comprises 3 tiers: web tier - app tier - database.
While testing with 100 users, we found that the app tier's CPU is touching almost 90%, whereas the web server and database server are doing fine.
I am not able to figure out which code is causing the high CPU usage. We mostly have CRUD operations there: we take input in the form of DTOs, map them into entities (using Entity Framework), and add/update/delete them in the database. In the case of a Get operation, we fetch data into EF entities, copy them into DTOs, and then send the DTOs to the client.
I have tried using DebugDiag but could not extract any useful information from it.
Following is the configuration of the servers:

Web Server (Quantity = 1)
Processor: Intel Xeon CPU X5675 @ 3.07 GHz (2.19 GHz)
Number of cores (virtual): 8
RAM: 8 GB
Operating system: Windows Server 2012 Standard
Processor type: 64-bit
Software installed: .NET Framework 4.5

App Server (Quantity = 1)
Processor: Intel Xeon CPU X5675 @ 3.07 GHz (3.07 GHz)
Number of cores (virtual): 8
RAM: 8 GB
Operating system: Windows Server 2012 Standard
Processor type: 64-bit
Software installed: .NET Framework 4.5

DB Server (Quantity = 1)
Processor: Intel Xeon CPU E7-4830 v2 @ 2.20 GHz (2.19 GHz)
Number of cores (virtual): 8
RAM: 8 GB
Operating system: Windows Server 2012 Standard
Processor type: 64-bit
Software installed: Microsoft SQL Server 2014
There is no better solution than to install an APM tool; with one you'll find the root cause very quickly. AppDynamics and New Relic are easy to set up; Dynatrace is a bit more complex but maybe more powerful.
Otherwise you'll keep shooting in the dark.
The Windows Sysinternals tool Process Explorer (procexp) is a good way to find the high-CPU process and the thread call stack (method calls).
OR
- Collect multiple full user dumps of the high-CPU process using Task Manager/procexp.
- Collect a perfmon log with the Thread counters: Perfmon -> Add Counters -> Thread, and under Thread select % Processor Time, ID Thread, and ID Process.
From the perfmon log you can find the high-CPU thread ID. You can then correlate that thread ID with the DebugDiag analysis report to find the thread's call stack.

What does the return value of cudaDeviceProp::asyncEngineCount mean?

I have read the documentation, and it said that if it returns 1:
device can concurrently copy memory between host and device while executing a kernel
If it is 2:
device can concurrently copy memory between host and device in both directions and execute a kernel at the same time
What exactly is the difference?
With 1 DMA engine, the device can either download data from the CPU or upload data to the CPU, but not do both simultaneously. With 2 DMA engines, the device can do both in parallel.
Regardless of the number of available DMA engines, the device also has an execution engine which can run a kernel in parallel with ongoing memory operations.
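As a quick illustration, a small sketch that reads the field through the CUDA runtime API (device 0 assumed):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        cudaDeviceProp prop{};
        cudaError_t err = cudaGetDeviceProperties(&prop, 0);  // query device 0
        if (err != cudaSuccess) {
            std::fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                         cudaGetErrorString(err));
            return 1;
        }
        // 0: no copy/kernel overlap, 1: one copy direction can overlap a kernel,
        // 2: copies in both directions can overlap a kernel at the same time.
        std::printf("asyncEngineCount = %d\n", prop.asyncEngineCount);
        return 0;
    }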
