I have recently started hosting a Minecraft server on Google Cloud Shell using https://github.com/lordofwizard/mcserver , and it all works well until we check the specs. When my friend checks on his account he gets 16 GB of RAM and 2 cores, but I only get 8 GB of RAM and 1 core, and none of my other alt accounts get the 16 GB / 2-core machine either. Any idea how Google Cloud Shell assigns RAM to its users, and is it possible to change the amount it gives? Note: this is for the non-paid version of Google Cloud Shell.
There is no way to modify the memory or CPU of Cloud Shell. If you need something more, create a VM using Compute Engine, which you can size to your needs.
You can run free -h to see the size of memory.
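For example, in the Cloud Shell terminal you can check what your particular session was given (the numbers vary per session):

    free -h    # total/used/free memory for this session
    nproc      # number of CPU cores for this session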
The GitHub page you linked to says this:
Each Cloud Shell session will have different specs of your server based on your physical location so you won't always get the best performance of your server but good news being that it's always the range between 8GB to 16GB so you won't have to worry about lag when playing in the server with high processing in your server.
I have an interesting problem with how Windows and .NET manage memory for ASP.NET applications that I can't explain. I have a big ASP.NET application that, after it starts up, takes about 1 GB of memory according to Resource Manager. We tried to test how many instances of the application we can run at the same time on a single machine with 14-16 GB of memory.
The first test was on an Azure Windows Server 2016 machine with 8 vCPUs, 14 GB RAM, and an HDD.
After a few instances:
After 30 instances:
As you can see, the private bytes and working set of some instances dropped a lot. Based on what I've read about how memory is managed (working set, physical memory, virtual memory, page files...), I can understand how the OS can take physical memory away from idle processes for others that need it. So far so good.
Then we tested the same scenario with another Azure Windows 2016 server with 4 vCPUs, 16 GB RAM, but this one uses SSD.
After about 20 instances, we got OutOfMemoryException:
The key difference I could see is that the memory of all those w3wp processes was still high. In other words, it was not reduced as in the test above.
My question is: why were the behaviors different? What prevented the second case from paging memory out to the page file (my guess!) and thus caused the OutOfMemoryException?
Checking the page file settings showed that it was still enabled in "System managed size" mode, but somehow Windows refused to use it for the w3wp processes. We changed it to a custom size of 20 GB and everything started working again as expected. I must admit I still don't know why Windows Server 2016 behaves like that when an SSD is used, though.
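For reference, the switch from a system-managed to a fixed-size page file can also be scripted; a rough sketch with wmic (the drive letter and the 20 GB size are just the values from our case):

    rem disable automatic page file management
    wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
    rem set a fixed 20 GB page file on C: (sizes are in MB)
    wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=20480,MaximumSize=20480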
I am running a large scale ERP system on the following server configuration. The application is developed using AngularJS and ASP.NET 4.5
Hardware: Dell PowerEdge R730 (quad-core 2.7 GHz, 32 GB RAM, 5 x 500 GB hard disks in RAID 5).
Software: the host OS is VMware ESXi 6.0, running two VMs. One is Windows Server 2012 R2 with 16 GB of memory allocated; it runs the IIS 8 server with my application code. The other VM is also Windows Server 2012 R2, with SQL Server 2012 and 16 GB of memory allocated; it holds only my application database.
You see, I separated the application server and database server for load balancing purposes.
My application contains a registration module where the load is expected to be very high (around 10,000 visitors over 10 minutes).
To support this volume of requests, I have done the following on my IIS server (see the appcmd sketch below):
- increased the request queue length of each application pool to 5000
- enabled output caching for .aspx files
- enabled static and dynamic compression in IIS
- set the virtual memory limit and private memory limit of each application pool to 0
- increased the maximum worker processes of each application pool to 6
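For reference, a sketch of these settings as appcmd commands (the app pool name ErpAppPool and the site name are placeholders; the values mirror the list above):

    rem request queue length and worker process count for the pool
    %windir%\system32\inetsrv\appcmd set apppool "ErpAppPool" /queueLength:5000 /processModel.maxProcesses:6
    rem remove the private/virtual memory recycling limits (0 = unlimited)
    %windir%\system32\inetsrv\appcmd set apppool "ErpAppPool" /recycling.periodicRestart.privateMemory:0 /recycling.periodicRestart.memory:0
    rem static and dynamic compression
    %windir%\system32\inetsrv\appcmd set config /section:urlCompression /doStaticCompression:true /doDynamicCompression:true
    rem output caching for .aspx (cache for 60 seconds)
    %windir%\system32\inetsrv\appcmd set config "Default Web Site" /section:system.webServer/caching /+"profiles.[extension='.aspx',policy='CacheForTimePeriod',duration='00:01:00']"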
I then used Gatling to run load tests on my application. I injected 500 users at once into the registration module.
However, I see that only 40-45% of my RAM is being used, and each worker process is using at most about 130 MB.
Gatling is reporting that around 20% of my requests get a 403 error, and more than 60% of all HTTP requests have a response time greater than 20 seconds.
A single user makes 380 HTTP requests over a span of around 3 minutes. The total data transfer of a single user is 1.5 MB. I have simulated 500 users like this.
Is there anything missing in my server tuning? I have already tuned my application code to minimize memory leaks, increase timeouts, and so on.
There is a known issue with the newest generation of PowerEdge servers that use the Broadcom network chipset. Apparently, the "VM" feature of the network adapter is broken, which results in horrible network latency on VMs.
Head to Dell and get the most recent firmware and Windows drivers for the Broadcom NIC.
Head to VMware Downloads and get the latest Broadcom driver.
As for the worker process settings, for maximum performance, you should consider running the same number of worker processes as there are NUMA nodes, so that there is 1:1 affinity between the worker processes and NUMA nodes. This can be done by setting "Maximum Worker Processes" AppPool setting to 0. In this setting, IIS determines how many NUMA nodes are available on the hardware and starts the same number of worker processes.
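If you want to try that, it is a one-line change (the app pool name here is just a placeholder):

    %windir%\system32\inetsrv\appcmd set apppool "ErpAppPool" /processModel.maxProcesses:0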
I guess the one caveat to the answer you received is that if your server isn't NUMA-aware (i.e., it uses symmetric multiprocessing), you won't see those IIS options under CPU, but the poster above seems to know a good bit more than I do about the machine. Sorry, I don't have enough street cred to add this as a comment. As far as IIS goes, you may also want to make sure your app pool doesn't use the default recycle conditions, and pick a time like midnight for the recycle. If you have root-level settings applied, the default app pool recycling at 29 hours may also trigger garbage collection against your child pool, causing delays even with concurrent GC; it sounds like you may benefit a bit from gcServer=true. Pretty tough to assess that, though.
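As a sketch, a fixed recycle time can be set with appcmd (the pool name and midnight time are just examples); gcServer itself is an XML setting (<gcServer enabled="true"/> under <runtime>) in the framework's Aspnet.config rather than something appcmd changes:

    rem disable the default 29-hour periodic recycle
    %windir%\system32\inetsrv\appcmd set apppool "ErpAppPool" /recycling.periodicRestart.time:00:00:00
    rem recycle once a day at midnight instead
    %windir%\system32\inetsrv\appcmd set apppool "ErpAppPool" /+"recycling.periodicRestart.schedule.[value='00:00:00']"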
Has your SQL Server been optimized for that type of workload? If your data isn't paramount, you could squeeze out faster execution times with delayed durability, and then look at queries that return too much data and show up as async I/O wait types. In general there's not enough here to really assess SQL optimizations, but if the database isn't configured right (size/growth options), you could be hitting a lot of timeouts due to autogrowth, VLF fragmentation, etc.
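A few starting points, sketched as sqlcmd one-liners (the database name ErpDb and the local default instance are assumptions; note that DELAYED_DURABILITY requires SQL Server 2014 or later, so the SQL Server 2012 VM described above would need an upgrade first):

    rem let commits return without waiting for the log flush (small data-loss window on crash)
    sqlcmd -S . -Q "ALTER DATABASE ErpDb SET DELAYED_DURABILITY = FORCED;"
    rem top cumulative wait types since the last restart
    sqlcmd -S . -Q "SELECT TOP 10 wait_type, wait_time_ms FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC;"
    rem rough VLF fragmentation check: one row per virtual log file
    sqlcmd -S . -Q "DBCC LOGINFO ('ErpDb');"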
We have a small (for now) ASP.NET MVC 5 website on a dedicated VPS. When I go to the server and fire up Task Manager, I see that "SQL Server Windows NT - 64 bit" is using around 80% of the CPU and 170 MB of RAM, and IIS is using 6% CPU and 400 MB of RAM. The server specs are:
CPU: 1.90 GHz dual-core
Memory: 2 GB
OS: Windows Server 2012
Database: SQL Server Express 2012
Disk space: 25 GB, 2.35 GB free
The database is not very big. Its backup is less than 10MB.
I have tried to optimize the website as much as I could. I added caching to a lot of controllers and implemented donut caching for quite a lot of them. But today, even though there were only 5 users online, our search wouldn't work. I restarted Windows on the server and it started working again, but the CPU usage went high the minute the server started. Interestingly, when I open SQL Server Management Studio and try to get the report for top CPU-consuming queries, it says there are no queries currently consuming any CPU! But at the same time I can see that SQL Server is consuming a lot of CPU. How can I examine what is taking all the CPU? Below is a picture from the server:
I was (and am) very careful with designing and implementing the website. All database access goes through the latest version of Entity Framework. I just wonder if the server's specs are too low. Any help would be very much appreciated.
Update:
Here's the result of the sp_who2 stored procedure.
This can happen if the memory SQL Server is set to use is more than the memory available on the box. The default "max server memory" setting is 2,147,483,647 MB. In our case the AWS box had only 30.5 GB, so we changed the setting to 26 GB and CPU usage fell to 40%. You generally want to leave about 20% of memory for the OS and its operations.
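For reference, that cap is set with sp_configure; a sketch for the 26 GB value mentioned above, run against the local instance via sqlcmd:

    rem cap SQL Server's memory at 26 GB (26624 MB), leaving the rest for the OS
    sqlcmd -S . -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 26624; RECONFIGURE;"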
I would also agree with running SQL Profiler to spot long query durations and large write operations. Try running perfmon and looking for potential connection leaks (reclaimed connections).
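If Profiler feels too heavy for a small VPS, the same "top CPU consumers" question can also be answered from the plan cache DMVs instead; a sketch (the local instance and the 200-character text snippet are arbitrary choices):

    rem cumulative CPU per cached query plan, heaviest first
    sqlcmd -S . -Q "SELECT TOP 10 qs.total_worker_time/1000 AS cpu_ms, qs.execution_count, SUBSTRING(st.text, 1, 200) AS query_text FROM sys.dm_exec_query_stats qs CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st ORDER BY qs.total_worker_time DESC;"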
I have IIS 6 running my website on a Windows Server 2003 machine with 4 GB of RAM. I run SQL-intensive code after the user submits a form (math/statistics stuff). This process is not threaded (should it be, especially if 2 or more users run the same thing?). But my process seems to consume only a couple of GB of memory while the server crawls. How do I get my IIS process to use nearly all the memory?
I see on other sites that it's 2 GB or 3 GB allocated using boot.ini. But is there another way for the process to use more memory? If I make it multithreaded, will there be a process for each thread?
If there is still memory free for IIS, it does not need more, and giving it more memory will not make it perform better. It is good to see some memory left unused and available for other processes such as IIS. If you make it multithreaded, whether more memory is used and whether you gain any performance depends on what you actually run in parallel.
The basic approach here is to start with your requirements and see what peak usage you can expect. Then run a performance test to see if your machine can handle that load. To be sure it can handle some more, run another test to find the peak load your machine can handle. Then you will know whether you have to invest any more time.
Also check your database server to see whether the bottleneck is on that machine; most developers forget to optimize and maintain their databases.
I'm a developer in a large company that has some legacy code that requires a very large amount of memory in its export functions. To address this, ini_set('memory_limit', '4G'); is used.
The problem is that the script crashes with memory exhaustion. If I set the limit to 2G, the script runs to the end. It doesn't even reach 1 GB of peak memory usage.
Since the code is versioned and shared with the rest of the company I can't change the limit and changing it on my local install is cumbersome.
My question is: what can make a script crash with a 4 GB limit but not a 2 GB one?
PS: my setup is a VirtualBox machine running Debian with nginx and php-fpm. The VM has 4 GB of RAM (although changing this doesn't seem to make any difference).
[update]
I created a new virtual machine with a 64-bit operating system, and if I set the VM's memory to 2 GB it works. (If I use 4 GB it doesn't.)
Since I'm OK with 2 GB, I'll close this issue.
It is a natural limitation: the 2 GB (or even 4 GB) of address space also has to hold file mappings and shared libraries, which take up some of those memory pages, so PHP can never actually use the full 4 GB limit.
The ultimate solution would be to use the 64-bit PHP interpreter (i.e., switch to a 64-bit system, if possible).
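A quick way to check which build you actually have (PHP_INT_SIZE is 4 on a 32-bit build and 8 on a 64-bit build):

    php -r 'var_dump(PHP_VERSION, PHP_INT_SIZE, PHP_INT_MAX);'   # 4 = 32-bit, 8 = 64-bit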
Maybe you are on a 32-bit system?
Well if your VM only has 4GB, then you probably should give it more memory.
On a 32-bit system, 4 GB is the limit of the address space. I guess there can be memory violations when PHP tries to reserve 4 GB of memory.