We are working on an ASP.NET Core 6 project using Visual Studio 2022, and the build process gets stuck at
C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Current\Bin\Roslyn\csc.exe
The build completes successfully, but it is slow: it takes about 1 minute and 30 seconds.
How can we reduce the build time?
Any help would be much appreciated.
I am working on Blazor in VS2022, where every change requires a recompilation or partial compilation (hot reload), which was painfully slow.
I recommend the following changes for speeding up build times.
CPU
Get a processor with a high turbo clock rate, around 4-5 GHz.
If you are running a laptop, try to get an Intel processor whose model number ends with the letter H. For example, the latest Intel CPUs are the 12700H/12900H. These are insanely fast laptop processors that can outperform many desktop CPUs.
Ensure your computer is using the Windows Performance profile or equivalent so that your CPU is not being throttled to save power.
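As a quick sketch (assuming a stock Windows power configuration), you can check and switch the active power plan from an elevated command prompt; SCHEME_MIN is the built-in alias for the High performance plan:
rem List the available power schemes and show which one is active
powercfg /list
rem Switch to the High performance scheme so the CPU is not throttled
powercfg /setactive SCHEME_MIN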
DISK
First prize is a Gen 4 NVMe drive paired with a computer that supports Gen 4 NVMe. Second prize is any NVMe drive.
ENCRYPTION
First prize is not to use disk encryption at all, but if you do need it, opt for hardware encryption, as software encryption consumes CPU resources and leaves less for compiling. Hardware encryption offloads the work to the SSD's own internal encryption engine (which is always active anyway).
My own testing showed roughly a 40% loss in write performance with software encryption.
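As a rough way to check which mode you are getting (a minimal sketch; the drive letter is an example), BitLocker reports its encryption method per volume from an elevated command prompt:
rem Show BitLocker status for the C: volume; the "Encryption Method" line
rem should indicate whether software (e.g. XTS-AES) or hardware encryption is in use
manage-bde -status C: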
RAM
Just make sure you have enough RAM that Windows is not swapping memory to disk while compiling your project. Most often 16 GB is sufficient, but I personally prefer 32 GB so that Windows can cache more in memory.
VS2022
Disable Visual Studio analyzers during build. Some have reported noticeably faster builds when analyzers are turned off.
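A hedged sketch of how to try this from the command line (the solution name is hypothetical): RunAnalyzersDuringBuild is the MSBuild property that controls whether Roslyn analyzers run during build, so you can compare timings with and without it:
rem Build with analyzers disabled and compare against a normal build
dotnet build MySolution.sln -p:RunAnalyzersDuringBuild=false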
Related
I've been using R 3.1.2 on an early-2014 13" MacBook Air with 8GB and a 1.7GHz Intel Core i7, running OS X Mavericks.
Recently, I've started to work with substantially larger data frames (2+ million rows and 500+ columns) and I am running into performance issues. In Activity Monitor, I'm seeing virtual memory sizes of 64GB, 32GB paging files, etc. and the "memory pressure" indicator is red.
Can I use the "throw more hardware" at this problem? Since the MacBook Air tops out at 8GB physical memory, I was thinking about buying a Mac Pro with 64GB memory. Before I spend the $5K+, I wanted to ask if there are any inherent limitations in R other than the ones that I've read about here: R Memory Limits or if anyone who has a Mac Pro has experienced any issues running R/RStudio on it. I've searched using Google and haven't come up with anything specific about running R on a Mac Pro.
Note that I realize I'll still be using 1 CPU core unless I rewrite my code. I'm just trying to solve the memory problem first.
Several thoughts:
1) It's a lot more cost-effective to use a cloud service like https://www.dominodatalab.com (not affiliated). Amazon AWS would also work; the benefit of Domino is that it takes the work out of managing the environment so you can focus on the data science.
2) You might want to redesign your processing pipeline so that not all of your data needs to be loaded in memory at the same time (soon you will find you need 128 GB, and then what?). Read up on memory mapping, using databases, separating your pipeline into several steps that can be executed independently of each other, etc. (googling brought up http://user2007.org/program/presentations/adler.pdf). Running out of memory is a common problem when working with real-life datasets; throwing more hardware at the problem is not always your best option (though sometimes it really can't be avoided).
I have a basic question.
If I run an executable file (Release, Visual Studio 2010) on two computers that have the same CPU speed but run two different Windows operating systems, e.g. Windows 7 vs. XP, should I expect to see different CPU usage when I measure it using Task Manager? Is CPU speed the only factor in measuring CPU usage?
Thanks.
Sar
Different OS's? Yes.
Operating systems are the go-between between the programs you run and the bare metal they run on. As OSes change and evolve, they naturally add and remove features that consume resources: things that run in the background, or changes to the manner in which the OS speaks to the hardware.
Also, the measurement of CPU usage is done by the OS. There isn't a tachometer on chips saying "running at 87% of redline", but rather that "tach" is constructed largely by the OS.
After better understanding your situation: I would suggest taking a look at the Performance Monitor (perfmon.exe) which ships with both XP and Win7, and gets you much finer-grain detail about processor usage levels. Another (very good) option would be to consider running a profiler on your application on both OSes and compare the results. It would likely be the best option to specifically benchmark your application on both OSes.
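For example, as a rough sketch (the counter name and sampling options are just one reasonable choice), both XP and Win7 ship with typeperf, which samples the same counters Performance Monitor uses:
rem Sample total CPU usage once per second for 30 seconds
typeperf "\Processor(_Total)\% Processor Time" -si 1 -sc 30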
Even on the same OS you should expect to see different usages, because there are so many factors that determine CPU usage.
The percentage of CPU usage listed in the task manager is not a very good indication of much of anything, except to say that a program either is, or is not using CPU. That particular statistic is derived from task switching statistics, and task switching is very sensitive to basically every single thing that's going on in a computer, from network access to memory speed to CPU temperature.
I'm an ASP.NET / C# developer. I use VS2010 all the time. I am thinking of enabling BitLocker on my laptop to protect the contents, but I am concerned about performance degradation. Developers who use IDEs like Visual Studio are working on lots and lots of files at once. More than the usual office worker, I would think.
So I was curious if there are other developers out there who develop with BitLocker enabled. How has the performance been? Is it noticeable? If so, is it bad?
My laptop is a 2.53GHz Core 2 Duo with 4GB RAM and an Intel X25-M G2 SSD. It's pretty snappy but I want it to stay that way. If I hear some bad stories about BitLocker, I'll keep doing what I am doing now, which is keeping stuff RAR'ed with a password when I am not actively working on it, and then SDeleting it when I am done (but it's such a pain).
2015 Update: I've been using Visual Studio 2015 on my Surface Pro 3 when I travel, which has BitLocker enabled by default. It feels pretty much like my desktop, which is an i7-2600k @ 4.6 GHz. I think on modern hardware with a good SSD, you won't notice!
2021 Update: I have been enabling BitLocker on all my computers and it flies now. No worries. Get an NVMe SSD and don't look back.
With my T7300 2.0GHz and Kingston V100 64 GB SSD, the results are (BitLocker off → on):
Sequential read: 243 MB/s → 140 MB/s
Sequential write: 74.5 MB/s → 51 MB/s
Random read: 176 MB/s → 100 MB/s
Random write and the 4 KB speeds: almost identical
Clearly the processor is the bottleneck in this case. In real-life usage, however, boot time is about the same, and a cold launch of Opera 11.5 with 79 tabs remained the same 4 seconds with all tabs loaded from cache.
A small build in VS2010 took 2 seconds in both situations. A larger build took 2 seconds vs. 5 from before. These are ballpark figures because I was timing with my watch.
I guess it all depends on the combination of processor, RAM, and SSD vs. HDD. In my case the processor has no hardware AES support, so compilation is the worst-case scenario, needing cycles for both assembly and crypto.
A newer system with Sandy Bridge would probably make better use of a BitLocker-enabled SSD in a development environment.
Personally I'm keeping BitLocker enabled despite the performance hit because I travel often. It took less than an hour to toggle BitLocker on/off, so maybe you could just turn it on when you are traveling and disable it afterwards (a command-line sketch follows below).
Thinkpad X61, Windows 7 SP1
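For reference, toggling it is done per volume with manage-bde from an elevated command prompt (a minimal sketch; the drive letter is an example and protector/recovery-key setup is omitted):
rem Start encrypting the C: volume
manage-bde -on C:
rem Decrypt the C: volume again when you no longer need it
manage-bde -off C: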
Some practical tests...
Dell Latitude E7440
Intel Core i7-4600U
16.0 GB
Windows 8.1 Professional
LiteOn IT LMT-256M6M MSATA 256GB
This test is using a system partition. Results for a non-system partition are a bit better.
Score decrease:
Read: 5%
Write: 16%
(Benchmark screenshots without and with BitLocker omitted.)
So you can see that even with a very strong configuration and a modern SSD, there is a small performance degradation in benchmarks. I don't know how it affects typical work, especially with Visual Studio.
Having used a laptop with BitLocker enabled for almost 2 years now with more or less similar specs (although without the SSD, unfortunately), I can say that it really isn't that bad, or even noticeable. Although I have not used this particular machine without BitLocker enabled, it really does not feel sluggish at all when compared to my desktop machine (dual core, 16 GB, dual Raptor disks, no BitLocker). Building large projects might take a bit longer, but not enough to notice.
To back this up with more non-scientific "proof": many of my co-workers used their machines intensively without BitLocker before I joined the company (it became mandatory to use it around the time I joined, even though I am pretty sure the two events are totally unrelated), and they have not experienced noticeable performance degradation either.
For me personally, having an "always on" solution like BitLocker beats manual steps for encryption, hands down. BitLocker To Go (new in Windows 7) for USB devices, on the other hand, is simply too annoying to work with, since you cannot easily exchange information with non-Windows-7 machines. Therefore I use TrueCrypt for removable media.
I am talking here from a theoretical point of view; I have not tried BitLocker.
BitLocker uses AES encryption with a 128-bit key. On a Core 2 machine clocked at 2.53 GHz, encryption speed should be about 110 MB/s using one core. The two cores could process about 220 MB/s, assuming perfect data transfer and core synchronization with no overhead, and that nothing else requires the CPU at the same time (that's one hell of an assumption, actually). The X25-M G2 is announced at 250 MB/s read bandwidth (that's what the specs say), so, in "ideal" conditions, BitLocker necessarily involves a bit of a slowdown.
However, read bandwidth is not that important. It matters when you copy huge files, which is not something you do very often. In everyday work, access time is much more important: as a developer, you create, write, read and delete many files, but they are all small (most of them are much smaller than one megabyte). This is what makes SSDs "snappy". Encryption does not impact access time. So my guess is that any performance degradation will be negligible(*).
(*) Here I assume that Microsoft's developers did their job properly.
The difference is substantial for many applications. If you are currently constrained by storage throughput, particularly when reading data, BitLocker will slow you down.
It would be useful to compare with other software-based whole-disk or whole-partition encryption like TrueCrypt (which has the advantage, if you dual-boot with Linux, of working for both Windows and Linux).
A much better option is to use hardware encryption, which is available in many SSDs as well as in Hitachi 7200 RPM HDDs. The performance difference between encrypted and not is undetectable, and the encryption is invisible to operating systems. If you have a decent laptop, you can use the built-in security functions to generate and store the key, which your password unlocks from the encrypted key storage of the laptop.
I used to use the PGP disk encryption product on a laptop (and ran NTFS compressed on top of that!). It didn't seem to have much effect if the amount of disk to be read was small; and most software sources aren't huge by disk standards.
You have lots of RAM and pretty fast processors. I spent most of my time thinking, typing or debugging.
I wouldn't worry very much about it.
My current work machine came with BitLocker and, being an upgrade from the prior model, it only seemed faster to me. What I have found, however, is that BitLocker is more bulletproof than TrueCrypt when it comes to accurately laying down the data. I do a lot of work in SAS, which constantly writes backup copies to disk as it moves along and shoots a variety of output types to disk at the end. SAS works fine writing output from multithreaded processes back to BitLocker and doesn't seem to know it's there. This has not been the case for me with TrueCrypt. I'm not sure what happens or how, but I found that processes got out of sync when working with source/output data in a TrueCrypt container, which is what I installed on my second work computer since it had no BitLocker. The constant backups were going to an SSD while the TrueCrypt results were on a regular HDD; maybe that speed difference helped trip it up. Whatever the cause, I had to quit using TrueCrypt on that second computer because it made my SAS results out of sync with respect to processing order, and it screwed up some of my processes and data. Scary stuff in my world.
I work with people who have successfully used TrueCrypt on the exact same computer, but they weren't using a disk-intensive app like SAS.
BitLocker To Go, the encryption BitLocker applies to thumb drives, does slow things down quite a bit when it comes to read/write times. It's not too hard to use as long as you remember your password for the thumb drive and are willing to wait for it to format/initialize the drive, but in my experience it made access to the flash drive about 4 times as slow. I don't know why it would slow down a thumb drive and not a disk, but that's how it was for me and my co-worker.
Based on my success with BitLocker at work, I bought Windows Pro for my home computer to get BitLocker, and I plan to encrypt some directories with it for things like financials.
So I've seen this question, but I'm looking for some more general advice: how do you spec out a build server? Specifically, what steps should I take to decide exactly what processor, HD, RAM, etc. to use for a new build server? What factors should I consider to decide whether to use virtualization?
I'm looking for the general steps I need to take to come to a decision on what hardware to buy -- steps that lead me to specific conclusions, think "I will need 4 gigs of RAM" instead of "as much RAM as you can afford".
P.S. I'm deliberately not giving specifics because I'm looking for the teach-a-man-to-fish answer, not an answer that will only apply to my situation.
The answer is: what requirements will the machine need in order to "build" your code? That is entirely dependent on the code you're talking about.
If it's a few thousand lines of code, then just pull that old desktop out of the closet. If it's a few billion lines of code, then speak to the bank manager about giving you a loan for a blade enclosure!
I think the best place to start with a build server, though, is to buy yourself a new developer machine and then rebuild your old one to be your build server.
I would start by collecting some performance metrics on the build on whatever system you currently use to build. I would specifically look at CPU and memory utilization, the amount of data read from and written to disk, and the amount of network traffic (if any) generated. On Windows you can use perfmon to get all of this data; on Linux, you can use tools like vmstat, iostat and top. Figure out where the bottlenecks are -- is your build CPU bound? Disk bound? Starved for RAM? The answers to these questions will guide your purchase decision -- if your build hammers the CPU but generates relatively little data, putting in a screaming SCSI-based RAID disk is a waste of money.
You may want to try running your build with varying levels of parallelism as you collect these metrics as well. If you're using gnumake, run your build with -j 2, -j 4 and -j 8. This will help you see if the build is CPU or disk limited.
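If your build uses MSBuild instead, a rough analogue of the same experiment (the solution name is hypothetical; /m sets the maximum number of parallel project builds):
rem Build with increasing parallelism and compare wall-clock times
msbuild MySolution.sln /m:2
msbuild MySolution.sln /m:4
msbuild MySolution.sln /m:8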
Also consider the possibility that the right build server for your needs might actually be a cluster of cheap systems rather than a single massive box -- there are lots of distributed build systems out there (gmake/distcc, pvmgmake, ElectricAccelerator, etc) that can help you leverage an array of cheap computers better than you could a single big system.
Things to consider:
How many projects are going to be expected to build simultaneously? Is it acceptable for one project to wait while another finishes?
Are you going to do CI or scheduled builds?
How long do your builds normally take?
What build software are you using?
Most web projects are small enough (build times under 5 minutes) that buying a large server just doesn't make sense.
As an example,
We have about 20 devs actively working on 6 different projects. We are using a single TFS Build server running CI for all of the projects. They are set to build on every check in.
All of our projects build in under 3 minutes.
The build server is a single quad core with 4 GB of RAM. The primary reason we use it is to perform dev and staging builds for QA. Once a build completes, that application is auto-deployed to the appropriate server(s). It is also responsible for running unit and web tests against those projects.
The type of build software you use is very important. TFS can take advantage of each core to build the projects within a solution in parallel. If your build software can't do that, then you might investigate having multiple build servers, depending on your needs.
Our shop supports 16 products that range from a few thousand lines of code to hundreds of thousands of lines (maybe a million+ at this point). We use 3 HP servers (about 5 years old), dual quad core with 10GB of RAM. The disks are 7200 RPM SCSI drives. Everything is compiled via msbuild on the command line with parallel compilation enabled.
With that setup, our biggest bottleneck by far is the disk I/O. We will completely wipe our source code and re-checkout on every build, and the delete and checkout times are really slow. The compilation and publishing times are slow as well. The CPU and RAM are not remotely taxed.
I am in the process of refreshing these servers, so I am going the route of workstation-class machines, going with 4 instead of 3, and replacing the SCSI drives with the best/fastest SSDs I can afford. If you have a setup similar to this, then disk I/O should be a consideration.
Possibly better suited for "Rack Overflow", but from a developer's point of view, what are the advantages and disadvantages of running IIS (serving both legacy classic ASP and .NET) as a 32-bit process instead of a 64-bit process on a 64-bit Windows host?
The main advantage of 32/64 (IIS/server) over 32/32 seems to be the ability to go up to 4 GB of memory per IIS process.
The advantages I expect of 32/64 over 64/64 appear to be that it's easier to access legacy 32-bit in-process DLLs (of which we still have one from a partner vendor we can't move away from immediately) and perhaps a smaller memory footprint for the same code given smaller memory pointers.
Are there any performance benefits of 64/64 over 32/64 or anything else that would warrant a full switch now? Have I made any false assumptions here?
The only perf advantage to running IIS in 64-bit versus 32-bit is access to a much larger memory address space.
If you are doing normal ASPX page processing, then it's likely you don't need to address more than 4 GB from any single process. Suppose you run in 32-bit mode with a web garden with multiple worker processes on the same machine. In that case each process can address up to 4 GB.
The big advantage can come when you perform caching. A 64-bit process can maintain a huge in-memory cache (assuming you have the 32GB or more of RAM to support it) to allow you to cache complex page content or data, on the web server. This allows perf gains when the data is more expensive to generate than it is to retrieve - for example if the data is an elaborated form (let's say the result of a monte carlo simulation), or if the data resides off-box and the network IO time is much more expensive than cache-retrieval time.
If you do not use caching, then 64-bit IIS is not going to help you. It will require 64-bit pointers for every lookup, which will make everything a little slower.
64-bit servers are much more effective when used for databases like SQL Server, or other data management servers (let's say, an enterprise email server like Exchange), than for processing servers, such as IIS or the worker processes it manages. With a 64-bit address space, servers that need to manage data can keep much more of that data in memory, along with indexes and other caches. This saves disk IO time and elaboration time when a query comes in. Most Web apps don't need to address more than 4gb from a single process.
Maybe a useful analogy: in transport, a large SUV is like a 64-bit machine, while a regular, compact passenger car is like a 32-bit server. You can carry much more stuff in a large SUV, and it has a larger towing capacity, seating for 8 people, and a GVWR of 8600 lbs. But with all that, you pay. The truck is heavier. It uses more fuel. If you are only carting around 2 people and one duffel bag, you don't need an SUV. You'll be better off with the smaller vehicle. It can be speedier and more efficient.
I don't think you've made any false assumptions. But I'd say, no, there's likely to be no performance difference between any of the scenarios you outlined. 32 on 64 on Windows does not operate at a penalty. 64 on 64 may give some slight performance boost, but that's dubious. There may be some memory savings with a 32-bit process, but this is likely negated by the thunking required to run the process in the first place.
The only benefit is the DLL issue you mentioned. That could be a reason for upgrading as well (if you have something specifically 64-bit that you need to use).
I've had an experience where, after moving from a 32-bit Windows Server 2003 machine to a 64-bit one, both running IIS 6, the performance of the ASP.NET 3.5 website was unacceptable.
The 64-bit server would consistently run a clear 2 seconds behind the 32-bit one.
After switching IIS 6 to run a 32-bit worker process, the performance was equal and comparable once again.
I haven't verified it, but I think it might only apply to IIS 6 on Windows Server 2003, as testing I've done with IIS 7 x64 (Vista) and a 64-bit IIS worker process seems to perform just fine.
The process to swap to the 32-bit worker process was quite simple. Here is the KB article with the supporting details:
http://support.microsoft.com/kb/894435/en-us
ASP.NET 2.0, 32-bit version
To run the 32-bit version of ASP.NET 2.0, follow these steps:
Click Start, click Run, type cmd, and then click OK.
Type the following command to enable the 32-bit mode:
cscript %SYSTEMDRIVE%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32bitAppOnWin64 1
Type the following command to install the version of ASP.NET 2.0 (32-bit) and to install the script maps at the IIS root and under:
%SYSTEMROOT%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i
Make sure that the status of ASP.NET version 2.0.50727 (32-bit) is set to Allowed in the Web service extension list in Internet Information Services Manager.
See the KB article for setting back to 64-bit.
For memory availability, refer to this msdn blog.
Memory availability. For my application, we got what we needed by switching from a 32-bit process on a 32-bit OS to a 32-bit process on a 64-bit OS, without the trouble of replacing 3rd-party libraries. So we stopped there. Benefits are: 1) 2-3x effective memory available to each IIS worker process and 2) in a 32-bit OS where the web site uses a lot of memory, other system processes and web sites compete for limited total memory. For your application, look at how much memory your worker processes use. If each WP isn't using a lot of memory (well over 1 GB), 64-bit worker processes won't help much.
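As a quick sketch for that check (one simple option besides Task Manager), you can list the IIS worker processes and their memory usage from the command line:
rem Show every w3wp.exe worker process along with its current memory usage
tasklist /fi "imagename eq w3wp.exe"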
For performance, I think you have to test your own applications in both configurations. Dave's post above indicates that you might have performance degradation with 64-bit. As cheeso notes, some applications may see benefits from caching (2 GB+ of cache is a lot, though). Except for limited and simple applications, I don't think we are going to be able to make performance generalizations. We might be able to point to specific technologies that perform better or worse.
Besides the obvious memory differences, 32-bit processes on a 64-bit OS have to run in something called "Windows on Windows" or WOW mode. It's basically a thunking/emulation layer. There is a performance penalty if you pay close enough attention.
This is actual advice from Microsoft: "We recommend that you configure IIS to use a 32-bit worker processes on 64-bit Windows. Not only its compatibility better than the native 64-bit, performance and memory consumption are also better."
Please refer to this link posted in one of the comments above and published 05/14/2020:
https://learn.microsoft.com/en-us/iis/web-hosting/web-server-for-shared-hosting/32-bit-mode-worker-processes
I cannot claim to understand exactly why, but the advice is very clear: with 64-bit workers the virtual address space is bigger, so a 32-bit worker is generally more efficient.
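On IIS 7 and later the same setting is per application pool; a minimal sketch from an elevated command prompt (the pool name is just an example):
rem Switch an application pool to 32-bit worker processes on 64-bit Windows
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:true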