How can I uninstall Win32 assemblies and clean up WinSxS?

After a lot of trial and error (mostly due to a lack of documentation and examples) I have managed to create MSI installers that install custom DLLs to WinSxS as side-by-side assemblies. There is only one problem: uninstalling leaves all the files (DLLs, manifests and catalogs) in the WinSxS directory. How can or should I best clean that up? I know for sure that nothing else references them.
I have read somewhere that WinSxS has a self-scavenging process that cleans up over time, but I could not find more information about it. Can you manually invoke this to clean things up?
The only other way I see is manually deleting those bits. First you have to change the owner of all the files (assembly, catalog, manifest and their respective directories) from SYSTEM to an administrator account, adjust the permissions and delete them. There are also pieces left in the registry (I think HKLM\COMPONENTS\DerivedData\Components may be one place), but since WinSxS is supposed to be treated as opaque, it is hard to find any information.

Scavenging isn't exposed anywhere that I know of. I'm not even sure when it is kicked off automatically. Maybe on uninstall of a service pack? Maybe some tool admins can run? I really forget.
Anyway, my suggestion is: don't fight it. There are so many twisty turns down there that it just isn't worth trying to get the disk space back. Once uninstalled, the bits still in the SxS cache will not be activated, so they are just wasting space.
It's a dumb design but blame Microsoft and don't try to overcompensate.

Here is an article that is a fairly complete guide to WinSxS.
In short: you can only uninstall some components (all of their versions live in this folder), and you can run the Service Pack bridge-burning utility (on Vista it is named VSP1CLN.EXE and ships with SP1). Note that after running it, you will no longer be able to uninstall the SP or roll any components back to a state prior to the SP release.
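If you decide to run it, it's a one-shot operation; a sketch of the invocation, assuming the default SP1 install location (run it from an elevated command prompt, and note it cannot be undone):
%windir%\system32\vsp1cln.exe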

No one is convinced you can - short of a complete reinstall, your bloated WinSxS directory is there to stay.
There's been a long "discussion" of the problem on TechNet.
There is no documentation of the format, nor any instructions on how to remove files that are no longer needed - MS seems to think that disk space is cheap. There is a self-scavenging feature, but no one is convinced it works, or if it does, it is very conservative (as you'd hope, since you don't want it to break your OS).
You can tell if the scavenger is working by checking the C:\Windows\winsxs\Temp\PendingDeletes folder, as this is where Windows Update or an installer moves files to - the scavenger just deletes the files in there.
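A quick way to peek at that folder, assuming your account has permission to list it (it is normally locked down):
dir /a C:\Windows\winsxs\Temp\PendingDeletes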

You'll notice that after you uninstall your assembly, while the files are still there, they can no longer be bound to - so they are just "staged", or cached, but not really installed.
Rob & gbjbaanb are correct - you cannot manually invoke a scavenge yourself. Don't try to delete the files yourself - there are multiple places in the registry where they are registered, DerivedData\Components being only one of the many references.
I think the rule for Vista is that scavenging is kicked off by the TrustedInstaller service after 10 minutes of machine inactivity, following the last servicing operation (service pack, hotfix, etc.). But it's very fickle, so it doesn't run as often as it should. So just be patient, and the files will disappear on their own.

Well, I was having some issues, as I have an 80 GB SSD for my Windows install and the WinSxS folder was about 12 GB.
I was searching the net and I found this command:
DISM.exe /online /Cleanup-Image /spsuperseded
And now my WinSxS is 7 GB, which was wonderful news.

There are a few updates regarding the cleanup method that apply to newer OS versions. Check http://www.karafilis.net/winsxs-cleanup
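For what it's worth, on Windows 8/8.1 and later the component store cleanup moved into DISM as well; a sketch of the commonly used commands (run elevated; note that /ResetBase makes already-installed updates permanently non-removable, so use it deliberately):
DISM.exe /Online /Cleanup-Image /AnalyzeComponentStore
DISM.exe /Online /Cleanup-Image /StartComponentCleanup /ResetBase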

Qt 5.13.2.0 possible malware Variant.Adware.Kazy.795337 in qwebp.dll

Today we received info from one of our customers about this malware detection:
Gen:Variant.Adware.Kazy.795337
It's only inside the qwebp.dll file attached to our project by the qtdeploy process.
We're building 32-bit Qt (5.13.2.0) from source, and the same issue is reported for the same DLL no matter where it is built. We're using the latest VS 2019.
https://www.virustotal.com/gui/file/9f09c05803ad4ffcd99454c420a840e17549ee711690fb1f11fd1b59bccc3b23/detection
https://www.virustotal.com/gui/file/80c4c747d781a27c72de71c0900ccc045aefd2b4e4f17c949aaeeb3d0b7973b1/detection
When I scanned the older version (5.13.0.0), everything was OK; previous versions seem to be clean:
https://www.virustotal.com/gui/file/b7b7cacaef0e76439ef8c367c401524e93dfa00c9ca67a20290e829fec325a5a/detection
Also, any debug build and 64-bit builds are clean too.
Any idea what could be causing this? Could anyone else please try scanning this file?
Thanks
TL;DR: It is probably nothing, but notify Qt anyway (and check your own systems).
Are you using the prebuilt Qt binaries or are you compiling the sources yourself?
If you are using the official prebuilt binaries, I'd of course expect the Qt dev team to scan them and verify that they don't accidentally spread malware, but there is always the minuscule chance of something slipping through.
The same goes for the sources - while their review process should be thorough enough to prevent malicious code from being slipped in, there is still the outside chance of either a key account being compromised or (even more unlikely) bad code being added slice by slice over a longer period to avoid detection (along the lines of the Underhanded C Contest). Still, either case seems rather unlikely.
Bottom line: while this does sound like (and probably is) a false positive, you may still want to raise an issue with Qt, e.g. on their bug tracking site or directly with Qt support (if you have a commercial license), to be sure. Also (if you didn't do so already) verify that the problem is not on your end, e.g. that your computers are clean and that you aren't just randomly detecting your own infection in that file.
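One quick sanity check, assuming you are on Windows: hash your local copy of the DLL and compare it against the SHA-256 shown on the VirusTotal pages above, so you know everyone is scanning the same binary:
certutil -hashfile qwebp.dll SHA256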
Update:
A ticket concerning this issue was opened on the Qt bug tracker (I assume by Ludek Vodicka). It was opened on Nov 19th and categorized as P1: Critical, but unfortunately there is no indication that it is actually being worked on (at least as of Dec 18th).

Cygwin SVN: E200030: SQLite disk I/O error

When I use Subversion in Cygwin to update a repository, some directories update successfully, while others fail with the error message:
svn: E200030: sqlite: disk I/O error
When running svn update again on the same repository, a different directory can fail with the same error. Sometimes an SVN instruction follows the above error message.
This happened due to a change someone wanted in Cygwin's SQLite package. I was the maintainer of that package when this question was asked, and I made the change that caused this symptom.
The change was released as Cygwin SQLite version 3.7.12.1-1, and it fixed that one person's problem, but it had this bad side effect of preventing Cygwin's Subversion package from cooperating with native Windows Subversion implementations.
What Happened?
The core issue here is that Subversion 1.7 changed the working copy on-disk format. Part of that change involves a new SQLite database file, .svn/wc.db. Now, in order to implement SQLite's concurrency guarantees, SQLite locks the database file while it is accessing it.
That's all fine and sensible, but you run into a problem when you try to mix Windows native and POSIX file locking semantics. On Windows, file locking almost always means mandatory locking, but on Linux systems — which Cygwin is trying to emulate — locking usually means advisory locking instead.
That difference helps explain where the "disk I/O error" comes from.
The Cygwin SQLite 3.7.12.1-1 change was to build the library in "Unix mode" instead of "Cygwin mode." In Cygwin mode, the library uses Windows native file locking, which goes against the philosophy of Cygwin: where possible, Cygwin packages call POSIX functions instead of direct to the Windows API, so that cygwin1.dll can provide the proper POSIX semantics.
POSIX advisory file locking is exactly what you want with SQLite when all the programs accessing the SQLite DBs in question are built with Cygwin, which is the default assumption within Cygwin. But, when you run a Windows native Subversion program like TortoiseSVN alongside a pure POSIX Cygwin svn, you get a conflict. When the TortoiseSVN Windows Explorer shell extension has the .svn/wc.db file locked with a mandatory lock and Cygwin svn comes along and tries an advisory lock on it, it fails immediately. Cygwin svn assumes a lock attempt will either succeed immediately or block until it can succeed, so it incorrectly interprets the lock failure as a disk I/O error.
How Did We Solve This Dilemma?
Within Cygwin, we always try to play nice with Windows native programs where possible. The trick was to find a way to do that, while still playing nice with Cygwin programs, too.
Not everyone agreed that we should attempt this. "Cygwin SQLite is part of Cygwin, so it only needs to work well with other Cygwin programs," one group would say. The counterpartisans would reply, "Cygwin runs on Windows, so it has to perform well with other Windows programs."
Fortunately, we came up with a way to make both groups happy.
As part of the Cygwin SQLite 3.7.17-x packaging effort, I tested a new feature that Corinna Vinschen added to cygwin1.dll version 1.7.19. It allowed a program to request mandatory file locking through the BSD file locking APIs. My part of the change was to make Cygwin SQLite turn this feature on and off at the user's direction, allowing the same package to meet the needs of both the Cygwin-centric and Windows-native camps.
This Cygwin DLL feature was further improved in 1.7.20, and I released Cygwin SQLite 3.7.13-3 using the finalized locking semantics. This version allowed a choice of three locking strategies: POSIX advisory locking, BSD advisory locking, and BSD/Cygwin mandatory locking. So far, the latter strategy has proven to be completely compatible with native Windows locking.
Later, when Jan Nijtmans took over maintenance of Cygwin SQLite, he further enhanced this mechanism by fully integrating it with the SQLite VFS layer. This allowed a fourth option: the native Windows locking that Cygwin SQLite used to use before we started on this journey. This is mostly a hedge against the possibility that the BSD/Windows locking strategy doesn't cooperate cleanly with a native Windows SQLite program. So far as I know, no one has ever needed to use this option, but it's nice to know it's there.
Alternate Remedy
If the conflict you're having is between Cygwin's command line svn and the TortoiseSVN Windows Explorer shell extension, there's another option to fix it. TortoiseSVN ships with native Windows Subversion command-line programs as well. If you put these in your PATH ahead of Cygwin's bin directory, you shouldn't run into this problem at all.
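A minimal sketch, assuming TortoiseSVN's command-line tools are installed in their default location - prepend them to PATH in your Cygwin shell profile (e.g. ~/.bash_profile):
export PATH="/cygdrive/c/Program Files/TortoiseSVN/bin:$PATH"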
Having encountered the same problem, it appears (in my case at least) to be an interaction with TortoiseSVN. Disabling TortoiseSVN's status icon cache (Settings > Icon Overlays > Status cache "None" > Apply) got everything working just fine for me.
(That obviously doesn't resolve the underlying problem, which appears to be due to the SQLite package that Cygwin's Subversion package relies on changing its mode of access. As I write, there's active [if slow] discussion on the Cygwin mailing list about how to resolve this.)
ldd /usr/bin/svn shows that SVN depends on /usr/bin/cygsqlite3-0.dll.
After I changed libsqlite3 from 3.7.12 back to 3.7.3, the problem went away. So this may be a SQLite library problem.
Using TortoiseSVN, unticking Refresh shell overlays in the Clean up dialog solved the problem for me.
For others' reference, I just had this same error (svn: E200030: sqlite: disk I/O error) and found that one of my log files was taking up all my disk space (nothing could be written to the HDD because there was no free space).
Run this to make sure you have enough disk space:
df -h
If you don't, delete some large files (I just removed some backup and log files).
Then I just needed to run:
svn cleanup
This resolved the error for me.

stopping app_offline.htm from being created and deleted at each build?

I have a solution with 2 projects in Visual Studio 2008 SP1, .NET Framework 3.5 SP1.
an ASP web site.
a Class Library (dll) project.
I have a reference from the web site to the class library, as the class library is my data layer. But anyway, this happens with just this basic setup: a solution with these two types of projects and a reference from the web site to the class library.
Now, each time I modify something in the Class Library and build it, Visual Studio creates a file called app_offline.htm and then deletes it (it sends it to the Recycle Bin).
This is really annoying because at the end of the day I end up with a full Recycle Bin, and, being the perfectionist I am, I want to keep it clean. I'm not the only one with this problem: here and here.
I now know the cause of the problem, but still not how to fix it. If you haven't heard of app_offline.htm before, here's ScottGu's article on it.
Does anyone know a solution to the problem? Some setting in VS to delete the file forever after the Build process? (I really don't want to set my Recycle Bin to do that, as I do delete things unintentionally from time to time and I'd like to be able to recover those.)
This file does not go into the Recycle Bin for me. Perhaps you have some draconian utilities installed which do this? Many anti-virus tools and general system utility suites used to do this back in 2000, but I have no experience with later versions.
Update: You can use Process Monitor to find out which process moves this file to the Recycle Bin.
[Disclaimer: I'm adding an answer firstly because I hope it will get the question seen by more people (I admit it) and secondly because I have no character limit in an answer, as opposed to a comment.]
I followed Sander's suggestion and used Process Monitor to track which process moves this file to the Recycle Bin.
It was indeed devenv.exe.
There are several events where it performs operations like QueryDirectory, QueryOpen, CreateFile and CloseFile, and devenv.exe is the only process that has anything to do with app_offline.htm.
Still... how can I make Visual Studio stop filling up my Recycle Bin? (Way to go, Dan, putting a question in the 'answer'. (: )
I started seeing the same problem shortly after we suffered a VSTS server outage. The VSTS server went down for a day, so I had to open the solution in offline mode. After the VSTS server came back online, I had to reopen the solution under source control, and the app_offline.htm files started appearing non-stop every time I recompile my web projects.
THIS IS REALLY ANNOYING!
I am not sure how to stop it yet, but I know how to reliably recreate the problem on my environment:
Windows XP Pro, VS2008, SourceGear (Source Control System).
Whenever I perform a checkout, the app_offline.htm file is instantly created in, and deleted from, the root folder. The source control system is using SQL Enterprise, so I am not sure it is related to the references some posts make to SQL Express.
Again, I still don't know how to stop it, but maybe this will help others figure out how/when the file is generated and deleted.
Use Web Application projects, not the Web Site templates, those are for 'dummies'. :)
I had this problem because I published directly to an Azure Web Service from the dev machine.
The answer is here, with another possible workaround here.
This is all I could find on the subject. Unfortunately it's also speculation.
http://petermcg.wordpress.com/2008/05/12/silverlight-app-offline/

New to SVN, How to Set Up?

I have a Windows 2003 Server with IIS, I installed VisualSVN Server on it.
I have two developers, who are going to use TortoiseSVN.
Since this is my first time ever setting up an SVN server, I am kind of confused about how this will all work. The way I see it, each developer would have a copy of the repository on his or her local PC; would each person also be required to have IIS installed on their PC to test their copy before checking in?
Should I create a testing folder on the server and then a production-ready one? It seems as if that would cause more issues with copies.
What would you do?
EDIT
I don't know what I was thinking; I forgot that VS has a built-in web server when you debug, so the question about setting up IIS on either client or server is now a non-issue. But I am confused: I imported the site into the repo, and it said it was at revision 2, but I don't see any of the files in the repo folder. Do I create a virtual folder in IIS pointing to the repo that I created?
No, each developer uses your repository and checks out their own working copy to do their work. They do not need IIS or an SVN server, etc., installed on their systems.
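For example, from any developer machine (a sketch; substitute your VisualSVN Server's actual repository URL and a local path of your choosing, or use TortoiseSVN's equivalent Checkout command):
svn checkout http://yourserver/svn/yourrepo/trunk C:\work\yoursite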
I recommend reading up on the Subversion FAQ.
Your devs don't have a local repository; they have a working copy on their PC. Typically, this is the most recent version of the app plus whatever changes have been made by the developers but not committed yet.
As this is a web app, your developers will need some kind of web server locally to test it - this could be IIS, or Visual Studio's built-in web server (although that behaves differently from IIS in subtle ways).
You said in a comment: "My problem is I dont want the devs to commit to the live site in case there was a bug.".
The devs commit to the SVN repository on the server; at some point you will want to export (a.k.a. 'publish') a copy of the latest version in your repository to your live site. To make sure this works, you can check out a specific version from the server, test it, and upload it if it passes the tests. Devs will always check in code with bugs (even though it builds), because it's better to check code in frequently than to build up lots of changes locally and then commit them - there are bound to be conflicts with work other developers have done.
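A sketch of that publish step, with placeholder revision, URL and path:
svn export -r 1234 http://yourserver/svn/yourrepo/trunk C:\inetpub\wwwroot\yoursite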
Branching and tagging are useful concepts here: when you have a version which is almost right, you 'branch' it away from the main 'trunk' of the source code tree, fix any issues in the branch (back-porting fixes to the trunk as required), then when you have a working version you 'tag' it (as version x.y.z) and upload it. This way you can always refer to the particular version of the code you have uploaded, which makes it a lot easier to track down bugs that turn up in production. As others have suggested, read the SVN documentation for more info.
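Both branching and tagging are cheap server-side copies in SVN; a sketch with placeholder URLs and version numbers:
svn copy http://yourserver/svn/yourrepo/trunk http://yourserver/svn/yourrepo/branches/1.2.x -m "Branch for 1.2 stabilization"
svn copy http://yourserver/svn/yourrepo/branches/1.2.x http://yourserver/svn/yourrepo/tags/1.2.0 -m "Tag release 1.2.0"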
It depends on how you work. There are other discussions about folder structure and such which play directly into how you use version control.
Uh, no, no local repositories. Setting up SVN is easy - well, almost. You'll want to look for the SVN Windows installer and set it up on the server. You'll want to install Apache, and then you'll have a little hurdle setting up the httpd.conf file to expose SVN over HTTP. There's a little complexity in setting up security, so go with Windows authentication; you'll need WebDAV - google it.
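The relevant httpd.conf section ends up looking roughly like this (a sketch only - it assumes mod_dav_svn is loaded and your repositories live under C:/Repositories; the auth directives will differ if you wire up Windows authentication through something like mod_auth_sspi):
<Location /svn>
  DAV svn
  SVNParentPath "C:/Repositories"
  AuthType Basic
  AuthName "Subversion repositories"
  AuthUserFile "C:/Repositories/htpasswd"
  Require valid-user
</Location>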
Once that's done, any svn client can hit it and checkout a copy and work with SVN normally. If you get really stuck, comment here and I'll go get a copy of our install and config for you.
The good news is that it's rock solid, once you get it setup it'll run forever.
"Pragmatic Version Control Using Subversion" and the SVN red-bean are the two sources you need to see.
Set up SVN on a single server and have all your developers point to it.
I've installed TortoiseSVN on the server and do updates/checkouts of the release website. Some people don't like checking in compiled code, but I like having the compiled production site in SVN.
If you use TortoiseSVN on the server, do the initial checkout to the inetpub/website directory, and then on rollouts you just need to update the directory using TortoiseSVN → Update.
Of course, checking in to roll out is considered bad practice without first rolling out to and testing on staging servers, but it depends on your team size.
I have used the following resources for learning SVN:
http://www.polymorphicpodcast.com/shows/subversion/
http://www.dimecasts.net/Casts/ByTag/SVN
I found both quite good, and learning by watching can be easier, especially for getting started.
No - your central server will maintain the repository. Your developers will check out working copies, make changes, and then commit them back to your repository.
You actually have quite a few things to figure out if you want to do a successful deployment of subversion.
One really good article about setting up subversion on Windows - https://blog.codinghorror.com/setting-up-subversion-on-windows/
No, the SVN server should be installed on a single computer. Each developer points at this computer and checks out a local working copy - a full or partial copy of the repository.
You may also want to buy the O'Reilly book about Subversion. I don't remember the title, sorry, but it helped me a lot.
All the best! Sylvain.

What artifacts to save for a nightly build?

Assume that I set up an automatic nightly build. What artifacts of the build should I save?
For example:
Input source code
output binaries
Also, how long should I save them, and where?
Do your answers change if I do Continuous Integration?
You shouldn't save anything just for the sake of saving it. You should save it because you need it (e.g., QA uses nightly builds to test). At which point, "how long to save it" becomes however long QA wants them.
I wouldn't "save" source code so much as tag/label it. I don't know what source control you're using, but tagging is trivial (in performance and disk space) for any quality source control system. Once your build is tagged, unless you need the binaries, there really isn't any benefit to keeping them around, because you can simply re-compile from source when necessary.
Most CI tools let you tag on each successful build. This can become problematic for some systems, as you can easily have 100+ tags a day. For such cases I recommend still running a nightly build and only tagging that.
Here are some artifacts/pieces of information that I keep for each build:
The tag name of the snapshot you are building (tag and do a clean checkout before you build)
The build scripts themselves, or their version number (if you treat them as a separate project with its own version control)
The output of the build script: logs and final product
A snapshot of your environment:
compiler version
build tool version
libraries and dll/libs versions
database version (client & server)
ide version
script interpreter version
OS version
source control version (client and server)
versions of other tools used in the process, and everything else that might influence the content of your build products. I usually do this with a script that queries all this information and logs it to a text file stored with the other build artifacts (a sketch follows below).
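A minimal sketch of such an environment-snapshot script (which tools you query depends on your stack; the ones below are just examples to swap out):
uname -a > build-env.txt
gcc --version | head -n 1 >> build-env.txt
svn --version --quiet >> build-env.txt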
Ask yourself this question: "if something destroys entirely my build/development environment what information would I need to create a new one so I can redo my build #6547 and end up with the exact same result I got the first time?"
Your answer is what you should keep at each build and it will be a subset or superset of the things I already mentioned.
You can store everything in your SCM (I'd recommend a separate repository), but in that case your question of how long to keep the items loses its point. Or you can store everything in zipped folders, or burn a CD/DVD with the build result and artifacts. Whatever you choose, keep a backup copy.
You should store them for as long as you might need them; how long that is will depend on your development team's pace and your release cycle.
And no, I don't think any of it changes if you do continuous integration.
This isn't a direct answer to your question, but don't forget to version control the nightly build setup itself. When the project structure changes, you may have to change the build process, which will break older builds from that point on.
In addition to the binaries, as everyone else has mentioned, I would recommend setting up a symbol server and a source server, and making sure you get the correct information out and into them. It will aid debugging tremendously.
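For reference, a symbol server is populated with symstore from the Debugging Tools for Windows; a sketch with made-up paths and product/version names:
symstore add /f C:\build\output\*.pdb /s \\buildserver\symbols /t "MyProduct" /v "build-1234"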
We save the binaries, stripped and unstripped (so we have exactly the same binary, once with and once without debug symbols). Furthermore, we build everything twice, once with debug output enabled and once without (again, stripped and unstripped, so every build results in 4 binaries). The build is stored in a directory named after the SVN revision number. That way we can always retrieve the source from the SVN repository by simply checking out that very revision (so the source is archived as well).
A surprising one I learned about recently: If you're in an environment that might be audited you'll want to save all the output of your build, the script output, the compiler output, etc.
That's the only way you can verify your compiler settings, build steps, etc.
Also, how long to save them for, and where to save them?
Save them until you know that build won't be going to production - in other words, for as long as you have the compiled bits around.
One logical place to save them is your SCM system. Another option is to use a tool that will automatically save them for you, like AnthillPro and its ilk.
We're doing something close to "embedded" development here, and I can tell you what we save:
the SVN revision number and timestamp, as well as the machine it was built on and by whom (also burned into the build binaries)
a full build log, showing whether it was a full/incremental build, any interesting (STDERR) output the data baking tools produced, a list of files compiled and any compiler warnings (this compresses very well, being text)
the actual binaries (for anywhere from 1-8 build configurations)
files produced as a side effect of linking: a linker command file, address map, and a sort of "manifest" file indicating what was burned into the final binaries (CRC and size for each), as well as the debugging database (.pdb equivalent)
We also mail out the result of running some tools over the "side-effect" files to interested users. We don't actually archive these since we can reproduce them later, but these reports include:
total and delta of filesystem size, broken down by file type and/or directory
total and delta of code section sizes (.text, .data, .rodata, .bss, .sinit, etc)
When we have unit tests or functional tests (e.g. smoke tests) running, those results show up in the build log.
We've not thrown anything out yet - granted, our target builds usually end up at ~16 or 32 MiB per configuration, and they're fairly compressible.
We do keep uncompressed copies of the binaries around for 1 week for ease of access; after that we keep only a lightly compressed version. About once a month we have a script that extracts each .zip the build process produces and 7-zips a whole month of build outputs together (which takes advantage of there being only small differences between builds).
An average day might have a dozen or two builds per project... The build server wakes up about every 5 minutes to check for relevant differences and build. A full .7z of one month of a large, very active project might be 7-10 GiB, but that's certainly affordable.
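If you want to reproduce that monthly repack with 7-Zip's command line, it is roughly this (archive and directory names are made up):
7z x "build-*.zip" -oextracted/2008-10
7z a -mx=9 builds-2008-10.7z extracted/2008-10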
For the most part, we've been able to diagnose everything this way. Occasionally there's a hiccup in the build system and a file isn't actually at the revision it's supposed to be at when a build happens, but there's usually enough evidence of this in the logs. Sometimes we have to dig out a tool that understands the debugging database format and feed it a few addresses to diagnose a crash (we have automatic stack dumps built into the product). But usually all the information needed is there.
We haven't had to crack open the .7z archives yet, I should mention. But the information is there, and I have some interesting ideas on how to mine useful bits of data from it.
Save what can't be reproduced easily. I work on FPGAs where only the FPGA team has the tools, and some cores (libraries) of the design are licensed to compile on only one machine. So we save the output bitstreams. But try to check them against one another rather than relying on a date/time/version stamp.
Save as in check in to source code control, or just keep on disk? Save nothing to source code control. All derived files should be visible in the file system and available to developers. Don't check in binaries, code generated from XML files, message digests, etc. A separate packaging step will make these end products available. Since you have the change number, you can always reproduce the build if necessary - assuming, of course, that everything you need to do a build is completely in the tree and available to all builds by syncing.
I would save your built binaries for exactly as long as they have a chance to go into production or be used by some other team (like a QA group). Once something has left production, what you do with it can vary a lot. For a lot of teams, they'll keep just their most recent prior build around (for rollback) and otherwise discard their builds.
Others have regulatory requirements to keep anything that went into production around for as long as seven years (banks, for example). If you are a product company, I'd keep around any binary a customer might have installed, in case a tech support person wants to install the same version.
