How to avoid antivirus false positives during installation? - qt

Just made an installer (using QTIFW) for my Qt project, but when I tried to install it on another machine, 360 Total Security interrupted the installation process.
It pops up and complains about d3dcompiler_47.dll, asking the user to allow or block the file. If the user does nothing, or doesn't allow it, this
seems to prevent QTIFW from writing the file as part of the application installation.
That led to the following error:
Can't create "C:\Program Files\company\project\d3dcompiler_47.dll"
That's quite terrible. How should I deal with this situation?

False Positives: False positives from malware scanners can be quite hard to deal with. To check with more than one malware scanner, you can upload the release files individually, as well as the complete setup, to https://www.virustotal.com. This service runs many malware scanners against the submitted files, so you can see which malware scanners flag which binary. There are a few other such online anti-malware scanners, such as Kaspersky's and Avira's.
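If you want to script that check (for example, to scan every release artifact automatically), a minimal sketch using VirusTotal's public v3 REST API could look like the following. The file name and API key are placeholders, and you should verify the endpoints against the current VirusTotal documentation before relying on this.

    import requests  # third-party: pip install requests

    API_KEY = "your-virustotal-api-key"  # placeholder
    HEADERS = {"x-apikey": API_KEY}

    # Upload one release file for scanning (files above ~32 MB need the
    # separate /files/upload_url endpoint instead).
    with open("setup.exe", "rb") as f:  # placeholder file name
        resp = requests.post("https://www.virustotal.com/api/v3/files",
                             headers=HEADERS, files={"file": ("setup.exe", f)})
    resp.raise_for_status()
    analysis_id = resp.json()["data"]["id"]

    # Fetch the analysis report (in practice, poll until
    # data.attributes.status is "completed").
    report = requests.get(
        f"https://www.virustotal.com/api/v3/analyses/{analysis_id}",
        headers=HEADERS)
    stats = report.json()["data"]["attributes"]["stats"]
    print(stats)  # e.g. {"malicious": 1, "undetected": 70, ...}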
Update: And then there is Process Explorer. Check this tweet chain for how to check your running application for malware hits per process and per loaded file.
When you see the scope of the problem (how many files are flagged), you should work backwards to see how you could go about solving it. This can involve getting the files whitelisted by the malware vendor(s), eliminating them from your setup, fixing the technicalities that flag the files, and so on. Some options are listed and elaborated below.
Fixes: There are both technical and practical fixes you can try. Don't expect it to be easy. The issue of false positives is a very serious deployment problem. The proposed fixes and workarounds below are in no particular order:
Compiler Settings: Sometimes you can actually choose different compiler settings to avoid the problem, but often you are not so lucky. I have seen this with files compiled with special Spectre / Meltdown mitigation settings. They were flagged as unknown by malware scanners.
Dangerous API-calls: You should also check what API calls are made in the problem file(s) that could be known to cause security warnings (unusual and/or dangerous API methods), and remove them if you can. I have heard of cases where malware vendors refuse to whitelist a binary because what it does makes no sense to them (try calling a firmware update for an embedded system as part of your setup installation, or some low-level call triggered by a security tool you are installing).
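As a quick way to see which API calls a Windows binary imports, something like the third-party pefile module can be used; a hedged sketch, with the file name as a placeholder. You can then scan the output for calls that scanner heuristics are known to dislike (for example WriteProcessMemory or CreateRemoteThread).

    import pefile  # third-party: pip install pefile

    pe = pefile.PE("your_binary.exe")  # placeholder file name
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dll = entry.dll.decode(errors="replace")
        for imp in entry.imports:
            # Imports can be by name or by ordinal only.
            name = imp.name.decode(errors="replace") if imp.name else f"ordinal {imp.ordinal}"
            print(f"{dll}: {name}")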
Eliminate Files: Removing certain components from your application can also help sometimes - especially if they are third-party components added to your application for convenience only. In other words your application works fine without them. Removing a problem can be much simpler than fixing it.
Vanilla Installer: Sometimes you can split problem components into a separate setup so your main setup installs without issues. This can help enormously with support issues or overall application approval in corporate settings. You can also make 2 full setups where one has all probable false-positive triggers removed - your "vanilla setup" that should install without drama in all cases.
Digital Signatures: Signing the file with a digital signature can help since a proper certificate "buys trust outright" in reputation-based score systems such as Microsoft SmartScreen. Note that this needs to be an EV-level certificate. Please check for updated information here as technology evolves. Certificate / signing technologies always seem to cause something unexpected.
Malware Scanner Whitelisting: Submit the file for whitelisting. There is a formal approach with the malware vendors, as explained by Bogdan Mitrache of Advanced Installer here: Antivirus Whitelisting Pains. You submit files to them for whitelisting. The article explains real-world experience with binaries flagged as malware when delivering software. Mandatory reading.
Microsoft SmartScreen: Microsoft has their own way to submit files for analysis and whitelisting: https://www.microsoft.com/en-us/wdsi/filesubmission. They state: "Microsoft security researchers analyze suspicious files to determine if they are threats, unwanted applications, or normal files. Submit files you think are malware or files that you believe have been incorrectly classified as malware."
Unique Executable Per Customer: Sometimes a unique executable is used for every customer by auto-generating an installer for each sale. I would advise against this since the installer executable - even when signed - will be a "new encounter" for malware scanners. You could run into trouble you do not need. There is also an added risk for each generated installer executable to actually be infected by real malware, and there is also the QA-issue that every installer should be tested before release.
Signed Malware: Whatever you do, make sure the file in question isn't actually real malware! Obviously your own files can get infected. Test well. If you sign malware and deliver it to your client, the digital signature is proof positive that you delivered the malware to them. Not good.
More on Digital Signatures: Some information and links to get your setup and / or files signed:
https://www.advancedinstaller.com/user-guide/faq-digital-signature.html
https://knowledge.digicert.com/generalinformation/INFO1119.html
https://www.thawte.com/resources/getting-started/how-code-signing-works/
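For instance, signing and timestamping an installer with signtool from the Windows SDK can be scripted; a minimal sketch, where the certificate file, password, timestamp URL and file name are all placeholders (an EV certificate typically lives on a hardware token and is selected differently, e.g. by thumbprint):

    import subprocess

    subprocess.run(
        [
            "signtool", "sign",
            "/fd", "SHA256",                         # file digest algorithm
            "/tr", "http://timestamp.digicert.com",  # RFC 3161 timestamp server
            "/td", "SHA256",                         # timestamp digest algorithm
            "/f", "mycert.pfx",                      # placeholder certificate file
            "/p", "password",                        # placeholder password
            "setup.exe",                             # placeholder installer
        ],
        check=True,
    )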

Related

iOS 11.4.1 shows device Jailbroken when it is not

I have been using different techniques for jailbreak detection and they were working fine up to iOS 11.4. However, when I upgraded my iOS to the latest 11.4.1, it shows the device as jailbroken when it is not. We are facing this issue only on iOS 11.4.1 and the iOS 12 beta.
These are the techniques we were using:
1. Process forking
2. "CydiaApp" scheme detection
3. Check for suspicious/root folders and files
4. Check for folders that were created during the jailbreak process
5. Check for write permission on non-user folders
Is there anything related to these file and folder access permissions that we are doing wrong in detecting the jailbreak?
Any help will be appreciated.
But which one actually gives you the false positive?
If I were to guess, I would say that checking write permissions is not reliable. The way that iOS protects system files from unauthorized access is not with permissions but mainly with sandbox profiles. With those in place, Apple can assign any permissions they want to system files; sandboxing will still protect the system. Even when you jailbreak your phone you still have sandboxing in place (I don't remember any jailbreak that would disable sandboxing completely), and you are often limited by it when, for example, injecting your CydiaSubstrate dylibs into system daemons/applications that operate under sandbox profiles. That's the whole security model of iOS: code signing, entitlements, sandboxing, IPC. There is no need for POSIX permissions, which Apple actually doesn't use that much.
Checking for suspicious directories and files could also give you false positives and is not very reliable in general. Apple often changes its root file system; you never know what might be in the new iOS version. Of course, if it's related to Cydia then it should be OK.
And that's, in part, why Apple doesn't like App Store apps checking for jailbreak and often rejects them because of it. Not only do you try to access something you shouldn't, which makes it hard to distinguish between jailbreak detection and actual usage of private API to circumvent iOS security; but given that jailbreak is all about very specific kernel patches and things you wouldn't have access to inside an App Store app due to sandboxing anyway (launching unsigned binaries, modifying the root partition), there's no reliable way of detecting jailbreak in general. Forking, Cydia, support for CydiaSubstrate: all of that is optional and depends on the specific jailbreak implementation. With recent jailbreaks that's even more relevant, as all of them are very different and not completely finished, lacking some features that were standard in the past. Even more importantly, if Apple decides to change something in iOS, it might trigger jailbreak-detecting code by accident. A false positive is much worse than a false negative.
In the end, no matter what you do, any jailbreak detection code can be easily patched or hooked. When someone is in control of the system, apps are no longer able to protect themselves. That's all off-topic, but it's a good reason to just ignore jailbreak.

Can I check in unix binaries (compiled executable, libraries, etc) into PVCS

We are starting up a Unix development engagement and evaluating version control options.
Specific question: Does PVCS deployed on a Unix platform support checking in compiled code from a Unix build environment?
If so, example command perhaps?
Not looking to hear about other SCM systems at this point.
I doubt that PVCS even distinguishes between binary and text files, and even if it does, it should support the notion of a binary file.
PVCS apparently doesn't do merging (not as a built-in operation), so there really isn't much it needs to do to "support" checking in Unix binaries.
You may have problems dealing with the file permissions; however, I would consider that a security feature: the files shouldn't be marked as executable unless you intend them to be executed, and a deploy script would more than achieve this (see the sketch below).
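A minimal sketch of that deploy-script idea, assuming Python is available on the target; the file list is a placeholder for your real deliverables:

    import os
    import stat

    # After fetching the binaries from version control, explicitly mark
    # only the intended executables as runnable.
    EXECUTABLES = ["bin/myapp", "bin/mytool"]  # placeholder paths

    for path in EXECUTABLES:
        mode = os.stat(path).st_mode
        os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)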
That said, semantically it is problematic to ask whether it "supports" checking in Unix binaries: can a system that happens to allow such files to be checked in be claimed to "support" those files if it provides no features that ease management of those specific files, as distinct from other types of files?
Unfortunately, their website is so full of marketing information that it is next to impossible to find this out. Seriously, pick a different VCS if at all possible. Heck, even Perforce would be a better choice; they provide tools for almost every current operating system and many levels of documentation. (Personally, I'm inclined to recommend Git, although Perforce would be a better choice in this case if you are more interested in versioning many binary files.)

Hosting big files for users

We need to be able to supply big files to our users. The files can easily grow to 2 or 3 GB. These files are not movies or similar; they are software needed to control and develop robots in an educational setting.
We have some conflict in our project group about how we should approach this challenge. First of all, BitTorrent is not a solution for us (despite the goodness it could bring us). The files will be available through HTTP (not FTP) and via a file stream, so we can control who gets access to the files.
As a former pirate in the early days of the internet, I have often struggled with corrupt files and used file hashes and file sets to minimize the amount of re-downloading required. I advocate a small application that downloads and verifies a file set, and extracts the big install file once it is completely downloaded and verified.
My colleagues don't think this is necessary and point to the TCP/IP protocol's inherent capabilities for avoiding corrupt downloads. They also mention that Microsoft has moved away from a download manager for their MSDN files.
Are corrupt downloads still a widespread issue, or will the time we spend creating a solution to this problem be wasted compared to the number of people who will actually be affected by it?
If a download manager is the way to go, what approach would you suggest we take?
-edit-
Just to clarify: is downloading 3 GB of data in one chunk over HTTP a problem, or should we make our own EXE that downloads the big file in smaller chunks (and verifies them)?
You do not need to write your own download manager; there is a smarter approach:
1. Split the files into smaller chunks, say 100 MB each. Then even if a download is corrupted, the user only ends up re-downloading that particular chunk.
2. Most web servers are capable of understanding and serving range headers. You can recommend that users use a download manager or browser add-on that can use this capability. If your users are on Unix/Linux systems, wget is such a utility.
It's true that TCP/IP has capabilities for preventing corruption, but it basically assumes that the network is still up and accessible. #2 above is one possible workaround for the case where the network went down completely in the middle of a download (a resume sketch follows).
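A minimal sketch of resuming an interrupted HTTP download with a Range request, assuming the server honors Range headers; the URL and file name are placeholders:

    import os
    import requests  # third-party: pip install requests

    url = "https://example.com/files/robot-sdk.bin"  # placeholder
    dest = "robot-sdk.bin"

    done = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": f"bytes={done}-"} if done else {}

    with requests.get(url, headers=headers, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        # 206 Partial Content means the server honored the Range header;
        # a plain 200 means it didn't, so start over from scratch.
        mode = "ab" if resp.status_code == 206 else "wb"
        with open(dest, mode) as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)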
And finally, it is always good to provide a file hash to your users. This is not only to verify the download but also to verify the integrity of the software you are distributing; a minimal verification sketch is below.
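Verifying a published hash on the client side is only a few lines; a sketch using SHA-256, where the expected value and file name are placeholders:

    import hashlib

    EXPECTED_SHA256 = "..."  # placeholder: publish the real hash next to the file

    h = hashlib.sha256()
    with open("robot-sdk.bin", "rb") as f:  # placeholder file name
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)

    if h.hexdigest() != EXPECTED_SHA256:
        raise SystemExit("Checksum mismatch: re-download the file")
    print("Checksum OK")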
HTH

Drupal development workflow for teams

In my last Drupal project we were 5 people doing coding and installing new modules, while at the same time our client was putting up content. Since we chose to have only one server for simplicity, there were times when many people needed to write to the same files, like style.css or page.tpl.php, or when someone's broken code would prevent others from working.
Are there any best practices for a team that works with Drupal? How can we leverage code repositories or sandboxes?
A single server may appear to give you "simplicity", but what it actually gives you, as you've experienced, is utter chaos -- and you were lucky if it didn't result in unpleasant, hard-to-reproduce, harder-to-fix crashes. Don't settle for anything less than a "production" server (where your client can be working -- on content only -- if they like minor risks;-) and a "staging" one (where anything from the development team goes to get tested and tried for a while before promotion to production, which is done at a quiet and ideally prearranged time).
Second, use a version control system of some kind. Which one matters less than using one at all: svn is popular and simple; the latest fashion (for excellent reasons) is distributed systems such as hg and git; Microsoft and others have commercial offerings in the field; etc.
The point is, whenever somebody updates a file, they do so on their own client of the VCS. When a coherent set of changes is ready, it's pushed to the VCS, and the VCS diagnoses and points out any "conflicts" (places where two developers may have made contradictory changes), so the developer who's currently pushing is responsible for editing the files and fixing the conflicts before their push is allowed to go through. Only then are "current versions" allowed to go onto the staging system for more thorough (and ideally automated!-) testing (or, better yet, into a "continuous build" system).
Basically, there should be two layers of defense against conflicts such as you observed, and you seem to have deployed neither. Both are essential, though if forced under duress to pick just one, I guess I'd reluctantly pick the distinction between production and staging servers -- development will still be chaotic (intolerably so, compared to the simple solidity of any VCS!) but at least it won't directly hurt the actual serving system;-).
Here's a great writeup about development workflow in Drupal. It sums up everything responded here so far and adds "Features", "Strongarm" and a few more tricks to the equation: http://www.lullabot.com/articles/site-development-workflow-keep-it-code

Advantages of a build server?

I am attempting to convince my colleagues to start using a build server and automated building for our Silverlight application. I have justified it on the grounds that we will catch integration errors more quickly, and will also always have a working dev copy of the system with the latest changes. But some still don't get it.
What are the most significant advantages of using a Build Server for your project?
There are more advantages than just finding compile errors earlier (which is significant):
Produce a full clean build for each check-in (or daily, or however it's configured); see the sketch after this list
Produce consistent builds that are less likely to have just worked due to left-over artifacts from a previous build
Provide a history of which change actually broke a build
Provide a good mechanism for automating other related processes (like deploy to test computers)
Continuous integration reveals any problems in the big picture, as different teams/developers work in different parts of the code/application/system
Unit and integration tests run with each build go even deeper and expose problems that might not be seen on the developer's workstation
Free coffee/candy/beer. When someone breaks the build, he/she makes it up to the other team members...
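To make the "full clean build" point above concrete, here is a minimal sketch of what a build server does on every check-in; the repository URL and the build/test commands are placeholders for whatever your project uses (for a Silverlight app that would typically be MSBuild):

    import subprocess
    import tempfile

    REPO = "https://example.com/scm/project.git"  # placeholder

    # Clean checkout into a fresh directory, then build and test, so no
    # left-over artifacts from a previous build can mask problems.
    with tempfile.TemporaryDirectory() as workdir:
        subprocess.run(["git", "clone", "--depth", "1", REPO, workdir], check=True)
        subprocess.run(["msbuild", "Project.sln", "/p:Configuration=Release"],
                       cwd=workdir, check=True)  # placeholder build command
        subprocess.run(["vstest.console", "Tests.dll"],
                       cwd=workdir, check=True)  # placeholder test command
    print("Build and tests passed")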
I think if you can convince your team members that there WILL be errors and integration problems that are not exposed during the development time, that should be enough.
And of course, you can tell them that the team will look ancient in the modern world if you don't run continuous builds :)
See Continuous Integration: Benefits of Continuous Integration :
On the whole I think the greatest and most wide ranging benefit of Continuous Integration is reduced risk. My mind still floats back to that early software project I mentioned in my first paragraph. There they were at the end (they hoped) of a long project, yet with no real idea of how long it would be before they were done.
...
As a result projects with Continuous Integration tend to have dramatically less bugs, both in production and in process. However I should stress that the degree of this benefit is directly tied to how good your test suite is. You should find that it's not too difficult to build a test suite that makes a noticeable difference. Usually, however, it takes a while before a team really gets to the low level of bugs that they have the potential to reach. Getting there means constantly working on and improving your tests.
If you have continuous integration, it removes one of the biggest barriers to frequent deployment. Frequent deployment is valuable because it allows your users to get new features more rapidly, to give more rapid feedback on those features, and generally become more collaborative in the development cycle. This helps break down the barriers between customers and development - barriers which I believe are the biggest barriers to successful software development.
From my personal experience, setting up a build server and implementing CI process, really changes the way the project is conducted. The act of producing a build becomes an uneventful everyday thing, because you literally do it every day. This allows you to catch things earlier and be more agile.
Also note that setting up a build server is only a part of the CI process, which also includes setting up tests and ultimately automating the deployment (very useful).
Another side-effect benefit that often doesn't get mentioned is that CI tools like CruiseControl.NET become the central issuer of all version numbers for all branches, including internal RCs. You can then require your team to always ship a build that came out of the CI tool, even if it's a custom version of the product.
Early warning of broken or incompatible code means that all conflicts are identified asap, thereby avoiding last minute chaos on the release date.
When your boss says "I need a copy of the latest code ASAP" you can get it to them in < 5 minutes.
You can make the build available to internal testers easily, and when they report a bug they can easily tell you "it was the April 01 nightly build" so that you can work with the same version of the source code.
You'll be sure that you have an automated way to build the code that doesn't rely on libraries / environment variables / scripts / etc. that are set up in developers' environments but hard to replicate by others who want to work with the code.
We have found the automatic VCS tagging of the exact code that produced a version very helpful for going back to a specific version to replicate an issue; a sketch of that step follows.
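A minimal sketch of that tagging step, assuming Git and a build number supplied by the CI tool (both assumptions; the answer above doesn't name a specific VCS):

    import subprocess

    build_number = "1.4.0.1234"  # placeholder: injected by the CI tool
    tag = f"build-{build_number}"

    # Tag the exact revision that produced this build, so it can be
    # checked out later to reproduce an issue in that version.
    subprocess.run(["git", "tag", "-a", tag, "-m", f"CI build {build_number}"], check=True)
    subprocess.run(["git", "push", "origin", tag], check=True)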
Integration is a blind spot
Integration often doesn't get any respect: "we just throw the binaries into an installer thingie". If this doesn't work, it's the installer's fault.
Stable Build Environment
Prevents excuses such as "this error sometimes occurs when built on Joe's machine". Prevents accidentally using old dependent libraries when building on Mike's machine.
True dogfooding
Your in-house testers and users get a true customer experience. Your developers have a clear reference for reproducing errors.
My manager told us we needed to set one up for two major reasons. Neither was really to do with the final product; both were about making sure that what is checked in or worked on is correct.
First, to clean up DLL Hell. When someone builds on their local machine, they can be pointing at any reference folder. Lots of projects were getting built with the wrong versions of DLLs because someone hadn't updated their local folder. On the build server it will always be built from the same source; all you have to do is get latest to get the latest references.
The second major thing for us was a way to support projects with little knowledge of them. Any developer can grab the source and make a minor fix if required. They don't have to mess with hours of setup or finding references. We have an overseas team that works primarily on one project, but if there is a rush fix we need to do during US hours, we can grab the latest and be able to build without worrying about broken source or what didn't get checked in. Gated check-ins save everyone else on your team time.
