I'm creating a fairly straightforward offline installer for my product using the Qt Installer Framework (v3.0).
The product includes a driver installer for a Sentinel HASP protection key. Ideally I would like to present the user with an option to skip running this driver installer (in the case where they have already installed this driver with a previous product installation, for example), but I can't seem to find a concise example in the QtIFW documentation of the best/simplest way to achieve this.
The size of the driver is relatively tiny compared to the size of the main product package, so there's no concern with always including the file.
Let's say we have a driver-installer.exe that installs that driver.
If you want to make its execution effectively optional, you can add the failure return code to the list of accepted return codes for the Execute operation, like below:
component.addOperation("Execute", "{0,xxx}", "driver-installer.exe");
Here xxx should be replaced with the return code driver-installer.exe reports when it fails (or is cancelled).
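For completeness, here is a rough sketch of where that call usually lives: the component's installscript.qs, inside createOperations(). The @TargetDir@ prefix is an assumption that driver-installer.exe is shipped inside the component's data, and xxx stays as the placeholder for your failure code.

function Component()
{
    // default constructor
}

Component.prototype.createOperations = function()
{
    // first create the operations that extract this component's files
    component.createOperations();

    // then launch the bundled driver installer, accepting both success (0)
    // and the failure/cancel code (xxx) so the main installation never aborts
    component.addOperation("Execute", "{0,xxx}",
                           "@TargetDir@/driver-installer.exe");
};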
My problem is that I can't use RStudio at my workplace because IT does not support it. I want to use R and RStudio, which are installed on my personal laptop, from my company laptop (using a modern browser that is behind the firewall). I am considering two options:
Should I build a Docker image for R and RStudio (I see base images are already available)? I am mostly interested in basic R and the dplyr, haven, xporter, and reticulate packages.
Or should I use Binder? I am not a technical person and my programming skills are very limited; can anyone suggest a way?
What exactly is the difference between the Docker option and Binder?
I know I can use RStudio online and get my work done, but with the new paid account I am running out of project hours and it is sometimes very slow. Thanks in advance.
Here are some examples beyond the modern RStudio MyBinder example:
https://github.com/fomightez/pythonista_skewedf
https://github.com/fomightez/r_phylogenetics_worshop
https://github.com/fomightez/chapter7/tree/master/binder
The modern RStudio MyBinder example has been set up as a template repository on GitHub, so you can use it as the starting point for your own repository.
The first one is for a special use of a package not on conda. And I started that one from square one.
The other two were converted from content by others to aid in making them Binder-ready.
You essentially list everything you need from conda in the environment.yml along with the appropriate channels. If you need special stuff not on conda, you need the other configuration files included there.
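As a concrete illustration, an environment.yml for the packages mentioned in the question might look roughly like the following; the exact conda-forge package names (and whether you pin versions) are assumptions to verify against the template repository.

channels:
  - conda-forge
dependencies:
  - r-base
  - r-dplyr
  - r-haven
  - r-reticulate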
Getting everything working can take some iterations of adding things, letting the image get built, and testing that your libraries are available, although your situation doesn't sound overly complex.
The Binder launch badges you see are just images where you modify the URL so that the MyBinder federation site points at your repository. Look at the URL of a working badge and you should see the pattern, with rstudio at the end of the URL that points at your repo. The form at the MyBinder.org site can help with this; however, it is usually easier to just adapt a working launch badge's code copied from elsewhere, since the form isn't set up at this time for making URLs that launch into RStudio.
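For reference, a badge line in a README typically looks something like the one below, with the placeholders filled in for your account, repository, and branch; the ?urlpath=rstudio suffix is what sends the session into RStudio rather than the Jupyter interface (confirm the exact query string against a working example).

[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/<your-username>/<your-repo>/<branch>?urlpath=rstudio)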
Download anything useful you create in a running session. The sessions time out after about 10 minutes of inactivity, although working in RStudio usually keeps them active.
Lack of persistence and limited memory, storage, and compute power can be drawbacks. The inherent reproducibility and portability are advantages.
MyBinder.org doesn't work with private repos. If you have code you don't want to share, you can upload it to the temporary session and use the repo only for specifying the environment. You could host a private BinderHub that does allow the use of private git repositories; however, that is probably overkill for your use case and would exceed your ability level at this time.
GitHub isn't the only place to host repositories that can be pointed at the MyBinder system. If you go to the MyBinder.org page and click where it says 'GitHub' on the left side of the top line of the form, you can see a list of the sources at which you can host a repository and point the system to build an image and launch a container with that specified image.
Building the image from a source repository takes some minutes the first time. Once the image is built on the service, though, launch typically takes less than 30 seconds. Each time you make a change to the source repo, a rebuild is necessary. Some changes don't cause the new build to take as long as the initial one, as some optimization is done to only rebuild what is necessary after a change. Keep in mind there are several members of the federation around the world, and if your traffic gets sent to one where the built image isn't yet available, it will be built from scratch there first.
The Holepunch project is out there to offer some help to users working in the R ecosystem; however, with the R-Conda system that is now integrated into MyBinder, it is pretty much as easy to do it the way I described. Last I knew, the Holepunch route generates a Dockerfile that isn't as easy to troubleshoot as the current R-Conda route. Dockerfiles are essentially the last-ditch configuration option that MyBinder can handle: the other configuration files are much easier and don't require knowing Dockerfile syntax. MyBinder aims to let you take advantage of Docker's ability to provide containers with a specified environment without your needing to know anything about Docker.
There is a Binder Help category for posting to get help at the Jupyter Discourse Forum. Some other examples of posts already there may help you troubleshoot.
Notice of a common pitfall
Most of the configuration files for making a repository Binder-ready are plain text and can be edited right in the GitHub browser interface, without needing git or even cloning the repo locally.
Last I knew, there are two exceptions to this. The postBuild and start configuration files carry permission settings that allow them to be run as scripts, and these get altered in a way that stops them working if you edit them via the GitHub browser interface. (This was my experience when I last tried; your mileage may vary or things may have changed by now.) To edit those, you need git available on a machine of your own: pull a copy from some other source, edit it locally, add it to your repo, and push it back up from your local computer.
(If this is a problem, you can post in the Binder help category of the Jupyter Discourse Forum, and you and I could coordinate: I fork your repo, edit those files to your specifications, and then make a pull request so the changes land back in your repository.)
If you are using Jupyter notebooks extensively, then it may make sense to use Binder.
But if you simply want to use R and RStudio, then all you need is Docker. A good resource is:
https://github.com/rocker-org/rocker
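If you go the Docker route, a minimal way to try it (assuming Docker is installed and you use the rocker/rstudio image from the project above; the password value is arbitrary and yours to choose) is a single command, after which RStudio Server is reachable at http://localhost:8787 in a browser:

docker run --rm -p 8787:8787 -e PASSWORD=choose_a_password rocker/rstudio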
I'm developing a package that has been released, but every week or every few weeks there are new features added to the core version.
What I'd like to do is to notify users that new features are available such as
julia> using Package
Note: new features are available:
- feature 1
- feature 2
call Pkg.update("Package") to make these features available
Is there a standard/built-in way of doing this? I'd rather not make the users have to install the Requests and LibCurl packages for this single feature.
There is a built-in function to download a file, download. You could download the pages for the next three possible release numbers from GitHub:
for version in [v"0.4.1", v"0.5.0", v"1.0.0"]
filename = download("https://github.com/JuliaFinance/Currencies.jl/releases/tag/v$version")
data = readstring(filename)
if data != """{"error":"Not Found"}"""
println("Version v$version is available!")
# this release was tagged on GitHub, notify user
end
end
This will probably only work if you tag your versions on GitHub. You should also hide all of this in a try...catch and suppress any errors, so that a lack of Internet connection does not affect users' ability to use the package (see the sketch below).
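One way that wrapping might look, assuming the loop above is placed in a hypothetical check_for_updates() function that the package calls when it is loaded (the function name and the silent-failure behaviour are my assumptions, not part of the original answer):

function check_for_updates()
    try
        for version in [v"0.4.1", v"0.5.0", v"1.0.0"]
            filename = download("https://github.com/JuliaFinance/Currencies.jl/releases/tag/v$version")
            data = readstring(filename)
            if data != """{"error":"Not Found"}"""
                # tagged release found on GitHub, tell the user how to get it
                println("Version v$version is available! Run Pkg.update(\"Currencies\") to get it.")
            end
        end
    catch
        # no network or GitHub unreachable: stay silent so `using` still works
    end
end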
I would like to run some shell commands during installer setup with the Qt Installer Framework, in order to retrieve information required to configure the installation itself (e.g. listing the network adapters).
Currently QtIFW seems to only allow one to prepare canned shell operations (addOperation, addElevatedOperation) that run after the installer configuration process.
I would like, instead, to run them during the installation set-up, e.g. running an operation in one of the installer pages and retrieving the result.
Something like:
Component.prototype.pageChanged = function (page) {
    if (page === QInstaller.ReadyForInstallation) {
        component.runOperation(...)
    }
};
The Qt Installer Framework provides a way to solve this.
You can use installer.performOperation to run a ConsumeOutput operation.
ConsumeOutput runs an external process and stores the process output in an installer key, as in the sketch below.
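A rough sketch following the shape of the snippet in the question; the key name adapterList and the ipconfig command are placeholders of mine, and the ConsumeOutput argument order (key name, executable, arguments) should be checked against the QtIFW documentation for your version.

Component.prototype.pageChanged = function (page) {
    if (page === QInstaller.ReadyForInstallation) {
        // run the external command and capture its output under the key "adapterList"
        installer.performOperation("ConsumeOutput",
                                   ["adapterList", "ipconfig", "/all"]);
        // read the captured output back and use it to configure the installation
        var adapters = installer.value("adapterList");
        // ... inspect adapters here ...
    }
};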
I would like to run an external process and wait on its result in my installer based on the Qt Installer Framework. How can I do that?
This can be resolved either with the native API (a Windows example) or with Qt's QProcess::waitForFinished() (more universal). So you will likely need to work around the installer's script API with C++ code.
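A minimal sketch of the QProcess variant, assuming you are writing a small custom C++ helper or installer extension rather than staying inside the JS component scripts:

#include <QProcess>
#include <QString>
#include <QStringList>

// Run an external program and block until it exits, returning its exit code
// (or -1 if it could not be started).
int runAndWait(const QString &program, const QStringList &arguments)
{
    QProcess process;
    process.start(program, arguments);
    if (!process.waitForStarted())
        return -1;
    process.waitForFinished(-1); // wait with no timeout
    return process.exitCode();
}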
We have a few driver packages that we pre-install in the driver store on Windows with SetupCopyOEMInf, following Microsoft's suggested procedures. This process has worked properly for years; no problems on XP, Vista, 7, and even 8.
While evaluating Windows 8.1 RTM, we found that our drivers were no longer pre-installing.
Checking the setupapi.dev.log, I found:
!!! sto: Failed to query boot critical status of device class. Error = 0x00000002
and later:
!!! idb: Failed to query inbox status of device class {ff646f80-8def-11d2-9449-00105a075f6b}. Error = 0x00000002
!!! idb: Failed to publish 'C:\Windows\System32\DriverStore\FileRepository\[ourinfname].inf_x86_3361fc76cd85b678\[ourinfname].inf'. Error = 0x00000002
I've pored through the documentation, trying to find out what we're doing incorrectly.
Preinstalling from the command line with pnputil.exe -a or using InstallScript's DIFxDriverPackagePreinstall() produces the same results.
The drivers work on Windows 8.1 if we DON'T try and put them in the driver store. The preinstallation also works if we upgrade a Windows 8 machine that already had our drivers on it to Windows 8.1. In either case, once it's working, it continues to work.
Why is this failing on Windows 8.1?
After two weeks of digging and debugging, it turns out the problem was with our Device Class GUID.
After stripping our INF down to the bare minimum and comparing with another INF that DID properly preinstall on Windows 8.1, I realized that the only difference between the two was the class GUID. I did a quick search for ff646f80-8def-11d2-9449-00105a075f6b and turned up over a thousand hits; not exactly what you want to see from a unique identifier.
I then looked back through 12 years of version control, and found that the person responsible for originally creating our device drivers didn't change the GUID in the wizard-generated INF from the Win2K DDK.
Creating a new unique class GUID solved the problem, and our drivers preinstall properly on Windows 8.1.
I don't know if Microsoft is blocking preinstallation attempts with that GUID specifically, but the bottom line is: if an example says to change the GUID, CHANGE IT!
Here's the sample code for completeness. Don't do this:
;; ********* PLEASE READ ***********
;; The wizard cannot create exact INF files for all buses and device types.
;; You may have to make changes to this file in order to get your device to
;; install. In particular, hardware IDs and logical configurations require
;; intervention.
;;
;; The Win2K DDK documentation contains an excellent INF reference.
;--------- Version Section ---------------------------------------------------
[Version]
Signature="$Windows 95$"
Provider=%ProviderName%
; If device fits one of the standard classes, use the name and GUID here,
; otherwise create your own device class and GUID as this example shows.
Class=NewDeviceClass
ClassGUID={ff646f80-8def-11d2-9449-00105a075f6b}
My previous answer was actually a bit of a red herring. While one should definitely not use the GUID in the sample INF, the REAL problem ended up being with our installer. Turns out, our installation was trying to pre-create the registry key for the class:
HKLM\SYSTEM\CurrentControlSet\Control\Class\{FF646F80-8DEF-11D2-9449-00105A075F6B}
Removing this from our installer fixed the problem.
Microsoft must have changed how driver preinstallation works under the hood compared with previous versions of Windows.
This is sort of related to a previous post of mine. I need to use the bigmemory library on my 32-bit Windows PC to do some ugly matrix calculations. Unfortunately, it appears that the maintainers have temporarily ceased producing Windows binaries. I have Ubuntu on my home PC. I would really like to take the .tar.gz file and build it into a Windows binary that I can actually run at work. I realize there are more efficient ways, like installing Rtools on the Windows machine; however, our IT keeps our admin rights on lockdown, so I can never edit my PATH environment variable. Could anyone provide some general guidance for doing this? Are there any tools I need to install on my Ubuntu PC above and beyond R?
I found similar questions, but nothing that thoroughly answered my questions.
Unless the package source is incompatible with current versions of R, you could use the R project's win-builder site to build a Windows binary. Quoting from the linked site, win-builder is a service:
intended for useRs who do not have Windows available for checking and building Windows binary packages.
As a convenience, Hadley Wickham's devtools package includes a utility function, build_win(), that you can use for this purpose. From ?build_win:
Works by building source package, and then uploading to http://win-builder.r-project.org/. Once building is complete you'll receive a link to the built package in the email address listed in the maintainer field. It usually takes around 30 minutes.
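So the whole workflow is roughly the following, assuming your working directory is the package source (note that in more recent devtools releases this helper has been superseded by the check_win_*() functions):

# install.packages("devtools")   # if devtools is not already installed
library(devtools)
build_win(pkg = ".")   # builds the source package and uploads it to win-builder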
Windows has four sets of environment variables (system, user, volatile, and process sets). The first three sets are stored in the registry, but the process set is not, so even if IT has locked down the registry it's typically still possible to set process environment variables (including the PATH) in a local process, i.e. on a temporary basis. So you might double-check your assumption that you can't modify anything. It's more likely that you can't modify the system variables and registry but can still modify the set in your local process. To check this from the Windows cmd line, enter this:
set mytest=123
set mytest
and if the second line shows that mytest has the value 123 then you likely have all the permissions you need.
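(For example, to make Rtools visible for the current cmd session only, something along these lines would work; the C:\Rtools\bin directory is an assumption about where Rtools is installed on your machine.)

set PATH=C:\Rtools\bin;%PATH%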
Furthermore, anything you need to set is handled automatically for you by R.bat in the batchfiles distribution, so you don't have to set anything yourself.
Just ensure that Rtools and R are installed into the standard locations (you can tell them to skip setting any registry keys during installation), ensure R.bat is on your path or in the current directory, and run:
R.bat CMD INSTALL mypackage.tar.gz
without setting environment variables, registry keys or path.
If that does not work, try Rpathset.bat, also from batchfiles. It is not automatic like R.bat, but on the other hand it is extremely flexible, since you modify the SET statements in it to whatever you need.
There is a PDF document that comes with batchfiles that gives more information.