JWrapper - JVM Options on launch

I'm experimenting with using JWrapper to create executables for a Java application. I see we can set JVM options when we create the wrapper, but can you, and if so how do you, set the JVM options dynamically, post-install, when launching?
For instance, on a 32-bit machine we want to use -Xmx1536m, but on a 64-bit machine we want -Xmx4096m.
Furthermore, depending on application demands, one user may need a larger stack size, so we need to set -Xss dynamically. Can this parameter be passed to the executable somehow? Or can we use one virtual app to launch another virtual app with programmable JVM options?
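I'm not certain JWrapper exposes this directly, but the relaunch idea in that last question can be done in plain Java: a small launcher main class inspects the JVM it is running on and re-launches the real application with the options it wants. A rough sketch, where the entry-point class name and the option values are only placeholders:

import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class Relauncher {
    public static void main(String[] args) throws Exception {
        // Path to the java binary of the JVM we are currently running on.
        String java = System.getProperty("java.home")
                + File.separator + "bin" + File.separator + "java";

        // "sun.arch.data.model" is "32" or "64" on HotSpot JVMs;
        // fall back to os.arch if it is not available.
        String bits = System.getProperty("sun.arch.data.model",
                System.getProperty("os.arch", ""));
        boolean is64 = bits.contains("64");

        List<String> cmd = new ArrayList<>();
        cmd.add(java);
        cmd.add(is64 ? "-Xmx4096m" : "-Xmx1536m");
        cmd.add("-Xss2m");                        // could also be read from a config file
        cmd.add("-cp");
        cmd.add(System.getProperty("java.class.path"));
        cmd.add("com.example.RealMain");          // hypothetical real entry point
        for (String a : args) cmd.add(a);         // pass the original arguments through

        // Start the real application with the chosen options and wait for it.
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        System.exit(p.waitFor());
    }
}

The wrapped executable would then always start this lightweight launcher, and the real JVM options are decided at run time.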

Related

Is it possible to modify the NotebookApp config options for a Jupyter server instance while it's running?

I do most of my dev work locally, but occasionally I have to switch over to using a preconfigured JupyterLab instance on GCP. The way things are set up now, I'm unable to SSH into these notebook servers, and the only way for me to interact with them is through the JupyterLab integrated terminal.
I have custom save hook functions set up in my local environment for testing, linting, etc., which comes in really handy for keeping everything in a production-ready state. I'd like to be able to set up a sort of "environment as code" system where, when I pull updated code into a new environment, the customized configuration moves with it and takes effect automatically. I suppose the proper way to do this would be to use a Docker image and rebuild the cloud instance from scratch every time it needs an update, but that seems like overkill for such minor changes. (Also, the Google Docker images don't work that well on my M1 MacBook.)

Running a companion application at install

I have two WPF applications in the same solution. One is a configuration helper for the other and needs to be run before the 'big' app is run. In the VS Setup project I have included the Primary Output from both applications.
I want to run the configuration helper during the Commit phase of setup, so I added a Custom Action consisting of the Primary Output of the configuration helper and marked the Installer Class as false.
When I run the resulting MSI, both applications are installed in the same folder as desired, but I then get an error that 'a program run as part of the setup did not finish as expected.' The MSI then uninstalls.
I was hoping the configuration helper would be kicked off as the MSI exits, but I would also be happy with the installer staying open until the configuration helper exits.
What am I missing?
The program you ran as a custom action has failed, probably crashed. It may need some extra error checking or tracing to see what's going on. Programs that run as custom actions are not in the same environment as programs run from the interactive user's desktop. The working directory is probably not what you expect (so file paths must be specified in full), and it's probably running under the system account, because that's how 'Everyone' (per-machine) installs work, so any assumptions about user locations (including the interactive user's desktop, user folders, access to the network, access to databases, ability to show forms) will be wrong and are likely to be failure points. It's better to run configuration tools like this when the app first starts, because you are then running in a normal user environment.
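To illustrate that last suggestion: the apps in the question are WPF, but the first-run pattern itself is small. Here it is sketched in Java, with the marker-file name and helper path as placeholders only.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FirstRunCheck {
    public static void main(String[] args) throws IOException, InterruptedException {
        // A marker file in the user's profile records that configuration already ran.
        Path marker = Paths.get(System.getProperty("user.home"), ".myapp-configured");

        if (!Files.exists(marker)) {
            // Run the companion configurator in the normal user environment
            // and wait for it to finish before the main app continues.
            Process helper = new ProcessBuilder("ConfigurationHelper.exe") // placeholder path
                    .inheritIO()
                    .start();
            if (helper.waitFor() != 0) {
                System.err.println("Configuration helper failed; aborting start-up.");
                return;
            }
            Files.createFile(marker);
        }

        // ...continue with normal application start-up here...
    }
}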

FinderSync invalidated on El Capitan

We have an application written in Mono that needs to communicate with a Finder Sync app extension.
All was working fine until we tried our app on El Capitan instead of Yosemite.
We use a shared SQLite database to track which paths are in which state, and we use NSDistributedNotificationCenter for communication between the two.
The shared SQLite database is outside of the sandboxed environment, so we have put an exception in our entitlements: com.apple.security.temporary-exception.files.home-relative-path.read-write
If we remove this exception from the app extension, the extension works (but obviously we can't read our DB).
Then we thought of putting the SQLite DB into memory, but a shared in-memory database isn't possible across multiple processes.
I can't find out how to create an NSFileHandle for a SQLite connection.
We could send all the info over to the application extension, but then it has to keep everything in memory (preferably in SQLite, because we need to do some querying).
Does anyone have some pointers on what we could do?
Try looking at the Application Group Container Directory; it might do in your case. Basically it allows you to have a shared folder between apps and extensions.
App group container directories. A sandboxed app can specify an entitlement that gives it access to one or more app group container directories, each of which is shared among all apps with that entitlement.
After some research on a similar problem, I found it's much easier to have a simple TCP server in the main app that responds to the extension with the file status. This way you can easily broadcast file status changes to all extension instances, etc.
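The main app in the question is written in Mono, but just to show the shape of that idea, here is a rough sketch of such a status server in Java; the port number, the example path, and the status values are made up for illustration.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FileStatusServer {
    // Path -> sync status, maintained by the main application.
    private static final Map<String, String> STATUS = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        STATUS.put("/Users/demo/Synced/file.txt", "SYNCED"); // example entry

        // Listen on localhost only; 53211 is an arbitrary example port.
        try (ServerSocket server = new ServerSocket(53211, 50, InetAddress.getLoopbackAddress())) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    // Protocol: the extension sends one path per line,
                    // the server answers with that path's status.
                    String path = in.readLine();
                    out.println(STATUS.getOrDefault(path, "UNKNOWN"));
                }
            }
        }
    }
}

The extension then only needs a loopback connection per lookup instead of direct access to the database file.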

Dart lang app with OpenStack / Docker / Vagrant

I'm a newbie with these technologies (OpenStack / Docker / Vagrant), and I'm not sure I understand them correctly (most likely I do not). My understanding is that they give you something like a portable application environment, so the whole development team runs with the same configuration. But I don't understand what happens after development, and how to benefit from them with a Dart app.
My questions are:
1. Correct my understanding.
2. Do I need the end user to have these things installed on their system and to run my application through them, the same as in the development stage?
3. How can I build/develop/distribute a Dart app through them? Maybe because these, as well as Dart, are new, I could not find enough info while googling.
Thanks
Docker is similar to a virtual machine like VMware or VirtualBox in that it creates an abstraction layer between the host operating system and the operating system running within a Docker container. The difference is that Docker doesn't emulate the entire hardware. The disadvantage is that Docker only runs on Linux and only Linux can be run inside Docker. If your host is an Intel system you can't run an ARM Linux inside the container. (Theoretically you could run VirtualBox inside Docker and run Windows or other OSes in it.)
With Docker you can test your application locally in the same environment as the application will run when deployed.
When you create an application you want to run on Google Compute Engine, for example, you install and test it locally inside a Docker container and then deploy the Docker container to Google Compute Engine as a whole unit. When there is a bug in the deployed application, you should be able to reproduce it locally as well, because it's a 1:1 copy. No bug can have been introduced by the operating system or other dependencies being installed differently in the deployment environment than in the development/test environment.
The Dockerfile is a set of instructions for setting up a Docker container. If you want to create a new Docker container (for example, for a new developer), you just let Docker process the Dockerfile and a new container is created from it. This makes it easy to create new containers.
If you want to update one dependency to a newer version, or add or remove components in the environment, you change the Dockerfile and create a new container from it. This way you avoid the situation where manual additions to and removals from an existing container let the containers of different developers, testers, and deployments diverge from each other.
I haven't used OpenStack myself but from the web page it seems to provide components and tools to build and manage your own cloud infrastructure.
I also haven't used Vagrant myself, but it seems to help automate a lot of tasks related to creating and managing virtual machines with VMware, VirtualBox, Docker, and probably others.
A server application, for example, probably consists of a number of components that you don't want to run all in one container, but rather split up into several containers: one for the database, one for the web server, one for the backend application (written in Dart, for example), and so on. It can become cumbersome to manage all those containers, and Vagrant helps to automate the related tasks.

How to use a virtual machine with automated tests?

I am attempting to set up automated tests for our applications using a virtual machine environment.
What I would like to have is something like the following scenario:
Build server is automatically triggered to start an automated test for the application
A "build" script is then run which consist of:
Copy application files and a test script to a location accessible by the VM
Start the VM
In the VM, a special application looks in the shared folder and start the test script
The tests script do its job, results are output to shared folder
Test script ends
The special application then delete the test script
The special application somehow have the VM manager close the VM and revert to the previous snapshot
When the VM has exited, process the result and send to build server.
I am using TeamCity if that matters.
For virtual machines we use VirtualBox, but we are open to any other if needed.
Are there any applications/suites that would manage this scenario?
If there are none, I would code it myself; it should be easy, but the only part I am not sure about is the handling of the virtual machine.
What I need to be able to do is have the VM close itself after the test and revert to a previous snapshot, since I want it to be in a known state for the next test.
Any pointers?
I have a similar setup running, and I chose to use Vagrant as it's the same thing our developers were using to normalize the development environment.
The initial state of the virtual machine was scripted using Puppet, but we didn't run the deployment scripts from scratch on each test, only once a day.
You could use Puppet/Chef for everything, but for all other operations on the VM we used Fabric scripts, as they were also used for the real deployment and somehow fitted how we worked better. In sum, the script would look something like the following:
vagrant up # fire up the vm, and run the puppet provisioning tool
fab vm run_test # run tests on vm
fab local process_result # process results on local shared folder
vagrant destroy # destroy the vm
The advantage is that your developers can also use Vagrant to mimic your production environment without having to take care of that themselves (e.g. changes to your database settings get synced to all your developers' VMs wherever they are), and the same scripts can be used in production too.
VirtualBox does have a COM API. I have no experience with it, but it may be possible to use that. One option would be to have TeamCity fire off a script to do this. I'd suggest starting with NAnt (supported natively by TeamCity) and possibly executing PowerShell if necessary.
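If scripting it yourself, one route that avoids the COM API is the VBoxManage command-line tool, which can stop a VM, restore a snapshot, and start it again. A rough sketch of that reset step, driven from Java here; the VM and snapshot names are placeholders only.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class VmReset {
    // Runs one VBoxManage command and fails fast on a non-zero exit code.
    private static void vbox(String... args) throws Exception {
        List<String> cmd = new ArrayList<>();
        cmd.add("VBoxManage");
        cmd.addAll(Arrays.asList(args));
        int exit = new ProcessBuilder(cmd).inheritIO().start().waitFor();
        if (exit != 0) {
            throw new RuntimeException("VBoxManage failed: " + cmd);
        }
    }

    public static void main(String[] args) throws Exception {
        String vm = "test-vm";           // placeholder VM name
        String snapshot = "clean-state"; // placeholder snapshot name

        vbox("controlvm", vm, "poweroff");         // stop the VM after the test run
        vbox("snapshot", vm, "restore", snapshot); // revert to the known-good snapshot
        vbox("startvm", vm, "--type", "headless"); // bring it back up for the next run
    }
}

TeamCity could run this (or an equivalent NAnt/PowerShell step) as the last step of the build, so every run starts from the same snapshot.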
Though I don't have any experience with either, I happen to have heard of a couple of applications in this space recently:
http://www.infoq.com/news/2011/05/virtual_machine_test_harness
http://www.automatedqa.com/techpapers/testcomplete/automated-testing-in-virtual-labs/
