Chromium profile directory is already used by another BrowserContext instance or process - jxbrowser

I am evaluating JxBrowser 6.14 and wrote a demo to try it, but I have a problem with it.
I use the demo app to start an application that shows a web UI and keep that application open. When I then start the demo app a second time, the system throws the exception below:
Chromium profile directory is already used by another BrowserContext instance or process
Can JxBrowser not run two clients on one PC? If it can, how do I resolve this?

We strongly recommend that you don't use several BrowserContext instances with the same profile directory. The Chromium engine wasn't designed for such usage and doesn't support it. Even if you don't see any issues right now, issues will appear later in end-user environments. For example, on macOS you will get Chromium's error message dialog every time you run an application instance built this way.
Since this is a critical requirement of the Chromium engine, I don't think we will make it configurable in future versions. This is how the Chromium engine works, and it is a recommendation we have to follow when working with it.
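If you do need to run two instances of the application on the same PC, the usual way around this is to give each instance its own Chromium profile directory. Below is a minimal sketch assuming the JxBrowser 6.x BrowserContextParams API; the temp-directory path and class name are only illustrative:

import com.teamdev.jxbrowser.chromium.Browser;
import com.teamdev.jxbrowser.chromium.BrowserContext;
import com.teamdev.jxbrowser.chromium.BrowserContextParams;

import java.util.UUID;

public class ProfileDemo {
    public static void main(String[] args) {
        // Give every running instance its own Chromium data directory so that
        // two processes never share the same profile directory.
        String dataDir = System.getProperty("java.io.tmpdir")
                + "/jxbrowser-profile-" + UUID.randomUUID();

        BrowserContext context = new BrowserContext(new BrowserContextParams(dataDir));
        Browser browser = new Browser(context);
        browser.loadURL("https://www.example.com");
    }
}

If the instances need to share cookies or other profile data, you would have to synchronize that at the application level yourself, since sharing the directory itself is not supported.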

Related

MRTK: How to fix SpectatorView from Android-device not pairing with HoloLens? (QR-code)

Problem summary
I'm attempting to establish a connection between a HoloLens and an Android device, which worked sporadically in the beta version of the MRTK.
However, since moving to MRTK RC1 (also Refresh), I've encountered issues with the QR scanning. When pressing Connect, the two devices seemingly find each other, but when the wearer of the HoloLens 1 looks directly at the QR code, nothing happens (the white dot and "Locating marker..." text remain on screen).
Background summary
1. The Setup:
Implemented working MRTK RC1 Refresh
Cloned Feature-SpectatorView separately, copying only the "MixedRealityToolkit.Extensions" folder to the MRTK project.
"Spectator View - HoloLens" prefab added to scene.
First pressing "HoloLens" in the PlatformSwitcher, building for HoloLens1, then switching to "Android" and exporting the project to Android Studio.
Building the .apk from Android Studio
(The OpenCV binaries were downloaded and set up back in the beta version; I haven't changed them since they last worked.)
2. The Process:
On the HoloLens, I press the "Connect" button, upon which white text appears saying "Locating Marker..."
On the Android phone I press Connect and it goes to "Waiting for User"; then, as soon as a HoloLens is connected, it immediately switches to a QR code that should be readable by said HoloLens.
Looking directly at the QR code, nothing new happens and the connection does not establish any further.
I checked whether something was left unticked in Player Settings/Capabilities, but I can't seem to find what the culprit would be. Did I forget something in this process?
There are a few things that could be causing this issue.
If the Android device is showing a marker, this means the two devices have established a network connection and are communicating with one another. Typically, when I run spectator view I enable the following capabilities: "Internet (Client & Server), Internet (Client), Microphone, Pictures Library, Private Networks (Client & Server), Spatial Perception, Videos Library, Webcam" in the Package.appxmanifest in Visual Studio. Pressing "HoloLens" on spectator view's Unity platform switcher should typically enable these capabilities, but sometimes the Package.appxmanifest doesn't get updated correctly in the Visual Studio project with subsequent builds in Unity. You can fix this by deleting your Visual Studio directory and rebuilding the Visual Studio project from Unity.
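For reference, a rough sketch of what those entries typically look like inside Package.appxmanifest (element names and namespace prefixes are from memory and may differ slightly between SDK versions, so treat this as an illustration rather than an exact listing):

<Capabilities>
  <Capability Name="internetClient" />
  <Capability Name="internetClientServer" />
  <Capability Name="privateNetworkClientServer" />
  <uap:Capability Name="picturesLibrary" />
  <uap:Capability Name="videosLibrary" />
  <uap2:Capability Name="spatialPerception" />
  <DeviceCapability Name="microphone" />
  <DeviceCapability Name="webcam" />
</Capabilities>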
If these capabilities are checked in the package.appxmanifest, it may be that you rejected a capability request when first running the application. If you open Settings -> Privacy -> Camera on the HoloLens, you can check whether your deployed spectator view application has camera access granted. You should be able to enable the camera functionality here if it is disabled.
There have been changes to both MixedRealityToolkit and MixedRealityToolkit-Unity spectator view logic, so cloning these items at different points in time may cause functions to no longer resolve (we're hoping to consolidate this code into the same repo/commit history in the future to prevent this from continuing to happen). Typically, if the DLL functionality is not resolving correctly, the Unity logs will contain errors stating that a function was not found for SpectatorViewPlugin.dll. It sounds like this is not the issue you are hitting, since things worked previously. But if it does turn out to be the case, you may need to rebuild the SpectatorViewPlugin.dll to match the feature/spectatorView code you are using.
If you recently copied the SpectatorViewPlugin.dll and its dependencies to a new Unity project, it may be that they aren't getting registered as usable by the Windows UWP Unity player. Make sure these binaries are in a Plugins\WSA\x86 folder within your Assets folder. Also check the *.dll.meta definitions in the Unity Inspector to ensure the DLLs are declared as usable for the Unity WSA player/x86 builds.

What to use instead of Azure Web Apps to allow installation of google chrome in app environment?

I've just created a feature for our application that generates a PowerPoint report from the data a given user has in our system.
In short, the server spawns an instance of Google Chrome using Selenium's ChromeDriver and scrapes the charts out of our application running in Chrome. It was done this way to ensure the charts in the report look exactly the same as they appear in the clients' browsers.
We use Azure Web Apps to host our development and production environments, and while my reporting feature works fine locally, it doesn't work once deployed to any other environment, because it depends on Chrome being installed, and I can't get it installed in the Azure Web App sandboxed environment.
(You can see this other question of mine for a bit of a reference to where things are going wrong: PowerShell StartProcess: invalid handle)
So what I pretty much want to know is: if an Azure Web App environment isn't going to let me install Google Chrome, where should I look next?
It looks like using Service Fabric might allow me to install what I need (https://learn.microsoft.com/en-us/azure/app-service/choose-web-site-cloud-service-vm), but it seems like a big change to make just to facilitate this small part of the feature.
Another option is to just re-architect the feature so it doesn't depend on the server spawning an instance of Google Chrome, but I'd prefer to avoid that if there's a straightforward way to get what I have working.
Ideally, there'd just be a way to get Google Chrome installed in the given environment, but I've spent a good 10 hours trying to make that happen, and it's not looking promising.
There are a couple of solutions that would work, depending on your code and framework dependencies.
IMO, the simplest way would be to build your code into a Docker container (one that runs the Selenium ChromeDriver) and deploy it either through the container features on Web Apps, or run it on demand through ACI (Azure Container Instances) and have it create the report and drop it in Azure Storage. In a container you have a lot more options, and a great deal of choice in how you run it: spinning up an ACI instance on demand to do the job can be done in multiple ways (e.g. from code, through Logic Apps, or via PowerShell/Azure Automation).
Here are some links on running containers in your App Service:
https://learn.microsoft.com/en-us/azure/app-service/containers/
https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image
You could start off by building and adding your code from this image: https://github.com/SeleniumHQ/docker-selenium
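Once such an image is built and pushed to a container registry, running it on demand from ACI can be a one-liner; the resource group, container name and image below are placeholders, not values from the original answer:

az container create --resource-group reports-rg --name report-runner --image myregistry.azurecr.io/report-runner:latest --restart-policy Never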
There are other alternatives, of course: you could have a VM that you can install whatever you want on and use on demand; however, that adds more management overhead and other implications to think about.
There are many options, but in the regular Web App sandbox you're limited.
I have run into this problem myself, with chromedriver.exe needing a real Chrome. Since I cannot install Chrome in Azure App Service, I am trying a portable version of Chrome. When using the Chrome WebDriver, I tell it where to find the Chrome binary:
// Point Selenium at the portable Chrome binary instead of a locally installed one.
var options = new ChromeOptions();
options.AddArguments("headless"); // plus any other options you need
options.BinaryLocation = "YOUR CHROME BINARY PATH HERE"; // path to the portable chrome.exe
var driver = new ChromeDriver("YOUR CHROME DRIVER PATH HERE", options); // folder containing chromedriver.exe
You should be able to just copy the Chrome Portable files, since no installation is required, although it is heavy (around 250 MB) because it includes the non-portable version inside.
Be sure to use a Chrome version compatible with your ChromeDriver, as noted in the documentation.

Deploying a salesforce.com flex app without visualforce

SHORT VERSION: I have a Flex app that uses Salesforce.com's API. I am trying to deploy it to a remote server but keep getting "Error during login process." when I try to have it log in to salesforce's servers. What gives?
LONG VERSION (maybe someone finds this useful later): I have a flex application that's an add-on for salesforce.com
If I upload it as a static file to salesforce and then embed it in a visualforce page, it works fine. This method uses "loginBySessionId" rather than loginByCredentials.
I would like to be able to run it outside of Salesforce's servers, i.e. I would like to host the app on my own server, have people enter their credentials in the app, and have it log in to Salesforce's servers. This way, if someone wants to try my application, they do not have to be a Salesforce administrator and do not have to install the app into a Visualforce page.
Here's where the trouble is. If I enter my login information and run it from the compiler, it connects and loads the right data. If I export it as a production release, it still runs fine. However, if I either upload the release files to my own server or transfer them to another computer and run them locally, I get an "Error during login process". It seems some others have had similar issues, but with no solutions and nothing new.
Weirder still, if I transfer the project files to another computer and recompile them, it suddenly works. So basically, it seems like I have to recompile the app for each computer I plan on running it on, but that's not practical. Even so, I don't see how compiling on one machine versus the other could possibly make a difference. And yes, same version of Flash, same version of Flex.
Does anyone have any suggestions on how to resolve this? Am I just misunderstanding something with how to deploy flex applications or is this some screwy thing with the salesforce API and there's a workaround?
One thing that makes this problem particularly frustrating is that I can't use the debugger, because if I compile it on another computer it works; so to reproduce the error I have to build and then transfer the files to another computer. I feel like this could be a key to the problem, but I'm not sure how.
Here is some applicable code, pretty basic:
<flexforforce:F3WebApplication
id="app" statusChanged="statusChangedHandler(event)"
loginComplete="loginCompleteHandler(event)"
loginFailed="loginFailedHandler(event)"
sessionExpired="sessionExpiredHandler(event)"
serverUrl="http://na9.salesforce.com/services/Soap/u/19.0"
requiredTypes="Account,Contact,Opportunity,Lead,Task,User" />
protected function loginClickHandler( event : MouseEvent ) : void {
_username = 'LOGIN#LOGIN.COM';
_password = 'PASSWORD+SECURITY_TOKEN';
CursorManager.setBusyCursor();
app.loginByCredentials( _username, _password );
}
To clarify, you probably need something like this on initialization:
flash.system.Security.loadPolicyFile("http://na9.salesforce.com/services/Soap/crossdomain.xml");
The reason it works when you compile it is that a lot of the default security does not apply when the SWF runs on the same machine where it was compiled. Heck, you can even access the hard drive via paths (like a relative URL path to an image on the hard drive); try running the SWF on another computer and bam, no go.
This is an excellent indicator you're hitting a player / VM security issue :)

How does Adobe ConnectNow load and run acaddin?

I am looking for options to download, install, and run a custom plugin/add-on (an exe or an installer) from my Flash movie, similar to how ConnectNow does it.
When we initiate screen sharing for the first time, ConnectNow prompts us for a mandatory add-in by showing the message "To use this application, you need the Adobe ConnectNow Add-in. Would you like to install it now?". Once we agree, it downloads and installs acaddin.exe at %USERPROFILE%\Application Data\Macromedia\Flash Player\www.macromedia.com\bin\acaddin on our local machine. It then automatically launches acaddin.exe and allows the user to close the browser window from which it was launched.
From the next time onwards, when we log in to ConnectNow, it launches the exe directly.
In this context:
If I were to load my own exe/add-in from Flash, how can I achieve that?
How does the ConnectNow application/Flash determine whether an add-in is already installed?
Connect, and I assume ConnectNow, use hidden, undocumented, private APIs for much of their functionality.
You will not be able to do this.
The best you can hope for is to pass the location of your executable to the browser as a local URL and let the browser handle it. I assume in most cases the browser will reject its execution. Can you imagine the potential for abuse of such a feature?
Instead of using a browser-based app, you may want to investigate using AIR and NativeProcess, as in the sketch below.
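As a rough sketch of that AIR route (it requires the extendedDesktop profile, i.e. an application installed via a native installer, and the executable name here is just a placeholder):

import flash.desktop.NativeProcess;
import flash.desktop.NativeProcessStartupInfo;
import flash.filesystem.File;

if (NativeProcess.isSupported)
{
    var info:NativeProcessStartupInfo = new NativeProcessStartupInfo();
    // Point at an executable shipped alongside your AIR application.
    info.executable = File.applicationDirectory.resolvePath("MyAddIn.exe");

    var process:NativeProcess = new NativeProcess();
    process.start(info);
}
else
{
    trace("NativeProcess is not supported in this runtime profile.");
}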

Silverlight Multiple Application Debugging

I have three Silverlight 3 applications in the same solution. In my ASP.NET hosting project I have a separate page for each of the three projects. When I navigate between the pages, the only Silverlight breakpoints that get hit are the ones in the initial page I load.
This problem has only started recently. I used to be able to debug across all Silverlight projects at the same time. Any ideas? I have deleted the ClientBin folder, and I have deleted all files and re-retrieved them from source control. Nothing seems to be working.
"The problem has only started recently". What changed? Here are some guesses:-
You upgraded to Windows 7
You installed some more memory
Some other memory-guzzling app is no longer running when you are testing.
By default IE8 will run multiple processes, at least two: one for the browser frame and one for the content of the initial tab. As you open more windows and tabs, IE may add new processes to the set it is currently using.
When you debug, VS will launch a new IE8 session and attach to the process handling the content of the single tab that is open (it doesn't bother attaching to the parent frame process). However, as you navigate around your application, IE8 may start new processes that VS won't be attached to. This forces you to open the Attach to Process dialog and attach manually.
You can control this IE8 feature (called, by the way, LCIE: Loosely Coupled IE) from the registry.
In the key HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main, add a new DWORD value TabProcGrowth and set its value to 1. Now IE8 will only ever create two processes per session: one for the frame and one for all the tab and window contents, which is the one VS will attach to.
This is perhaps a bit draconian if you also use IE8 as your general browser. One option is to keep IE8 for test purposes and use another browser for general browsing. Another option is a variation of the above: instead of creating TabProcGrowth as a DWORD, create it as a string value and set its value to "small". In this mode IE8 is much less aggressive in the number of processes it will open. Of course, you could also create a couple of scripts to create and delete the registry entry, along the lines sketched below.
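For example, a pair of one-line scripts using reg.exe would do it; the first two lines show the DWORD and string variants respectively, and the last removes the override again:

reg add "HKCU\Software\Microsoft\Internet Explorer\Main" /v TabProcGrowth /t REG_DWORD /d 1 /f
reg add "HKCU\Software\Microsoft\Internet Explorer\Main" /v TabProcGrowth /t REG_SZ /d small /f
reg delete "HKCU\Software\Microsoft\Internet Explorer\Main" /v TabProcGrowth /f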
Note that without the registry entry, IE8 uses its own heuristics, which depend on available memory etc., to determine whether a new process is warranted. This might explain why your debugging worked in the past and then, for apparently no reason, stopped working.
Here was the issue:
One of my child windows had a Silverlight app that called a .NET RIA Service, and the service call ended in an error.
The next several times I debugged, the debugger did not attach to the child windows; I had to attach to them manually.
I fixed the RIA Service call so that it did not end in an error, and I had to manually attach to the child windows in that debugging session. However, in subsequent debugging sessions, the debugger attached automatically.
I tried breaking the RIA Service call again and had to attach manually again. What is a little weird is that closing Visual Studio, and even rebooting the machine, does not make Visual Studio attach automatically again. You have to have a debugging session in which the child window makes a successful call to a RIA Service to fix it.
NOTE:
The RIA error that was breaking my debugger was caused by a misspelled include in the domain query, i.e.
return Context.SOME_ENTITY.Include("Misspelled_Association_Property");
Not all RIA exceptions cause this problem.
My scenario has a number of specifics that I will go over. I don't have everything handy to test a more general scenario, but I will when I finish my project, unless someone does so first.
Here is what I have:
I am using a LinqToEntitiesDomainService from the July 2009 Preview release of .NET RIA Services.
To complicate things a little more, since my application uses an Oracle backend, I am using DevArt's dotConnect Entities provider as the Entity Framework model for my domain service.
When I get time, I will try this with the November 2009 RIA release, a standard SQL backend, and EF to see if I still have the same issue. If so, I will report it to Microsoft as a Visual Studio bug.

Resources