Unable to disable GPU in CefSharp settings - cefsharp

CefSharp v99.2.120
I am getting the following error in my CefSharp application:
ERROR:gpu_process_host.cc(967) GPU process exited unexpectedly: exit_code=532462766
WARNING:gpu_process_host.cc(1273) The GPU process has crashed 1 time(s)
This repeats until the GPU process has crashed 3 times, and then:
FATAL:gpu_data_manager_impl_private.cc(417) GPU process isn't usable. Goodbye.
I am using the following settings:
CefSettings settings = new CefSettings();
settings.CefCommandLineArgs.Add("disable-gpu");
settings.CefCommandLineArgs.Add("disable-gpu-compositing");
settings.CefCommandLineArgs.Add("disable-gpu-vsync");
settings.CefCommandLineArgs.Add("disable-software-rasterizer");
The app still crashes with the same error.
I also added:
settings.DisableGpuAcceleration();
Still the same result.
I expected no GPU to be in use, but the settings above don't change anything.
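Note that these flags only take effect if they are added before Cef.Initialize is called (and before the first ChromiumWebBrowser is created); flags added afterwards are ignored by the already-running processes. A minimal initialization sketch, assuming the WinForms flavour of CefSharp (adjust the namespace for WPF):

using CefSharp;
using CefSharp.WinForms;

public static class BrowserBootstrap
{
    public static void InitOnce()
    {
        var settings = new CefSettings();

        // These must be added before Cef.Initialize; adding them later has no effect.
        settings.CefCommandLineArgs.Add("disable-gpu");
        settings.CefCommandLineArgs.Add("disable-gpu-compositing");

        // Initialize exactly once, before any ChromiumWebBrowser is constructed.
        Cef.Initialize(settings);
    }
}

If the crash persists even with the initialization ordered like this, the exit code itself (532462766) would be the next thing to look up against Chromium's result codes.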

Related

mpirun error of oneAPI with Slurm (and PBS) in old cluster

Recently I installed Intel oneAPI, including the C compiler, Fortran compiler, and MPI library, and compiled VASP with it.
Before presenting the question, there are a few tricks from the VASP installation that I need to clarify:
glibc 2.14: the cluster is an old machine with glibc 2.12, whereas oneAPI needs 2.14. So I compiled glibc 2.14 and exported the library path: export LD_LIBRARY_PATH="~/mysoft/glibc214/lib:$LD_LIBRARY_PATH"
ld 2.24: the ld version on the cluster is 2.20, while a newer version is needed, so I installed binutils 2.24.
The cluster has one master computer connected to 30 compute nodes. The calculation is run in three ways:
When I run the calculation on the master, it is totally fine.
When I log in to a node manually with the rsh command, the calculation on that node is also no problem.
But usually I submit the calculation script from the master (with Slurm or PBS), and the calculation then runs on a node. In that case, I get the following error message:
[mpiexec#node3.alineos.net] poll_for_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:159): check exit codes error
[mpiexec#node3.alineos.net] HYD_dmx_poll_wait_for_proxy_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:212): poll for event error
[mpiexec#node3.alineos.net] HYD_bstrap_setup (../../../../../src/pm/i_hydra/libhydra/bstrap/src/intel/i_hydra_bstrap.c:1062): error waiting for event
[mpiexec#node3.alineos.net] HYD_print_bstrap_setup_error_message (../../../../../src/pm/i_hydra/mpiexec/intel/i_mpiexec.c:1015): error setting up the bootstrap proxies
[mpiexec#node3.alineos.net] Possible reasons:
[mpiexec#node3.alineos.net] 1. Host is unavailable. Please check that all hosts are available.
[mpiexec#node3.alineos.net] 2. Cannot launch hydra_bstrap_proxy or it crashed on one of the hosts. Make sure hydra_bstrap_proxy is available on all hosts and it has right permissions.
[mpiexec#node3.alineos.net] 3. Firewall refused connection. Check that enough ports are allowed in the firewall and specify them with the I_MPI_PORT_RANGE variable.
[mpiexec#node3.alineos.net] 4. pbs bootstrap cannot launch processes on remote host. You may try using -bootstrap option to select alternative launcher.
I only see this error with oneAPI-compiled code, not with code compiled with Intel® Parallel Studio XE. Do you have any idea what causes this error? Your response will be highly appreciated.
Best,
Léon
Could it be that the Slurm agent does not have the correct permissions or library path?
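If it is an environment issue, one thing worth checking is whether the batch job actually inherits the glibc/binutils paths: a non-interactive Slurm/PBS shell does not read the same startup files as an rsh login, and a tilde inside double quotes is not expanded by the shell. A rough sketch of a Slurm job script that makes the environment explicit and forces the Slurm bootstrap (paths other than the glibc one are placeholders; I_MPI_HYDRA_BOOTSTRAP and I_MPI_HYDRA_DEBUG are standard Intel MPI variables):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=24

# Use $HOME rather than "~": a tilde inside double quotes is not expanded,
# so the dynamic linker would look for a literal "~" directory.
export LD_LIBRARY_PATH="$HOME/mysoft/glibc214/lib:$LD_LIBRARY_PATH"
export PATH="$HOME/mysoft/binutils224/bin:$PATH"   # placeholder path for the newer ld

# Launch the hydra proxies through Slurm instead of ssh/rsh,
# and print bootstrap debug output to see where it fails.
export I_MPI_HYDRA_BOOTSTRAP=slurm
export I_MPI_HYDRA_DEBUG=1

mpirun -np "$SLURM_NTASKS" "$HOME/bin/vasp_std"   # placeholder VASP binary path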

CF_STAGING_TIMEOUT for deploying Shiny app on Bluemix with Cloud Foundry

I have a Shiny app that runs as expected locally, and I am working to deploy it to Bluemix using Cloud Foundry. I am using this buildpack.
The default staging time for apps to build is 15 minutes, but that is not long enough to install R and the packages needed for my app. If I try to push my app with the defaults, I get an error about running out of time:
Error restarting application: sherlock-topics failed to stage within 15.000000 minutes
I changed my manifest.yml to increase the staging time:
applications:
- name: sherlock-topics
  memory: 728M
  instances: 1
  buildpack: git://github.com/beibeiyang/cf-buildpack-r.git
  env:
    CRAN_MIRROR: https://cran.rstudio.com
    CF_STAGING_TIMEOUT: 45
    CF_STARTUP_TIMEOUT: 9999
Then I also changed the staging time for the CLI before pushing:
cf set-env sherlock-topics CF_STAGING_TIMEOUT 45
cf push sherlock-topics
What happens then is that the app tries to deploy. It installs R in the container and starts installing packages, but only for about 15 minutes (a little longer, actually). When it gets to the first new task (package) after the 15-minute mark, it errors out, but with a different, sadly uninformative error message.
Staging failed
Destroying container
Successfully destroyed container
FAILED
Error restarting application: StagingError
There is nothing in the logs but info about the libraries being installed and then Staging failed.
Any ideas about why it isn't continuing to stage past the 15-minute mark, even after I increase CF_STAGING_TIMEOUT?
The platform operator controls the hard limit for the staging timeout and the application startup timeout. CF_STAGING_TIMEOUT and CF_STARTUP_TIMEOUT are cf CLI configuration options, read from the local environment of the machine running the CLI, that tell the CLI how long to wait for staging and app startup; they do not raise the platform-side limit.
See docs here for reference:
https://docs.cloudfoundry.org/devguide/deploy-apps/large-app-deploy.html#consid_limits
As an end user, it's not possible to exceed the hard limits put in place by your platform operator.
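In practical terms, CF_STAGING_TIMEOUT is read from the local environment of the machine running the cf CLI, not from the app's environment, so setting it with cf set-env has no effect on how long the CLI waits. A sketch of the intended usage (the app name is taken from the question; staging can still only run as long as the platform's hard limit allows):

export CF_STAGING_TIMEOUT=45   # minutes the CLI waits for staging
export CF_STARTUP_TIMEOUT=15   # minutes the CLI waits for the app to start
cf push sherlock-topics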
I ran into the exact same issue. I was also trying to deploy a Shiny R app with quite a number of dependencies.
The trick is to add the num_threads property to the r.yml file. This significantly speeds up the build time.
Here's how one might look:
packages:
- packages:
  - name: bupaR
  - name: edeaR
num_threads: 8
See https://docs.cloudfoundry.org/buildpacks/r/index.html for more info

Why can an Rsession process continue after SIGSEGV and what does it mean

I'm developing an R package for myself that interacts with both Java code (using rJava) and C++ code (using Rcpp). While trying to debug rsession crashes under RStudio using lldb, I noticed that lldb outputs the following message when I try to load the package I'm developing:
(lldb) Process 19030 stopped
* thread #1, name = 'rsession', stop reason = signal SIGSEGV: invalid address (fault address: 0x0)
frame #0: 0x00007fe6c7b872b4
-> 0x7fe6c7b872b4: movl (%rsi), %eax
0x7fe6c7b872b6: leaq 0xf8(%rbp), %rsi
0x7fe6c7b872bd: vmovdqu %ymm0, (%rsi)
0x7fe6c7b872c1: vmovdqu %ymm7, 0x20(%rsi)
(where 19030 is the pid of rsession). At this point, RStudio stalls, waiting for lldb to resume execution, but instead of getting the dreaded "R session aborted" popup, entering the 'c' command in lldb resumes the rsession process, RStudio continues chugging along just fine, and I can use the loaded package with no problems, i.e.:
c
Process 19030 resuming
What is going on here? Why is RStudio's rsession not crashing if lldb says it has "stopped"? Is this due to R's (or RStudio's?) SIGSEGV handling mechanism? Does that mean that the original SIGSEGV is spurious and should not be a cause of concern? And of course (but probably off topic in this question): how do I make sense of lldb's output in order to ascertain whether this SIGSEGV on loading my package should be debugged further?
The SIGSEGV does not come from R itself, but from the JVM that rJava starts when the package loads. This behaviour is known and is due to the JVM's memory management, as stated here:
Java uses speculative loads. If a pointer points to addressable
memory, the load succeeds. Rarely the pointer does not point to
addressable memory, and the attempted load generates SIGSEGV ... which
java runtime intercepts, makes the memory addressable again, and
restarts the load instruction.
The proposed workaround for gdb works fine:
(gdb) handle SIGSEGV nostop noprint pass
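Since the question uses lldb rather than gdb, the equivalent there should be along these lines (a sketch of lldb's signal-handling syntax: don't stop, don't notify, and pass the signal through to the JVM's handler):

(lldb) process handle SIGSEGV --stop false --notify false --pass true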

Operation timed out: installing and running app

I'm getting the following error message when I try different operations such as installing an application to a Firefox OS simulator:
Operation timed out: installing and running app
I've tried versions 1.3, 1.4, 2.0 of the simulator - all have the same error.
The application does get deployed to the simulator and runs. However, console.log() output does not get displayed in the WebIDE Console.
For the location of the error message, see the screenshot below
My Firefox Version is - 42.0a1 (2015-07-16)
This is running on Ubuntu 14.04 x64.
When I click on the Troubleshooting link, I can't find any tips for troubleshooting a simulator.
Any ideas?
The steps I followed were:
Open WebIDE. Terminal output:
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
Start Firefox OS 2.0 simulator, simulator displays. Terminal output:
console.log: Connection status changed: connecting
console.log: Connection status changed: connected
Project >> Open Packaged App, and select my application. No Terminal output.
Project >> Install and Run. Application displays on simulator. Terminal output:
console.log: Installing app from /home/snowch/tmp/Scratch/myapp/hw_chs
console.log: Starting bulk upload
console.log: File size: 14253
console.log: Bulk upload done
After about 10 seconds the error message is shown as per the screenshot.
Update
The console error messages:
Simulating large screen devices (Operation timed out: installing and running app)
Simulating small screen devices, after clicking on the spanner (Operation timed out: opening toolbox)
It looks like you've run into several bugs with older simulator versions.
For the issue with app install when the screen is set to Via Vixen, I filed bug 1186929. For myself, it only reproduced on 2.0, but please comment in that bug if you have more specifics.
For the "opening toolbox" issue, I believe that is a compatibility problem that needs to be resolved with older simulators, which is bug 1161131.
I tried asking several times in the comments above if you ever actually see the toolbox appear, or if you just get error messages. It would still help to know the answer here.
As a workaround, you should be able to use Simulator 2.2 or later to avoid these issues until they are resolved.

ZCatalog clear and rebuild quits Plone client without any error

I have a strange problem with one of my Plone sites: when I clear and rebuild the ZCatalog, the Zope client quits silently after some time. No errors.
I have done the process using the ZMI (ZEO + ZEO client, and standalone) and using 'zinstance debug'. Same result: the client quits silently.
I'm using a standard Plone 4.3 with some add-on products on an Ubuntu Server 12.04 box.
Tasks I have done in order to find the problem, without success:
I've checked the permissions on the filesystem.
I've reinstalled Plone 4.3
Packing the database works ok, but the problem persists.
Checked the free inodes on the filesystem.
Executing the process on another computer works successfully.
Running the client in the foreground (fg): no messages when it quits.
Backing up the database and restoring it: same result after restoring, but when restoring on another computer the process rebuilds the catalog successfully (with the same Plone version and add-ons).
Reindexing a single index of the catalog causes the same failure: quitting with no messages.
ZODB/scripts/fstest.py shows no errors.
ZODB/scripts/fsrefs.py shows no errors.
Any clues?
David, you are right, I just found the problem yesterday, but it was too late (I was tired) to report it here.
This Plone instance is installed on a VPS (OpenVZ) with 512 MB of RAM, and the kernel killed the Python process silently when there was no free memory.
One of my last tests was to rebuild the catalog with "log progress" enabled; there I saw that the process quit at different points, but always at around 30%. Then by chance I ran dmesg, and voilà, the mystery was solved:
[2233907.698115] Out of memory in UB: OOM killed process 17819 (python) score 0 vm:799612kB, rss:497324kB, swap:45480kB
[2235168.564053] Out of memory in UB: OOM killed process 445 (python) score 0 vm:790380kB, rss:498036kB, swap:46924kB
[2236752.744927] Out of memory in UB: OOM killed process 17964 (python) score 0 vm:790392kB, rss:494232kB, swap:45584kB
[2237461.280724] Out of memory in UB: OOM killed process 26584 (python) score 0 vm:790328kB, rss:497932kB, swap:45940kB
[2238443.104334] Out of memory in UB: OOM killed process 1216 (python) score 0 vm:799512kB, rss:494132kB, swap:44632kB
[2239457.938721] Out of memory in UB: OOM killed process 12821 (python) score 0 vm:794896kB, rss:502000kB, swap:42656kB
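For anyone hitting the same silent exit, dmesg (as above) is the quickest way to confirm an OOM kill. Beyond adding RAM or swap, one possible mitigation, sketched here under the assumption of a buildout-based install using plone.recipe.zope2instance, is to lower the client's ZODB object cache so the rebuild's peak memory stays under the container limit (the value is only an illustration):

[instance]
recipe = plone.recipe.zope2instance
# A smaller in-memory object cache lowers peak memory use while reindexing.
zodb-cache-size = 5000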
