I get an error executing the following R code:
hsc = h2o.init(ip="127.0.0.1",port=54321,nthreads=-1,max_mem_size="8G")
model_tf <- h2o.deepwater(
  x = col_start:col_end,
  y = col_class,
  backend = "tensorflow",
  training_frame = train)
Error from the H2O console:
A fatal error has been detected by the Java Runtime Environment:
SIGILL (0x4) at pc=0x00007f49f117892d, pid=4616, tid=0x00007f4a7d88a700
JRE version: Java(TM) SE Runtime Environment (8.0_144-b01) (build 1.8.0_144-b01)
Java VM: Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode linux-amd64 compressed oops)
Problematic frame:
C [libtensorflow_jni.so00358a4a-1301-4222-a4f6-273b7a1baf4c+0x211992d]
Are you running this on an Ubuntu 16.04 machine with an Nvidia GPU, with all the requirements from this page met: https://github.com/h2oai/deepwater ?
I'm asking because this is the error you get when you try to run the GPU version on a machine that does not have a GPU.
Deepwater won't work unless the requirements are met. A simple way to satisfy them is to use one of the Docker images:
https://github.com/h2oai/deepwater#pre-release-docker-image
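As a quick sanity check before pointing the TensorFlow backend at a GPU build, you can verify that the Nvidia driver is actually visible. A minimal sketch (my own check, not part of h2o; it assumes `nvidia-smi` ships with your driver):

```shell
# Probe for the Nvidia driver utility before launching deepwater.
if command -v nvidia-smi >/dev/null 2>&1; then
    gpu_status=$(nvidia-smi -L)   # lists detected GPUs
else
    gpu_status="no Nvidia driver found: use the CPU build or a docker image"
fi
echo "$gpu_status"
```

If this reports no driver, the GPU-enabled native library will crash exactly as shown above.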
I was following the steps in the Gluon documentation to run JavaFX on a Raspberry Pi 4 via DRM. I downloaded the JavaFX EA 16 builds from here.
javafx.properties file:
javafx.version=16-internal
javafx.runtime.version=16-internal+28-2020-11-10-180413
javafx.runtime.build=28
After cloning the samples repository containing hellofx, I compiled it with javac (following the steps) and then ran this command to launch it using DRM:
sudo -E java -Dmonocle.platform=EGL -Djava.library.path=/opt/arm32hfb-sdk/lib -Dmonocle.egl.lib=/opt/arm32fb-sdk/lib/libgluon_drm.so --module-path /opt/arm32fb-sdk/lib --add-modules javafx.controls -cp dist/. hellofx.HelloFX
However, this caused the following error:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x9c3314dc, pid=734, tid=746
#
# JRE version: OpenJDK Runtime Environment (11.0.9+11) (build 11.0.9+11-post-Raspbian-1deb10u1)
# Java VM: OpenJDK Server VM (11.0.9+11-post-Raspbian-1deb10u1, mixed mode, serial gc, linux-)
# Problematic frame:
# C [libgluon_drm.so+0x14dc] getNativeWindowHandle+0x54
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/pi/samples/CommandLine/Modular/CLI/hellofx/hs_err_pid734.log
#
# If you would like to submit a bug report, please visit:
# Unknown
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Aborted
It seems that loading libgluon_drm.so from JavaFXSDK/lib/ fails at getNativeWindowHandle.
What's weird is that after I ran sudo apt install libegl* mesa* libgl*, it actually succeeded, but it asked me to set the variable ENABLE_GLUON_COMMERCIAL_EXTENSIONS to true, which I had already done.
However, after rebooting, it started showing the same error again.
I am using a Raspberry Pi 4 Model B with 2GB RAM. It is running on Raspberry Pi OS 32-Bit with desktop.
I had performed all of this on a clean installation.
The Pi 4 exposes two DRM devices, vc4 and v3d: one drives the display (modesetting) and the other does the 3D rendering. You can probe each device for its capabilities; only one should acknowledge that it has the DRIVER_RENDER or DRIVER_MODESET capability.
The card JavaFX selects by default is /dev/dri/card1. In my case, /dev/dri/card0 was the one to use for rendering, not card1. I solved the issue with the following runtime argument:
-Degl.displayid=/dev/dri/card0
The JavaFX version I used was 16-ea+5.
I'm attempting to use sparklyr to analyze a large dataset in R. When I attempt to establish a Spark connection with spark_connect, I receive the following error:
Error in get_java(throws = TRUE) : Java is required to connect to Spark. JAVA_HOME is set but does not point to a valid version. Please fix JAVA_HOME or reinstall from: https://www.java.com/en/
I've reinstalled Java but continue to get the same error. Any advice?
When I run:
sparklyr:::get_java()
java
"/usr/bin/java"
It appears that your Java setup doesn't satisfy that sparklyr function. Unlike @Kerie, I get nothing from the echo command. Instead, I get sensible results from this command in a Terminal session:
$ java -version
#-------------------
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
Running macOS 10.11.6 (not upgraded because my hardware is "obsolete" per Apple) and R 3.5.1.
There's an irony here, in that the get_java function is supposed to fall back to locating java itself when it cannot find the environment variable. Here's the code:
sparklyr:::get_java
#----------
function (throws = FALSE)
{
java_home <- Sys.getenv("JAVA_HOME", unset = NA)
if (!is.na(java_home)) {
java <- file.path(java_home, "bin", "java")
if (identical(.Platform$OS.type, "windows")) {
java <- paste0(java, ".exe")
}
if (!file.exists(java)) {
if (throws) {
stop("Java is required to connect to Spark. ",
"JAVA_HOME is set but does not point to a valid version. ",
"Please fix JAVA_HOME or reinstall from: ",
java_install_url())
}
java <- ""
}
}
else java <- Sys.which("java")
java
}
<bytecode: 0x7fb5c7f2db30>
<environment: namespace:sparklyr>
Because I do not have an environment variable for JAVA_HOME, but do have java registered with which, the get_java function returns a valid path. So my system returns:
Sys.which("java")
java
"/usr/bin/java"
From comments by @user6910411, I am reminded that you should not update to the current Java Dev Kit (which is 1.9); rather, use the link provided by @Kerie to the prior major version, 1.8. You should also run:
Sys.unsetenv("JAVA_HOME")
to clear the misleading environment variable. Or perhaps you could track the symlink down at /Library/Java/Home (if that's where it is) and delete it before installing the newer (but not newest) version.
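A quick way to confirm, outside R, what sparklyr will fall back to once the stale variable is cleared; a sketch of my own (the unset here affects only the current shell session, not your login profile):

```shell
# Clear JAVA_HOME in this shell, then check what's left on PATH —
# the same fallback Sys.which("java") performs inside get_java.
unset JAVA_HOME
java_path=$(command -v java || echo "not found")
echo "JAVA_HOME=${JAVA_HOME:-<unset>}, java on PATH: $java_path"
```

If this prints a valid path, get_java should return it once JAVA_HOME no longer gets in the way.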
Run echo $JAVA_HOME in a terminal and see what the output is.
On my macOS machine, the output is:
/Library/Java/JavaVirtualMachines/jdk1.8.0_77.jdk/Contents/Home
Running an Eclipse plugin with a WebView component ends with a SIGSEGV, which appears to be an ancient bug, as in here.
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x000000013a06a254, pid=23881, tid=775
#
# JRE version: Java(TM) SE Runtime Environment (9.0+11) (build 9.0.4+11)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (9.0.4+11, mixed mode, tiered, compressed oops, g1 gc, bsd-amd64)
# Problematic frame:
# C [libjfxwebkit.dylib+0x5ff254] WebCore::FrameTree::top() const+0x4
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /Applications/Eclipse.app/Contents/MacOS/hs_err_pid23881.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
Is there any new configuration or VM parameter we are missing, specific to Java 9?
We run our tests in our own Java-based test tool, and random tests fail with the JVM fatal error below. Please help.
A fatal error has been detected by the Java Runtime Environment:
EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x000000006a0d5422, pid=7560, tid=2052
JRE version: 7.0_06-b24
Java VM: Java HotSpot(TM) 64-Bit Server VM (23.2-b09 mixed mode windows-amd64 compressed oops)
Problematic frame:
V [jvm.dll+0x25422]
Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
If you would like to submit a bug report, please visit:
http://bugreport.sun.com/bugreport/crash.jsp
--------------- T H R E A D ---------------
Current thread (0x0000000000dc6800): GCTaskThread [stack: 0x0000000004e80000,0x0000000004f80000] [id=2052]
siginfo: ExceptionCode=0xc0000005, reading address 0x00000000000000a8
Registers:
RAX=0x0000000000000000, RBX=0x000000073ae26d28, RCX=0x0000000000100010, RDX=0x000000073ae26d28
RSP=0x0000000004f7faf8, RBP=0x0000000000e6a660, RSI=0x000000076a5831d4, RDI=0x000000073ae26d28
R8 =0x0000000000000000, R9 =0x0000000000100010, R10=0x000000000000000c, R11=0x0000000000000000
R12=0x000000076a5831f0, R13=0x0000000000000020, R14=0x000000076a583160, R15=0x0000000000000020
RIP=0x000000006a0d5422, EFLAGS=0x0000000000010246
Top of Stack: (sp=0x0000000004f7faf8)
0x0000000004f7faf8: 000000006a15df4b 000000076a583030
0x0000000004f7fb08: 0000000000e6a660 000000076a583024
0x0000000004f7fb18: 000000000000000a 000000073ae26d28
0x0000000004f7fb28: 000000006a1a03fa 0000000000000ee9
0x0000000004f7fb38: 000000006a179dc9 000000076a583208
I know I'm a bit late, but I thought I should share this, as I also encountered this problem with Java 8.
In this link, they explain the reason (though not the root cause) and how to work around it temporarily, until Oracle fixes it.
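One temporary mitigation I've seen for crashes inside GCTaskThread (a parallel-GC worker thread, as in the crash log above) is to run with the serial collector so that code path is avoided entirely. This is my own sketch of the idea, not necessarily what the linked page recommends:

```shell
# Launch the JVM with the serial collector instead of the parallel GC.
# Replace -version with your test tool's jar or main class.
if command -v java >/dev/null 2>&1; then
    gc_check=$(java -XX:+UseSerialGC -version 2>&1)
else
    gc_check="java not on PATH"
fi
echo "$gc_check"
```

This trades GC throughput for stability, so treat it as a stopgap while you upgrade off 7.0_06.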
I'm trying to load a hefty Excel workbook (.xlsm format, ~30 MB) that has a large number of array calcs.
> wb1 <- loadWorkbook("Mar_SP_20130227_V6.1.xlsm")
Error: POIXMLException (Java): java.lang.reflect.InvocationTargetException
But I am able to successfully load a values-only/no-macro version of the workbook.
> wb2 <- loadWorkbook("Mar_SP_20130227_V6.1_VALUES_ONLY.xlsx")
> wb2
[1] "Mar_SP_20130227_V6.1_VALUES_ONLY.xlsx"
What could be causing the error?
From the maintainer's website I can see that there can be issues with workbooks containing array calcs or unsupported formula functions, but this doesn't look like the same error.
Java Info:
C:\> java -version
java version "1.6.0_21"
Java(TM) SE Runtime Environment (build 1.6.0_21-b07)
Java HotSpot(TM) Client VM (build 17.0-b17, mixed mode)
It turns out that the root of this error was the JVM running out of memory (even with options(java.parameters = "-Xmx1024m")).
I tried to increase the memory, but couldn't get the JVM to take more than -Xmx2048m, which still wasn't enough to load the workbook.
So I upgraded the JRE from 32 bit to 64 bit and ran 64 bit R.
I was then able to set -Xmx4096m and successfully load my 30 MB workbook.
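The heap limit has to be accepted by the JVM at startup, and a 32-bit JVM will refuse a 4 GB heap outright, which is a quick way to tell which kind you have. A sketch of that check from a shell (my own, independent of R):

```shell
# A 64-bit JVM accepts -Xmx4096m; a 32-bit one refuses to start.
if java -Xmx4096m -version >/dev/null 2>&1; then
    heap_ok="yes (64-bit JVM)"
else
    heap_ok="no (likely 32-bit JVM, or java missing)"
fi
echo "JVM accepts -Xmx4096m: $heap_ok"
```

Remember that options(java.parameters = ...) in R only takes effect if it is set before the rJava-backed package initializes the JVM.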