I have a server running Xen 4.0. On this server there is an LV which is part of a VG that holds the LVs of a virtual machine (e.g. root, var).
Now, when I boot the virtual machine, the root fs is not found. The shell shows that the LVs in question have the status 'Not Available'.
After executing 'vgchange -ay vg', the status changes to 'read/write' as expected, which means the LVs are not being activated correctly at boot.
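For reference, here is a minimal sketch of the check and the manual workaround (the VG name 'vg' is the one from above; adjust to your setup):
# Show LV status inside the guest; inactive LVs report 'LV Status ... NOT available'
lvdisplay vg
# Manually activate every LV in the volume group (the workaround described above)
vgchange -ay vg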
What could make these LVs work as expected?
Thank you.
As the title says: I cannot run h2o.init().
I have already downloaded the 64-bit version of the Java SE Development Kit 8u291. I also installed the xgboost library in R (install.packages("xgboost")). Finally, I have updated all my NVIDIA drivers and downloaded the latest CUDA (although, to be honest, I don't even know what that does). I followed the steps described in the NVIDIA forums to avoid the crash I had when installing (i.e. removing the Visual Studio integration). FWIW, I'm using a Dell Inspiron 15 Gaming and it has an NVIDIA GTX 1050 with 4 GB.
Here's the full code I'm using (straight from the h2o download instructions except for the first line):
library(xgboost)
library(h2o)
localH2O = h2o.init()
demo(h2o.kmeans)
Any help would be much appreciated.
The full message I get when running the above code chunk:
H2O is not running yet, starting it now...
Note: In case of errors look at the following log files:
C:\Users\<my username>\AppData\Local\Temp\RtmpcdvCce\file1a106074110b/h2o_<my username>_started_from_r.out
C:\Users\<my username>\AppData\Local\Temp\RtmpcdvCce\file1a10253139db/h2o_<my username>_started_from_r.err
java version "15.0.2" 2021-01-19
Java(TM) SE Runtime Environment (build 15.0.2+7-27)
Java HotSpot(TM) 64-Bit Server VM (build 15.0.2+7-27, mixed mode, sharing)
Starting H2O JVM and connecting: ............................................................Diagnostic HTTP Request:
HTTP Status Code: -1
HTTP Error Message: Failed to connect to localhost port 54321: Connection refused
Cannot load library from path lib/windows_64/xgboost4j_gpu.dll
Cannot load library from path lib/xgboost4j_gpu.dll
Failed to load library from both native path and jar!
Cannot load library from path lib/windows_64/xgboost4j_omp.dll
Cannot load library from path lib/xgboost4j_omp.dll
Failed to load library from both native path and jar!
Cannot load library from path lib/windows_64/xgboost4j_minimal.dll
Cannot load library from path lib/xgboost4j_minimal.dll
Failed to load library from both native path and jar!
Failed to add native path to the classpath at runtime
java.io.IOException: Failed to get field handle to set library path
at ai.h2o.xgboost4j.java.NativeLibLoader.addNativeDir(NativeLibLoader.java:229)
at ai.h2o.xgboost4j.java.NativeLibLoader.initXGBoost(NativeLibLoader.java:43)
at ai.h2o.xgboost4j.java.NativeLibLoader.getLoader(NativeLibLoader.java:66)
at hex.tree.xgboost.XGBoostExtension.initXgboost(XGBoostExtension.java:70)
at hex.tree.xgboost.XGBoostExtension.isEnabled(XGBoostExtension.java:51)
at water.ExtensionManager.isEnabled(ExtensionManager.java:189)
at water.ExtensionManager.registerCoreExtensions(ExtensionManager.java:103)
at water.H2O.main(H2O.java:2203)
at water.H2OStarter.start(H2OStarter.java:22)
at water.H2OStarter.start(H2OStarter.java:48)
at water.H2OApp.main(H2OApp.java:12)
Cannot initialize XGBoost backend! Xgboost (enabled GPUs) needs:
- CUDA 8.0
XGboost (minimal version) needs:
- GCC 4.7+
For more details, run in debug mode: `java -Dlog4j.configuration=file:///tmp/log4j.properties -jar h2o.jar`
ERROR: Unknown argument (<my username>/AppData/Local/Temp/RtmpcdvCce)
Usage: java [-Xmx<size>] -jar h2o.jar [options]
(Note that every option has a default and is optional.)
-h | -help
Print this help.
-version
Print version info and exit.
-name <h2oCloudName>
Cloud name used for discovery of other nodes.
Nodes with the same cloud name will form an H2O cloud
(also known as an H2O cluster).
-flatfile <flatFileName>
Configuration file explicitly listing H2O cloud node members.
-ip <ipAddressOfNode>
IP address of this node.
-port <port>
Port number for this node (note: port+1 is also used by default).
(The default port is 54321.)
-network <IPv4network1Specification>[,<IPv4network2Specification> ...]
The IP address discovery code will bind to the first interface
that matches one of the networks in the comma-separated list.
Use instead of -ip when a broad range of addresses is legal.
(Example network specification: '10.1.2.0/24' allows 256 legal
possibilities.)
-ice_root <fileSystemPath>
The directory where H2O spills temporary data to disk.
-log_dir <fileSystemPath>
The directory where H2O writes logs to disk.
(This usually has a good default that you need not change.)
-log_level <TRACE,DEBUG,INFO,WARN,ERRR,FATAL>
Write messages at this logging level, or above. Default is INFO.
-max_log_file_size
Maximum size of INFO and DEBUG log files. The file is rolled over after a specified size has been reached.
(The default is 3MB. Minimum is 1MB and maximum is 99999MB)
-flow_dir <server side directory or HDFS directory>
The directory where H2O stores saved flows.
(The default is 'C:\Users\<my username>\h2oflows'.)
-nthreads <#threads>
Maximum number of threads in the low priority batch-work queue.
(The default is.)
-client
Launch H2O node in client mode.
-notify_local <fileSystemPath>
Specifies a file to write when the node is up. The file contains one line with the IP and
port of the embedded web server. e.g. 192.168.1.100:54321
-context_path <context_path>
The context path for jetty.
Authentication options:
-jks <filename>
Java keystore file
-jks_pass <password>
(Default is 'h2oh2o')
-jks_alias <alias>
(Optional, use if the keystore has multiple certificates and you want to use a specific one.)
-hostname_as_jks_alias
(Optional, use if you want to use the machine hostname as your certificate alias.)
-hash_login
Use Jetty HashLoginService
-ldap_login
Use Jetty Ldap login module
-kerberos_login
Use Jetty Kerberos login module
-spnego_login
Use Jetty SPNEGO login service
-pam_login
Use Jetty PAM login module
-login_conf <filename>
LoginService configuration file
-spnego_properties <filename>
SPNEGO login module configuration file
-form_auth
Enables Form-based authentication for Flow (default is Basic authentication)
-session_timeout <minutes>
Specifies the number of minutes that a session can remain idle before the server invalidates
the session and requests a new login. Requires '-form_auth'. Default is no timeout
-internal_security_conf <filename>
Path (absolute or relative) to a file containing all internal security related configurations
Cloud formation behavior:
New H2O nodes join together to form a cloud at startup time.
Once a cloud is given work to perform, it locks out new members
from joining.
Examples:
Start an H2O node with 4GB of memory and a default cloud name:
$ java -Xmx4g -jar h2o.jar
Start an H2O node with 6GB of memory and a specify the cloud name:
$ java -Xmx6g -jar h2o.jar -name MyCloud
Start an H2O cloud with three 2GB nodes and a default cloud name:
$ java -Xmx2g -jar h2o.jar &
$ java -Xmx2g -jar h2o.jar &
$ java -Xmx2g -jar h2o.jar &
So... after a lot of poking around I found the answer. Windows Defender (ugh) was blocking access to h2o.jar. The solution was to open PowerShell in the h2o Java folder and run the jar with java -jar h2o.jar. You'll then get the security prompt asking you to authorize the program (I've had to do it every time, so you might want to check your settings). Once you do that, h2o.init() runs very smoothly in R.
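For anyone else hitting this, here is roughly what that looks like. The path below is only an assumption -- the h2o.jar bundled with the R package usually lives in the package's java folder, so adjust it to wherever your copy sits:
# Example path only; locate the h2o.jar that ships with your R h2o package
cd "C:\Users\<my username>\Documents\R\win-library\4.0\h2o\java"
java -jar h2o.jar
# Approve the Windows security prompt when it appears, then run h2o.init() in R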
✔ Deploy complete!
Project Console: https://console.firebase.google.com/project/socialape-6b2f7/overview
Ayhan-MacBookPro:socialape-functions macbook$ firebase serve
=== Serving from '/Users/macbook/Desktop/socialape-functions'...
Error: Port 5000 is not open, could not start functions emulator.
Run lsof -t -i tcp:5000 | xargs kill from your Terminal.
A common cause of this error is the Firebase emulator not being shut down cleanly (e.g., by closing the IDE that is running the emulator in an embedded Terminal session). This leaves the process running in the background, occupying the emulator's default port.
To resolve the conflict, find the process ID running on the port (here 5000) from your Terminal command line and then kill it.
The above one-liner finds the process ID and pipes it directly to kill (h/t @manav).
For additional info, check out: Find (and kill) process locking port 3000 on Mac
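If you prefer to do it step by step rather than with the one-liner, something like this works on macOS/Linux:
# See which process is holding port 5000
lsof -i tcp:5000
# Kill it by PID (replace <PID> with the number from the output above)
kill <PID>
# Or use the one-liner quoted above
lsof -t -i tcp:5000 | xargs kill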
The bug seems to be not on your end.
It is caused by a bug in a dependency (node portfinder).
A quick fix might be to use the old version of node portfinder (v1.0.21). Alternatively, you can edit node_modules/firebase-tools/lib/emulator/controller.js and change yield pf.getPortPromise({ port, stopPort: port }) to yield pf.getPortPromise({ port, stopPort: port + 1 }).
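A hedged sketch of the first option, assuming firebase-tools is installed locally in your project (untested; if it is installed globally the path will differ):
# Pin the older portfinder inside firebase-tools' own node_modules
cd node_modules/firebase-tools
npm install portfinder@1.0.21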
You can see the complete answer to your question in this SO link.
If you are facing this issue on macOS, then this solution is for you.
On recent versions of macOS, port 5000 may be claimed by the new "AirPlay Receiver" service.
This can be disabled in Settings -> Sharing:
Disabling the AirPlay Receiver (if you do not need it) frees up port 5000.
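To confirm that it really is AirPlay holding the port, you can check what is listening (on recent macOS the owning process typically shows up as ControlCenter):
# List the process bound to TCP port 5000
lsof -nP -iTCP:5000 -sTCP:LISTEN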
I have a very simple ASP.NET v4.7 web application that runs in a Docker container on my local development laptop.
The web application tries to connect to DocumentDb, but this fails because the container's timestamp is completely wrong, so naturally JWT token verification fails. The exact ASP.NET code does not really matter; this is more about why the Docker container's timestamp differs from the host machine's.
Note that I'm using Windows containers. My Dockerfile looks as follows:
FROM microsoft/aspnet:4.7.2-windowsservercore-1803
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
When I connect to the container and run "time", I get the following:
docker container exec -it <containerId> cmd
c:\>time
The current time is: 0:29:49.87
c:\>tzutil /g
South Africa Standard Time
When I do this on the host machine, my laptop, I get:
C:\>time
The current time is: 15:42:45.72
Enter the new time:
C:\>tzutil /g
South Africa Standard Time
Where does Docker get that crazy timestamp? Is there a way to sync with the host machine on startup?
My laptop's operating system version is: Win 10, v1803 (build 17134.471)
I'm running: Docker for Windows CE v2.0.0.0-win81 (29211) with Docker Engine version 18.09.0
Unfortunately, the only solution so far is to "Wait and let it marinate", as pointed out in http://github.com/Microsoft/aspnet-docker/issues/120
Thanks @ikkentim for pointing out the issue.
I suspected that perhaps my container did not have internet access and could not sync its clock, but was unable to verify this as it just started working fine this morning.
I will post a better answer if I find a more useful solution than "wait".
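One thing you could try in the meantime (untested in this exact setup, and it assumes the Windows Time service is available inside the container) is to force a clock resync from inside the container:
docker container exec -it <containerId> cmd
c:\>net start w32time
c:\>w32tm /resync
c:\>time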
I was in riak-shell when ssh lost its connection to the server. After reconnecting, I do the following:
sudo riak-shell
and get:
An instance of riak-shell is already running
So, I restarted the riak node in question. This did not seem to solve the problem. I do not see anything to kill in ps aux. According to the docs, only one instance can run at a time. That makes sense, but when I run riak-shell from another node and try to connect to any node, I now get the following:
Error: invalid function call : connection_EXT:connect ["riak#<<<ip_address_elided>>>"]
You can connect to a specific node (whether in your riak_shell.config
or not) by typing 'connect "dev1#127.0.0.1";' substituting your
node name for dev1.
You may need to change the Erlang cookie to do this.
See also the 'reconnect' command.
Unhandled message received is {#Ref<0.0.0.135>,disconnected}
riak-shell(3)>
I have not changed the cookies during this process, and the cookie appears to be the same (at least in /etc/riak/riak_shell.config). (I am running the Riak TS AMI on AWS.)
riak-shell runs in its own Erlang VM, entirely separate from the riak node.
(You don't need to run riak-shell from the machine your node is on - it uses the normal riak-erlang-client to talk to riak.)
If you are on Linux, run ps aux | grep riak_shell_app and it will give you the process number you need to kill that instance:
08:30:45:~ $ ps aux | grep riak_shell_app
vagrant 4671 0.0 0.3 493260 34884 pts/4 Sl+ Aug17 0:03 /home/vagrant/riak_ee/dev/dev1/erts-5.10.3/bin/beam.smp -- -root /home/vagrant/riak_ee/dev/dev1 -progname erl -- -home /home/vagrant -- -boot /home/vagrant/riak_ee/dev/dev1/releases/2.1.1/start_clean -run riak_shell_app boot debug_off /home/vagrant/riak_ee/dev/dev1/bin/../log/riak_shell/riak_shell -noshell -config /home/vagrant/riak_ee/dev/dev1/bin/../etc/riak
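Then kill it using the PID from the second column of that output (4671 in this example; yours will differ):
# Replace 4671 with the PID from your own ps output
kill 4671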
I wrote a good chunk of it so let me know how you got on:
https://github.com/basho/riak_shell/graphs/contributors
When I try to run the Flex Builder 3 profiler, I don't get the profiler dialog window, and after a few seconds I get "Socket timeout" in the console window. Any ideas why it can't connect?
I've got the latest debug version of Flash player and have tried shutting off my firewall.
I'm running it on XP from the local drive, i.e. not through localhost.
Thanks,
Alex
It looks like the browser (Firefox in my case) has to be shut down before the profiler is started. Step 1 in the livedocs even says this -- wish I had read it earlier. :)
http://livedocs.adobe.com/flex/3/html/help.html?content=profiler_3.html
Make sure that your firewall does not block port 9999. You can customize the port number too: open Preferences -> Flex -> Profiler -> Connections.
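To rule out another process already sitting on the profiler port, you can also check it quickly from a Windows command prompt:
REM Show anything bound to port 9999; the last column is the owning PID
netstat -ano | findstr :9999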
Check /etc/hosts (C:\Windows\System32\drivers\etc\hosts), and see if it contains a line:
127.0.0.1 localhost
In my case, it was somehow changed to ::1 localhost, and that's why it stopped working.
Thanks to Ziv for the (poorly formatted) answer.
While trying to run my Flex Profiler I get an error message.
In the Flash application I get the following exception:
Error #2044: Unhandled securityError:. text=Error #2048: Security sandbox violation:
file:///C|%2Fwork%2Flabsense%2Fbranches%2Frel%5F1%5F2%5F5%5FEA%2Fsources%2Fui%2F.metadata%2F.plugins%2Fcom.adobe.flash.profiler%2FProfilerAgent.swf?host=localhost&port=9999
cannot load data from localhost:9999.
at ProfilerAgent()[C:\SVN\branches\3.2.0\modules\profiler3\as\ProfilerAgent.as:127]
And in the Flex Profiler console (in Eclipse) I get: Socket timeout.
I am running on Windows Vista,
Flex Builder: 3.2
Flash debugger: 10,0,22,87
Things that I have done to try to resolve this issue:
Switch the connection port of the profiler to 9998 (and back)
Remove and reinstall the Flash debugger player.
Install Flex Builder 3.2 (instead of 3.0)
Delete all the entries in the mm.cfg file
Add an entry to the mm.cfg:
PreloadSwf=C:\work\labsense\Sources\ui\.metadata\.plugins\com.adobe.flash.profiler\ProfilerAgent.swf?host=localhost&port=9999
or
PreloadSwf=C:\work\labsense\Sources\ui\.metadata\.plugins\com.adobe.flash.profiler\ProfilerAgent.swf?host=localhost&port=9998
or
PreloadSwf=C:/work/labsense/Sources/ui/.metadata/.plugins/com.adobe.flash.profiler/ProfilerAgent.swf?host=localhost&port=9999
or with spaces:
PreloadSwf=C: \ work \ labsense \ Sources \ ui \ .metadata \ .plugins \ com.adobe.flash.profiler \ ProfilerAgent.swf?host=localhost&port=9999
or
C:\work\labsense\Sources\ui\.metadata\.plugins\com.adobe.flash.profiler\ProfilerAgent.swf?
or add all or some of the entries:
TraceOutputFileName=C:\Users\zivo\AppData\Roaming\Macromedia\Flash Player\Logs\flashlog.txt
ErrorReportingEnable=1
MaxWarnings=0
TraceOutputFileEnable=1
ProfilingFileOutputEnable=1
Turn the Vista firewall on and off
Add an exception for port 9999 in the Vista firewall
Try to run the profiler SWF separately
Same result.
Try one last thing:
Because I had experienced a somewhat similar problem before with the Flash debugger, and the resolution then was:
Right-click on the Flash Player (debugger),
choose "Debugger",
choose "other machine",
add "127.0.0.1",
click OK.
That solved the issue back then (apparently it connects to the debugger with host 127.0.0.1 instead of localhost, which is the same).
So now I added the following entry to the mm.cfg file:
PreloadSwf=C:/work/labsense/branches/rel_1_2_5_EA/sources/ui/.metadata/.plugins/com.adobe.flash.profiler/ProfilerAgent.swf?host=127.0.0.1&port=9999
Then, after saving, I ran the profiler, and it works!
And the reason for all this was:
Some program had changed the file C:\Windows\System32\drivers\etc\hosts to:
# Copyright (c) 1993-2006 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
::1 localhost
127.0.0.1 iDBO # LMS GENERATED LINE
This means that localhost does not resolve to 127.0.0.1!
The fix is easy:
# ::1 localhost
# 127.0.0.1 iDBO # LMS GENERATED LINE
127.0.0.1 localhost
That is, comment out the problem lines and add the correct mapping.
After trying all the other suggestions here, this post on Adobe's forum clued me in.
Adobe forum
When the Flash debug player launches, it looks for mm.cfg in %HOMEDRIVE%%HOMEPATH%. On this particular computer that path is not my home directory on C: but on the file server mapped to I:. So once I created I:\mm.cfg with the contents
PreloadSwf=C:\Users\ehedstrom\Documents\FLEXBU~1\.metadata\.plugins\com.adobe.flash.profiler\ProfilerAgent.swf?host=localhost&port=9999
everything magically started working!
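If you are not sure which location that resolves to on your machine, you can print it from a command prompt:
REM This is where the Flash debug player will look for mm.cfg (per the note above)
echo %HOMEDRIVE%%HOMEPATH%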
Check your browser tabs, make sure you have the latest debug player as you said you do, and also make sure that the port is correct; for some reason the port sometimes changes (to 1001 or 20957) from the default 9999. Be sure that your mm.cfg has ProfilingFileOutputEnable=1 and that BitTorrent isn't running.
hth