Grakn error: trying to load schema for "phone calls" example - vaticle-typedb

I am trying to run the example Grakn migration "phone_calls" (using Python and JSON files).
Before getting to the migration itself, I need to load the schema, but I am having trouble getting it loaded, following the steps shown here: https://dev.grakn.ai/docs/examples/phone-calls-schema
System:
- macOS 10.15
- grakn-core 1.8.3
- Python 3.7.3
The Grakn server is started. I checked that TCP port 48555 is open, so I don't think there is a firewall issue. The schema file is in the same folder (phone_calls) as the JSON data files for the next step. I am using a virtual environment. The error is below:
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn server start
Storage is already running
Grakn Core Server is already running
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn console --keyspace phone_calls --file phone_calls/schema.gql
Unable to create connection to Grakn instance at localhost:48555
Cause: io.grpc.StatusRuntimeException
UNKNOWN: Could not reach any contact point, make sure you've provided valid addresses (showing first 1, use getErrors() for more: Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5f59fd46): com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...] init query OPTIONS: error writing ). Please check server logs for the stack trace.
I would appreciate any help! Thanks!

Never mind -- I found the solution, in case anyone else runs into a similar problem. The server configuration file needs to be edited: point the data directory to your project data files (here: the phone_calls data files) and change the server IP address to your own.
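For anyone who wants the concrete steps, this is roughly what that edit looks like; the config path, property names, and values below are assumptions based on a Grakn 1.8 install and may differ in other versions:
# edit the server config (path relative to the Grakn install directory; layout is an assumption)
vi server/conf/grakn.properties
# properties to adjust (names may vary by version):
#   server.host -> the IP address your server should be reached at
#   data-dir    -> the directory holding your project data
# restart so the changes take effect
grakn server stop
grakn server start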

Related

Unable to communicate with the runtime for 'R' script in SQL Server 2017

I'm having trouble getting R to work on SQL Server 2017 on one server (I've successfully installed it on about 8 other servers). I've already installed the latest cumulative update.
When I execute a stored procedure that runs a simple hello world R script, I can see that LaunchPad.exe and rterm.exe are both running. After 60 seconds, however, I get the following error:
Msg 39012, Level 16, State 1, Line 0
Unable to communicate with the runtime for 'R' script. Please check the requirements of 'R' runtime.
STDERR message(s) from external script: Fatal error: creation of tmpfile failed -- set TMPDIR suitably?
This is the script that fails:
EXEC sp_execute_external_script
    @language = N'R', @script = N'print("hello")';
Any ideas on what I need to do to resolve this error?
The problem was that Named Pipes wasn't enabled for SQL Server. Enabling that and restarting the services solved my issue.
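If it helps, Named Pipes is enabled under SQL Server Configuration Manager > SQL Server Network Configuration > Protocols, and the restart can be done from an elevated prompt; the service names below assume a default instance and will differ for named instances:
:: assumption: default instance service names
net stop MSSQLLaunchpad
net stop MSSQLSERVER
net start MSSQLSERVER
net start MSSQLLaunchpad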
My assumption is that you applied the CU after the installation of Machine Learning Services? If so, the CU somehow messes up the folder permissions.
I wrote a blog post about how to fix it here. The blog post is about CU7, but it should apply to any CU.
I do not guarantee that it works, as I have seen other issues where the ML Services stop working; in those cases what fixes it is a repair of the SQL installation.
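For reference, by a repair I mean the standard SQL Server setup repair, roughly along these lines (run from the installation media; the instance name is an assumption):
:: assumption: default instance, run from the SQL Server installation media
Setup.exe /q /ACTION=Repair /INSTANCENAME=MSSQLSERVER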

virsh restore with modified xml "Error: xml modification unsupported"

I'm trying to restore a Xen VM (domain) from a state file which I created previously. During the restore I need to modify the XML of this VM with the following command:
virsh restore domU.state --xml newconfig.xml
This command triggers an error with the following text:
error: Failed to restore domain from domU.state
error: argument unsupported: xml modification unsupported
What I have already tried:
1. Restore without XML, which works perfectly.
2. Run the command with the original XML the domain was created from.
3. Run the command with a totally different file which is not even XML.
For steps 2 and 3 the error output was always the same.
Used versions:
xen 4.11.1
libvirt 5.1.0
os fedora 30
As the error message suggests, the ability to pass in custom XML when restoring a guest from a saved state file is unfortunately not supported by the libvirt Xen (libxl) driver. This feature only works with QEMU/KVM at this time.
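To illustrate, the same flow is accepted on a QEMU/KVM host (the domain name and file names here are placeholders):
# save the domain state on a QEMU/KVM host ("domU" and the file names are placeholders)
virsh save domU domU.state
# the QEMU driver accepts a modified XML on restore; libxl rejects it as shown above
virsh restore domU.state --xml newconfig.xml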

csync/sqlite error when running ownCloud command

I am running owncloudcmd to sync files from a local* path to an ownCloud/Nextcloud server, all running Debian 8. However, it fails with the error:
[5] csync_statedb_query sqlite3_compile error: disk I/O error - on query PRAGMA quick_check;
[6] csync_statedb_load ERR: sqlite3 integrity check failed - bail out: disk I/O error.
#### ERROR during csync_update : "CSync failed to load the journal file. The journal file is corrupted."
I am not very familiar with csync or sqlite, so I am a bit in the dark; although I can find talk of this issue through googling, I can't find a fix. The data in this case can be dumped to start over, so I'm happy to flush any database or anything else. I've tried removing the created csync and journal files, assuming one of them was corrupted, but it doesn't seem to change anything.
I have read talk about changing PRAGMA settings to ignore the error (or check) but I can't see how this is implemented either.
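I presume it would be run directly against the journal file with the sqlite3 CLI, something like the lines below, but I'm not sure that is the right approach (the journal file name is a guess and varies by client version):
# run the same checks the client runs, directly against the journal file (file name is a guess)
sqlite3 .csync_journal.db "PRAGMA quick_check;"
sqlite3 .csync_journal.db "PRAGMA integrity_check;"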
Is anyone able to show me how to clear out the corruption?
*The local path is a mounted AWS S3 bucket, but I think this is irrelevant because it works fine on other systems.

Lucene index gets broken segments after every restart of liferay-tomcat

I have a corrupted Lucene index. If I run "CheckIndex -fix" the problem is resolved, but as soon as I restart Tomcat it becomes corrupted again.
The index directory is shared between two application servers running Liferay-Tomcat. I am fixing the index on one server and restarting it whilst the other is running. This is a production environment, so I cannot bring them both down.
Any suggestions please?
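For reference, the check/repair is run roughly like this (the lucene-core jar name and path depend on the Liferay bundle):
# read-only check (jar path is an assumption)
java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0
# rewrite the segments file, dropping broken segments
java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0 -fix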
Before fix, CheckIndex says:
Opening index # /usr/local/tomcat/liferay/lucene/0
Segments file=segments_5yk numSegments=1 version=FORMAT_SINGLE_NORM_FILE [Lucene 2.2]
1 of 1: name=_2vg docCount=31
compound=false
hasProx=true
numFiles=8
size (MB)=0.016
no deletions
test: open reader.........FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:335)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:71)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:119)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:652)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:605)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:491)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
WARNING: 1 broken segments (containing 31 documents) detected
WARNING: would write new segments file, and 31 documents would be lost, if -fix were specified
If you access your search index from more than one application server, I would suggest integrating a Solr server, so you don't have the problem of two app servers trying to write to the same files. That is error-prone, as you have already found out.
To get Solr up and running you have to follow these steps:
1. Install a Solr server on any machine you like. A machine running only Solr is preferable.
2. Install the Solr search portlet in Liferay.
3. Adjust the config files according to the setup document of the Solr Search portlet.
Here are some additional links:
http://www.liferay.com/de/marketplace/-/mp/application/15193648
http://www.liferay.com/de/community/wiki/-/wiki/Main/Pluggable+Enterprise+Search+with+Solr
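Once Solr is up, a quick way to confirm that the Liferay machines can reach it is the ping handler; the host, port, and URL path here are assumptions for a default standalone Solr:
# assumption: default standalone Solr on port 8983 with the ping handler enabled
curl http://your-solr-host:8983/solr/admin/ping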

pgpool-II connection pooling - ERROR: "MD5" authentication with pgpool failed

Using the following for connection pooling only, no master_slave mode or replication: RHEL 6, PostgreSQL 9.1.9, and pgpool-II 3.1.3 (also tried 3.2.5).
I followed the solution suggested in http://www.pgpool.net/pipermail/pgpool-general/2013-May/001773.html
After following the instructions for MD5, I also tried setting both pg_hba.conf and pool_hba.conf to trust for local and subnet, but I still get the following error when attempting to connect to the pool locally:
ERROR: "MD5" authentication with pgpool failed for user foo
I tried the same locally on Fedora 18 with PostgreSQL 9.2 and pgpool from the Fedora repo, and it worked right out of the box.
I am at the end of all the routes suggested everywhere I could find.
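For completeness, the pool_passwd entries were generated with pg_md5 roughly like this (the user name and password are placeholders):
# append an md5 entry for the user to pool_passwd (placeholder credentials)
pg_md5 --md5auth --username=foo foopassword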
Help would be greatly appreciated.
After hitting the same problem, the solution for me was to change ownership of the pool_passwd file to postgres.
Even though this file has 644 permissions, if the owner isn't postgres you'll always get the aforementioned error. I guess the file's owner and the user running pgpool must match.
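In my case that meant something like this (the pool_passwd location depends on your packaging; /etc/pgpool-II/ is an assumption, so adjust to wherever your pgpool.conf points):
# path is an assumption; adjust to the pool_passwd location from pgpool.conf
chown postgres:postgres /etc/pgpool-II/pool_passwd
ls -l /etc/pgpool-II/pool_passwd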
I'm running PostgreSQL 9.2 and pgpool-II 3.3.2, BTW.
