I'm using ArangoDB 3.4.2 and I have a weird problem that I'm not able to explain...
I create a graph (myGraph) as follows in arangosh:
var graph_module = require('@arangodb/general-graph');
var myGraph = graph_module._create('mygraph');
myGraph._addVertexCollection('vertexes');
var edges = graph_module._relation('edges', ['vertexes'], ['vertexes']);
myGraph._extendEdgeDefinitions(edges);
Here, vertexes and edges are the vertex and edge collections, respectively.
Now, I create two vertexes:
db.vertexes.save({"name": "A", "_key": "A"});
db.vertexes.save({"name": "B", "_key": "B"});
So far so good. But when I try to create the edge between them, it fails:
127.0.0.1:8529#myDB> db.edges.save("vertexes/A", "vertexes/B", {"name": "A-to-B"});
JavaScript exception: TypeError: Cannot read property 'save' of undefined
!db.edges.save("vertexes/A", "vertexes/B", {"name": "A-to-B"});
! ^
stacktrace: TypeError: Cannot read property 'save' of undefined
at <shell command>:1:9
It seems that db.edges is undefined:
127.0.0.1:8529#MyDB> console.log(db.edges)
2019-01-26T19:01:52Z [98311] INFO undefined
But now, if I run db._collections(), db.edges becomes defined (weird!):
127.0.0.1:8529#MyDB> db._collections()
...
127.0.0.1:8529#MyDB> console.log(db.edges)
2019-01-26T19:02:58Z [98311] INFO [ArangoCollection 16807, "edges" (type edge, status loaded)]
and at this point, the db.edges.save(...) operation works:
127.0.0.1:8529#MyDB> db.edges.save("vertexes/A", "vertexes/B", {"name": "A-to-B"});
{
"_id" : "edges/16899",
"_key" : "16899",
"_rev" : "_YGsKKq2--_"
}
Why is db.edges undefined at the first save()? Why does a show collections operation (which I understand to be read-only) get it defined? Am I doing something wrong?
When executing db.edges.save(), an internal collection cache in arangosh is accessed. If this cache is clear, db.edges is resolved freshly and the save works. However, if the cache is stale (it was populated before the edges collection was created), db.edges comes back undefined and the error you observed is thrown. Since db._collections() resets this cache, the command works afterwards.
The correct and safe way is to access the collection via db._collection("collection-name").
Therefore you can use the following command to save an edge in the edges collection:
db._collection("edges").save("vertexes/A", "vertexes/B", {"name": "A-to-B"});
When I try to set a query and alert condition under the alert rule, I keep getting the error below. I have searched other blogs but could not resolve this issue.
An unexpected error happened
Details
TypeError: Cannot read properties of undefined (reading 'values')
at ni (http://localhost:3000/public/build/AlertingRuleForm.f65c22acec748d21665e.js:145:233)
at div
at de (http://localhost:3000/public/build/9512.8f35f1c2de682ecc2973.js:2:92536)
at li (http://localhost:3000/public/build/AlertingRuleForm.f65c22acec748d21665e.js:252:28)
at div
at de
The query I was using:
from(bucket:"API_BE")
|> range(start: -15m)
|> filter(fn: (r) => r._measurement == "3 API competition")
|> yield()
Grafana Version: 9.3.2
(Data source) InfluxDB version: v2.6.0
I am running Grafana and influxd on my local machine (Windows 10).
I'm trying to map my local POJO to autogenerated domain objects using MapStruct. Except for one specific complex structure, everything else seems to map, and the mapper implementation class gets generated. Below is the error that I get.
My mapper class is:
@Mappings({
    @Mapping(source = "sourcefile", target = "sourceFILE"),
    @Mapping(source = "id", target = "ID"),
    @Mapping(source = "reg", target = "regID"),
    @Mapping(source = "itemDetailsType", target = "ItemDetailsType") // This is the structure that does not map
})
AutoGenDomainType map(LocalPojo localPojo);
@Mappings({
    @Mapping(source = "line", target = "LINE"),
    @Mapping(source = "type", target = "TYPE")
})
ItemDetailsType map(ItemDetailsTypes itemDetailsType);
Error:
Internal error in the mapping processor: java.lang.NullPointerException
    at org.mapstruct.ap.internal.processor.creation.MappingResolverImpl$ResolvingAttempt.hasCompatibleCopyConstructor(MappingResolverImpl.java:547)
    at org.mapstruct.ap.internal.processor.creation.MappingResolverImpl$ResolvingAttempt.isPropertyMappable(MappingResolverImpl.java:522)
    at org.mapstruct.ap.internal.processor.creation.MappingResolverImpl$ResolvingAttempt.getTargetAssignment(MappingResolverImpl.java:202)
    at org.mapstruct.ap.internal.processor.creation.MappingResolverImpl$ResolvingAttempt.access$100(MappingResolverImpl.java:153)
    at org.mapstruct.ap.internal.processor.creation.MappingResolverImpl.getTargetAssignment(MappingResolverImpl.java:121)
    at .....
[ERROR]
[ERROR] Found 1 error and 16 warnings.
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project uwo-services: Compilation failure
The target object ItemDetailsType does have other properties that need not be mapped. The error says it is a compilation issue, but I don't find any. I have also tried adding unmappedTargetPolicy = ReportingPolicy.IGNORE at the mapper class level, in case the unmapped properties were the cause, but still no solution.
This is a known bug in MapStruct, reported in #729 and fixed in 1.1.0.Final. You are using 1.0.0.Final. I would highly suggest switching to either 1.1.0.Final or 1.2.0.Beta2.
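If you pull MapStruct in through Maven, the version bump is a one-line change; a sketch, assuming the standard org.mapstruct coordinates (adjust to your build, e.g. if you also use mapstruct-processor):
<!-- Sketch: bump MapStruct to a release containing the fix for #729 -->
<dependency>
    <groupId>org.mapstruct</groupId>
    <artifactId>mapstruct</artifactId>
    <version>1.1.0.Final</version>
</dependency>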
Once you update you will see a better error message and you will know exactly what the problem in the mapping is.
By looking at this, the first thing that stands out is that the target in @Mapping(source = "itemDetailsType", target = "ItemDetailsType") looks wrong. Are you sure that you need a capital letter there?
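If the generated property is in fact lower camel case, the corrected mapper might look like this; a sketch, where the LocalPojoMapper interface name is an assumption and the rest follows your snippet:
import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.Mappings;

@Mapper
public interface LocalPojoMapper { // hypothetical name for your mapper

    @Mappings({
        @Mapping(source = "sourcefile", target = "sourceFILE"),
        @Mapping(source = "id", target = "ID"),
        @Mapping(source = "reg", target = "regID"),
        @Mapping(source = "itemDetailsType", target = "itemDetailsType") // lower-case target
    })
    AutoGenDomainType map(LocalPojo localPojo);

    @Mappings({
        @Mapping(source = "line", target = "LINE"),
        @Mapping(source = "type", target = "TYPE")
    })
    ItemDetailsType map(ItemDetailsTypes itemDetailsType);
}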
I have an immutable map in my class. When I run my code in local mode, there is no problem and I can reach every key in the map. However, when I run my code in cluster mode, the nodes throw an error about not finding a key in the map.
What I've tried so far:
- Broadcasting the immutable map over the cluster:
broadcast = sc.broadcast(my_immutable_map)
- Parallelizing the map as a pair RDD:
my_map_rdd = sc.parallelize( my_immutable_map.toSeq)
When I examine the logs, I see a key-not-found exception.
My error stacktrace is as follows:
Driver stacktrace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 15.0 failed 4 times, most recent failure: Lost task 1.3 in stage 15.0 (TID 25, datanode1.big.com): java.util.NoSuchElementException: key not found: 905053199731
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:58)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at scala.collection.AbstractMap.apply(Map.scala:58)
at havelsan.CDRGenerator$.generate_random_target(CDRGenerator.scala:95)
at havelsan.CDRGenerator$$anonfun$main$2$$anonfun$6.apply(CDRGenerator.scala:167)
at havelsan.CDRGenerator$$anonfun$main$2$$anonfun$6.apply(CDRGenerator.scala:165)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1251)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1205)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Can you explain how Spark distributes maps, and how it is possible that some nodes can't find some keys in this map? By the way, my Spark version is 1.6.0.
What am I missing?
UPDATE
This part initializes the map on the driver.
...
var pd = sc.textFile( "hdfs://...")
my_immutable_map = pd.map( line => line.split(":") ).map{ line => (line(0), line(1).split(","))}.collectAsMap
...
broadcast = sc.broadcast(my_immutable_map)
my_map_rdd = sc.parallelize( my_immutable_map.toSeq)
And this is the part where I get the error.
def my_func(key:String):String={
...
my_value = broadcast.value(key)
...
}
my_func is called inside a map as:
my_another_rdd.map{ line =>
val key = line.split(",")(0)
my_func(key)
}
The solution that I found is to pass the broadcast value to the function as a parameter. Still, I couldn't find a solution for the parallelize method.
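A minimal sketch of that fix, based on the snippets above (my_immutable_map is a Map[String, Array[String]] after collectAsMap, so the Broadcast is typed accordingly; the body of my_func is a placeholder):
import org.apache.spark.broadcast.Broadcast

// Sketch: pass the Broadcast handle in explicitly so the task closure
// serializes the wrapper and resolves the map on the executor.
def my_func(bc: Broadcast[scala.collection.Map[String, Array[String]]],
            key: String): String = {
  val my_value = bc.value(key) // lookup happens on the executor
  my_value.mkString(",")       // placeholder for the real logic
}

my_another_rdd.map { line =>
  val key = line.split(",")(0)
  my_func(broadcast, key)
}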
https://stackoverflow.com/a/34912887/4668959
When invoking a custom command, I noticed that only the logs are displayed. For example, if my custom command script contains a return statement, return "great custom command", I can't find it in the result. This happens both with the Java API client and with shell execution.
What can I do to retrieve that result at the end of an execution?
Thanks.
Command definition in service description file:
customCommands ([
"getText" : "getText.groovy"
])
getText.groovy file content:
def text = "great custom command"
println "trying to get a text"
return text
Assuming that your service file contains the following:
customCommands ([
"printA" : {
println "111111"
return "222222"
},
"printB" : "fileB.groovy"
])
And fileB.groovy contains the following code:
println "AAAAAA"
return "BBBBBB"
Then if you run the following command: invoke yourService printA
You will get this :
Invocation results:
1: OK from instance #1..., Result: 222222
invocation completed successfully.
and if you run the following command: invoke yourService printB
You will get this :
Invocation results:
1: OK from instance #1..., Result: AAAAAA
invocation completed successfully.
So if your custom command's implementation is a Groovy closure, its result is the closure's return value. And if your custom command's implementation is an external Groovy file, its result is the script's printed output rather than its return value.
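Applied to your getText case, a sketch: define the command as a closure in the service file so the text is propagated as the invocation result (the file-based variant would only surface what it prints):
customCommands ([
    "getText" : {
        // The closure's return value becomes the invocation result
        return "great custom command"
    }
])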
HTH,
Tamir.
I want to add an item to a collection using RacerJs/DerbyJs, but it just doesn't work. I must be really overlooking something...
What I tried
model.set('news', [
{ text: "something" }
]);
And that does set a news item. However, when I do this a second time, it just overwrites the existing item instead of adding a new one. How do I add one?
model.push('news', {text:"someText"}) also fails with "Object is not an array".
Basically, I just want the most basic version of a "post an update and show it on a wall" app, without any rooms and without using arrays. Just one collection, and that's it.
Stacktrace of the .push() variant:
Wed May 22 2013 09:35:24 GMT+0200 (W. Europe Daylight Time) (23168) d7564d2d-f238-4ce0-a0a2-6e376e9b5cb1 ? ver: 0 - push 'news', { text: 'adsf' }
C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\Memory.js:185
      throw new TypeError(arr + ' is not an Array');
      ^
TypeError: [object Object] is not an Array
at Object.arrayLookupSet [as _arrayLookupSet] (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\Memory.js:185:11)
at Object.applyArrayMethod [as _applyArrayMethod] (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\Memory.js:145:18)
at Object.push (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\Memory.js:118:15)
at applyTxn (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\transaction.js:114:32)
at Object.exports.applyTxnToDoc (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\transaction.js:126:3)
at Function.QueryInterface.publish (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\adapters\pubsub-memory\channel-interface-query.js:25:24)
at PubSub.publish (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\pubSub\PubSub.js:63:10)
at Store.module.exports.proto.publish (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\pubSub\pubSub.Store.js:174:20)
at publish (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\txns\txns.Store.js:230:15)
at next (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\middleware.js:7:26)
at module.exports.events.middleware.txn (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\txns\txns.Store.js:220:11)
at Store._sendToDb.lockingDone (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\Store.js:294:12)
at mergeAll.setupRoutes (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\adapters\db-memory\index.js:70:13)
at DbMemory.mergeAll.get (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\adapters\db-memory\index.js:44:5)
at mergeAll.setupRoutes (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\adapters\db-memory\index.js:62:16)
at DbMemory.mergeAll.get (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\adapters\db-memory\index.js:44:5)
at mergeAll.setupRoutes (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\adapters\db-memory\index.js:60:14)
at next (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\Store.js:321:15)
at Store._sendToDb (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\Store.js:324:10)
at writeToDb (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\txns\txns.Store.js:216:15)
at next (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\middleware.js:7:26)
at serialEmitPrep (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\txns\txns.Store.js:125:9)
at next (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\middleware.js:7:26)
at incrVer (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\modes\lww.js:18:12)
at next (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\middleware.js:7:26)
at Object.module.exports.events.init.store.eachContext.context.guardWrite (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\accessControl\accessControl.Store.js:54:51)
at accessController (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\txns\txns.Store.js:103:17)
at next (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\middleware.js:7:26)
at Object.run (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\middleware.js:10:12)
at Socket.module.exports.events.socket (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\txns\txns.Store.js:267:26)
at Socket.racer.log.sockets.sockets.on.socket.on (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\lib\log.server.js:150:20)
at Socket.EventEmitter.emit [as $emit] (events.js:91:17)
at SocketNamespace.handlePacket (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\namespace.js:335:22)
at Manager.onClientMessage (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\manager.js:488:38)
at WebSocket.Transport.onMessage (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transport.js:387:20)
at Parser.<anonymous> (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transports\websocket\hybi-16.js:39:10)
at Parser.EventEmitter.emit (events.js:88:17)
at opcodeHandlers.1.finish (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transports\websocket\hybi-16.js:288:16)
at Parser.opcodeHandlers.1.expectData [as expectHandler] (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transports\websocket\hybi-16.js:299:15)
at Parser.add (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transports\websocket\hybi-16.js:466:24)
at Parser.expect (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transports\websocket\hybi-16.js:499:10)
at Parser.opcodeHandlers.1.expectData (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transports\websocket\hybi-16.js:298:18)
at Parser.add (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transports\websocket\hybi-16.js:466:24)
at Parser.expect (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transports\websocket\hybi-16.js:499:10)
at opcodeHandlers.1.expectData (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transports\websocket\hybi-16.js:296:16)
at opcodeHandlers.1 (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transports\websocket\hybi-16.js:313:9)
at Parser.processPacket (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transports\websocket\hybi-16.js:533:8)
at Parser.add (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transports\websocket\hybi-16.js:466:24)
at Socket.WebSocket.onSocketConnect (C:\xampp\htdocs\Derbyjs\KnowEdge\app1\node_modules\derby\node_modules\racer\node_modules\socket.io\lib\transports\websocket\hybi-16.js:141:17)
at Socket.EventEmitter.emit (events.js:88:17)
at TCP.onread (net.js:396:14)
If you are trying to add a document to a collection, you can also call model.add:
model.add('news', {
text: "Something"
})
This will add a new document to the news collection and generate the id for you.
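A small usage sketch (assuming the Derby 0.5-style API, in which model.add returns the generated id):
// Hypothetical follow-up: read the new document back via its generated id
var newId = model.add('news', { text: 'something' });
console.log(model.get('news.' + newId));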
More documentation can be found at http://derbyjs.com/#getters_and_setters
To create an item in the collection, you can call model.set with an explicitly specified path containing the document ID, for example:
model.set('news.' + model.id(), {
text: "something"
})
The model.id method generates a unique ID on each call.