I have a tiny graph DB: 400k nodes and 150k edges.
I was following the "Type Definition Overview" and "Indexing Backend Overview" directions to create keys and external indexes and to query them. This is what I've done:
g.makeKey('State').dataType(String.class).indexed('dev-titan', Vertex.class).make();
'dev-titan' = Elasticsearch index name
And I can find the State values in Elasticsearch, in the titan index, under the field name "4O".
When I run this query, after 20 minutes or more I get this:
rexster[groovy]> g=rexster.getGraph('graph')
==>titangraph[cassandra:xx.xx.x.xxx]
rexster[groovy]> g.query().has("State",EQUAL,"TN").vertices()
Mar 13, 2014 5:10:58 PM org.glassfish.grizzly.filterchain.DefaultFilterChain execute
WARNING: Exception during FilterChain execution
java.lang.ClassCastException: com.tinkerpop.rexster.protocol.msg.ErrorResponseMessage cannot be cast to org.glassfish.grizzly.asyncqueue.WritableMessage
at org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleWrite(TCPNIOTransportFilter.java:111)
at org.glassfish.grizzly.filterchain.TransportFilter.handleWrite(TransportFilter.java:191)
at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:111)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:265)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:134)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:78)
at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:652)
at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:533)
at com.tinkerpop.rexster.client.RexProClientFilter.handleRead(RexProClientFilter.java:155)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:265)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:134)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:78)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:815)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:115)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:55)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:135)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:567)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:547)
at java.lang.Thread.run(Thread.java:744)
Standard indexes work fine.
What could I be doing wrong?
I'd appreciate any help.
I installed the DB with updates and I got this error:
Opening DatabasePool 'wotlk_world'. Asynchronous connections: 1, synchronous connections: 1.
MySQL client library: 5.6.42
MySQL server ver: 5.6.42
MySQL client library: 5.6.42
MySQL server ver: 5.6.42
[ERROR]: In mysql_stmt_prepare() id: 4, sql: "SELECT entryorguid, source_type, id, link, event_type, event_phase_mask, event_chance, event_flags, event_param1, event_param2, event_param3, event_param4, event_param5, action_type, action_param1, action_param2, action_param3, action_param4, action_param5, action_param6, target_type, target_param1, target_param2, target_param3, target_param4, target_x, target_y, target_z, target_o FROM smart_scripts ORDER BY entryorguid, source_type, id, link"
[ERROR]: Unknown column 'event_param5' in 'field list'
[ERROR]: In mysql_stmt_prepare() id: 54, sql: "SELECT difficulty_entry_1, difficulty_entry_2, difficulty_entry_3, KillCredit1, KillCredit2, modelid1, modelid2, modelid3, modelid4, name, subname, IconName, gossip_menu_id, minlevel, maxlevel, exp, faction, npcflag, speed_walk, speed_run, scale, rank, mindmg, maxdmg, dmgschool, attackpower, DamageModifier, BaseAttackTime, RangeAttackTime, unit_class, unit_flags, unit_flags2, dynamicflags, family, trainer_type, trainer_spell, trainer_class, trainer_race, minrangedmg, maxrangedmg, rangedattackpower, type, type_flags, lootid, pickpocketloot, skinloot, resistance1, resistance2, resistance3, resistance4, resistance5, resistance6, spell1, spell2, spell3, spell4, spell5, spell6, spell7, spell8, PetSpellDataId, VehicleId, mingold, maxgold, AIName, MovementType, InhabitType, HoverHeight, HealthModifier, ManaModifier, ArmorModifier, RacialLeader, movementId, RegenHealth, mechanic_immune_mask, flags_extra, ScriptName FROM creature_template WHERE entry = ?"
[ERROR]: Unknown column 'DamageModifier' in 'field list'
DatabasePool wotlk_world NOT opened. There were errors opening the MySQL connections. Check your SQLDriverLogFile for specific errors.
Cannot connect to world database
127.0.0.1;3306;root;ascent;wotlk_world
The problem is:
[ERROR]: Unknown column 'DamageModifier' in 'field list'
It looks like your world DB is not up to date, so you need to update it properly. To do that you can either use the DB assembler script (bin/acore-db-asm) or manually import the missing sql files from data/sql/updates/db_world.
To make sure your DB is up to date, check the name of the last column of the table version_db_world of your world database. It should match the most recent sql file name in the directory data/sql/updates/db_world.
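For example, a quick way to run that check from the MySQL client (a minimal sketch; it assumes the standard wotlk_world layout described above):

USE wotlk_world;
-- list the columns of version_db_world; the name of the last column shown
-- should match the newest .sql file in data/sql/updates/db_world
SHOW COLUMNS FROM version_db_world;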
I recommend reading:
How to make sure that the DB is up to date
When I create the OpenStack server, I get the exception below:
Resource 7bed8adc-9ed9-49dc-b15e-6660e2fc3285 transitioned to failure state ERROR
My code is below:
server_args = {
    "name": server_name,
    "image_id": image_id,
    "flavor_id": flavor_id,
    "networks": [{"uuid": network.id}],
    "admin_password": admin_password,
}
try:
    server = user_conn.conn.compute.create_server(**server_args)
    server = user_conn.conn.compute.wait_for_server(server)
except Exception as e:  # here I catch the exception
    raise e
When I call create_server, my server_args data is below:
{'flavor_id': 'd4424892-4165-494e-bedc-71dc97a73202', 'networks': [{'uuid': 'da4e3433-2b21-42bb-befa-6e1e26808a99'}], 'admin_password': '123456', 'name': '133456', 'image_id': '60f4005e-5daf-4aef-a018-4c6b2ff06b40'}
My openstacksdk version is 0.9.18.
In the end, I found that the flavor was too big for the OpenStack compute node; after I changed it to a smaller flavor, the server was created successfully.
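For reference, a minimal sketch of how one might list the available flavors before creating the server (it reuses the user_conn object from the question and the standard openstacksdk flavor attributes):

for flavor in user_conn.conn.compute.flavors():
    # print each flavor's resource footprint (RAM in MiB, vCPUs, disk in GiB)
    # so you can pick one that actually fits the compute node
    print(flavor.name, flavor.ram, flavor.vcpus, flavor.disk)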
I'm using DSS 3.5.0 with PostgreSQL, and a set of operations in a request box is failing in a very peculiar way. I've successfully used request boxes containing thousands of operations in this same project, including operations very similar to those that fail. One of these large request boxes failed, and after spending some time looking for the operations that caused the problem, we were able to reduce it to a set of five operations.
The problem
Looking at the PostgreSQL logs, the query issued by one of the operations is not executed because it never reaches the database.
I'll call the operations O1, O2, O3, O4 and O5 and their queries Q1, Q2, Q3, Q4 and Q5. Playing with the request and checking the resulting database log, we ended up with:
Request box contains O1-O2-O3-O4-O5: database executes Q1-Q2-Q3-Q5
Request box contains O1-O2-O4-O5: database executes Q1-Q2-Q4-Q5
Request box contains O1-O2-O3-O4: database executes Q1-Q2-Q3-Q4
Request box contains O1-O2-O3-O4-O4-O5: database executes Q1-Q2-Q3-Q5
So, it looks weird and it doesn't seem to follow a clearly discernible pattern.
All operations perform correctly if sent separately to the DSS, or in two different request boxes. The exact nature of the operations doesn't seem to be directly linked to the problem because the same operations are used countless times in other scenarios. The queries are not especially long or complex.
Operation 1: updates a record in table A
Operation 2: deletes a record from table B
Operation 3: inserts a record in table B
Operation 4: inserts a record in table A
Operation 5: inserts a record in table B (same as operation 3)
Errors and logs
The actual error message issued by PostgreSQL for operation 5 is
ERROR: null value in column "element_id" violates not-null constraint
This is expected because operation 4 (the one that disappears) inserts a value that is later used to resolve element_id for operation 5.
The PostgreSQL log reports:
LOG: execute <unnamed>: BEGIN
LOG: execute <unnamed>: UPDATE public.project_element SET element_uuid=$1,location_id=$2,from_revit=$3,name=$4,type=$5,model=NULLIF($6,0),parent_element=(SELECT PE.ELEMENT_ID FROM PROJECT_ELEMENT PE WHERE PE.PROJECT_ID = $7 AND (PE.ELEMENT_ID = $8 OR (PE.ELEMENT_UUID = $9 AND PE.ELEMENT_UUID IS NOT NULL))) ,left_border=$10,right_border=$11 WHERE element_id=$12
DETAIL: parameters: $1 = '(element-uuid)', $2 = '85', $3 = '1', $4 = '(some-text)', $5 = '3', $6 = '0', $7 = '22', $8 = NULL, $9 = '(parent-uuid)', $10 = NULL, $11 = NULL, $12 = '9983'
LOG: execute <unnamed>: DELETE FROM ELEMENT_PROPERTY WHERE ELEMENT_ID = (SELECT PE.ELEMENT_ID FROM PROJECT_ELEMENT PE WHERE PE.ELEMENT_ID = $1 AND PE.PROJECT_ID = $2) AND NAME = $3
DETAIL: parameters: $1 = '9983', $2 = '22', $3 = 'num_ports'
LOG: execute <unnamed>: INSERT INTO public.element_property(name,value,type,element_id) VALUES($1,$2,$3,(SELECT PE.ELEMENT_ID FROM PROJECT_ELEMENT PE WHERE PE.PROJECT_ID = $4 AND (PE.ELEMENT_ID = $5 OR (PE.ELEMENT_UUID = $6 AND PE.ELEMENT_UUID IS NOT NULL))))
DETAIL: parameters: $1 = 'num_ports', $2 = '48', $3 = '0', $4 = '22', $5 = NULL, $6 = '(element-uuid)'
LOG: execute <unnamed>: INSERT INTO public.element_property(name,value,type,element_id) VALUES($1,$2,$3,(SELECT PE.ELEMENT_ID FROM PROJECT_ELEMENT PE WHERE PE.PROJECT_ID = $4 AND (PE.ELEMENT_ID = $5 OR (PE.ELEMENT_UUID = $6 AND PE.ELEMENT_UUID IS NOT NULL))))
DETAIL: parameters: $1 = 'port_num', $2 = '6', $3 = '0', $4 = '22', $5 = NULL, $6 = '(other-uuid)'
ERROR: null value in column "element_id" violates not-null constraint
DETAIL: Failing row contains (port_num, 6, 0, null).
STATEMENT: INSERT INTO public.element_property(name,value,type,element_id) VALUES($1,$2,$3,(SELECT PE.ELEMENT_ID FROM PROJECT_ELEMENT PE WHERE PE.PROJECT_ID = $4 AND (PE.ELEMENT_ID = $5 OR (PE.ELEMENT_UUID = $6 AND PE.ELEMENT_UUID IS NOT NULL))))
LOG: execute S_2: BEGIN
LOG: execute S_1: ROLLBACK
The DSS log starts with an exception, but I'm not sure whether it's really related to this problem. The following log goes from the start of the request box to the first time DSS complains about the error message returned from PostgreSQL; it complains multiple times after that.
DEBUG - {org.apache.axis2.transport.http.AxisServlet}
java.lang.NullPointerException
at javax.servlet.GenericServlet.getServletContext(GenericServlet.java:123)
at org.apache.axis2.transport.http.AxisServlet.createMessageContext(AxisServlet.java:715)
at org.apache.axis2.transport.http.AxisServlet$RestRequestProcessor.<init>(AxisServlet.java:819)
at org.apache.axis2.transport.http.AxisServlet.doPost(AxisServlet.java:227)
at org.wso2.carbon.core.transports.CarbonServlet.doPost(CarbonServlet.java:231)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:646)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.ui.filters.CSRFPreventionFilter.doFilter(CSRFPreventionFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.ui.filters.CRLFPreventionFilter.doFilter(CRLFPreventionFilter.java:59)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:61)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:57)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:159)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1074)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1739)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1698)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:744)
DEBUG - Input contentType (application/json) {org.apache.axis2.builder.BuilderUtil}
DEBUG - CharSetEncoding defaulted (UTF-8) {org.apache.axis2.builder.BuilderUtil}
DEBUG - [MessageContext: logID=f9462531f982d008b3e2aacd88bfd07f4a7e4905c354170e] Checking for Service using target endpoint address : /services/iims {org.apache.axis2.dispatchers.RequestURIBasedServiceDispatcher}
DEBUG - org.apache.axis2.i18n.resource::handleGetObject(servicefound) {org.apache.axis2.i18n.ProjectResourceBundle}
DEBUG - [MessageContext: logID=f9462531f982d008b3e2aacd88bfd07f4a7e4905c354170e] Found AxisService : iims {org.apache.axis2.engine.AbstractDispatcher}
DEBUG - Attempt to check for Operation using HTTP Location failed {org.apache.axis2.dispatchers.HTTPLocationBasedDispatcher}
DEBUG - [MessageContext: logID=f9462531f982d008b3e2aacd88bfd07f4a7e4905c354170e] Attempted to check for Operation using target endpoint URI, but the operation fragment was missing {org.apache.axis2.dispatchers.RequestURIBasedOperationDispatcher}
DEBUG - getAction (null) from org.apache.axis2.client.Options#279e70a {org.apache.axis2.client.Options}
DEBUG - SoapAction is (null) {org.apache.axis2.context.MessageContext}
DEBUG - createSOAPEnvelope using Builder (class org.apache.axis2.json.JSONOMBuilder) selected from type (application/json) {org.apache.axis2.transport.TransportUtils}
DEBUG - getAction (null) from org.apache.axis2.client.Options#279e70a {org.apache.axis2.client.Options}
DEBUG - SoapAction is (null) {org.apache.axis2.context.MessageContext}
DEBUG - [MessageContext: logID=f9462531f982d008b3e2aacd88bfd07f4a7e4905c354170e] Checking for Operation using Action : null {org.apache.axis2.dispatchers.ActionBasedOperationDispatcher}
DEBUG - [MessageContext: logID=f9462531f982d008b3e2aacd88bfd07f4a7e4905c354170e] Attempted to check for Operation using target endpoint URI, but the operation fragment was missing {org.apache.axis2.dispatchers.RequestURIBasedOperationDispatcher}
DEBUG - Axis operation is null {org.apache.axis2.json.gson.JSONMessageHandler}
DEBUG - No headers present corresponding to http://www.w3.org/2005/08/addressing {org.apache.axis2.handlers.addressing.AddressingInHandler}
DEBUG - No headers present corresponding to http://schemas.xmlsoap.org/ws/2004/08/addressing {org.apache.axis2.handlers.addressing.AddressingInHandler}
DEBUG - getAction (null) from org.apache.axis2.client.Options#279e70a {org.apache.axis2.client.Options}
DEBUG - SoapAction is (null) {org.apache.axis2.context.MessageContext}
DEBUG - [MessageContext: logID=f9462531f982d008b3e2aacd88bfd07f4a7e4905c354170e] Checking for Operation using Action : null {org.apache.axis2.dispatchers.ActionBasedOperationDispatcher}
DEBUG - getAction (null) from org.apache.axis2.client.Options#279e70a {org.apache.axis2.client.Options}
DEBUG - SoapAction is (null) {org.apache.axis2.context.MessageContext}
DEBUG - [MessageContext: logID=f9462531f982d008b3e2aacd88bfd07f4a7e4905c354170e] Checking for Operation using Action : null {org.apache.axis2.dispatchers.ActionBasedOperationDispatcher}
DEBUG - Get operation for request_box {org.apache.axis2.description.AxisService}
DEBUG - Found axis operation: org.apache.axis2.description.InOutAxisOperation#682d0c2c {org.apache.axis2.description.AxisService}
DEBUG - org.apache.axis2.i18n.resource::handleGetObject(operationfound) {org.apache.axis2.i18n.ProjectResourceBundle}
DEBUG - [MessageContext: logID=f9462531f982d008b3e2aacd88bfd07f4a7e4905c354170e] Found AxisOperation : request_box {org.apache.axis2.engine.AbstractDispatcher}
DEBUG - getAddressingRequirementParemeterValue: value: 'null' {org.apache.axis2.addressing.AddressingHelper}
DEBUG - [MessageContext: logID=f9462531f982d008b3e2aacd88bfd07f4a7e4905c354170e] isReplyRedirected: ReplyTo is null. Returning false {org.apache.axis2.addressing.AddressingHelper}
DEBUG - getAction (null) from org.apache.axis2.client.Options#112f42cb {org.apache.axis2.client.Options}
DEBUG - Old WSAAction is (null) {org.apache.axis2.context.MessageContext}
DEBUG - New WSAAction is (urn:request_boxResponse) {org.apache.axis2.context.MessageContext}
DEBUG - setAction Old action is (null) {org.apache.axis2.client.Options}
DEBUG - setAction New action is (urn:request_boxResponse) {org.apache.axis2.client.Options}
DEBUG - messageID is null. {org.apache.axis2.context.ConfigurationContext}
DEBUG - forceExpand: changing prefix from to {org.apache.axiom.om.impl.llom.OMSourcedElementImpl}
DEBUG - DXXATransactionManager.begin() {org.wso2.carbon.dataservices.core.description.xa.DSSXATransactionManager}
DEBUG - Creating data source connection {org.wso2.carbon.dataservices.core.description.config.SQLConfig}
ERROR - ERROR: null value in column "element_id" violates not-null constraint_ Detalhe: Failing row contains (port_num, 6, 0, null). (Sanitized) {org.wso2.carbon.dataservices.core.description.query.SQLQuery}
org.postgresql.util.PSQLException: ERROR: null value in column "element_id" violates not-null constraint
The implementation
This is the actual request box that fails (some field contents replaced to reduce noise):
{
  "request_box":{
    "update_project_element_operation":{
      "name":"(some-text)",
      "element_id":9983,
      "element_uuid":"(element-uuid)",
      "from_revit":1,
      "project_id":22,
      "parent_element_uuid":"(parent-uuid)",
      "type":3,
      "location_id":85,
      "model":0
    },
    "delete_element_property_operation":{
      "name":"num_ports",
      "element_id":9983,
      "project_id":22
    },
    "insert_element_property_operation":{
      "project_id":22,
      "element_uuid":"(element-uuid)",
      "name":"num_ports",
      "value":"48"
    },
    "insert_project_element_operation":{
      "name":"(this operation disappears)",
      "element_id":0,
      "element_uuid":"(other-uuid)",
      "from_revit":1,
      "project_id":22,
      "parent_element_uuid":"(element-uuid)",
      "type":10,
      "location_id":85,
      "model":0
    },
    "insert_element_property_operation":{
      "project_id":22,
      "element_uuid":"(other-uuid)",
      "name":"port_num",
      "value":"6"
    }
  }
}
I can provide detailed table, query and operation definitions if necessary. All operations have been used before, and each one of them works if issued separately or in two different request boxes. It seems to be an issue directly linked to DSS boxcarring.
Any ideas?
After a few weeks of investigation, including direct contact with WSO2 support, we concluded that this unusual problem was caused by the JSON-to-XML conversion inside DSS. This may be related to the fact that the request box representation in JSON can contain non-unique names (and, according to RFC 7159, the behavior in this case is unpredictable and implementation-defined). It should be noted that we also used a request box with thousands of repetitions of the same name without any visible problem, so it isn't a straightforward consequence of all non-unique names being incorrectly processed.
When we tried the same request box in XML, all operations were correctly executed. To avoid changing the application, we followed WSO2's advice and had the ESB convert the application-generated JSON to XML. Preliminary tests showed that the XML was correctly generated in this case; however, we decided to slightly adjust the JSON generator to issue an array of operation objects instead of an object containing members with non-unique names, to avoid the undefined behavior and the possibility of new, unpredictable problems in JSON parsing.
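For illustration, this is a sketch of the array-based shape we moved to (the "operations" wrapper name here is hypothetical; the exact structure accepted by DSS may differ):

{
  "request_box":{
    "operations":[
      { "insert_element_property_operation":{ "project_id":22, "element_uuid":"(element-uuid)", "name":"num_ports", "value":"48" } },
      { "insert_element_property_operation":{ "project_id":22, "element_uuid":"(other-uuid)", "name":"port_num", "value":"6" } }
    ]
  }
}

Since each array element wraps a single operation object, no two members of the same object share a name, and the JSON-to-XML conversion has nothing implementation-defined to resolve.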
WSO2 is aware of this problem and it may or may not be fixed in an upcoming release of DSS. Until then, the safer way to avoid request box surprises seems to be to use XML instead of JSON when sending transactions to DSS using request boxes.
I have an immutable map in my class. When I run my code in local mode, there is no problem and I can reach every key in the map. However, when I run my code in cluster mode, the nodes throw an error about not finding the key in the map.
What I've tried so far:
- Broadcast the immutable map over the cluster:
broadcast = sc.broadcast(my_immutable_map)
- Parallelize the map as a pair RDD:
my_map_rdd = sc.parallelize( my_immutable_map.toSeq)
When I examine the logs, I see a "key not found" exception.
My error stacktrace is as follows:
Driver stacktrace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 15.0 failed 4 times, most recent failure: Lost task 1.3 in stage 15.0 (TID 25, datanode1.big.com): java.util.NoSuchElementException: key not found: 905053199731
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:58)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at scala.collection.AbstractMap.apply(Map.scala:58)
at havelsan.CDRGenerator$.generate_random_target(CDRGenerator.scala:95)
at havelsan.CDRGenerator$$anonfun$main$2$$anonfun$6.apply(CDRGenerator.scala:167)
at havelsan.CDRGenerator$$anonfun$main$2$$anonfun$6.apply(CDRGenerator.scala:165)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1251)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1205)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Can you explain how Spark distributes maps and how it is possible that some nodes can't find some keys in this map, please? By the way, my Spark version is 1.6.0.
What am I missing?
UPDATE
This part is for initializing the map on the driver.
...
// read "key:val1,val2,..." lines from HDFS and build the map on the driver
var pd = sc.textFile("hdfs://...")
my_immutable_map = pd.map(line => line.split(":")).map{ line => (line(0), line(1).split(",")) }.collectAsMap
...
broadcast = sc.broadcast(my_immutable_map)
my_map_rdd = sc.parallelize( my_immutable_map.toSeq)
And this is the part where I get the error:
def my_func(key: String): String = {
  ...
  my_value = broadcast.value(key)
  ...
}
my_func is called inside a map as:
my_another_rdd.map{ line =>
  val key = line.split(",")(0)
  my_func(key)
}
The solution I found is to pass the broadcast value to the function as a parameter. Still, I couldn't find a solution for the parallelize method.
https://stackoverflow.com/a/34912887/4668959
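A minimal sketch of that fix, reusing the names from the question (the map's value type and the function body are assumptions made for illustration):

def my_func(bmap: scala.collection.Map[String, Array[String]], key: String): String = {
  // the map arrives as a plain parameter, so the closure no longer
  // captures the enclosing object that held the broadcast variable
  bmap.get(key).map(_.mkString(",")).getOrElse("")
}

my_another_rdd.map { line =>
  val key = line.split(",")(0)
  my_func(broadcast.value, key)  // dereference the broadcast and pass the value in
}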
I am using Apache Spark 1.5.1 and trying to connect to a local SQLite database named clinton.db. Creating a data frame from a table of the database works fine, but when I do some operations on the created object, I get the error below, which says "SQL error or missing database (Connection is closed)". The funny thing is that I get the result of the operation nevertheless. Any idea what I can do to solve the problem, i.e., avoid the error?
Start command for spark-shell:
../spark/bin/spark-shell --master local[8] --jars ../libraries/sqlite-jdbc-3.8.11.1.jar --classpath ../libraries/sqlite-jdbc-3.8.11.1.jar
Reading from the database:
val emails = sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:sqlite:../data/clinton.sqlite", "dbtable" -> "Emails")).load()
Simple count (fails):
emails.count
Error:
15/09/30 09:06:39 WARN JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
at org.sqlite.core.DB.newSQLException(DB.java:890)
at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
at org.apache.spark.TaskContextImpl$$anon$1.onTaskCompletion(TaskContextImpl.scala:60)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:79)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:77)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:77)
at org.apache.spark.scheduler.Task.run(Task.scala:90)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
res1: Long = 7945
I got the same error today, and the important line is just before the exception:
15/11/30 12:13:02 INFO jdbc.JDBCRDD: closed connection
15/11/30 12:13:02 WARN jdbc.JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
at org.sqlite.core.DB.newSQLException(DB.java:890)
at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)
So Spark succeeded in closing the JDBC connection, and then it failed to close the JDBC statement.
Looking at the source, close() is called twice:
Line 358 (org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD, Spark 1.5.1)
context.addTaskCompletionListener{ context => close() }
Line 469
override def hasNext: Boolean = {
  if (!finished) {
    if (!gotNext) {
      nextValue = getNext()
      if (finished) {
        close()
      }
      gotNext = true
    }
  }
  !finished
}
If you look at the close() method (line 443)
def close() {
  if (closed) return
you can see that it checks the variable closed, but that value is never set to true.
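For comparison, a sketch of the kind of guard one would expect there (not the actual Spark patch; the cleanup body is elided):

def close() {
  if (closed) return
  // ... close the statement, the result set and the connection here ...
  closed = true  // without this line the guard above never fires on the second call
}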
If I see it correctly, this bug is still present in master. I have filed a bug report.
Source: JDBCRDD.scala (line numbers differ slightly)