I am copying data from one Cosmos DB collection to another within the same account.
I am getting the following exception:
{"StatusCode":"DFExecutorUserError","Message":"Job failed due to
reason: Errors encountered in bulk update API execution. Number of
failures corresponding to exception of type:
java.lang.RuntimeException = 500; FAILURE: java.lang.RuntimeException:
Stored proc returned failure 404\n\tat
com.microsoft.azure.documentdb.bulkexecutor.internal.BatchUpdater$1.call(BatchUpdater.java:199)\n\tat
com.microsoft.azure.documentdb.bulkexecutor.internal.BatchUpdater$1.call(BatchUpdater.java:148)\n\tat
com.microsoft.azure.documentdb.repackaged.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)\n\tat
com.microsoft.azure.documentdb.repackaged.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)\n\tat
com.microsoft.azure.documentdb.repackaged.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)\n\tat
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
java.util.concurrent.ThreadPoo","Details":"Errors encountered in bulk
update API execution. Number of failures corresponding to exception of
type: java.lang.RuntimeException = 500; FAILURE:
java.lang.RuntimeException: Stored proc returned failure 404\n\tat
com.microsoft.azure.documentdb.bulkexecutor.internal.BatchUpdater$1.call(BatchUpdater.java:199)\n\tat
com.microsoft.azure.documentdb.bulkexecutor.internal.BatchUpdater$1.call(BatchUpdater.java:148)\n\tat
com.microsoft.azure.documentdb.repackaged.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)\n\tat
com.microsoft.azure.documentdb.repackaged.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)\n\tat
com.microsoft.azure.documentdb.repackaged.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)\n\tat
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
java.util.concurrent.ThreadPoolExecutor$Worker.run(Threa"}
Sharing the answer, as per a comment from the original poster:
It started to work after multiple retries.
When NebulaGraph Exchange imports Hive data, I get the following error:
Exception in thread "main" org.apache.spark.sql.AnalysisException: Table or view not found
Check whether the -h parameter was omitted from the command used to submit the NebulaGraph Exchange task and whether the database and table names are correct. Then run the user-configured exec statement in spark-sql to verify that the exec statement itself is correct.
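For example, a hedged sketch of that verification step (the statement, database, and table names are placeholders for whatever your exec configuration contains):
# Run the exec statement against the same Hive metastore that Exchange uses
# (hive-site.xml must be on Spark's conf path).
spark-sql -e "SELECT id, name FROM basketball.player LIMIT 10"
If this fails with the same "Table or view not found" error, the problem is in the exec statement or the Hive metastore configuration rather than in Exchange itself.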
I am using Sqoop 1.4.6 and Hadoop 2.7.1.
I am importing data from an Oracle DB using ojdbc6.jar.
It works fine, but sometimes I get the following error:
19/03/15 16:27:23 INFO mapreduce.Job: Task Id : attempt_1552649108375_0013_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLRecoverableException: IO Error: Connection reset
How do I resolve this issue?
Any help regarding this would be appreciated.
I found something for you; let me know if it helps:
This problem occurs primarily due to the lack of a fast random number generation device on the host where the map tasks execute.
Please refer to the Sqoop user guide for a detailed explanation:
https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html#_oracle_connection_reset_errors
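Roughly, the workaround described in that section is to point the mappers' JVM at the non-blocking /dev/urandom device. A hedged sketch (connection string, credentials, table, and target directory are placeholders for your own job):
sqoop import \
  -D mapred.child.java.opts="-Djava.security.egd=file:/dev/../dev/urandom" \
  --connect jdbc:oracle:thin:@//oracledb.example.com:1521/ORCL \
  --username myuser -P \
  --table MYSCHEMA.MYTABLE \
  --target-dir /user/hive/warehouse/mytable
The odd file:/dev/../dev/urandom path is deliberate: it works around the JDK special-casing the literal /dev/urandom path.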
I am running a Corda flow and receive the following error message:
java.lang.IllegalStateException: Was expecting to find transaction set
on current strand: Thread[Mock node 1 thread,5,main]
And:
Terminated by unexpected exception {} java.lang.AssertionError:
Unexpected task state (fiber parking or parked has no chance to to
call park): -2 at
co.paralleluniverse.fibers.RunnableFiberTask.park(RunnableFiberTask.java:213)
~[quasar-core-0.7.9-jdk8.jar:0.7.9] at
What is the cause of this issue?
This issue was caused by calling a custom method that takes a lambda from within my flow:
myMethod {
    subFlow(xyz)
}
If the call is converted to an ordinary method call that does not take a lambda, the error disappears.
This is due to an issue in how Quasar serialises Kotlin lambdas.
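For illustration, a minimal sketch of the conversion, assuming a Corda FlowLogic written in Kotlin (myMethod and xyz are the hypothetical names from the snippet above): move the subFlow call into an ordinary @Suspendable method and call it directly from the flow body instead of passing it inside a lambda.
import co.paralleluniverse.fibers.Suspendable

@Suspendable
override fun call() {
    // Call a plain method instead of passing a lambda to myMethod;
    // Quasar can instrument and serialise this correctly.
    runXyz()
}

@Suspendable
private fun runXyz() {
    subFlow(xyz)
}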
I have a simple SSIS package with an Excel source that dumps data into a SQL table.
It works fine in BIDS when run manually.
It also works when called from an ASP.NET application on my local machine.
When the same ASP.NET application was deployed on an IIS server, it gave me the following error:
The AcquireConnection method call to the connection manager "10.xxx.xx.xxx.<DB>.<User>" failed with error code 0xC0202009
Note:
Run64BitRuntime is set to false, as suggested in many posts for similar issues.
What could be the issue?
Error Log from SQL Job running the package:
SSIS Error Code DTS_E_OLEDB_NOPROVIDER_ERROR. The requested OLE DB provider Microsoft.ACE.OLEDB.12.0 is not registered. Error code: 0x00000000. An OLE DB record is available. Source: "Microsoft OLE DB Service Components" Hresult: 0x80040154 Description: "Class not registered". End Error
Error: 2015-06-02 14:26:22.79 Code: 0xC020801C Source: Data Flow Task Excel Source [1] Description: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "Excel Connection Manager" failed with error code 0xC0209302. There may be error messages posted before this with more information on why the AcquireConnection method call failed. End Error
Error: 2015-06-02 14:26:22.79 Code: 0xC0047017 Source: Data Flow Task SSIS.Pipeline Description: component "Excel Source" (1) failed validation and returned error code 0xC020801C. End Error
Error: 2015-06-02 14:26:22.79 Code: 0xC004700C Source: Data Flow Task SSIS.Pipeline Description: One or more component failed validation. End Error
Error: 2015-06-02 14:26:22.79 Code: 0xC0024107 Source: Data Flow Task Description: There were errors during task validation. End Error
DTExec: The package execution returned DTSER_FAILURE (1). Started: 2:26:21 PM Finished: 2:26:22 PM Elapsed: 1.031 seconds. The package execution failed. The step failed.,00:00:01,0,0,,,,0
Typically, when a package runs in BIDS and in Integration Services but fails as a job or when called from an external application, it is a permissions issue associated with the SSIS proxy. The logon the ASP.NET application runs under should also be set up as a credential, and that credential should then be applied to a proxy with permission to run the SSIS subsystem.
Apply the steps outlined in the following link:
http://www.mssqltips.com/sqlservertip/2163/running-a-ssis-package-from-sql-server-agent-using-a-proxy-account/
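For reference, a hedged T-SQL sketch of the credential/proxy setup that article walks through (the Windows account, object names, and password are placeholders; verify the SSIS subsystem id against msdb.dbo.syssubsystems on your instance):
-- Server-level credential mapped to the Windows account the package should run as
CREATE CREDENTIAL SSISRunAsCredential
    WITH IDENTITY = N'DOMAIN\SSISRunAsUser', SECRET = N'<password>';
GO

USE msdb;
GO

-- SQL Server Agent proxy that uses the credential
EXEC dbo.sp_add_proxy
    @proxy_name = N'SSISProxy',
    @credential_name = N'SSISRunAsCredential',
    @enabled = 1;

-- Grant the proxy the SSIS package execution subsystem (id 11 on most versions)
EXEC dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'SSISProxy',
    @subsystem_id = 11;
The job step that runs the package can then be configured to run as SSISProxy instead of the SQL Server Agent service account.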
I am using Solr 4.10.2. I tried to perform a shard split on my SolrCloud test cluster. It fails every time if the lock type is set to "native" or "simple".
Is that normal? I can perform shard splitting if the lock type is set to "single" or "none".
They advertise that shard splitting can be done while Solr is running, and I can hardly imagine poking around changing the lock type of a production server...
Here is the test environment:
1 shard, 2 nodes, 1 collection.
Initially the collection was empty. I added a few documents and verified that they were replicated. All worked.
I issued the split shard command:
server1:port/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1&async=myhandle
I verified that the operation had finished by calling:
server1:port/solr/admin/collections?action=REQUESTSTATUS&requestid=myhandle
The status was "complete".
Here is the log:
OverseerCollectionProcessor.processMessage : splitshard , {
"operation":"splitshard",
"shard":"shard1",
"collection":"mycollection",
"async":"myhandle"}
1/26/2015, 1:49:02 PM
ERROR
CoreContainer
Error creating core [mycollection_shard1_0_replica1]: Error opening new searcher
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:873)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:646)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1234)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1677)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:845)
... 9 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#/nfs/solr/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:89)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:753)
at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:279)
at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:111)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1528)
... 11 more
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:873)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:646)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1234)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1677)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:845)
... 9 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#/nfs/solr/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:89)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:753)
at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:279)
at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:111)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1528)
... 11 more
1/26/2015, 1:49:26 PM
ERROR
SolrIndexWriter
SolrIndexWriter was not closed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!
1/26/2015, 1:49:26 PM
ERROR
SolrIndexWriter
Error closing IndexWriter
java.lang.NullPointerException
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3230)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3203)
at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:907)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:984)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:954)
at org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:129)
at org.apache.solr.update.SolrIndexWriter.finalize(SolrIndexWriter.java:182)
at java.lang.System$2.invokeFinalize(System.java:1213)
at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:98)
at java.lang.ref.Finalizer.access$100(Finalizer.java:34)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:210)
I fixed the problem. Here is how:
When I created the SolrCloud environment, I used the -Dsolr.data.dir property to map the collection storage to a different file system, because I was running VMs with limited storage capacity. Once I removed this property, everything started working.
I think Solr tries to use the same solr.data.dir path for the new cores created by the shard split, which causes the lock problem.
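For context, a hedged sketch of the startup involved (the ZooKeeper address and paths are placeholders): with a single fixed solr.data.dir, every core on the node, including the sub-shard cores created by SPLITSHARD, resolves to the same index directory, so the second IndexWriter cannot obtain /nfs/solr/index/write.lock.
# Problematic startup: one fixed data dir shared by every core on this node
java -DzkHost=zk1:2181 -Dsolr.data.dir=/nfs/solr/index -jar start.jar

# After removing the override, each core keeps its own data dir under its instance dir
java -DzkHost=zk1:2181 -jar start.jar
If the index really has to live on a separate file system, a per-core dataDir setting (rather than one global solr.data.dir) avoids having every core share a single index path.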