JGit API exception - jgit

I am using the JGit API to clone and push code, but I am facing the problem below with some Git projects:
Exception in thread "main" org.eclipse.jgit.api.errors.JGitInternalException: Exception caught during execution of fetch command
at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:138)
at org.eclipse.jgit.api.CloneCommand.fetch(CloneCommand.java:175)
at org.eclipse.jgit.api.CloneCommand.call(CloneCommand.java:121)
at com.huawei.ccm.sync.service.GitSyncservice.cloneGit(GitSyncservice.java:139)
at com.huawei.test.Test.doSync(Test.java:146)
at com.huawei.test.Test.main(Test.java:132)
Caused by: org.eclipse.jgit.errors.TransportException: Packfile is truncated.
at org.eclipse.jgit.transport.BasePackFetchConnection.doFetch(BasePackFetchConnection.java:291)
at org.eclipse.jgit.transport.BasePackFetchConnection.fetch(BasePackFetchConnection.java:229)
at org.eclipse.jgit.transport.FetchProcess.fetchObjects(FetchProcess.java:225)
at org.eclipse.jgit.transport.FetchProcess.executeImp(FetchProcess.java:151)
at org.eclipse.jgit.transport.FetchProcess.execute(FetchProcess.java:113)
at org.eclipse.jgit.transport.Transport.fetch(Transport.java:1062)
at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:129)
... 5 more
Caused by: java.io.EOFException: Packfile is truncated.
at org.eclipse.jgit.transport.PackParser.fill(PackParser.java:1129)
at org.eclipse.jgit.transport.PackParser.access$000(PackParser.java:97)
at org.eclipse.jgit.transport.PackParser$InflaterStream.read(PackParser.java:1647)
at java.io.InputStream.read(InputStream.java:101)
at org.eclipse.jgit.transport.PackParser.whole(PackParser.java:974)
at org.eclipse.jgit.transport.PackParser.indexOneObject(PackParser.java:908)
at org.eclipse.jgit.transport.PackParser.parse(PackParser.java:486)
at org.eclipse.jgit.storage.file.ObjectDirectoryPackParser.parse(ObjectDirectoryPackParser.java:179)
at org.eclipse.jgit.transport.PackParser.parse(PackParser.java:448)
at org.eclipse.jgit.transport.BasePackFetchConnection.receivePack(BasePackFetchConnection.java:676)
at org.eclipse.jgit.transport.BasePackFetchConnection.doFetch(BasePackFetchConnection.java:284)
... 11 more
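For context, here is a minimal sketch of the kind of clone call that produces this trace (the repository URI and target directory are placeholders). The "Packfile is truncated" error generally means the pack stream ended before all the expected data arrived, so it usually points at the remote repository or the network rather than at the calling code.

import java.io.File;

import org.eclipse.jgit.api.Git;

public class CloneExample {
    public static void main(String[] args) throws Exception {
        // Clone the repository; the URI and target directory are placeholders.
        Git git = Git.cloneRepository()
                .setURI("https://example.com/some/project.git")
                .setDirectory(new File("/tmp/some-project"))
                .call();
        System.out.println("Cloned to " + git.getRepository().getDirectory());
        // Release the file handles held by the repository when done.
        git.getRepository().close();
    }
}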

Related

SFTP On New or Update throws "Pipe closed" error

I'm using the "On New or Update" source of the Mule 4 SFTP Connector to process files from an SFTP server directory. The process works fine; however, while reading the last file the SFTP connector throws the error shown below, and that file remains in the directory until the next scheduled run picks it up. The same thing happens with the last file of each new set of files.
Any thoughts on how to fix this issue?
ERROR:
11:20:45.315 05/04/2022 Worker-0 [MuleRuntime].uber.27: [sftp-demo-app].prcsACKFiles-Error-SuccessFlow.CPU_INTENSIVE #1648077b ERROR
event:c458bc90-cbbd-11ec-85e2-06a565d43154
********************************************************************************
Message : "org.mule.weave.v2.module.reader.ReaderParsingException: org.mule.runtime.api.exception.MuleRuntimeException - Exception was found trying to retrieve the contents of file /home/messages/file_8ddb7674.json
org.mule.runtime.api.exception.MuleRuntimeException: Exception was found trying to retrieve the contents of file /home/messages/file_8ddb7674.json
at org.mule.extension.sftp.internal.connection.SftpClient.exception(SftpClient.java:427)
at org.mule.extension.sftp.internal.connection.SftpClient.exception(SftpClient.java:423)
at org.mule.extension.sftp.internal.connection.SftpClient.getFileContent(SftpClient.java:349)
at org.mule.extension.sftp.internal.connection.SftpFileSystem.retrieveFileContent(SftpFileSystem.java:117)
at org.mule.extension.sftp.internal.SftpInputStream$SftpFileInputStreamSupplier.getContentInputStream(SftpInputStream.java:111)
at org.mule.extension.sftp.internal.SftpInputStream$SftpFileInputStreamSupplier.getContentInputStream(SftpInputStream.java:93)
at org.mule.extension.file.common.api.AbstractConnectedFileInputStreamSupplier.getContentInputStream(AbstractConnectedFileInputStreamSupplier.java:81)
at org.mule.extension.file.common.api.AbstractFileInputStreamSupplier.get(AbstractFileInputStreamSupplier.java:65)
at org.mule.extension.file.common.api.AbstractFileInputStreamSupplier.get(AbstractFileInputStreamSupplier.java:33)
at org.mule.extension.file.common.api.stream.LazyStreamSupplier.lambda$new$1(LazyStreamSupplier.java:29)
at org.mule.extension.file.common.api.stream.LazyStreamSupplier.get(LazyStreamSupplier.java:42)
at org.mule.extension.file.common.api.stream.AbstractNonFinalizableFileInputStream.lambda$createLazyStream$0(AbstractNonFinalizableFileInputStream.java:48)
at $java.io.InputStream$$EnhancerByCGLIB$$55e4687e.read(<generated>)
at org.apache.commons.io.input.ProxyInputStream.read(ProxyInputStream.java:102)
at org.mule.runtime.core.internal.streaming.bytes.AbstractInputStreamBuffer.consumeStream(AbstractInputStreamBuffer.java:111)
at com.mulesoft.mule.runtime.core.internal.streaming.bytes.FileStoreInputStreamBuffer.consumeForwardData(FileStoreInputStreamBuffer.java:239)
at com.mulesoft.mule.runtime.core.internal.streaming.bytes.FileStoreInputStreamBuffer.consumeForwardData(FileStoreInputStreamBuffer.java:202)
at com.mulesoft.mule.runtime.core.internal.streaming.bytes.FileStoreInputStreamBuffer.doGet(FileStoreInputStreamBuffer.java:125)
at org.mule.runtime.core.internal.streaming.bytes.AbstractInputStreamBuffer.get(AbstractInputStreamBuffer.java:93)
at org.mule.runtime.core.internal.streaming.bytes.BufferedCursorStream.assureDataInLocalBuffer(BufferedCursorStream.java:126)
at org.mule.runtime.core.internal.streaming.bytes.BufferedCursorStream.doRead(BufferedCursorStream.java:101)
at org.mule.runtime.core.internal.streaming.bytes.AbstractCursorStream.read(AbstractCursorStream.java:124)
at org.mule.runtime.core.internal.streaming.bytes.BufferedCursorStream.read(BufferedCursorStream.java:26)
at java.io.InputStream.read(InputStream.java:101)
at org.mule.runtime.core.internal.streaming.bytes.ManagedCursorStreamDecorator.read(ManagedCursorStreamDecorator.java:96)
at org.mule.weave.v2.el.SeekableCursorStream.read(MuleTypedValue.scala:306)
at org.mule.weave.v2.module.reader.UTF8StreamSourceReader.handleBOM(SeekableStreamSourceReader.scala:179)
at org.mule.weave.v2.module.reader.UTF8StreamSourceReader.readAscii(SeekableStreamSourceReader.scala:163)
at org.mule.weave.v2.module.json.reader.JsonTokenizer.$init$(JsonTokenizer.scala:21)
at org.mule.weave.v2.module.json.reader.indexed.IndexedJsonTokenizer.<init>(IndexedJsonTokenizer.scala:15)
at org.mule.weave.v2.module.json.reader.indexed.IndexedJsonParser.parser(IndexedJsonParser.scala:17)
at org.mule.weave.v2.module.json.reader.JsonReader.readValue(JsonReader.scala:40)
at org.mule.weave.v2.module.json.reader.JsonReader.doRead(JsonReader.scala:30)
at org.mule.weave.v2.module.reader.Reader.read(Reader.scala:35)
at org.mule.weave.v2.module.reader.Reader.read$(Reader.scala:33)
at org.mule.weave.v2.module.json.reader.JsonReader.read(JsonReader.scala:20)
at org.mule.weave.v2.el.MuleTypedValue.value(MuleTypedValue.scala:147)
at org.mule.weave.v2.model.values.wrappers.DelegateValue.valueType(DelegateValue.scala:17)
at org.mule.weave.v2.model.values.wrappers.DelegateValue.valueType$(DelegateValue.scala:16)
at org.mule.weave.v2.el.MuleTypedValue.valueType(MuleTypedValue.scala:177)
at org.mule.weave.v2.model.types.ObjectType$.accepts(Type.scala:1068)
Caused by: org.mule.extension.sftp.api.SftpConnectionException: Error occurred while trying to connect to host
... 112 more
Caused by: org.mule.runtime.api.connection.ConnectionException:
at org.mule.extension.sftp.api.SftpConnectionException.<init>(SftpConnectionException.java:38)
... 112 more
Caused by: org.mule.runtime.api.connection.ConnectionException:
... 112 more
Caused by: 4:
at com.jcraft.jsch.ChannelSftp.get(ChannelSftp.java:1540)
at com.jcraft.jsch.ChannelSftp.get(ChannelSftp.java:1290)
at org.mule.extension.sftp.internal.connection.SftpClient.getFileContent(SftpClient.java:347)
... 110 more
Caused by: java.io.IOException: Pipe closed
The "Pipe closed" error in SFTP indicates a communication failure that the SFTP connector cannot recover from, so the operation fails. I don't believe there is anything you can do about that. If you are using an older version of the connector, you might try a newer one, just in case.

Corda 3.3 and OpenJDK

Corda 3.3 is not working with OpenJDK version "1.8.0_202".
I'm able to build the CorDapp, start it, and list all the flows using the flow list command. However, when I try to run any flow in the shell, I get the following exception and stack trace:
[ERROR] 12:02:15+0530 [pool-8-thread-2] command.CRaSHSession.execute - Error while evaluating request 'flow start WhoAmI' flow start WhoAmI: exception: UndeclaredThrowableException
...
java.lang.reflect.UndeclaredThrowableException: null
...
Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError: java.lang.NoSuchFieldException: target
...
Caused by: java.lang.AssertionError: java.lang.NoSuchFieldException: target
...
Caused by: java.lang.NoSuchFieldException: target
Starting with Corda 4, support for OpenJDK has been added.
See here for more details: https://docs.corda.net/docs/corda-os/4.4/getting-set-up.html

CorDapp with java.lang.ClassNotFoundException: sun.security.x509.X509CertImpl with IBM JDK 8

I got the following error message when I tried to build a CorDapp sample. If it is caused by the certificate missing the header and footer lines, how can I get a correct certificate?
Logs can be found in : /opt/corda/samples/cordapp-example/workflows-java/build/nodes/PartyA/logs
java.io.IOException: Sequence tag error
... ...
at net.corda.node.Corda.main(Corda.kt:13)
Exception in thread "main" java.lang.NoClassDefFoundError: sun.security.x509.X509CertImpl
at net.corda.serialization.internal.DefaultWhitelist.<clinit>(DefaultWhitelist.kt:65)
... ...
at net.corda.cliutils.CordaCliWrapperKt.start(CordaCliWrapper.kt:72)
at net.corda.node.Corda.main(Corda.kt:13)
Caused by: java.lang.ClassNotFoundException: sun.security.x509.X509CertImpl
Take a look at the Corda getting started guide here.
Java 8 is required, and the following JDKs have been tested:
Oracle JDK
Amazon Corretto
Red Hat’s OpenJDK
Note: OpenJDK builds usually exclude JavaFX, which Corda's GUI tools require.

SolrCloud shard splitting fails when lock type is native or simple

I am using Solr 4.10.2. I tried to perform a shard split on my SolrCloud test cluster. It fails every time the lock type is set to "native" or "simple".
Is that normal? I can perform shard splitting if the lock type is set to "single" or "none".
Shard splitting is advertised as something that can be done while Solr is running, and I can hardly imagine poking around changing the lock type of a production server...
Here is the test environment:
1 shard, 2 nodes, 1 collection.
Initially the collection was empty. I added a few documents and verified that they had been replicated. All worked.
I issued the split shard command:
server1:port/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1&async=myhandle
Then I verified that the operation had finished by calling
server1:port/solr/admin/collections?action=REQUESTSTATUS&requestid=myhandle
The status was "complete".
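For illustration, a plain-Java sketch of that flow, issuing the asynchronous split and then polling REQUESTSTATUS over HTTP (host, port, collection name, and request handle are the placeholder values from the commands above):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class SplitShardStatus {
    // Call the Collections API and return the response body; host and port are placeholders.
    static String call(String query) throws Exception {
        URL url = new URL("http://server1:8983/solr/admin/collections?" + query);
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), "UTF-8"))) {
            for (String line; (line = in.readLine()) != null; ) {
                body.append(line).append('\n');
            }
        }
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        // Kick off the asynchronous shard split.
        System.out.println(call(
                "action=SPLITSHARD&collection=mycollection&shard=shard1&async=myhandle"));
        // Poll the async task status a few times; the response reports running/completed/failed.
        for (int i = 0; i < 10; i++) {
            Thread.sleep(5000);
            System.out.println(call("action=REQUESTSTATUS&requestid=myhandle"));
        }
    }
}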
Here is the log:
OverseerCollectionProcessor.processMessage : splitshard , {
"operation":"splitshard",
"shard":"shard1",
"collection":"mycollection",
"async":"myhandle"}
1/26/2015, 1:49:02 PM
ERROR
CoreContainer
Error creating core [mycollection_shard1_0_replica1]: Error opening new searcher
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:873)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:646)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1234)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1677)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:845)
... 9 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#/nfs/solr/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:89)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:753)
at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:279)
at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:111)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1528)
... 11 more
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:873)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:646)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1234)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1677)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:845)
... 9 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#/nfs/solr/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:89)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:753)
at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:279)
at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:111)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1528)
... 11 more
1/26/2015, 1:49:26 PM
ERROR
SolrIndexWriter
SolrIndexWriter was not closed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!
1/26/2015, 1:49:26 PM
ERROR
SolrIndexWriter
Error closing IndexWriter
java.lang.NullPointerException
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3230)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3203)
at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:907)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:984)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:954)
at org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:129)
at org.apache.solr.update.SolrIndexWriter.finalize(SolrIndexWriter.java:182)
at java.lang.System$2.invokeFinalize(System.java:1213)
at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:98)
at java.lang.ref.Finalizer.access$100(Finalizer.java:34)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:210)
I fixed the problem. Here is how:
When I created the SolrCloud environment, I used the -Dsolr.data.dir property to map the collection storage to a different file system, because I was running VMs with limited storage capacity. Once I removed this property, everything started working.
I think Solr tries to use the same solr.data.dir path for the new cores created by the shard split, causing the lock problem.

NoAvailableHostsException while starting Titan with Cassandra

I have configured Cassandra 1.2.2 in cluster mode, but when I start Titan with my Cassandra configuration, it shows the following exception:
Exception in thread "main" java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:268)
at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:226)
at com.thinkaurelius.titan.diskstorage.Backend.<init>(Backend.java:97)
at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:406)
at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:62)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:40)
at com.thinkaurelius.titan.tinkerpop.rexster.RexsterTitanServer.start(RexsterTitanServer.java:70)
at com.thinkaurelius.titan.tinkerpop.rexster.RexsterTitanServer.startDaemon(RexsterTitanServer.java:80)
at com.thinkaurelius.titan.tinkerpop.rexster.RexsterTitanServer.main(RexsterTitanServer.java:118)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:257)
... 8 more
Caused by: com.thinkaurelius.titan.diskstorage.TemporaryStorageException: Temporary failure in storage backend
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.ensureKeyspaceExists(AstyanaxStoreManager.java:394)
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.<init>(AstyanaxStoreManager.java:164)
... 13 more
Caused by: com.netflix.astyanax.connectionpool.exceptions.NoAvailableHostsException: NoAvailableHostsException: [host=None(0.0.0.0):0, latency=0(0), attempts=0] No hosts to borrow from
at com.netflix.astyanax.connectionpool.impl.RoundRobinExecuteWithFailover.<init>(RoundRobinExecuteWithFailover.java:31)
at com.netflix.astyanax.connectionpool.impl.TokenAwareConnectionPoolImpl.newExecuteWithFailover(TokenAwareConnectionPoolImpl.java:74)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:229)
at com.netflix.astyanax.thrift.ThriftClusterImpl.executeSchemaChangeOperation(ThriftClusterImpl.java:131)
at com.netflix.astyanax.thrift.ThriftClusterImpl.addKeyspace(ThriftClusterImpl.java:252)
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.ensureKeyspaceExists(AstyanaxStoreManager.java:389)
... 14 more
Please help me find what is wrong.
Sometimes there are Astyanax connection issues on AWS. If you are running this on AWS, try changing the configuration to:
storage.backend = cassandrathrift
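For example, here is a minimal sketch of opening Titan from Java with the Thrift backend set explicitly (the hostname is a placeholder; the same keys can also go in the Titan/Rexster properties file). The trace above shows host=None(0.0.0.0), so also make sure storage.hostname points at a Cassandra node the Titan host can actually reach:

import org.apache.commons.configuration.BaseConfiguration;

import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;

public class TitanCassandraExample {
    public static void main(String[] args) {
        BaseConfiguration conf = new BaseConfiguration();
        // Use the Thrift-based backend instead of "cassandra" (Astyanax).
        conf.setProperty("storage.backend", "cassandrathrift");
        // Placeholder address; replace with a reachable Cassandra node.
        conf.setProperty("storage.hostname", "10.0.0.5");

        TitanGraph graph = TitanFactory.open(conf);
        System.out.println("Connected: " + graph);
        graph.shutdown();
    }
}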
