"inputstream is closed" error while uploading a zip file through JSch to an SFTP site

While uploading a zip file to SFTP, we are getting the error below. The same code works fine for another application. We are using jsch-0.1.44.jar for the SFTP connection.
java.io.IOException: inputstream is closed
at com.jcraft.jsch.ChannelSftp._put(ChannelSftp.java:571)
at com.jcraft.jsch.ChannelSftp.put(ChannelSftp.java:431)
at com.jcraft.jsch.ChannelSftp.put(ChannelSftp.java:398)
Caused by: java.io.IOException: inputstream is closed
at com.jcraft.jsch.ChannelSftp.fill(ChannelSftp.java:2326)
at com.jcraft.jsch.ChannelSftp.header(ChannelSftp.java:2350)
at com.jcraft.jsch.ChannelSftp.checkStatus(ChannelSftp.java:1923)
at com.jcraft.jsch.ChannelSftp._put(ChannelSftp.java:559)
... 6 more

I searched Stack Overflow and many other sources on the internet for an answer.
I found two commonly cited causes, neither of which turned out to apply to my specific issue:
1) This exception usually means the connection was closed abruptly. Look at the logs on the server to see if there was an error.
2) The remote path is being opened twice in the code. Even though no channel is explicitly closed, opening the remote path a second time closes the existing path/channel and raises this exception.
After doing a POC, none of the changes we made to the code had any impact. One of the things we looked at was the InputStream object being passed as a parameter into the method where the put method of ChannelSftp is actually called:
this.channelSftp.put(inputstream, strFileName);
Instead of passing the InputStream in from another method, we changed the code to create the InputStream inside the method where put is called. This had no impact either; uploading a file to the SFTP site through code still threw the same error.
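For illustration, here is a minimal sketch of that refactoring, assuming an already-connected ChannelSftp instance; the method and variable names are illustrative, not from our actual code:
import java.io.FileInputStream;
import java.io.InputStream;
import com.jcraft.jsch.ChannelSftp;

// Build the InputStream inside the same method that calls put(), instead of
// receiving it as a parameter. Assumes channelSftp is already connected.
private void upload(ChannelSftp channelSftp, String localPath, String remoteFileName) throws Exception {
    try (InputStream inputstream = new FileInputStream(localPath)) {
        channelSftp.put(inputstream, remoteFileName);
    }
}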
What we eventually found was that there was no code issue at all: even a manual upload was failing. This prompted us to dig further into the details of this SFTP engagement, and we found that the FILENAME format we were using was not what the SFTP server had been configured to accept. Once we matched the filename format, the issue was resolved.

I've recently encountered an issue similar to this; in my case it was an issue when logging on to the remote machine using JSch.
When trying to connect to the machine manually, I found that the password had expired and the server was prompting for a new one on login. JSch was able to connect and authenticate, but once connected it could not get any further. This explained why it surfaced as an input stream failure and not an authentication failure.
I know this is an old question, but for anyone else in the same position trawling the web for an answer, it might just be a simple cause like this.

Nikola and I were facing this issue as well. It was not a problem with permissions or passwords or anything similar.
We are using the CachingSessionFactory. The sessions were being closed arbitrarily by the server OR by the client application. We described our solution in another Stack Overflow answer: Sftp File Upload Fails

I also had a similar issue. I want to add that it was not related to code but to access rights: I connected with a user that could connect via SSH but could not send files over SFTP. Switching to a user with the proper access rights solved the issue.

I tried the options above, and none of them worked for my use case.
In my use case, we upload files to SFTP using JSch in parallel. Most of the uploads were successful, but a few would fail with the above error.
On investigating the implementation, I found that the library returns a connection from the pool without verifying that it is still open. In our case, one thread was closing the connection before another concurrent thread could finish its upload, causing the "IOException: inputstream is closed" error.
We enabled the testSession feature on the CachingSessionFactory. It then only returns a pooled session if the session is not closed and can actually list files on the SFTP server. This solved our issue.
Here is the sample code:
import com.jcraft.jsch.ChannelSftp;
import org.springframework.integration.file.remote.session.CachingSessionFactory;
import org.springframework.integration.sftp.session.DefaultSftpSessionFactory;

DefaultSftpSessionFactory sessionFactory1 = new DefaultSftpSessionFactory();
sessionFactory1.setHost("host");
sessionFactory1.setPort(22); // SFTP default port
sessionFactory1.setUser("userName");
sessionFactory1.setPassword("password");
CachingSessionFactory<ChannelSftp.LsEntry> cachingSessionFactory = new CachingSessionFactory<>(sessionFactory1, 10);
cachingSessionFactory.setSessionWaitTimeout(2000);
cachingSessionFactory.setTestSession(true);

Related

Artifactory Users Management not loading

I'm trying to open the Artifactory Users Management page, following the Admin->Security->Users tab.
Then I'm getting the following error:
Any idea what might be causing it? Also, which log can I check for this? I couldn't find anything yet.
The server error generally indicates there is a problem fetching the user details from Artifactory. This can happen for any of the following reasons:
1) You have a high volume of users and the request is timing out.
2) You might have created a username with a special character which is not allowed (via the REST API or some other method).
3) There is an issue with the backend database.
The best place to start troubleshooting is the request log; a good, valid entry looks like the one below:
20200715164402|104|REQUEST|165.225.104.49|admin|GET|/ui/users|HTTP/1.1|200|0
Next, check the artifactory.log file for a Java stack trace, or check catalina.out under the tomcat/logs directory.

Alfresco share ClientAbortException

Sometimes when we open a folder, Alfresco shows a spinning wheel and never opens the folder. The log has the exception below.
2016-03-08 11:45:40,652 INFO [webscripts.connector.RemoteClient] [http-bio-8080-exec-494] Exception calling (GET) http://localhost:8080/alfresco/s/slingshot/doclib/treenode/site/test/documentLibrary/Books/science?children=true&max=-1&alf_ticket=TICKET_400a73c20348346eed011695af270f837f27a654
2016-03-08 11:45:40,652 INFO [webscripts.connector.RemoteClient] [http-bio-8080-exec-494] Error status 500 null
ClientAbortException: java.net.SocketException: Connection reset
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:413)
If I curl the above URL or open it directly in a web browser, I get the JSON response successfully.
I am using only Alfresco Share and no other client. localhost:8080 works perfectly fine in most cases except this one.
Can anyone please tell me what the issue is and why the connection is being closed or the ClientAbortException is occurring?
Mostly this is a timeout issue, and you'll need active monitoring on your Alfresco & Share environment to see how Alfresco is running.
An easy check is to install some Java monitoring or use JMeter to load-test the system and see how it responds under different loads.
Mostly the outcome is more CPU/RAM for Alfresco :).
As Tahir Malik mentioned above, the issue is related to performance.
The ClientAbort error itself occurs when the client (in this case, Share) times out or the user cancels a download. The message in the log is of type INFO. More details here: https://issues.alfresco.com/jira/browse/ALF-20349
If you are on SSO and using Alfresco Enterprise 5.2.3 or 5.2.4, there is a chance that you may hit a similar bug, which is discussed in the Alfresco Forum. However, this particular bug would not show the ClientAbortException.

Spring integration sftp move remote file issue

I'm using the inbound-channel-adapter from Spring Integration to retrieve files over SFTP from a remote server. Everything works fine.
But I have an additional requirement: after a file is received on the local side, that file needs to be moved to a "send" directory on the remote server.
The SFTP outbound gateway has the appropriate method for that move action, but my problem is when to call it.
Situation: 10 files on remote server, 0 on local server
When I start my application it will receive all 10 files from the remote server and write them to my local file system. Perfect.
Situation: 1 file on remote server, 10 on local server
In this situation the remote file is received, but for every file on the local file system the receive method of the QueueChannel is also called.
Example log from one file: (file1.zip)
18:12:52.118 [task-scheduler-1] INFO o.s.i.file.FileReadingMessageSource - Created message: [[Payload File content=C:\Downloads\sftpTest\file1.zip][Headers=...]
18:12:52.119 [task-scheduler-1] DEBUG o.s.i.e.SourcePollingChannelAdapter - Poll resulted in Message: [Payload File content=C:\Downloads\sftpTest\file1.zip][Headers=...]
18:12:52.119 [task-scheduler-1] DEBUG o.s.integration.channel.QueueChannel - preSend on channel 'fromChannel', message: [Payload File content=C:\Downloads\sftpTest\file1.zip][...]
18:12:52.119 [task-scheduler-1] DEBUG o.s.integration.channel.QueueChannel - postSend (sent=true) on channel 'fromChannel', message: [Payload File content=C:\Downloads\sftpTest\file1.zip][Headers=...]
18:12:52.119 [main] DEBUG o.s.integration.channel.QueueChannel - postReceive on channel 'fromChannel', message: [Payload File content=C:\Downloads\sftpTest\file1.zip][Headers=......]
So even when a file is not physically retrieved from the remote server, the channel.receive() method will still receive a message with that file as its payload.
This confuses me, because I can't determine from the message whether the file was already on the local file system or was just retrieved from the remote server.
I experimented with a custom org.springframework.messaging.support.ChannelInterceptorAdapter, a FileFilter, and a ServiceActivator, but the problem remains.
My application will process high volumes, so sending the received file back to the required directory on the remote server is not an option. Simply trying to move the file remotely for every message that is received locally is not an option either, since it would clutter the log files with exceptions about not being able to move the file; in a real error situation the problem would then go undetected.
One solution might be a hook in the copyFileToLocalDirectory method of org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer. There is a check there to decide whether the remote file should be deleted, and that loop is only executed for files that were actually transferred from the remote server. My attempts to override this method and add my move behaviour did not succeed, since Spring has already instantiated the classes that handle this.
So what is the best way to achieve this? I know the problem is probably located between my keyboard and my chair, but I've run out of options and any help is highly appreciated.
Thanks a lot,
Frank
You would probably be better off using MGET and an outbound gateway to retrieve the files, instead of using the inbound adapter which, as you say, is two-stage: synchronize, then emit messages for the files in the local directory (unless you use a persistent file list filter, in which case you'll only see "new" files).
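As a rough illustration of that approach (not the only way to wire it up), here is a minimal sketch using the Spring Integration Java DSL, assuming a Spring Integration 5.x-style setup and an existing SessionFactory<ChannelSftp.LsEntry> bean; the bean names, directories, and polling interval are made up for the example:
import java.io.File;

import com.jcraft.jsch.ChannelSftp;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.file.remote.gateway.AbstractRemoteFileOutboundGateway;
import org.springframework.integration.file.remote.session.SessionFactory;
import org.springframework.integration.sftp.dsl.Sftp;

@Configuration
public class SftpMgetFlowConfig {

    @Bean
    public IntegrationFlow mgetFlow(SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory) {
        return IntegrationFlows
                // Periodically emit the remote path pattern that MGET should fetch.
                .fromSupplier(() -> "/remote/inbox/*",
                        e -> e.poller(Pollers.fixedDelay(30000)))
                // MGET produces one message whose payload is the list of files fetched
                // on this call, instead of separate messages for files already local.
                .handle(Sftp.outboundGateway(sftpSessionFactory,
                                AbstractRemoteFileOutboundGateway.Command.MGET, "payload")
                        .localDirectory(new File("C:/Downloads/sftpTest"))
                        .autoCreateLocalDirectory(true))
                // Downstream, after processing each file, a second gateway using
                // Command.MV can move the remote copy to the "send" directory.
                .channel("processFilesChannel")
                .get();
    }
}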

Get CloudQueueMessage in Windows Azure worker role throws IO Exception in Azure Emulator

As can be seen from the image below I'm doing a pretty standard bit of code in a Worker Role to get a message from a queue in Azure Storage but it's throwing this IOException with text saying the specified registry key does not exist.
CloudQueueMessage message = queue.GetMessage(TimeSpan.FromSeconds(30));
is the code that's causing the problem; sometimes it works fine and other times it doesn't. Very strange.
Have you set the connection string in your local config file to use UseDevelopmentStorage=true?
From a quick google, this error can also crop up if you have a mismatch between the Microsoft.WindowsAzure.StorageClient (v1.7) and Microsoft.WindowsAzure.Storage (v2.0) libraries referenced in your application.

Error when running TcmReindex.exe

I am currently trying to get search working in my Tridion 2011 installation. I read in another article that I should run the TcmReIndex.exe tool in the Tridion/bin folder to re-index all my sites. I tried this, and it failed with a message box giving the following details:
Unable to get list of Publication items.
Unable to Intialize TDSE object.
The wait operation timed out
Connection Timeout Expired. The timeout period elapsed while attempting to consume the pre-login handshake acknowledgement. This could be because the pre-login handshake failed or the server was unable to respond back in time. The duration spent while attempting to connect to this server was - [Pre-Login] initialization=21054; handshake=35;
The wait operation timed out
A database error occurred while executing Stored Procedure "EDA_TRUSTEES_GETTRUSTEEETOKEN"
I have four fairly large publications (100 000+ items in total) which I am trying to index.
Any ideas?
Whenever I get "Unable to Intialize TDSE object." errors, I typically write a small test script using VBScript, and try running it on the CMS server. Whilst this does not directly solve the problem, it often gives some insight into the issue by logging information in the event viewer. Try creating a test.vbs file as follows and running it:
Set tdse = CreateObject("TDS.TDSE")
tdse.initialize()
msgbox(tdse.User.Description)
Set tdse = Nothing
If it throws any errors, please let me know, and it may help us solve the problem. If it gives you a popup with your user description, then I am completely barking up the wrong tree.
I haven't come to anything conclusive, but it seems like my issue may have been a temporary one, as it just started working. I did increase all the timeouts in Tridion MMC > Timeout Settings to 100 times their original values, but I suspect that this wasn't the issue; when it works, the connection is almost instant.
If anyone else has this issue:
1) Restart the computer the Content Manager is installed on and try again.
2) Wait an hour or two and try again.
3) Increase the timeouts and try again.
I've run the process a few more times and it seems to be working correctly.
