Trouble writing to HttpServletResponse ServletOutputStream - spring-mvc

I have a Spring application running on Tomcat 8.5 that queries our database, generates a PDF file, and then serves it to the user via the ServletOutputStream:
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
// PDDocument from org.apache.pdfbox.pdmodel
this.document.save(byteArrayOutputStream);
int pdfDocumentSize = byteArrayOutputStream.size();
response.setContentLength(pdfDocumentSize);
ServletOutputStream resOutputStream = response.getOutputStream();
byteArrayOutputStream.writeTo(resOutputStream);
response.flushBuffer();
byteArrayOutputStream.close();
resOutputStream.close();
this.document.close();
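For illustration, the stream-handling part of the snippet can be isolated into a small helper that works against any OutputStream. The class and method names here are my own, and the servlet-specific header calls are shown only as comments, since HttpServletResponse is not available outside a container:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class PdfResponseWriter {

    // Sketch of the write path above, isolated from the servlet API.
    // In the real controller you would set headers before writing, e.g.:
    //   response.setContentType("application/pdf");
    //   response.setContentLength(buffer.size());
    public static void writeBuffered(ByteArrayOutputStream buffer, OutputStream target)
            throws IOException {
        buffer.writeTo(target); // single bulk copy, no intermediate array
        target.flush();         // push the bytes out before the stream is closed
    }
}
```

One thing worth checking in the original: no Content-Type is ever set, so adding response.setContentType("application/pdf") before writing is a cheap improvement. Also, explicitly closing response.getOutputStream() is usually unnecessary, since the container manages that stream.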
For the vast majority of users, this works fine. However, we have received reports that some users are having a lot of trouble downloading this file. They click the download link, and the page sits for about 3 minutes before failing (they get an ERR_EMPTY_RESPONSE message). Occasionally, the file opens properly after this wait period.
Unfortunately, we are personally unable to replicate this issue.
According to our logs, the file is generated correctly, and its size is relatively small (105,631 bytes, or about 0.1 MB), so I don't think the problem is due to size.
The logs also show a
org.apache.catalina.connector.ClientAbortException: java.io.IOException: Broken pipe
which would normally indicate that the user aborted the download. However, we watched the user replicate this issue via screencast, and no action was taken that would abort it. This user was on a Mac. We don't have any Macs to test on here, but we do have an iPad, and the iPad was able to download the file successfully.
What could be causing this?
Update: We've heard from another user that they are also experiencing this problem. They are on Windows and using Chrome. They have tried different browsers, but all behave the same.
In addition, the exception appears in the log about 10 minutes after the time the error was reported.
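Whatever the client-side cause turns out to be, the server can at least treat the broken pipe as an expected event rather than an unhandled exception. A sketch with my own method name and return convention (Tomcat's ClientAbortException extends IOException, so a plain IOException catch covers it without a Tomcat-specific dependency):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class AbortAwareWriter {

    // Returns true if the whole buffer reached the stream, false if the
    // peer dropped the connection mid-write (the broken-pipe case).
    public static boolean tryWrite(ByteArrayOutputStream buffer, OutputStream target) {
        try {
            buffer.writeTo(target);
            target.flush();
            return true;
        } catch (IOException clientGone) {
            // Log at a low level; there is no client left to send an error page to.
            return false;
        }
    }
}
```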

Related

Failing to upload JSON file through Chrome to Firebase Database

This is really frustrating. I have a 104 MB JSON file that I want to upload to my Firebase database through the web front end, but after a random period of time (I've timed it; it's not constant, anywhere from 2 to 20 seconds) I get the error:
There was a problem contacting the server. Try uploading your file again.
So I do try again, and it just keeps failing. I've uploaded files nearly this big before, and the limit for stored data in the Realtime Database is 1 GB, so I'm not even close to that. Why does it keep failing to upload?
This is the error I get in chrome dev tools:
Failed to load resource: net::ERR_CONNECTION_ABORTED
https://project.firebaseio.com/.upload?auth=eyJhbGciOiJIUzI1NiIsInR5cCI6…Q3NiwiYWRtaW4iOnRydWUsInYiOjB9.CihvjvLSlx43nOBynAJeyibkBRtygeRlG4Yo1t3jKVA
Failed to load resource: net::ERR_CONNECTION_ABORTED
If I click on the link that shows up in the error, it's a page with the words POST request required.
Turns out the answer is to ignore the web importer entirely and use firebase-import. It worked perfectly the first time and took only a minute to upload the whole JSON. It also has merging capabilities.
Using firebase-import as the accepted answer suggested, I get error:
Error: WRITE_TOO_BIG: Data to write exceeds the maximum size that can be modified with a single request.
However, with the firebase-cli I was successful in deleting my entire database:
firebase database:remove /
It seems to traverse down your database tree automatically to find subtrees that are under the request size limit, then issues multiple delete requests for them. It takes some time, but it definitely works.
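The splitting behavior described above can be sketched abstractly. Everything here is a stand-in for illustration, not the Firebase CLI's actual implementation: an in-memory map plays the database tree, a node count plays the request size limit, and "issuing a request" just records the path:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ChunkedDelete {
    static final int LIMIT = 4; // hypothetical per-request node limit

    // Counts this node plus everything under it.
    static int countNodes(Object node) {
        if (!(node instanceof Map)) return 1; // leaf value
        int total = 1;
        for (Object child : ((Map<?, ?>) node).values()) total += countNodes(child);
        return total;
    }

    // Records one "delete request" per path; recursion splits oversized subtrees.
    public static void deleteChunked(String path, Object node, List<String> requests) {
        if (countNodes(node) <= LIMIT || !(node instanceof Map)) {
            requests.add(path); // small enough to delete in a single request
            return;
        }
        for (Map.Entry<?, ?> e : ((Map<?, ?>) node).entrySet()) {
            deleteChunked(path + "/" + e.getKey(), e.getValue(), requests);
        }
        requests.add(path); // finally remove the now-empty parent node
    }
}
```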
You can also import via a json file:
firebase database:set / data.json
I'm unsure if firebase database:set supports merging.

inputstream is closed error while uploading zip file through jsch to sftp site

While uploading a zip file to SFTP, we are getting the below error. The same code works fine for another application. We are using jsch-0.1.44.jar for the SFTP connection.
java.io.IOException: inputstream is closed
at com.jcraft.jsch.ChannelSftp._put(ChannelSftp.java:571)
at com.jcraft.jsch.ChannelSftp.put(ChannelSftp.java:431)
at com.jcraft.jsch.ChannelSftp.put(ChannelSftp.java:398)
Caused by: java.io.IOException: inputstream is closed
at com.jcraft.jsch.ChannelSftp.fill(ChannelSftp.java:2326)
at com.jcraft.jsch.ChannelSftp.header(ChannelSftp.java:2350)
at com.jcraft.jsch.ChannelSftp.checkStatus(ChannelSftp.java:1923)
at com.jcraft.jsch.ChannelSftp._put(ChannelSftp.java:559)
... 6 more
I searched Stack Overflow and many other sources on the internet for an answer. I found two commonly cited causes, neither of which applied to my specific issue:
1) This exception usually means the connection was closed abruptly. Look at the logs on the server to see if there was an error.
2) The remote path was being opened twice in the code. Even though no channel is explicitly closed, opening the remote path a second time closes the existing path/channel, and this exception is raised.
After doing some proof-of-concept work, none of the changes we made to the code had any impact. One of the things we looked at was passing the InputStream object as a parameter to the method that actually calls ChannelSftp.put:
this.channelSftp.put(inputstream, strFileName);
Instead of passing the InputStream in from another method, we rewrote the code to create it inside the method where put is called. This had no impact. We also tried uploading a file to the SFTP site through code; the same error was thrown.
Later we found that there was no code issue at all: even a manual upload was failing. That prompted us to dig further into the SFTP engagement details, where we found that the filename format we were using was not what the SFTP server had been configured to accept. Once we matched the expected filename format, the issue was resolved.
I've recently encountered a similar issue; in my case it was a problem logging on to the remote machine using JSch.
When I tried to connect to the machine manually, I found that the password had expired and the server was prompting for a new one on login. JSch was able to connect and authenticate, but once connected it could not get any further, which explains why it surfaced as an inputstream failure rather than an authentication failure.
I know this is an old question, but for anyone else in the same position trawling the web for an answer, it might just be a simple cause like this.
Nikola and I were facing this issue as well. It was not a problem with permissions or passwords.
We are using the CachingSessionFactory. The sessions were being closed arbitrarily by the server or by the client application. We described our solution in another Stack Overflow question: Sftp File Upload Fails.
I also had a similar issue, and want to add that it was not related to code but to access rights. I was connecting with a user that could connect via SSH but could not send files over SFTP. Changing to a user with the proper access rights solved the issue.
I tried the above options, and none of them worked for my use case.
In our use case, we upload files to SFTP using JSch in parallel. Most of the uploads succeeded, but a few would fail with the above error.
On investigating the implementation, I found that the library returns a connection from the pool without checking whether it has been closed. In our case, one thread was closing the connection before another concurrent thread could finish its upload, causing the IOException: inputstream is closed error.
We enabled the testSession feature on the CachingSessionFactory. It returns a pooled session only if the session is not closed and can actually list files on the SFTP server. This solved our issue.
Posting the sample code here:
DefaultSftpSessionFactory sessionFactory = new DefaultSftpSessionFactory();
sessionFactory.setHost("host");
sessionFactory.setPort(22); // 22 is the SFTP default; 21 is FTP
sessionFactory.setUser("userName");
sessionFactory.setPassword("password");
CachingSessionFactory<ChannelSftp.LsEntry> cachingSessionFactory = new CachingSessionFactory<>(sessionFactory, 10);
cachingSessionFactory.setSessionWaitTimeout(2000);
cachingSessionFactory.setTestSession(true);

Get CloudQueueMessage in Windows Azure worker role throws IO Exception in Azure Emulator

As can be seen from the image below, I'm doing a pretty standard bit of code in a Worker Role to get a message from a queue in Azure Storage, but it's throwing an IOException with text saying the specified registry key does not exist.
CloudQueueMessage message = queue.GetMessage(TimeSpan.FromSeconds(30));
is the code that's causing the problem; sometimes it works fine and other times it doesn't. Very strange.
Have you set the connection string in your local config file to use UseDevelopmentStorage=true?
From a quick Google search, this error can also crop up if you have a mismatch between the Microsoft.WindowsAzure.StorageClient (v1.7) and Microsoft.WindowsAzure.Storage (v2.0) libraries referenced in your application.

Issue running ASPX page using Scheduled Task

I have a scheduled task set up to run Scan.aspx every 3 minutes in IE7. Scan.aspx reads data from 10 files in sequence. These files are constantly being updated. The values from the file are inserted into a database.
Sporadically, the value being read is truncated or distorted. For example, if the value in the file was "Hello World", random entries such as "Hello W", "Hel", etc. will be in the database. The timestamps on these entries appear completely random. Sometimes at 1:00 am, sometimes at 3:30 am. And some nights, this doesn't occur at all.
I'm unable to reproduce this issue when I debug the code. So I know under "normal" circumstances, the code executes correctly.
UPDATE:
Here is the aspx codebehind (in Page_Load) to read a text file (this is called for each of the 10 text files):
Dim filename As String = location
If File.Exists(filename) Then
    Using MyParser As New FileIO.TextFieldParser(filename)
        MyParser.TextFieldType = FileIO.FieldType.Delimited
        MyParser.SetDelimiters("~")
        Dim currentrow As String()
        Dim valueA, valueB As String
        While Not MyParser.EndOfData
            Try
                currentrow = MyParser.ReadFields()
                valueA = currentrow(0).ToUpper
                valueB = currentrow(1).ToUpper
                ' insert values as a record into the DB if it does not exist already
            Catch ex As Exception
            End Try
        End While
    End Using
End If
Any ideas why this might cause issues when running multiple times throughout the day (via scheduled task)?
First implement a Logger such as Log4Net in your ASP.NET solution and Log method entry and exit points in your Scan.aspx as well as your method for updating the DB. There is a chance this may provide some hint of what is going on. You should also check the System Event Log to see if any other event is associated with your failed DB entries.
ASP.NET is not the best fit for this scenario, especially when paired with a Windows scheduled task; this is not a robust design. A more robust system would run on a timer inside a Windows Service application. Your code for reading the files and updating the DB could be ported across. If you have access to the server and can install a Windows Service, make sure you add logging to the Windows Service too!
Make sure you read the How to Debug below
The Windows Service Applications intro on MSDN has further links to:
How to: Create Windows Services
How to: Install and Uninstall Services
How to: Start Services
How to: Debug Windows Service Applications
Walkthrough: Creating a Windows Service Application in the Component Designer
How to: Add Installers to Your Service Application
Regarding your follow-up comment about the apparent random entries that sometimes occur at 1:00 am and 3:30 am, you should:
Investigate the IIS Log for the site when these occur and find out what hit(visited) the page at that time.
Check if there is an indexing service on the server which is visiting your aspx page.
Check if anti-virus software is installed and ascertain whether it is visiting your aspx page or impacting the ASP.NET cache; this can cause compilation issues such as file locks on pages in the ASP.NET cache (a scenario for ASP.NET websites as opposed to ASP.NET web applications), which could give weird behavior.
Find out if the truncated entries coincide with the times the files are updated: cross-reference your DB entry timestamps or logger timestamps with the times the files are updated.
Update your logger to log the entire contents of the file being read, to verify you haven't got a 'junk-in > junk-out' scenario. Be careful with disk space on the server when running this for one night.
Find out when the App Pool that your web app runs under is recycled and cross-reference this with the times of your truncated entries; you can do this with web.config alone via ASP.NET Health Monitoring.
Your code is written with a 'try catch' that will bury errors. If you are not going to do something useful with your caught error then do not catch it. Handle your edge cases in code, not a try catch.
See this try-catch question on this site.
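If the truncated reads turn out to coincide with the writers updating the files, one common mitigation is to skip any file modified too recently and pick it up on the next scheduled run. The page here is VB.NET, but the pattern is language-neutral; this sketch is in Java purely for illustration, and the quiet-window length is an assumption to tune per environment:

```java
import java.io.File;

public class QuiescenceCheck {

    // Heuristic guard against parsing a file mid-write: treat the file as
    // safe only if its last-modified timestamp is at least quietMillis in
    // the past. A writer that keeps touching the file pushes the timestamp
    // forward, so the reader naturally waits it out until the next run.
    public static boolean isQuiescent(File f, long quietMillis, long nowMillis) {
        return f.exists() && (nowMillis - f.lastModified()) >= quietMillis;
    }
}
```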

Error when running TcmReindex.exe

I am currently trying to get search working in my Tridion 2011 installation. I read in another article that I should run the TcmReIndex.exe tool in the Tridion/bin folder to re-index all my sites. I tried this, and it failed with a message box giving the following details:
Unable to get list of Publication items.
Unable to Intialize TDSE object.
The wait operation timed out
Connection Timeout Expired. The timeout period elapsed while attempting to consume the pre-login handshake acknowledgement. This could be because the pre-login handshake failed or the server was unable to respond back in time. The duration spent while attempting to connect to this server was - [Pre-Login] initialization=21054; handshake=35;
The wait operation timed out
A database error occurred while executing Stored Procedure "EDA_TRUSTEES_GETTRUSTEEETOKEN"
I have four fairly large publications (100 000+ items in total) which I am trying to index.
Any ideas?
Whenever I get "Unable to Intialize TDSE object." errors, I typically write a small test script in VBScript and try running it on the CMS server. While this does not directly solve the problem, it often gives some insight into the issue by logging information in the Event Viewer. Try creating a test.vbs file as follows and running it:
Set tdse = CreateObject("TDS.TDSE")
tdse.initialize()
msgbox(tdse.User.Description)
Set tdse = Nothing
If it throws any errors, please let me know, and it may help us solve the problem. If it gives you a popup with your user description, then I am completely barking up the wrong tree.
I haven't come to anything conclusive, but it seems my issue may have been a temporary one, as it just started working. I did increase all timeouts in Tridion MMC > Timeout Settings to 100 times their original values, but I suspect this wasn't the issue: when it works, the connection is almost instant.
If anyone else has this issue:
Restart the computer the content manager is installed on, try again.
Wait an hour or two, try again.
Increase timeouts, try again.
I've run the process a few more times and it seems to be working correctly.
