size of remote file is not known at fetch run time - unix

When I ran a fetch command, the following message was output:
"size of remote file is not known"
Is this an error? Is there an option that makes it go away? Or is it safe to ignore?

This is the FreeBSD fetch command, right?
That is not an error, just a warning. I don't think there's an option to suppress that warning, though.

This is because the HTTP server doesn't send the Content-Length header with the response. As a result, the client doesn't know in advance how long the file is, and has to assume that it ends when the server closes the connection, with the side effect that if the connection drops prematurely, you'll end up with an incomplete download without knowing it.
This doesn't sound very good, but is in fact quite usual practice on the web, especially for dynamic content generated by scripts.
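You can check for yourself whether a server sends the header. Here is a minimal sketch in Python (the URL is just a placeholder):

import urllib.request

# Issue a HEAD request so we get only the response headers, not the body
req = urllib.request.Request("http://www.example.com/file.bin", method="HEAD")
with urllib.request.urlopen(req) as resp:
    length = resp.headers.get("Content-Length")

if length is None:
    print("size of remote file is not known")  # the situation fetch warns about
else:
    print(f"remote file is {length} bytes")

Note that some servers answer HEAD differently than GET (and dynamic endpoints may omit Content-Length either way), so treat the result as a hint rather than a guarantee.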

Here is one way to check from a script. This is PHP, where getRemoteFileSize is a user-defined helper, not a built-in:
// File size of a remote image
echo "Weberdev Logo Size : " . getRemoteFileSize('http://www.weberdev.com/images/BlueLogo150x45.gif');

Related

Uploading larger files with User-Agent python-requests/2.2.1 results in RemoteDisconnected

Using the Python library requests to upload larger files, I get the error RemoteDisconnected('Remote end closed connection without response').
However, it works if I change the library's default User-Agent to something like "Mozilla/5.0".
Does anybody know the reason for this behaviour?
Edit: Only happens with Property X-Explode-Archive: true
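For reference, the working upload with the overridden User-Agent looked roughly like this (the Artifactory URL and file name are placeholders):

import requests

# Stream the file from disk rather than reading it into memory
with open("large-archive.zip", "rb") as f:
    resp = requests.put(
        "https://artifactory.example.com/repo/large-archive.zip",  # placeholder
        data=f,
        headers={
            "User-Agent": "Mozilla/5.0",     # instead of the default python-requests/2.2.1
            "X-Explode-Archive": "true",     # the property that triggers the failure
        },
    )
print(resp.status_code)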
Is there any specific timeout pattern you could highlight in this case?
For example, does it time out after 60 seconds every time (that sort of thing)?
I would suggest checking the logs from all the components configured with the Artifactory instance, such as the reverse proxy and the embedded Tomcat. As the issue is specific to large files, correlating the timeout pattern with the timeouts configured on all of those entities should give a hint towards the cause.

NServiceBus MessageDeserializationException on DataBusProperty<byte[]>

We have an NServiceBus 6 environment with a number of services that send files between each other using DataBusProperty over a custom SqlDataBus : IDataBus.
This works fine in NSB6 with the built-in JSON serializer, but broke after we moved to NSB7 and the NewtonsoftSerializer.
Removing DataBusProperty from our classes and just using byte[] works fine. We also tried changing the data bus to FileShareDataBus but got the same exception:
NServiceBus.MessageDeserializationException: An error occurred while attempting to extract logical messages from incoming physical message c7b5cd47-c1b7-4610-9f6c-aa7800cc9b64 --->
Newtonsoft.Json.JsonReaderException: Error reading bytes. Unexpected token: StartObject. Path 'Data.Key', line 1, position 68.
This fails even if a service is sending messages to itself. We can also see the files written to the file store, whether on SQL or the file share, so they're serializing fine.
An example payload from the error queue is
{"ExecutionId":"1db85105-a71c-4b29-87da-9b7ae6518c1c","Data":{"Key":"2019-06-26_13\\6a2b63c7-12b0-46dd-8b92-f1fc743d27c1","HasValue":true}}
How can we get this to deserialize in NSB7+NewtonsoftSerializer when it works fine in NSB6+JsonSerializer?
Thanks
I just spent about 8 hours trying to figure out what was going on, and realized that, for whatever reason, NSB7 wants a parameterless constructor and settable properties. I am going back to Particular to see if this change is expected, but I expect we will have to adjust all of our message classes to fit that paradigm.
Although data bus properties should work, there is an alternative: stream attachments via send options:
https://docs.particular.net/nservicebus/messaging/attachments-sql
Depending on the use case, using streams might be a more efficient approach.

Quectel BG96 MQTT publish error

I'm trying to publish my data to the ThingsBoard server. I use these AT commands:
AT+QIACT=1
OK
AT+QMTOPEN=1,"demo.thingsboard.io",1883
OK
AT+QMTCONN=1,"demo.thingsboard.io","MY_ACCESS_TOKEN",""
OK
AT+QMTPUB=1,0,0,0,"v1/devices/me/telemetry"
>{"temperature":35.00,"humidity":80.00} // MY_POST_DATA This line hanging my module
All AT commands response is ok But i finally enter MY_POST_DATA the module doesn't provide no response hanging the previous command.. and i check my ThinksBoard data never post telemetry..
Please help any one how can i fix this problem and publish MQTT server.
Step 1: Get hold of the official AT command documentation for the modem (a Quectel BG96, I assume?). It should document how the AT+QMTPUB command behaves and what it expects; everything else is just guessing. The manufacturer should provide this documentation, and if they don't, you should demand it.
...
Step 873, when you have exhausted absolutely all possible ways of getting hold of the official AT command documentation for the modem: you can try my guess that the command behaves like other commands that read arbitrary-length user data, most notably AT+CMGS (which sends SMS messages), and expects a Ctrl-Z (ASCII value 26) as an end-of-data indicator.
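If that guess is right, driving the publish from a script would look something like this (a sketch using pyserial; the serial port, baud rate, and payload are assumptions, not from the Quectel documentation):

import serial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=5)  # assumed port and baud rate

# Open the publish prompt; the module should answer with ">"
ser.write(b'AT+QMTPUB=1,0,0,0,"v1/devices/me/telemetry"\r')
ser.read_until(b">")

# Send the payload, then Ctrl-Z (0x1A) as the guessed end-of-data indicator
ser.write(b'{"temperature":35.00,"humidity":80.00}' + b"\x1a")
print(ser.read_until(b"+QMTPUB:"))  # wait for the publish result code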
+QMTPUB: 1,0,0 simply means that the BG96 has successfully published and your broker (ThingsBoard) has also acknowledged publication of the message.
If you can't see data on the broker, then please check whether the topic you are publishing to is correct.
It may be that you are publishing to another topic (or a different path).
Ask ThingsBoard for help regarding the proper topic.

inputstream is closed error while uploading zip file through jsch to sftp site

While uploading a zip file to SFTP, we get the error below. The same code works fine for another application. We are using jsch-0.1.44.jar for the SFTP connection.
java.io.IOException: inputstream is closed
at com.jcraft.jsch.ChannelSftp._put(ChannelSftp.java:571)
at com.jcraft.jsch.ChannelSftp.put(ChannelSftp.java:431)
at com.jcraft.jsch.ChannelSftp.put(ChannelSftp.java:398)
Caused by: java.io.IOException: inputstream is closed
at com.jcraft.jsch.ChannelSftp.fill(ChannelSftp.java:2326)
at com.jcraft.jsch.ChannelSftp.header(ChannelSftp.java:2350)
at com.jcraft.jsch.ChannelSftp.checkStatus(ChannelSftp.java:1923)
at com.jcraft.jsch.ChannelSftp._put(ChannelSftp.java:559)
... 6 more
I searched Stack Overflow and many other sources on the internet for an answer.
I found two commonly cited causes, neither of which was the cause of my specific issue:
1) This exception usually means the connection was closed abruptly. Look at the logs on the server to see if there was an error.
2) The remote path is being opened twice in the code. Even if no channel is explicitly closed, when the remote path is opened a second time, the existing path/channel gets closed and this exception is raised.
After doing some proof-of-concept work, none of the changes we made to the code had any impact. One thing we looked at was passing the InputStream object as a parameter to the method where the put method of ChannelSftp is actually called:
this.channelSftp.put(inputstream, strFileName);
Instead of passing the InputStream in from another method, we rewrote the code to create the InputStream inside the method where put is called. This had no impact.
We also tried uploading a file to the SFTP site through code; the same error was thrown.
Later we found that there was no code issue at all: even a manual upload was failing. This prompted us to dig further into the SFTP engagement details, and we found that the filename format we were using was not what the SFTP server was configured to accept. When we matched the filename format, the issue was resolved.
I recently encountered an issue similar to this; in my case it was a problem logging on to the remote machine using JSch.
When trying to connect to the machine manually, I found that the password had expired and the server was prompting for a new one on login. JSch was able to connect and authenticate, but once connected it could get no further. This explains why it surfaced as an input stream failure and not an authentication failure.
I know this is an old question, but for anyone else in the same position trawling the web for an answer, it might just be a simple thing like this.
Nikola and I were facing this issue as well. It was not a problem of permissions or passwords.
We are using the CachingSessionFactory. The sessions were being closed arbitrarily by the server or by the client application. We described our solution in another Stack Overflow post: Sftp File Upload Fails
I also had a similar issue, and want to add that it was not related to code but to access rights. I connected with a user that could connect via SSH but could not send files over SFTP. Switching to a user with the proper access rights solved the issue.
I tried the options above, and none of them worked for my use case.
In my use case, we upload files to SFTP using JSch in parallel. Most of the uploads succeeded, but a few failed with the above error.
On investigating the implementation, I found that the library returns a connection from the pool without checking whether it is closed. In our case, one thread was closing the connection before another concurrent thread could finish its upload, causing the IOException: inputstream is closed error.
We enabled the testSession feature on the CachingSessionFactory. It only returns a pooled connection if that connection is not closed and can actually list files on the SFTP server. This solved our issue.
Here is the sample code:
// Session factory holding the SFTP connection details
DefaultSftpSessionFactory sessionFactory = new DefaultSftpSessionFactory();
sessionFactory.setHost("host");
sessionFactory.setPort(22); // SFTP typically listens on port 22
sessionFactory.setUser("userName");
sessionFactory.setPassword("password");

// Pool up to 10 sessions; setTestSession(true) makes the factory verify that
// a pooled session is still alive before handing it out
CachingSessionFactory<ChannelSftp.LsEntry> cachingSessionFactory = new CachingSessionFactory<>(sessionFactory, 10);
cachingSessionFactory.setSessionWaitTimeout(2000);
cachingSessionFactory.setTestSession(true);

Is there a way to force a FTP client to issue an ALLO command before transfer?

I'm developing an FTP server, and I need the file size before the STOR happens. I have seen that there is a command called ALLO, and I was wondering if there is a way to force the client to issue that command, because the client, of course, knows the file size beforehand.
Cheers.
I think @Rob Adams is going in the right direction, but I disagree with sending 552, which says you are aborting the transfer; I think holding the request until the client sends ALLO is a more useful approach. Reading through RFC 959:
This (ALLO) command may be required by some servers to reserve
sufficient storage to accommodate the new file to be
transferred...
Section 4.2 lists the valid format of replies; you can display the error on the first line and elaborate on the ALLO requirement on the second line.
Furthermore, Section 4.2.2 lists this reply: 350 Requested file action pending further information.
It seems reasonable that if your server receives a STOR before receiving an ALLO, it should reply 350 and hold the transfer until the session times out, the connection closes, or the client sends an ALLO.
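For what it's worth, a cooperative client can issue ALLO explicitly before the transfer. A minimal sketch with Python's ftplib (host, credentials, and file name are placeholders):

import ftplib
import os

path = "upload.bin"  # placeholder file
ftp = ftplib.FTP("ftp.example.com")  # placeholder host
ftp.login("user", "password")

# Announce the upcoming file size to the server before the STOR
ftp.sendcmd(f"ALLO {os.path.getsize(path)}")
with open(path, "rb") as f:
    ftp.storbinary(f"STOR {path}", f)
ftp.quit()

Of course, the question is precisely that most clients won't do this unprompted, which is why the answer above suggests holding the STOR with a 350 on the server side.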
Just a guess: you could return 552 Exceeded storage allocation (for current directory or dataset) for the STOR.
