I am a beginner with the WebDAV/HTTP protocol. I want to transfer files across a network using WebDAV. Raw camera image data files received at the source have to be placed on a WebDAV server (in a folder). These files are then copied by a remote client via WebDAV. My questions are below.
Is it necessary to use WebDAV/HTTP GET/POST/PUT method calls on the server machine to place the images there, or will a normal Linux/QNX copy command work?
If we do not need to go through WebDAV just to copy the files, how can we attach properties to them, such as file name and size? Are these supplied to the remote client automatically by the WebDAV server using OS file system calls?
Thank you in advance for your answers
I am working on an API that needs to download a file from server A and upload it to server B on the same network. It's for internal use. Each file will have multiple versions and will need to be uploaded to server B multiple times, and all versions of the same file will share the same file name. This is my first time dealing with file manipulation, so please bear with me if my question sounds ignorant. Can I use HttpClient.PostAsync for the uploading part? Or can I just use Stream.CopyToAsync if it's OK to simply copy it over? Thanks!
Stream.CopyToAsync copies one stream to another within the memory of the same machine.
In your case, you can use HttpClient.PostAsync, but on the other server there has to be some API that receives the stream content and saves it to disk.
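The question uses .NET's HttpClient, but the pattern is the same in any HTTP stack: stream the download from server A straight into the POST to server B, so the whole file is never held in memory. A minimal sketch using Java's built-in java.net.http.HttpClient (the URLs and the upload endpoint are placeholders, and it assumes server B exposes an endpoint that accepts the raw body and writes it to disk):

import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FileRelay {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // 1. Download from server A as a stream (not fully buffered in memory).
        HttpRequest download = HttpRequest.newBuilder(
                URI.create("https://server-a.internal/files/report.bin")).GET().build();
        HttpResponse<InputStream> source =
                client.send(download, HttpResponse.BodyHandlers.ofInputStream());

        // 2. Re-upload the same stream to server B's receiving endpoint.
        HttpRequest upload = HttpRequest.newBuilder(
                URI.create("https://server-b.internal/upload?name=report.bin"))
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofInputStream(source::body))
                .build();
        HttpResponse<String> result =
                client.send(upload, HttpResponse.BodyHandlers.ofString());
        System.out.println("Upload finished with status " + result.statusCode());
    }
}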
There is a torrent file in which a web seed is configured. Most files download normally, but when downloading some files (text\american.ini) the connection to the server suddenly terminates and the download stops. You can check this by selecting only this file for download when adding the torrent. At the same time, this file downloads normally from a browser. What could be causing this? Tested with uTorrent and libtorrent.
You can download the torrent file here and check it yourself.
There are two different kinds of Web Seeds, BEP 19 and BEP 17, and both assume that the server is configured to handle torrent clients. Your torrent has a BEP 19 link, which is supposed to point to a file or to a directory that has the same name as the torrent, and that directory should contain the files in the torrent.
Your torrent name looks like this:
files/licence.txt
and your Web Seed looks like this:
https://website.com/projects/crmp/
It's not working because the Web Seed URL is wrong.
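As a rough illustration (the torrent name below is a placeholder), a BEP 19 client combines a web seed URL that ends in a slash with the torrent name and the file path, so it would request something like:

https://website.com/projects/crmp/<torrent-name>/files/licence.txt

and that exact path has to exist on the server for the web seed to work.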
The problem was that FileZilla, when transferring some files to the server over FTP, modified them, so the torrent considered these files different. Solution: change the transfer mode from ASCII to binary.
My scenario is as follows:
One Java program uploads various files to an SFTP location.
My requirement is that as soon as a file is uploaded by that program, I need to download it, also in Java. The files can be around 100 MB in size. I am looking for a Java API that can help with this. I don't even know the names of the files, but I can match them with a regular expression. The same file can be uploaded by the other program periodically. Since the file size is large, I need to wait until the file has been uploaded completely.
I used JSch to download files, but I don't understand how to poll using JSch.
Polling
All you can do is keep listing the remote directory periodically until you find a new file. There's no better way with SFTP. For that, you obviously use ChannelSftp.ls().
Regarding selecting files matching certain pattern, see:
JSch ChannelSftp.ls - pass match patterns in java
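A minimal polling sketch with JSch (the host, credentials, directories, and file-name pattern are placeholders, and real code should verify the host key rather than disabling the check):

import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import java.util.HashSet;
import java.util.Set;
import java.util.Vector;

public class SftpPoller {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "sftp.example.com", 22);
        session.setPassword("password");
        session.setConfig("StrictHostKeyChecking", "no"); // demo only
        session.connect();

        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();

        Set<String> alreadyDownloaded = new HashSet<>();
        while (true) {
            @SuppressWarnings("unchecked")
            Vector<ChannelSftp.LsEntry> entries =
                    (Vector<ChannelSftp.LsEntry>) sftp.ls("/upload");
            for (ChannelSftp.LsEntry entry : entries) {
                String name = entry.getFilename();
                // Skip directories and files we have already fetched;
                // the regex is whatever your file names look like.
                if (!entry.getAttrs().isDir()
                        && name.matches("data_.*\\.bin")
                        && !alreadyDownloaded.contains(name)) {
                    sftp.get("/upload/" + name, "/local/incoming/" + name);
                    alreadyDownloaded.add(name);
                }
            }
            Thread.sleep(10_000); // poll every 10 seconds
        }
    }
}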
Waiting until the upload is complete
Again, there's no support for this in widespread implementations of SFTP.
For details, see my answer at:
SFTP file lock mechanism.
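One common workaround (a heuristic only, not something the SFTP protocol guarantees) is to poll the remote file's size and download it only once it has stopped growing; if you control the uploader, a more reliable convention is to upload under a temporary name and rename the file when finished. A sketch of the size check with JSch:

// Returns true if the file's size has not changed over the given interval,
// which we take as a hint that the upload has finished.
static boolean looksComplete(ChannelSftp sftp, String path, long waitMillis)
        throws Exception {
    long sizeBefore = sftp.stat(path).getSize();
    Thread.sleep(waitMillis);
    long sizeAfter = sftp.stat(path).getSize();
    return sizeBefore == sizeAfter;
}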
I'm developing an application using the Adobe Flex 4.5 SDK in which the user should be able to export multiple files bundled into one zip file. I think I need to take the following steps to perform this task:
Create a temporary folder on the server for the user who requested the download. Since the user is anonymous, I have to read state/session information to identify them.
Copy all the requested files into the temporary folder on the server
Zip the copied files
Download the zip file from the server to the client machine
I was wondering if anybody knows of any best practices or sample code for this task.
Thanks
The ByteArray class has some methods for compressing, but this is more for data transport, not for packaging up multiple files.
I don't like saying things are impossible, but I will say that this should be done on the server side. Depending on your server architecture, I would suggest sending the binary files to a server script that can package the files for you.
A quick Google search for zipping files in your preferred server-side language should give you some sample scripts to get started.
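What that server script looks like depends on your backend; as one hedged example, if the server side happens to be Java, java.util.zip can stream a zip straight to the client (the file paths and output stream here are placeholders, e.g. an HTTP response stream sent with Content-Type: application/zip):

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipExport {
    // Streams the requested files into a single zip on the given output
    // stream, so no temporary folder or intermediate zip file is needed.
    static void writeZip(OutputStream out, List<Path> files) throws IOException {
        try (ZipOutputStream zip = new ZipOutputStream(out)) {
            for (Path file : files) {
                zip.putNextEntry(new ZipEntry(file.getFileName().toString()));
                Files.copy(file, zip);
                zip.closeEntry();
            }
        }
    }
}

Streaming directly into the response also sidesteps steps 1 to 3 above, since nothing has to be copied or kept on disk per user.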
I have a legacy system that produces files visible over WebDAV as its output. I'd like to trigger a BizTalk orchestration receive port when a file matching a filter appears, much like the standard File adapter, but for WebDAV.
I found the BizTalk Scheduled Task Adapter, which can pull in a file by HTTP, but it looks abandoned, poorly documented, and out of date.
So, how is it done? Can I use the standard HTTP adapter perhaps?
If you're able to access the WebDAV share via a UNC path from the BizTalk server, the File adapter should do the trick.
Have you tried assigning a drive letter to the WebDAV folder?
http://en.wikipedia.org/wiki/WebDAV
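For example, on a Windows BizTalk host with the WebClient service running, a WebDAV folder can usually be mapped with something like (the server name and path are placeholders):

net use W: https://legacy-server/dav/output /persistent:yes

after which the receive location can simply point at W:\ with the standard FILE adapter.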
We had to go with a workaround for this: a completely separate, unrelated process makes a copy of the file from the legacy system appear in a Samba share, which we in turn pick up with an ordinary FILE adapter.