I am working on an API that needs to download a file from server A and upload it to server B on the same network. It's for internal use. Each file will have multiple versions, will need to be uploaded to server B multiple times, and all versions of the same file will share the same file name. This is my first time dealing with file manipulation, so please bear with me if my question sounds ignorant. Can I use HttpClient.PostAsync for the uploading part? Or can I just use Stream.CopyToAsync if it's OK to simply copy the file over? Thanks!
Stream.CopyToAsync copies data from one stream to another within the memory of the same machine; by itself it won't move a file to another server.
In your case you can use HttpClient.PostAsync, but on the other server there has to be some API that receives the stream content and saves it to disk.
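Roughly, the relay side could look like the sketch below. It only illustrates the idea: the URLs and the upload endpoint are assumptions, and server B still needs its own endpoint that reads the request body and writes it to disk (handling the versioning, since all versions share one file name).

    // Sketch only: stream a file from server A to server B without buffering it in memory.
    // The URLs and the upload endpoint are made up for illustration.
    using System.Net.Http;
    using System.Threading.Tasks;

    class FileRelay
    {
        static readonly HttpClient client = new HttpClient();

        static async Task RelayAsync(string downloadUrl, string uploadUrl)
        {
            // Ask for the response headers first so the body is streamed rather than buffered.
            using var response = await client.GetAsync(downloadUrl, HttpCompletionOption.ResponseHeadersRead);
            response.EnsureSuccessStatusCode();

            using var downloadStream = await response.Content.ReadAsStreamAsync();

            // StreamContent forwards the download stream straight into the upload request body.
            using var content = new StreamContent(downloadStream);
            using var uploadResponse = await client.PostAsync(uploadUrl, content);
            uploadResponse.EnsureSuccessStatusCode();
        }

        static async Task Main()
        {
            // Hypothetical internal endpoints.
            await RelayAsync(
                "http://server-a/files/report.pdf",
                "http://server-b/api/files/report.pdf");
        }
    }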
I have a remote MonetDB server running and I want to bulk upload a csv file, as that is much faster than inserting row by row.
Based on the parameters in MonetDB.R, there is a csvdump=TRUE option, but I don't think it works when you are doing this against a remote server; the server has to be local.
https://rdrr.io/github/MonetDB/monetdb-r/man/dbWriteTable.html
First, am I correct that I can't do this, and if so, is there a workaround? I have a dataframe with more than 5 million rows, so it takes a long time with insert statements rather than using COPY INTO.
When I try using csvdump=TRUE against the remote server, it can't find the csv file because the file is local to the computer that called the dbWriteTable command.
I think you are right. As a workaround either use explicit COPY INTO ON CLIENT SQL statements or first use some file transfer tool to copy the file to the remote server before calling dbWriteTable.
MonetDB's documentation on COPY INTO reads:
FROM files ON SERVER
With ON SERVER, which is the default, the file name must be an absolute path on the system on which the database server (mserver5) is running. ...
Interestingly enough, pymonetdb, the Python driver for MonetDB, uses ON CLIENT for bulk loads. From pymonetdb's documentation:
File Uploads and Downloads
Classes related to file transfer requests as used by COPY INTO ON CLIENT.
You might want to file an issue for the MonetDB R-driver project asking for behavior similar to pymonetdb's.
I am a beginner with the WebDAV/HTTP protocols. I want to transfer files across the network using WebDAV. The raw camera image data files received at the source have to be placed on a WebDAV server (in a folder). These files are then copied by a remote client via WebDAV. My questions are below:
1. Is it necessary to use WebDAV/HTTP GET/POST/PUT method calls on the server machine to copy the images, or will a normal Linux/QNX command work?
2. If we do not need to go via WebDAV just to copy the files, how can we attach properties to these files, such as file name and size? Are these supplied to the remote client automatically by WebDAV using some OS file system calls?
Thank you in advance for your answers.
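For reference, WebDAV is an extension of HTTP, so "copying via WebDAV" from the remote client's side is an ordinary HTTP GET (and pushing a file onto the share would be a PUT); the file name comes from the URL, and the size from the Content-Length header or a WebDAV PROPFIND request. A minimal sketch follows, with a made-up server URL, credentials and paths; the language (C#) is incidental, since any HTTP client library works the same way.

    // Sketch only: copying a file off a WebDAV share is a plain HTTP GET.
    // Server URL, credentials and file names are made up for illustration.
    using System;
    using System.IO;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class WebDavCopyExample
    {
        static async Task Main()
        {
            var handler = new HttpClientHandler
            {
                Credentials = new NetworkCredential("user", "password")
            };
            using var client = new HttpClient(handler);

            // Stream the raw image from the WebDAV folder straight into a local file.
            using var response = await client.GetAsync(
                "http://webdav-server/camera/image_0001.raw",
                HttpCompletionOption.ResponseHeadersRead);
            response.EnsureSuccessStatusCode();

            using var remote = await response.Content.ReadAsStreamAsync();
            using var local = File.Create("image_0001.raw");
            await remote.CopyToAsync(local);

            // The size is also available up front from the Content-Length header
            // (and, together with other properties, from a WebDAV PROPFIND request).
            Console.WriteLine(response.Content.Headers.ContentLength);
        }
    }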
I am having a hard time figuring out how to get the file's InputStream from a file-upload POST request to the server before it gets completely loaded into memory.
This is not a problem for smaller files, but I am worried about what happens when uploading a larger file (1 GB or more). I found a possible solution using HttpContext.Request.GetBufferlessInputStream(true), but this stream contains the whole request, not just the uploaded file, and if I use it to upload a file to, for example, Azure Blob Storage or anywhere else, I end up with a corrupted file. I also lose all the information about the file (file name, size, etc.).
Is there any convenient way of uploading a large file to the server without filling its memory? I would like to get the stream and then use it to upload the file anywhere in chunks.
Thank you.
I used the DevExpress UploadControl for a similar task. It supports large file uploads in chunks. A temporary file is saved on the server's hard drive, and you can read it through a FileStream without loading it fully into server memory. It also supports direct upload to Azure, Amazon and Dropbox.
The same is true for their MVC Upload control.
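If a third-party control is not an option, the underlying idea is to parse the multipart body as it streams in, rather than reading the raw request stream (which is why GetBufferlessInputStream gives a "corrupted" file: it still contains the multipart boundaries and headers). Below is a rough sketch of that idea using ASP.NET Core's MultipartReader; this is a different stack from classic ASP.NET, and the upload folder is illustrative.

    // Sketch: read a multipart upload section by section without buffering it in memory.
    // Assumes ASP.NET Core; the upload folder is made up for illustration.
    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.WebUtilities;
    using Microsoft.Net.Http.Headers;

    public class UploadController : Controller
    {
        [HttpPost]
        [DisableRequestSizeLimit] // in a real app you would also stop MVC from buffering the form
        public async Task<IActionResult> Upload()
        {
            // The multipart boundary is declared in the Content-Type header.
            var boundary = HeaderUtilities.RemoveQuotes(
                MediaTypeHeaderValue.Parse(Request.ContentType).Boundary).Value;

            var reader = new MultipartReader(boundary, Request.Body);
            MultipartSection section;

            while ((section = await reader.ReadNextSectionAsync()) != null)
            {
                // Only handle sections that actually carry a file.
                if (ContentDispositionHeaderValue.TryParse(section.ContentDisposition, out var disposition)
                    && !string.IsNullOrEmpty(disposition.FileName.Value))
                {
                    var fileName = Path.GetFileName(disposition.FileName.Value); // original file name

                    // Copy this section wherever it should go (disk here, Azure Blob Storage, ...)
                    // without ever holding the whole file in memory.
                    using var target = System.IO.File.Create(Path.Combine(@"C:\uploads", fileName));
                    await section.Body.CopyToAsync(target);
                }
            }
            return Ok();
        }
    }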
My scenario is as follows:
One Java program is uploading some random files to an SFTP location.
My requirement is that as soon as a file is uploaded by that Java program, I need to download it using Java. The files can be around 100 MB in size. I am looking for a Java API that is helpful here. I don't even know the names of the files, but I can match them with a regular expression. The same file can be uploaded by the other program periodically. Since the files are large, I need to wait until a file has been completely uploaded before downloading it.
I have used JSch to download files, but I can't figure out how to poll using JSch.
Polling
All you can do is keep listing the remote directory periodically until you find a new file. There's no better way with SFTP. For that you obviously use ChannelSftp.ls().
Regarding selecting files that match a certain pattern, see:
JSch ChannelSftp.ls - pass match patterns in java
Waiting until the upload is complete
Again, there's no support for this in widespread implementations of SFTP.
For details, see my answer at:
SFTP file lock mechanism.
I'm developing an application using the Adobe Flex 4.5 SDK, in which the user would be able to export multiple files bundled in one zip file. I was thinking I would need to take the following steps to perform this task:
Create a temporary folder on the server for the user who requested the download. Since it is an anonymous type of user, I have to read State/Session information to identify the user.
Copy all the requested files into the temporary folder on the server
Zip the copied files
Download the zip file from the server to the client machine
I was wondering if anybody knows of any best practices or sample code for this task.
Thanks
The ByteArray class has some methods for compressing, but this is more for data transport, not for packaging up multiple files.
I don't like saying things are impossible, but I will say that this should be done on the server-side. Depending on your server architecture I would suggest sending the binary files to a server script which could package the files for you.
A quick google search for your preferred server-side language and zipping files should give you some sample scripts to get you started.
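For example, if the server side happens to be .NET, steps 2 and 3 (collect the requested files and zip them) are only a few lines with System.IO.Compression. This is just a sketch; the temporary folder and the list of requested files would come from whatever session handling identifies the anonymous user.

    // Sketch only: server-side zipping of the requested files.
    // tempFolder and requestedFiles are supplied by the session/temp-folder logic above.
    using System.IO;
    using System.IO.Compression;

    class ZipRequestedFiles
    {
        static string BuildZip(string tempFolder, string[] requestedFiles)
        {
            var zipPath = Path.Combine(tempFolder, "download.zip");

            using (var archive = ZipFile.Open(zipPath, ZipArchiveMode.Create))
            {
                foreach (var file in requestedFiles)
                {
                    // Put each file at the root of the archive under its own name.
                    archive.CreateEntryFromFile(file, Path.GetFileName(file));
                }
            }

            // The web layer then streams zipPath back to the Flex client
            // (Content-Type application/zip plus a Content-Disposition header).
            return zipPath;
        }
    }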