C# FTP file upload: Async | Sync - asynchronous

I have an ASP.NET Web API that uploads certain documents from end users to an FTP location on the server.
The website is public and will have many concurrent users, so many uploads can be invoked at once.
I want to tell the user whether the upload succeeded or failed.
I'm using the following code to upload:
FtpWebRequest clsRequest = (FtpWebRequest)System.Net.WebRequest.Create(fileName);
clsRequest.Credentials = new NetworkCredential(ftpUsername, ftpPassword);
clsRequest.Method = WebRequestMethods.Ftp.UploadFile;
using (System.IO.Stream clsStream = clsRequest.GetRequestStream())
{
    clsStream.Write(bytes, 0, bytes.Length);
    clsStream.Close();
    clsStream.Dispose();
}
The code runs, but I'm worried about performance and stability with concurrent users.
What's the best approach for performance, stability, and server health with concurrent users, while still returning success/failure to each user?
Do I need to use async calls instead, and how should the code be changed?
Thanks

Here is the async version of your code:
FtpWebRequest clsRequest = (FtpWebRequest)System.Net.WebRequest.Create(fileName);
clsRequest.Credentials = new NetworkCredential(ftpUsername, ftpPassword);
clsRequest.Method = WebRequestMethods.Ftp.UploadFile;
using (System.IO.Stream clsStream = await clsRequest.GetRequestStreamAsync())
{
    await clsStream.WriteAsync(bytes, 0, bytes.Length);
    // These lines are not required, since the stream is disposed by the using statement:
    //clsStream.Close();
    //clsStream.Dispose();
}
Remember that your method must be marked with the async keyword and return a Task rather than void. Also, I have to mention that this is client-side code; making it async doesn't affect server-side performance.
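For context, a minimal sketch of what the complete method might look like, including reporting success/failure back to the caller (the ftpUrl, ftpUsername, ftpPassword, and bytes parameter names are placeholders, not names from the original code):

using System.Net;
using System.Threading.Tasks;

public static async Task<bool> UploadFileAsync(string ftpUrl, string ftpUsername, string ftpPassword, byte[] bytes)
{
    try
    {
        FtpWebRequest clsRequest = (FtpWebRequest)WebRequest.Create(ftpUrl);
        clsRequest.Credentials = new NetworkCredential(ftpUsername, ftpPassword);
        clsRequest.Method = WebRequestMethods.Ftp.UploadFile;
        using (System.IO.Stream clsStream = await clsRequest.GetRequestStreamAsync())
        {
            await clsStream.WriteAsync(bytes, 0, bytes.Length);
        }
        // Completing the request confirms the FTP server accepted the upload.
        using (var response = (FtpWebResponse)await clsRequest.GetResponseAsync())
        {
            return true; // upload succeeded
        }
    }
    catch (WebException)
    {
        return false; // report failure to the caller
    }
}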

Related

Stream on the fly zipped files to client via rest endpoint

I am trying to stream zipped files on the fly, but memory consumption is high. For example, zipping a total file size of 2.8 GB takes nearly 5 GB of process memory.
[Route("zip")]
public class ZipController : ControllerBase
{
    private readonly HttpClient _httpClient;

    public ZipController()
    {
        _httpClient = new HttpClient();
    }

    [HttpPost]
    public async Task Zip([FromBody] JsonToZipInput input)
    {
        Response.ContentType = "application/octet-stream";
        Response.Headers.Add($"Content-Disposition", $"attachment; filename=\"{input.FileName}\"");
        using var zipArchive =
            new ZipArchive(Response.BodyWriter.AsStream(), ZipArchiveMode.Create);
        foreach (var (key, value) in input.FilePathsToUrls)
        {
            var zipEntry = zipArchive.CreateEntry(key, CompressionLevel.Optimal);
            await using var zipStream = zipEntry.Open();
            await using var stream = await _httpClient.GetStreamAsync(value);
            await stream.CopyToAsync(zipStream);
        }
    }
}
I believe you should be able to call Response.StartAsync:
[HttpPost]
public async Task Zip([FromBody] JsonToZipInput input)
{
    Response.ContentType = "application/octet-stream";
    Response.Headers.Add($"Content-Disposition", $"attachment; filename=\"{input.FileName}\"");
    await Response.StartAsync();
    using var zipArchive = new ZipArchive(Response.BodyWriter.AsStream(), ZipArchiveMode.Create);
    foreach (var (key, value) in input.FilePathsToUrls)
    {
        var zipEntry = zipArchive.CreateEntry(key, CompressionLevel.Optimal);
        await using var zipStream = zipEntry.Open();
        await using var stream = await _httpClient.GetStreamAsync(value);
        await stream.CopyToAsync(zipStream);
    }
}
StartAsync should start the response being sent. Note that neither the response headers nor the status code can be modified once StartAsync is called.
In particular, this means that your exception handling will be different. Previously, an exception (e.g., from a bad URL in the request) would cause an error status code (i.e., 500). With a streaming response, any exception thrown after StartAsync cannot change the status code; it has already been sent. Instead, it will appear to the client as though the connection was terminated without a clean close. Complicating this a bit further, it is not uncommon for web servers to behave this way even in the successful case, so clients may not complain - they would just end up with truncated (invalid) zip files. (In the case of streaming zips, the "file table" in the zip is sent last instead of first.)
So, this should work, but I also recommend:
Ensure your exception logging works for exceptions after StartAsync. There is no way to return error details to the client, so you must rely on logging (a sketch follows below).
If you control the client, test out this new error situation, and see if you can detect it. If it's not detectable using that client, then ensure your code validates the zip.
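As a rough illustration of the logging point, here is a minimal sketch of the action above, assuming an injected ILogger<ZipController> field named _logger (not part of the original code):

[HttpPost]
public async Task Zip([FromBody] JsonToZipInput input)
{
    Response.ContentType = "application/octet-stream";
    Response.Headers.Add("Content-Disposition", $"attachment; filename=\"{input.FileName}\"");
    await Response.StartAsync();
    using var zipArchive = new ZipArchive(Response.BodyWriter.AsStream(), ZipArchiveMode.Create);
    try
    {
        foreach (var (key, value) in input.FilePathsToUrls)
        {
            var zipEntry = zipArchive.CreateEntry(key, CompressionLevel.Optimal);
            await using var zipStream = zipEntry.Open();
            await using var stream = await _httpClient.GetStreamAsync(value);
            await stream.CopyToAsync(zipStream);
        }
    }
    catch (Exception ex)
    {
        // The status code is already on the wire; the client just sees a
        // truncated body, so this log entry is the only record of the failure.
        _logger.LogError(ex, "Zip streaming failed after StartAsync");
        throw;
    }
}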
Nothing about the zip file format should require a large amount of memory for this use case. It's essentially all the files in order, with a table at the end describing the zip structure and file offsets. This makes it possible to stream very efficiently without using much memory at all.
You may not need to write this yourself; ZipStreamer is a microservice you host that does exactly this (disclosure: I'm the author). It's designed to solve the exact problems you are hitting by streaming the bytes out as soon as they come in, with a fixed buffer size to prevent blowing up memory. It can stream hundreds of zip files in parallel using only a few MB of memory.
If you need this to be part of your application, here are some suggestions.
Disabling compression will save CPU, and a bit of memory. Depending on your files, compression might not be a major benefit (JPEGs actually get bigger after zip compression). If you're zipping just to combine many files into one, this will really help; but it doesn't explain using GB of memory. A one-line sketch of this change is below.
Ensure you're not holding the stream content any longer than you need to; it looks like you are. Start streaming back ASAP, as @Stephen suggested, with StartAsync.
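For the compression suggestion, assuming the controller code from the question, the change is a single line when creating each entry:

// Store entries uncompressed; the zip is only being used as a container.
var zipEntry = zipArchive.CreateEntry(key, CompressionLevel.NoCompression);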

Telegram "API development tools" limits

I'm trying to use my application (with TLSharp), but suddenly, when using the TelegramClient.SendCodeRequestAsync function, I get this exception:
"Flood prevention. Telegram now requires your program to do requests
again only after 84894 seconds have passed (TimeToWait property). If
you think the culprit of this problem may lie in TLSharp's
implementation, open a Github issue "
After waiting 84894 seconds, it shows this message again.
(I've waited and tried several times, but the message doesn't change.)
Someone told me that it's a Telegram limit. Is that right?
Do you have a better idea for sending a message/file to a Telegram account?
It might be a late answer, but it can be used as a reference. The first problem is that the Telegram API doesn't let each phone number send a code request more than 5 times a day. The second problem is the shared session file that TelegramClient uses by default, so you should create a custom session store that keeps each phone number's session in a separate .dat file.
public class CustomSessionStore : ISessionStore
{
    public void Save(Session session)
    {
        var dir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Sessions");
        if (!Directory.Exists(dir))
        {
            Directory.CreateDirectory(dir);
        }
        var file = Path.Combine(dir, "{0}.dat");
        using (FileStream fileStream = new FileStream(string.Format(file, session.SessionUserId), FileMode.OpenOrCreate))
        {
            byte[] bytes = session.ToBytes();
            fileStream.Write(bytes, 0, bytes.Length);
        }
    }

    public Session Load(string sessionUserId)
    {
        var dir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Sessions");
        if (!Directory.Exists(dir))
        {
            Directory.CreateDirectory(dir);
        }
        var file = Path.Combine(dir, "{0}.dat");
        string path = string.Format(file, sessionUserId);
        if (!File.Exists(path))
            return null;
        var buffer = File.ReadAllBytes(path);
        return Session.FromBytes(buffer, this, sessionUserId);
    }
}
Then create your TelegramClient like this:
var client = new TelegramClient(apiId, apiHash, new CustomSessionStore(), phoneNumber);
I guess you are closing and restarting your application many times, or repeating this method. After about 10 attempts, the Telegram API makes you wait roughly 24 hours to prevent flooding.
It's a Telegram limit. My advice: wait 2-3 minutes between calls to SendCodeRequestAsync(), as sketched below.
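A minimal sketch of that advice, assuming the client and a phoneNumber variable from the earlier snippet (this is an illustration of the pacing, not TLSharp-specific API guidance):

var hash = await client.SendCodeRequestAsync(phoneNumber);
// If you must request another code, pause first so Telegram's
// flood-prevention counter is less likely to trip.
await Task.Delay(TimeSpan.FromMinutes(3));
hash = await client.SendCodeRequestAsync(phoneNumber);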

Cannot upload large (>50MB) files to SharePoint 2010 document library

I'm trying to upload a large file to a document library, but it fails after just a few seconds. The single-document upload fails silently; the multiple-document upload just shows a failed message. I've turned up the file size limit on the web application to 500MB, and the IIS request length to the same (from this blog), and increased the IIS timeout for good measure. Are there any other size caps that I've missed?
Update: I've tried a few files of various sizes; anything 50MB or over fails, so I assume something somewhere is still set to the web application default.
Update 2: I just tried uploading using the following PowerShell:
$web = Get-SPWeb http://{site address}
$folder = $web.GetFolder("Site Documents")
$file = Get-Item "C:\mydoc.txt" # ~ 150MB
$folder.Files.Add("SiteDocuments/mydoc.txt", $file.OpenRead(), $false)
and get this exception:
Exception calling "Add" with "3" argument(s): "<nativehr>0x80070003</nativehr><nativestack></nativestack>There is no file with URL 'http://{site address}/SiteDocuments/mydoc.txt' in this Web."
which strikes me as odd, as of course the file wouldn't exist until it's been uploaded? N.B. while the document library has the name Site Documents, it has the URL SiteDocuments. Not sure why...
Are you sure you updated the right web application? Is the file type blocked by the server? Is there adequate space in your content database? I would check the ULS logs after that and see if there is another error, since it seems you've hit the three spots you would need to update. A quick way to verify the web application limit from code is sketched below.
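For example, a minimal server-side sketch (run on the SharePoint box; the URL is a placeholder) to confirm the upload cap actually took effect:

using System;
using Microsoft.SharePoint.Administration;

// Look up the web application and inspect its upload cap (in MB).
SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://{site address}"));
Console.WriteLine("Current max upload size: {0} MB", webApp.MaximumFileSize);

// If it is still the 50 MB default, raise it and persist the change.
webApp.MaximumFileSize = 500;
webApp.Update();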
For uploading a large file, you can use the PUT method instead of the other ways to upload a document.
By using a PUT request you save the file into the content database directly. See the example below.
Note: the disadvantage of the code below is that you cannot catch the object that is responsible for the upload; in other words, you cannot directly update additional custom properties of the uploaded document.
public static bool UploadFileToDocumentLibrary(string sourceFilePath, string targetDocumentLibraryPath)
{
    // Flag to indicate whether the file was uploaded successfully or not
    bool isUploaded = true;
    try
    {
        // Create a PUT web request to upload the file.
        WebRequest request = WebRequest.Create(targetDocumentLibraryPath);

        // Set credentials of the current security context
        request.Credentials = CredentialCache.DefaultCredentials;
        request.Method = "PUT";

        // Create a buffer to transfer the file
        byte[] fileBuffer = new byte[1024];

        // Write the contents of the local file to the request stream.
        using (Stream stream = request.GetRequestStream())
        {
            // Load the content from the local file into the stream
            using (FileStream fsWorkbook = File.Open(sourceFilePath, FileMode.Open, FileAccess.Read))
            {
                // Read the file chunk by chunk, writing each chunk to the request stream
                int startBuffer = fsWorkbook.Read(fileBuffer, 0, fileBuffer.Length);
                for (int i = startBuffer; i > 0; i = fsWorkbook.Read(fileBuffer, 0, fileBuffer.Length))
                {
                    stream.Write(fileBuffer, 0, i);
                }
            }
        }

        // Perform the PUT request
        WebResponse response = request.GetResponse();

        // Close the response
        response.Close();
    }
    catch (Exception ex)
    {
        // Set the flag to indicate failure in uploading
        isUploaded = false;
    }

    // Return the final upload status
    return isUploaded;
}
And here is an example of calling this method:
UploadFileToDocumentLibrary(@"C:\test.txt", @"http://home-vs/Shared Documents/textfile.pdf");

asp.net membership, notify administrators when account about to expire

I have a requirement that a certain email distribution list should be notified every so often (still to be determined) about user accounts that are nearing expiration.
I'm wondering about the best way to achieve this. I know it's generally a bad idea to spawn another thread within ASP.NET to handle this type of thing, so I'm thinking maybe a simple service is the way to go, but for something so small that seems like it might be slight overkill.
Ideally I'd like something that doesn't require much babysitting (e.g., checking that the service is running).
I have also suggested having a page in the site with this type of information, but it is likely that it could be a few days before this is checked. We also cannot let users extend their own expiration date.
Are there any other viable options?
The most suitable approach, in my view, is:
Create an application that selects all users whose account expiry date is near (e.g., 10 days from today), as per your requirement.
Schedule this application to run daily (build an exe with a log file that records any errors raised and the total number of emails sent per execution).
The application fetches all the records matching the criteria and sends the emails using a basic HTML template. Once an email has been sent, update a column (notificationFlag) in your database to 1 to record that a notification went out in the last 10 days; by default it is 0.
You can schedule the exe to run at the end of the day, at 12:10 am (just in case your database server and web server clocks don't match), every day. A rough sketch of such a console application is below.
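To make that concrete, here is a minimal, hypothetical sketch; the connection string, SMTP host, table, and column names are invented for illustration, not taken from the question:

using System;
using System.Data.SqlClient;
using System.Net.Mail;
using System.Text;

class ExpiryNotifier
{
    static void Main()
    {
        var body = new StringBuilder("Accounts nearing expiration:\n");
        using (var conn = new SqlConnection("<your connection string>"))
        {
            conn.Open();
            // Select accounts expiring within 10 days that have not been reported yet.
            var cmd = new SqlCommand(
                @"SELECT UserName, ExpiryDate FROM Users
                  WHERE ExpiryDate <= DATEADD(day, 10, GETDATE())
                    AND NotificationFlag = 0", conn);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    body.AppendLine($"{reader.GetString(0)} expires {reader.GetDateTime(1):d}");
                }
            }
        }
        // Send one summary email to the admin distribution list.
        using (var smtp = new SmtpClient("<your smtp host>"))
        {
            smtp.Send("noreply@example.com", "admins@example.com",
                "Accounts nearing expiration", body.ToString());
        }
        // A real implementation would then set NotificationFlag = 1 for the
        // reported rows and log the number of emails sent.
    }
}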
This is something I've done which is similar to Prescott's comment on your answer.
I have a website with an administrative page that reports on a bunch of expiration dates.
This page also accepts a QueryString parameter SEND_EMAILS, so anytime an administrative user of the site passes the QueryString parameter SEND_EMAILS=true a bunch of emails go out to all the users that are expiring.
Then I just added a windows scheduled task to run daily and load the page with the SEND_EMAILS=true parameter.
This was the simple code I used to issue the webrequest from the console in the scheduled task:
using System;
using System.IO;
using System.Net;
using System.Text;

namespace CmdLoadWebsite
{
    class Program
    {
        static void Main(string[] args)
        {
            string url = "http://default/site/";
            if (args.Length > 0)
            {
                url = args[0];
            }
            Console.WriteLine(GetWebResult(url));
        }

        public static string GetWebResult(string url)
        {
            byte[] buff = new byte[8192];
            StringBuilder sb = new StringBuilder();
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            Stream webStream = response.GetResponseStream();
            int count = 0;
            string webString;
            do
            {
                count = webStream.Read(buff, 0, buff.Length);
                if (count != 0)
                {
                    webString = Encoding.ASCII.GetString(buff, 0, count);
                    sb.Append(webString);
                }
            }
            while (count > 0);
            return (sb.ToString());
        }
    }
}

Upload files directly to Amazon S3 from ASP.NET application

My ASP.NET MVC application will take a lot of bandwidth and storage space. How can I set up an ASP.NET upload page so that the file the user uploads goes straight to Amazon S3 without using my web server's storage and bandwidth?
Update Feb 2016:
The AWS SDK can handle a lot more of this now. Check out how to build the form, and how to build the signature. That should prevent you from needing the bandwidth on your end, assuming you need to do no processing of the content yourself before sending it to S3. One way of doing this with the SDK is sketched below.
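As an illustration, a minimal sketch using the .NET SDK's pre-signed URL support (the bucket name and key are placeholders): the server generates a short-lived URL, and the browser PUTs the file straight to S3, so the bytes never pass through your web server.

using System;
using Amazon.S3;
using Amazon.S3.Model;

// Generate a pre-signed PUT URL the client can upload to directly.
IAmazonS3 client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2);
var request = new GetPreSignedUrlRequest
{
    BucketName = "your-bucket-name",
    Key = "uploads/example.txt",
    Verb = HttpVerb.PUT,
    Expires = DateTime.UtcNow.AddMinutes(15)
};
string uploadUrl = client.GetPreSignedURL(request);
// Hand uploadUrl to the browser; it performs an HTTP PUT of the file body.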
If you need to upload large files and display a progress bar, you should consider the flajaxian component.
It uses Flash to upload files directly to Amazon S3, saving your bandwidth.
This is the best and easiest way I know to upload files to Amazon S3 via ASP.NET. Have a look at the following blog post by me; I think it will help. There I explain everything from adding an S3 bucket and creating the API key to installing the Amazon SDK and writing the upload code. The following is sample code for uploading files to Amazon S3 with ASP.NET and C#.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

/// <summary>
/// Summary description for AmazonUploader
/// </summary>
public class AmazonUploader
{
    public bool sendMyFileToS3(System.IO.Stream localFilePath, string bucketName, string subDirectoryInBucket, string fileNameInS3)
    {
        // Input explained:
        // localFilePath : we will use a file stream instead of a path
        // bucketName : the name of the bucket in S3; the bucket should already be created
        // subDirectoryInBucket : if this string is not empty, the file will be uploaded
        //     to a subdirectory with this name
        // fileNameInS3 : the file name in S3

        // Create an instance of the IAmazonS3 class; in my case I chose RegionEndpoint.USWest2.
        // You can change that to APNortheast1, APSoutheast1, APSoutheast2, CNNorth1,
        // SAEast1, USEast1, USGovCloudWest1, USWest1, USWest2. This choice will not
        // store your file in a different cloud storage, but (I think) it differs in
        // performance depending on your location.
        IAmazonS3 client = new AmazonS3Client("Your Access Key", "Your Secret Key", Amazon.RegionEndpoint.USWest2);

        // Create a TransferUtility instance, passing it the IAmazonS3 created in the first step
        TransferUtility utility = new TransferUtility(client);

        // Make a TransferUtilityUploadRequest instance
        TransferUtilityUploadRequest request = new TransferUtilityUploadRequest();

        if (string.IsNullOrEmpty(subDirectoryInBucket))
        {
            request.BucketName = bucketName; // no subdirectory, just the bucket name
        }
        else
        {
            // subdirectory and bucket name
            request.BucketName = bucketName + @"/" + subDirectoryInBucket;
        }
        request.Key = fileNameInS3; // file name up in S3
        request.InputStream = localFilePath; // upload from the stream rather than a file path
        request.CannedACL = S3CannedACL.PublicReadWrite;

        utility.Upload(request); // commencing the transfer

        return true; // indicate that the file was sent
    }
}
Here you can use the function sendMyFileToS3 to upload a file stream to Amazon S3, for example:
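A minimal calling sketch (the local path and bucket name are placeholders):

using (var fileStream = System.IO.File.OpenRead(@"C:\test.txt"))
{
    var uploader = new AmazonUploader();
    uploader.sendMyFileToS3(fileStream, "your-bucket-name", "", "test.txt");
}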
For more details, check my blog at the following link:
Upload File to Amazon S3 via asp.net
I hope the above-mentioned link helps.
Look for a JavaScript library to handle the client-side upload of these files. I stumbled upon a JavaScript and PHP example; Dojo also seems to offer a client-side S3 file upload.
ThreeSharp is a library to facilitate interactions with Amazon S3 in a .NET environment.
You'll still need to host the logic to upload and send files to S3 in your MVC app, but you won't need to persist them on your server.
Save and GET data in an AWS S3 bucket in ASP.NET MVC:
To save plain text data to an Amazon S3 bucket:
1. First you need a bucket created on AWS.
2. You need your AWS credentials: a) AWS key, b) AWS secret key, c) region.
// Code to save data to AWS.
// Note: you can get an access denied error. To fix this, check your AWS account and grant read and write rights.
Namespaces to add (from the NuGet package):
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey);
try
{
    AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1);

    // Simple object put
    PutObjectRequest request = new PutObjectRequest()
    {
        ContentBody = "put your plain text here",
        ContentType = "text/plain",
        BucketName = "put your bucket name here",
        // Use a unique key to uniquely identify your data; you can pass any
        // value with a unique id here, such as a primary key from your database.
        Key = "1"
    };
    PutObjectResponse response = client.PutObject(request);
}
catch (Exception ex)
{
    // handle or log the error
}
Now go to your AWS account and check the bucket; you will find the data stored under the key "1" in the S3 bucket.
Note: if you run into any other issue, please ask me a question here and I will try to resolve it.
To get data from the AWS S3 bucket:
try
{
    var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey);
    AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1);
    GetObjectRequest request = new GetObjectRequest()
    {
        BucketName = bucketName,
        Key = "1" // because we passed "1" as the unique key when saving
                  // the data to the S3 bucket
    };
    using (GetObjectResponse response = client.GetObject(request))
    {
        StreamReader reader = new StreamReader(response.ResponseStream);
        vccEncryptedData = reader.ReadToEnd();
    }
}
catch (AmazonS3Exception)
{
    throw;
}
