Get the client-side values inside an ashx file - ASP.NET

I am uploading my pictures using Uploadify; my code is below. Inside the Upload.ashx handler I can't get the submitted values (the Id and foo values); they come back null.
How can I solve this problem? Thank you.
I have code like this:
$(document).ready(function () {
    var id = "55";
    var theString = "asdf";
    $("#<%=FileUpload1.ClientID%>").uploadify({
        'uploader': 'Upload.ashx',
        'swf': 'uploadify/uploadify.swf',
        'script': 'Upload.ashx',
        'cancelImg': 'images/cancel.png',
        'folder': 'upload',
        'multi': true,
        'buttonText': 'RESIM SEC',
        'fileExt': '*.jpg;*.png;*.gif;*.bmp;*.jpeg',
        'auto': false,
        'scriptData': { 'id': id, 'foo': theString },
        onAllComplete: function (event, data) {
        }
    });
});
and my ashx file looks like this:
public void ProcessRequest(HttpContext context)
{
    context.Response.ContentType = "text/plain";
    context.Response.Expires = -1;
    try
    {
        // I tried both ways to get the values; both return null.
        string pwd1 = context.Request["Id"];
        string pwd2 = context.Request.Form["Foo"];

        HttpPostedFile postedFile = context.Request.Files["Filedata"];
        string id = context.Request["id"];
        string savepath = "";
        string tempPath = "";
        tempPath = context.Request["folder"];
        // If you prefer to use web.config for the folder path, uncomment the line below:
        //tempPath = System.Configuration.ConfigurationManager.AppSettings["FolderPath"];
        savepath = context.Server.MapPath(tempPath);
        string filename = postedFile.FileName;
        if (!Directory.Exists(savepath))
            Directory.CreateDirectory(savepath);
        string ext = System.IO.Path.GetExtension(filename);
        string resimGuid = Guid.NewGuid().ToString();
..........
..........

Use formData with the POST method
Extra data can be passed to the script either as a querystring if the method option is set to 'get', or via the headers if it's set to 'post'. This is all done with the help of the formData option. Depending on what you've set as the method option ('post' or 'get'), you can retrieve the information sent in the formData option on the server side.
For more details, please refer to Passing Extra Data.
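Not part of the original answer, just a minimal sketch under these assumptions: the plugin is Uploadify 3.x (where the old 'scriptData' key used in the question is renamed 'formData'), the keys stay 'id' and 'foo', and the handler checks both the form collection and the request headers, since the exact location of the extra values depends on the plugin version and the method option. The class name UploadDebugHandler is purely illustrative.
using System.Web;

public class UploadDebugHandler : IHttpHandler
{
    public bool IsReusable => false;

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";

        // Extra values sent through 'formData' with method 'post': check the form
        // collection first, then fall back to the headers (see the quote above).
        string id = context.Request.Form["id"] ?? context.Request.Headers["id"];
        string foo = context.Request.Form["foo"] ?? context.Request.Headers["foo"];

        // The uploaded file itself arrives as "Filedata" by default
        HttpPostedFile postedFile = context.Request.Files["Filedata"];
        context.Response.Write($"id={id}, foo={foo}, file={postedFile?.FileName}");
    }
}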

Related

Download multiple files (50mb) blazor server-side

I can't really find a way to download a 100 MB zip file from the server to the client and also show the progress while downloading. How would this look for a normal API controller I can add to my server-side project, if, say, I have 3 files I want to download at 50 MB each?
I have tried using JSInterop like this, but it does not show the progress of the file download, and how would I go about downloading 3 separate files at the same time?
try
{
    // converting the file into a byte array
    var dataBytes = System.IO.File.ReadAllBytes(file);
    await JSRuntime.InvokeVoidAsync(
        "downloadFromByteArray",
        new
        {
            ByteArray = dataBytes,
            FileName = "download.zip",
            ContentType = "application/force-download"
        });
}
catch (Exception)
{
    //throw;
}
JS:
function downloadFromByteArray(options: {
    byteArray: string,
    fileName: string,
    contentType: string
}): void {
    // Convert base64 string to numbers array.
    const numArray = atob(options.byteArray).split('').map(c => c.charCodeAt(0));
    // Convert numbers array to Uint8Array object.
    const uint8Array = new Uint8Array(numArray);
    // Wrap it by Blob object.
    const blob = new Blob([uint8Array], { type: options.contentType });
    // Create "object URL" that is linked to the Blob object.
    const url = URL.createObjectURL(blob);
    // Invoke download helper function that implemented in
    // the earlier section of this article.
    downloadFromUrl({ url: url, fileName: options.fileName });
    // At last, release unused resources.
    URL.revokeObjectURL(url);
}
UPDATE:
If I use this code, it shows me the progress of the file. But how can I trigger it from my code? Calling it this way does not do it, while typing the URL in the browser does.
await Http.GetAsync($"Download/Model/{JobId}");
Controller
[HttpGet("download/model/{JobId}")]
public IActionResult DownloadFile([FromRoute] string JobId)
{
    if (JobId == null)
    {
        return BadRequest();
    }
    var FolderPath = $"xxxx";
    var FileName = $"Model_{JobId}.zip";
    var filePath = Path.Combine(environment.WebRootPath, FolderPath, FileName);
    byte[] fileBytes = System.IO.File.ReadAllBytes(filePath);
    return File(fileBytes, "application/force-download", FileName);
}
UPDATE 2!
I have got it downloading with progress on click by using JSInterop.
public async void DownloadFiles()
{
    // download all selectedFiles
    foreach (var file in selectedFiles)
    {
        // download these files
        await JSRuntime.InvokeAsync<object>("open", $"Download/Model/{JobId}/{file.Name}", "_blank");
    }
}
Now the only problem left is that it only downloads the first file out of 3.

Redacting Log Content

In Audit.NET, is it possible to filter sensitive data out of the request bodies being saved? This is for Audit.WebApi.
E.g., There's a JSON request body with {"username": "me", "password": "sensitive"}. Can the password value "sensitive" be replaced by ""?
You could add a custom action that sanitizes the body string on the audit event.
For example:
using Audit.WebApi;
Audit.Core.Configuration.AddCustomAction(ActionType.OnScopeCreated, scope =>
{
    var action = scope.GetWebApiAuditAction();
    var bodyString = action?.RequestBody?.Value?.ToString();
    if (!string.IsNullOrEmpty(bodyString))
    {
        action.RequestBody.Value = Sanitize(bodyString);
    }
});
Using a regular expression:
private string Sanitize(string input)
{
    var pattern = @"\s*\""password\"" *: *\"".*\""(,|(?=\s+\}))";
    var substitution = @"""password"": """"";
    var regex = new Regex(pattern);
    return regex.Replace(input, substitution);
}
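As a quick sanity check (not part of the original answer; the console wrapper is only illustrative), the replacement behaves as follows on a pretty-printed body like the one in the question. Note it consumes the password's leading whitespace and any trailing comma, so the surrounding JSON layout shifts slightly:
using System;
using System.Text.RegularExpressions;

class SanitizeDemo
{
    static void Main()
    {
        // Pretty-printed body, similar to the example in the question
        var body = "{\n  \"username\": \"me\",\n  \"password\": \"sensitive\"\n}";

        var pattern = @"\s*\""password\"" *: *\"".*\""(,|(?=\s+\}))";
        var substitution = @"""password"": """"";

        // Prints:
        // {
        //   "username": "me","password": ""
        // }
        Console.WriteLine(Regex.Replace(body, pattern, substitution));
    }
}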

How to get file size in MultipartStreamProvider GetStream when missing from ContentDisposition

I am creating a custom MultipartStreamProvider to store files in Azure File Storage as part of a Lift & Shift effort for a legacy application. The client uses AngularJS on the front end and Web API on the backend. When using the MultipartStreamProvider, I need to implement GetStream to return a stream for it to write to. I am using cloudFile.OpenWrite, which asks for the size of the stream/file that will be written to it. However, in GetStream, ContentDisposition.Size is empty. Is there a way to either make AngularJS send the content size for each file, or to dig the size of the file stream out of somewhere else on the backend? Any help would be greatly appreciated. Thanks!
MultipartStreamProvider
public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
{
    // For form data, the Content-Disposition header is a requirement
    ContentDispositionHeaderValue contentDisposition = headers.ContentDisposition;
    Console.WriteLine(files.Count);
    if (contentDisposition != null)
    {
        // create a default filename if it is missing
        contentDisposition.FileName = (String.IsNullOrEmpty(contentDisposition.FileName) ? $"{Guid.NewGuid()}.data" : contentDisposition.FileName);

        // We won't post process files as form data
        _isFormData.Add(false);
        CloudMultipartFileData fileData = new CloudMultipartFileData(headers, _fileRepository.BaseUrl, contentDisposition.FileName); // your aws file-location url, maybe
        _fileData.Add(fileData);

        // contentDisposition.Size is null here, which is the problem
        var azureStream = _fileRepository.GetWriteStream(contentDisposition.Size, _relativeDirectory, fileData.FileName);
        return azureStream;

        // Leftover from the stock form-data provider and unreachable here,
        // because every body part is given a file name above:
        // We will post process this as form data
        // _isFormData.Add(true);
    }
    throw new InvalidOperationException("Did not find required 'Content-Disposition' header field in MIME multipart body part.");
}
Actual Call to Azure
public override Stream GetWriteStream(long? fileSize, string relativeDirectory, string filename)
{
    var combinedRelativeDirectory = GetCloudDirectory(relativeDirectory);
    CloudFile cloudFile = combinedRelativeDirectory.GetFileReference(filename);
    return cloudFile.OpenWrite(fileSize, null, null);
}
AngularJS File Upload Code
/********************************** Add/Upload Photos **************************************/
$scope.$watch('files', function (files) {
    $scope.formUpload = false;
    console.log(files);
    if (files != null) {
        for (var i = 0; i < files.length; i++) {
            $scope.errorMsg = null;
            (function (file) {
                upload(file);
            })(files[i]);
        }
    }
});
function upload(file) {
    file.upload = Upload.upload({
        url: window.location.origin + "/api/mydocs/uploadfile?storeFolder=" + $scope.attachmentFolder + "&storeId=" + $scope.storeId + "&userId=" + $scope.currentUser.UserId,
        method: 'POST',
        headers: {},
        fields: {},
        file: file
    });
}
I wound up manually adding the file size to the header
function upload(file) {
    file.upload = Upload.upload({
        url: window.location.origin + "/api/mydocs/uploadfile?storeFolder=" + $scope.myFolder + "&clientId=" + $scope.clientId,
        method: 'POST',
        headers: { 'file-info': file.name + "-/" + file.size },
        fields: {},
        file: file
    });
}
And then in the constructor, I use that to create a lookup table:
public MyCloudMultipartFormDataStreamProvider(string relativeDirectory, IEnumerable<string> lookupInfo)
{
    NewFileNames = new Dictionary<string, string>();
    _fileRepository = new CloudFileRepository();
    _relativeDirectory = relativeDirectory;
    _uploadedFilesLookup = new Dictionary<string, long>();
    foreach (var fileInfo in lookupInfo)
    {
        var values = Regex.Split(fileInfo, @"-/");
        _uploadedFilesLookup.Add(values[0], Int64.Parse(values[1]));
    }
}
Then I grab the file's size out of the lookup table and pass it to my GetWriteStream method:
var azureStream = _fileFacade.GetWriteStream(_uploadedFilesLookup[fileName],
_relativeDirectory, fileData.FileName, out newFileName);
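For completeness, here is a rough, hypothetical sketch (not from the original post) of how the Web API side might read those 'file-info' header values and hand them to the provider's constructor; the controller name, route, and action parameters are assumptions based on the upload URL used above.
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class MyDocsController : ApiController
{
    [HttpPost]
    [Route("api/mydocs/uploadfile")]
    public async Task<IHttpActionResult> UploadFile(string storeFolder, string clientId)
    {
        if (!Request.Content.IsMimeMultipartContent())
            return StatusCode(HttpStatusCode.UnsupportedMediaType);

        // Each 'file-info' header value looks like "<fileName>-/<fileSize>",
        // as built by the AngularJS Upload.upload call above.
        IEnumerable<string> lookupInfo = Request.Headers.TryGetValues("file-info", out var values)
            ? values
            : Enumerable.Empty<string>();

        var provider = new MyCloudMultipartFormDataStreamProvider(storeFolder, lookupInfo);
        await Request.Content.ReadAsMultipartAsync(provider);

        return Ok();
    }
}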

NSubstitute HttpPostedFileBase SaveAs

On my journey of unit testing my code, I have this test:
var ufile = Substitute.For<HttpPostedFileBase>();
var server = Substitute.For<HttpServerUtilityBase>();
var saved = false;
ufile.FileName.Returns("somefileName");
var fileName = fs.Path.GetFileName(ufile.FileName);
var path = fs.Path.Combine(server.MapPath(upath), fileName);
ufile.When(h => h.SaveAs(path)).Do(x => saved = true);
Assert.IsTrue(saved);
So here is what I am testing which I gleaned from different sites:
public ActionResult UploadFiles()
{
    var fileinfo = new List<UploadedImageViewModel>();
    foreach (string files in Request.Files)
    {
        var hpf = Request.Files[files] as HttpPostedFileBase;
        if (hpf == null || hpf.ContentLength == 0)
            continue; // skip missing/empty uploads
        var FileName = Path.GetFileName(hpf.FileName); // gets the name of the uploaded file
        var temppath = Path.Combine(Server.MapPath("~/uploadtemp/"), FileName); // creates a string representation of the file location
        hpf.SaveAs(temppath);
        // resize the image before you save it to the right folder under FileUploads
    }
    return View(fileinfo);
}
Can someone please help me understand this When().Do() syntax of NSubstitute? The docs say that Do should take an action, but I need examples to understand it.
Also, the SaveAs() method of HttpPostedFileBase is void, and the NSubstitute docs say to use When().Do() for void methods, so please tell me what is wrong with my unit test.
// Suppose we have this setup
public class MyClass
{
    public virtual string ReturnSomething()
    {
        return "FooBar";
    }

    public virtual void DoSomething(out string reason)
    {
        reason = "Oops";
    }
}
The usual stubbing syntax for NSubstitute is to use Returns like this:
myClass.ReturnSomething().Returns("wibble");
This stubs out ReturnSomething(), but the Returns syntax only works for methods with a return value.
For methods that have no return value we can instead use When().Do(). This is basically what is meant by an Action in their documentation (as opposed to a Func, which does have a return value). A common need is to fill in output parameters on such methods:
string reason;
myClass.When(c => c.DoSomething(out reason))
    .Do(args =>
    {
        args[0] = "Fail";
    });
For more on Action and Func see MSDN: Action, Func.
In the specific case of your unit test, rather than setting a saved variable when SaveAs is invoked, consider asserting with NSubstitute's Received() construct instead.
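A minimal sketch of that Received-based check, assuming NSubstitute is in scope and the code under test is actually invoked with the substitute; expectedPath is an illustrative placeholder, not a value from the question:
var ufile = Substitute.For<HttpPostedFileBase>();
ufile.FileName.Returns("somefileName");

// ... run the code under test here so that it ends up calling ufile.SaveAs(...) ...

// Verify the call happened instead of tracking a 'saved' flag via When().Do()
ufile.Received().SaveAs(Arg.Any<string>());
// or, to pin down the exact path:
// ufile.Received().SaveAs(expectedPath);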

Export to Excel not working from HttpHandler jQuery AJAX

I have a weird problem whereby my export-to-Excel functionality doesn't seem to work.
I'm using jQuery and AJAX to pass HTML data to an HTTP handler, which has some simple context.Response code that all seems fine. Anyway, here's my code:
// my hyperlink for the user to click
<a id="hyperLink">Click Here to Export</a>
my J-Query/AJAX code
<script type="text/javascript">
    $(document).ready(function () {
        $("#hyperLink").click(function (e) {
            var result = $('#output').html();
            var newRes = result.replace('\n', '');
            $.ajax({
                url: "ExportToExcelhandler.ashx",
                data: { 'htmlData': newRes },
                dataType: "json",
                type: "POST",
                success: function (data) {
                    alert(data);
                }
            });
        });
    });
</script>
and my handler:
public void ProcessRequest(HttpContext context)
{
    string htmlStuff = context.Request["htmlData"];
    string trimStart = "";
    string trimEnd = "";
    if (htmlStuff != null)
    {
        trimStart = htmlStuff.Substring(75, htmlStuff.Length - 75);
        trimEnd = trimStart.Remove(trimStart.Length - 8, 8) + "";
    }
    string finalHtmlData = trimEnd;
    context.Response.Clear();
    context.Response.Buffer = true;
    context.Response.AddHeader("content-disposition", "attachment; filename=excelData.xls");
    context.Response.ContentType = "application/vnd.ms-excel";
    HttpResponse response = context.Response;
    context.Response.Output.Write(finalHtmlData);
    context.Response.Flush();
    context.Response.End();
}
-- Granted, I'm doing some weird things with the replace function in my jQuery, and Substring and Remove in my handler; this is because I had to trim the HTML data so that only the table with the data inside it was included (it caused an error otherwise). The HTML data is just report data. So the HTML data is passed to the handler fine, and it goes through the ProcessRequest method fine, yet nothing is exported to Excel. Any help would be greatly appreciated, thanks.
Split this into two HTTP handlers, one to generate the Excel data and the second to retrieve the data and have a resource point at it, like this:
GenerateExcelDocument HTTP handler code:
public void ProcessRequest(HttpContext context)
{
    string htmlStuff = context.Request["htmlData"];
    var docIdentifier = this.GenerateExcelDocument(htmlStuff);
    context.Response.ContentType = "text/plain";
    context.Response.Write(docIdentifier.ToString("N"));
}

private Guid GenerateExcelDocument(string htmlStuff)
{
    var identifier = Guid.NewGuid();
    string trimStart = "";
    string trimEnd = "";
    if (htmlStuff != null)
    {
        trimStart = htmlStuff.Substring(75, htmlStuff.Length - 75);
        trimEnd = trimStart.Remove(trimStart.Length - 8, 8) + "";
    }
    // Logic that generates your document and saves it using the identifier
    // Can save to database, file system, etc.
    return identifier;
}
Now you can call this HTTP handler, like this:
$(document).ready(function () {
    $("#hyperLink").click(function (e) {
        var result = $('#output').html();
        var newRes = result.replace('\n', '');
        $.ajax({
            url: "GenerateExcelDocument.ashx",
            data: { 'htmlData': newRes },
            dataType: "json",
            type: "POST",
            success: function (result) {
                window.location.href = '/RetrieveExcelDocument.ashx/' + result;
            }
        });
    });
});
Note: The success callback is where you can hook up the HTML resource to the file retrieval from the server (think href of the anchor tag that worked without passing data to the handler before).
Finally, we need to build the retrieval HTTP handler logic to actually get the Excel document, based upon the identifier returned from the GenerateExcelDocument HTTP handler call, like this:
RetrieveExcelDocument HTTP handler code:
public void ProcessRequest(HttpContext context)
{
    // The identifier is the last URL segment (e.g. /RetrieveExcelDocument.ashx/<id>)
    var segments = context.Request.Url.Segments;
    var identifier = new Guid(segments[segments.Length - 1]);
    // Get the Excel document content from the database, file system, etc. here
    var fileContent = GetExcelDocument(identifier);
    context.Response.AddHeader("content-disposition",
        "attachment; filename=excelData.xls");
    context.Response.ContentType = "application/vnd.ms-excel";
    context.Response.OutputStream.Write(fileContent, 0, fileContent.Length);
}
