This is actually a two-part question about .NET Core 3.0, specifically about PipeWriter: 1) How should I read the HttpResponse body? 2) How can I update the HttpResponse? I'm asking both together because I expect the solution will involve the same understanding and code.
Below is how I got this working in .NET Core 2.2. Note that it uses streams instead of PipeWriter, along with the "ugly" things that come with streams, e.g. MemoryStream, Seek, StreamReader, etc.
public class MyMiddleware
{
    private RequestDelegate Next { get; }

    public MyMiddleware(RequestDelegate next) => Next = next;

    public async Task Invoke(HttpContext context)
    {
        var httpResponse = context.Response;
        var originalBody = httpResponse.Body;
        var newBody = new MemoryStream();
        httpResponse.Body = newBody;

        try
        {
            await Next(context);
        }
        catch (Exception)
        {
            // In this scenario, I would log the actual error and return this "nice" error instead
            httpResponse.StatusCode = StatusCodes.Status500InternalServerError;
            httpResponse.ContentType = "application/json"; // I'm setting this because I might have a serialized object instead of a plain string
            httpResponse.Body = originalBody;
            await httpResponse.WriteAsync("We're sorry, but something went wrong with your request.");
            return;
        }

        // If everything worked
        newBody.Seek(0, SeekOrigin.Begin);
        var response = new StreamReader(newBody).ReadToEnd(); // This is the only way I know to read the existing response body
        httpResponse.Body = originalBody;
        await context.Response.WriteAsync(response);
    }
}
How would this work using PipeWriter? It seems that working with pipes instead of the underlying stream is now preferred, but I cannot yet find any examples of using them to replace the code above.
Is there a scenario where I need to wait for the stream/pipe to finish writing before I can read it back out and/or replace it with a new string? I've never personally done this, but the PipeReader examples I've seen suggest reading in chunks and checking IsCompleted.
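For reference, the chunked read loop I'm picturing looks something like this; this is just my own sketch, and it assumes the response body has first been buffered to a seekable stream:

// Minimal sketch: read a buffered body back out in chunks via PipeReader
// (requires System.Buffers, System.IO.Pipelines, System.Text).
private static async Task<string> ReadBodyAsync(Stream bufferedBody)
{
    bufferedBody.Seek(0, SeekOrigin.Begin);
    var reader = PipeReader.Create(bufferedBody);
    var builder = new StringBuilder();
    while (true)
    {
        ReadResult result = await reader.ReadAsync();
        ReadOnlySequence<byte> buffer = result.Buffer;
        builder.Append(Encoding.UTF8.GetString(buffer.ToArray()));
        reader.AdvanceTo(buffer.End); // everything we appended counts as consumed
        if (result.IsCompleted) break;
    }
    await reader.CompleteAsync();
    return builder.ToString();
}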
To update the HttpResponse, you can write to its PipeWriter like this:
private async Task WriteDataToResponseBodyAsync(PipeWriter writer, string jsonValue)
{
    // use an oversized size guess
    Memory<byte> workspace = writer.GetMemory();

    // write the data to the workspace
    int bytes = Encoding.ASCII.GetBytes(jsonValue, workspace.Span);

    // tell the pipe how much of the workspace we actually want to commit
    writer.Advance(bytes);

    // this is **not** the same as Stream.Flush!
    await writer.FlushAsync();
}
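In ASP.NET Core 3.0 the response exposes its PipeWriter directly, so the call site would presumably look something like the sketch below. Note that for payloads larger than a single GetMemory() buffer you would loop GetMemory/Advance, and UTF-8 is a safer choice than ASCII for JSON:

// Hypothetical call site inside middleware; context is the HttpContext.
PipeWriter writer = context.Response.BodyWriter;
await WriteDataToResponseBodyAsync(writer, "{\"ok\":true}");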
The old version of this question got too long, so after numerous attempts to solve this issue I've boiled it all down to one simple question: why does this code produce a System.ObjectDisposedException?
private async void PickPhotoButton_OnClicked(object sender, EventArgs e)
{
    _globalStream = await DependencyService.Get<IPicturePicker>().GetImageStreamAsync();
    _globalArray = StreamToByteArray(_globalStream);
    var gal = new GalleryResource()
    {
        Pic = _globalArray
    };
    MemoryObjects.CurrentGallery = gal;
    var ctr = HelperMethods.GetInstance<GalleryController>();
    await ctr.Post();
}
public byte[] StreamToByteArray(Stream input)
{
    using (MemoryStream ms = new MemoryStream())
    {
        input.CopyTo(ms);
        return ms.ToArray();
    }
}
The stream arrives from the native side; I turn it into a byte array and pass it into my repository. Everything works with a dummy byte array, so something is wrong with the stream object, which possibly gets closed or disposed at some point.
The exception is thrown in the repository at this point:
var response = await _client.PostAsync(endPoint, _repService.ConvertObjectToStringContent(obj));
It is not the ConvertObjectToStringContent(obj) part: that actually returns a value, and the byte array is visible in the debugger, i.e. the byte array keeps a valid length all the way through.
The only event that takes place when we finish picking the photo from the library is the following:
void OnImagePickerFinishedPickingMedia(object sender, UIImagePickerMediaPickedEventArgs args)
{
    UIImage image = args.EditedImage ?? args.OriginalImage;
    if (image != null)
    {
        // Convert UIImage to .NET Stream object
        NSData data = image.AsJPEG(1);
        Stream stream = data.AsStream();
        // Set the Stream as the completion of the Task
        taskCompletionSource.SetResult(stream);
    }
    else
    {
        taskCompletionSource.SetResult(null);
    }
    imagePicker.DismissModalViewController(true);
}
However, it doesn't seem to dispose the stream, and even if it did, we already have a byte array from it.
I even tried doing this inside the native code:
var client = new HttpClient();
var c = new MultipartFormDataContent();
c.Add(new StreamContent(image.AsJPEG(1).AsStream()));
var response = await client.PostAsync(Settings.EndPoint + "api/gallery/", c);
Same error.
I think your problem lies somewhere in this line: _byteArray = ToByteArray(_array);
ToByteArray(stream) seems to return the byte array, presumably converted from a stream, and that stream might still hold a reference to the byte array, and it might have been disposed by then.
If it's inside this method, please post it, I want to know!
I'm not quite experienced enough to tell exactly what's going on, but maybe my suggestions will hit the right spot!
By the way, your code looks really clean, I like it!
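If disposal is the culprit, a defensive copy right where the stream is produced would rule it out. An untested sketch of your picker callback (NSData.ToArray() copies the bytes out, so the result no longer depends on the native object's lifetime):

void OnImagePickerFinishedPickingMedia(object sender, UIImagePickerMediaPickedEventArgs args)
{
    UIImage image = args.EditedImage ?? args.OriginalImage;
    if (image != null)
    {
        using (NSData data = image.AsJPEG(1))
        {
            byte[] bytes = data.ToArray(); // defensive copy out of the native buffer
            taskCompletionSource.SetResult(new MemoryStream(bytes));
        }
    }
    else
    {
        taskCompletionSource.SetResult(null);
    }
    imagePicker.DismissModalViewController(true);
}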
So, although this issue originally came up with the CrossMedia plugin (https://github.com/jamesmontemagno/MediaPlugin), that plugin produced the same error.
However, the error only comes up if you pick a photo like this, for instance:
var _mediaFile = await CrossMedia.Current.PickPhotoAsync();
So, when I did this:
var _mediaFile = await CrossMedia.Current.PickPhotoAsync(new Plugin.Media.Abstractions.PickMediaOptions
{
    PhotoSize = Plugin.Media.Abstractions.PhotoSize.Small,
    CompressionQuality = 90,
});
The error went away. No idea why.
I'm creating an app that needs to make parallel HTTP requests, and I'm using HttpClient for this.
I'm looping over the URLs, and for each URL I start a new Task to do the request.
After the loop I wait until every task finishes.
However, when I watch the calls in Fiddler, I see that the requests are being made one by one rather than in parallel.
I've searched for a solution and found that other people have experienced this too, but not with UWP. The solution was to increase the DefaultConnectionLimit on the ServicePointManager.
The problem is that ServicePointManager does not exist in UWP. I've looked through the APIs and thought I could set DefaultConnectionLimit on HttpClientHandler, but no.
So I have a few questions:
Is DefaultConnectionLimit still a property that can be set somewhere?
If so, where do I set it?
If not, how do I increase the connection limit?
Is there still a connection limit in UWP?
This is my code:
var requests = new List<Task>();
var client = GetHttpClient();
foreach (var show in shows)
{
    requests.Add(Task.Factory.StartNew((x) =>
    {
        ((Show)x).NextEpisode = GetEpisodeAsync(((Show)x).NextEpisodeUri, client).Result;
    }, show));
}
await Task.WhenAll(requests.ToArray());
And this is the request:
public async Task<Episode> GetEpisodeAsync(string nextEpisodeUri, HttpClient client)
{
    try
    {
        if (String.IsNullOrWhiteSpace(nextEpisodeUri)) return null;
        HttpResponseMessage content = await client.GetAsync(nextEpisodeUri);
        if (content.IsSuccessStatusCode)
        {
            return JsonConvert.DeserializeObject<EpisodeWrapper>(await content.Content.ReadAsStringAsync()).Episode;
        }
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
    }
    return null;
}
Okay, I have the solution: I do need to use async/await inside the task. The problem was that I was using StartNew instead of Run, but I have to use StartNew because I'm passing along a state.
With StartNew, the inner task is not awaited unless you call Unwrap, i.e. Task.Factory.StartNew(...).Unwrap(). That way Task.WhenAll() will wait until the inner task is complete.
When you use Task.Run() you don't have to do this.
Task.Run vs Task.StartNew
The Stack Overflow answer
var requests = new List<Task>();
var client = GetHttpClient();
foreach (var show in shows)
{
    requests.Add(Task.Factory.StartNew(async (x) =>
    {
        ((Show)x).NextEpisode = await GetEpisodeAsync(((Show)x).NextEpisodeUri, client);
    }, show).Unwrap());
}
Task.WaitAll(requests.ToArray());
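For comparison, if you don't actually need the state overload, Task.Run understands async lambdas and no Unwrap is required; a sketch capturing the loop variable instead of passing state:

// Sketch: Task.Run unwraps async lambdas automatically, so no Unwrap call
// is needed; capturing the foreach variable replaces the state object
// (safe since C# 5, where foreach variables are per-iteration).
foreach (var show in shows)
{
    requests.Add(Task.Run(async () =>
    {
        show.NextEpisode = await GetEpisodeAsync(show.NextEpisodeUri, client);
    }));
}
await Task.WhenAll(requests);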
I think an easier way to solve this is not to "manually" start requests but instead to use LINQ with an async delegate to query the episodes and then set them afterwards.
You basically make it a two-step process:
Get all the next episodes.
Set them in the foreach loop.
This also has the benefit of decoupling your querying code from the side effect of setting the show.
var shows = Enumerable.Range(0, 10).Select(x => new Show());
var client = new HttpClient();

(Show, Episode)[] nextEpisodes = await Task.WhenAll(shows
    .Select(async show =>
        (show, await GetEpisodeAsync(show.NextEpisodeUri, client))));

foreach ((Show Show, Episode Episode) tuple in nextEpisodes)
{
    tuple.Show.NextEpisode = tuple.Episode;
}
Note that I am using the new tuple syntax of C# 7. Change to the old Tuple syntax accordingly if it is not available.
I have a form that uploads multiple files. My model has a List<HttpPostedFileBase> called SchemaFileBases, which is correctly bound. I need to upload these files to S3 and would like to do it in parallel. I'm unable to use async and await because this code is run from both ASP.NET and a queue-based application that currently doesn't have async/await support (working on it).
If I change the foreach below to Parallel.ForEach(this.SchemaFileBases, schemaFileBase => {..., then I get some funkiness: the files end up mashed together, and each file contains some of the other files' content after upload. AwsDocument is used elsewhere in parallel, so I don't think the problem is there; each AwsDocument has its own AmazonS3Client.
public override void UploadToS3(IMetadataParser parser)
{
    string hash;
    string key;
    foreach (var schemaFileBase in this.SchemaFileBases)
    {
        AwsDocument aws = new AwsDocument(AwsBucket.Received);
        hash = schemaFileBase.InputStream.Md5Hash().ToByteArray().ToHex();
        key = String.Format("{0}/{1}", this.S3Prefix, schemaFileBase.FileName);
        Stream inputStream = schemaFileBase.InputStream;
        aws.UploadToS3(key, inputStream, hash);
    }
}
My coworker suspects it has something to do with how InputStream on HttpPostedFileBase is implemented. Perhaps it is not thread-safe, and the streams are all reading from the original request at the same time? I can't imagine MS would do that, though.
Multi-threaded version:
public override void UploadToS3(IMetadataParser parser)
{
    Parallel.ForEach(this.SchemaFileBases, f =>
    {
        AwsDocument aws = new AwsDocument(AwsBucket.Received);
        string hash = f.InputStream.Md5Hash().ToByteArray().ToHex();
        string key = String.Format("{0}/{1}", this.S3Prefix, f.FileName);
        Stream inputStream = f.InputStream;
        aws.UploadToS3(key, inputStream, hash);
    });
}
The above is how I tried to multi-thread it. It does not work (the files get mixed up).
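My next experiment is to buffer each file up front, so the parallel tasks never touch the shared request stream at the same time. A sketch (reusing the Md5Hash/UploadToS3 helpers from above, and assuming the files fit in memory):

public override void UploadToS3(IMetadataParser parser)
{
    // Buffer sequentially first, so only one reader ever touches the request stream.
    var buffered = this.SchemaFileBases.Select(f =>
    {
        var ms = new MemoryStream();
        f.InputStream.CopyTo(ms);
        ms.Position = 0;
        return new { f.FileName, Stream = ms };
    }).ToList();

    Parallel.ForEach(buffered, b =>
    {
        AwsDocument aws = new AwsDocument(AwsBucket.Received);
        string hash = b.Stream.Md5Hash().ToByteArray().ToHex();
        b.Stream.Position = 0; // rewind in case Md5Hash advanced the stream
        string key = String.Format("{0}/{1}", this.S3Prefix, b.FileName);
        aws.UploadToS3(key, b.Stream, hash);
    });
}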
I have been trying to find a way to make this task more efficient. I am consuming a REST-based web service and need to update information for over 2500 clients.
I am using Fiddler to watch the requests, and I'm also updating a table with an update time when each one completes. I'm getting about one response per second. Are my expectations too high? I'm not even sure what I would define as 'fast' in this context.
I am handling everything in my controller and have tried running multiple web requests in parallel based on examples around the place, but it doesn't seem to make a difference. To be honest, I don't understand it well enough and was just trying to get it to build. I suspect it is still waiting for each request to complete before firing the next.
I have also increased the connection limit in my web.config file, as per another suggestion, with no success:
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="20" />
  </connectionManagement>
</system.net>
My controller's action method looks like this:
public async Task<ActionResult> UpdateMattersAsync()
{
    // Only get matters we haven't synced yet
    List<MatterClientRepair> repairList = Data.Get.AllUnsyncedMatterClientRepairs(true);
    // Take the next 500
    List<MatterClientRepair> subRepairList = repairList.Take(500).ToList();

    FinalisedMatterViewModel vm = new FinalisedMatterViewModel();

    using (ApplicationDbContext db = new ApplicationDbContext())
    {
        int jobCount = 0;
        foreach (var job in subRepairList)
        {
            // If not yet synced - it shouldn't ever be!!
            if (!job.Synced)
            {
                jobCount++;

                // set up some Authentication fields
                var oauth = new OAuth.Manager();
                oauth["access_token"] = Session["AccessToken"].ToString();
                string uri = "https://app.com/api/v2/matters/" + job.Matter;

                // prepare the json object for the body
                MatterClientJob jsonBody = new MatterClientJob();
                jsonBody.matter = new MatterForUpload();
                jsonBody.matter.client_id = job.NewClient;
                string jsonString = jsonBody.ToJSON();

                // Send it off. It returns the whole object we updated - we don't actually do anything with it
                Matter result = await oauth.Update<Matter>(uri, oauth["access_token"], "PUT", jsonString);

                // update our entities
                var updateJob = db.MatterClientRepairs.Find(job.ID);
                updateJob.Synced = true;
                updateJob.Update_Time = DateTime.Now;
                db.Entry(updateJob).State = System.Data.Entity.EntityState.Modified;

                if (jobCount % 50 == 0)
                {
                    // save every 50 changes
                    db.SaveChanges();
                }
            }
        }

        // if there are remaining changes to save
        if (jobCount % 50 != 0)
        {
            db.SaveChanges();
        }

        return View("FinalisedMatters", Data.Get.AllMatterClientRepairs());
    }
}
And of course the Update method itself, which handles the web request:
public async Task<T> Update<T>(string uri, string token, string method, string json)
{
    var authzHeader = GenerateAuthzHeader(uri, method);

    // prepare the token request
    var request = (HttpWebRequest)WebRequest.Create(uri);
    request.Headers.Add("Authorization", authzHeader);
    request.Method = method;
    request.ContentType = "application/json";
    request.Accept = "application/json, text/javascript";

    byte[] bytes = System.Text.Encoding.ASCII.GetBytes(json);
    request.ContentLength = bytes.Length;
    System.IO.Stream os = request.GetRequestStream();
    os.Write(bytes, 0, bytes.Length);
    os.Close();

    WebResponse response = await request.GetResponseAsync();
    using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
    {
        return JsonConvert.DeserializeObject<T>(reader.ReadToEnd());
    }
}
If it's not possible to do more than one request per second, then I'm interested in looking at an Ajax solution so I can give the user some feedback while it's processing. In my current solution I can't give the user any feedback until the action method reaches 'return', can I?
Okay, it's taken me a few days (and a LOT of trial and error), but I've worked this out. Hopefully it can help others. I finally found my silver bullet, and it was probably the place I should have started:
MSDN: Consuming the Task-based Asynchronous Pattern
In the end, the following line of code is what brought it all to light:
string [] pages = await Task.WhenAll(from url in urls select DownloadStringAsync(url));
I substituted a few things to make it work for a PUT request, as follows:
HttpResponseMessage[] results = await Task.WhenAll(from p in toUpload select client.PutAsync(p.uri, p.jsonContent));
'toUpload' is a List of MyClass:
public class MyClass
{
    // the URI should be relative to the base address
    // (ie: /api/v2/matters/101)
    public string uri { get; set; }

    // a string in JSON format, being the body of the PUT request
    public StringContent jsonContent { get; set; }
}
The key was to stop trying to put my PutAsync method inside a loop. My new line of code IS still blocking until ALL responses have come back, but that is what I wanted. Also, learning that I could use this LINQ-style expression to create a task list on the fly was immeasurably helpful. I won't post all the code (unless someone wants it) because it's not as nicely refactored as the original, and I still need to check whether each response was 200 OK before I record it as successfully saved in my database. So how much faster is it?
Results
I tested a sample of 50 web service calls from my local machine. (There is some saving of records to a SQL Database in Azure at the end).
Original Synchronous Code: 70.73 seconds
Asynchronous Code: 8.89 seconds
That's gone from 1.4146 seconds per request down to a mind-melting 0.1778 seconds per request (if you average it out)!
Conclusion
My journey isn't over. I've just scratched the surface of asynchronous programming and am loving it. I now need to work out how to save only the results that returned 200 OK. I can deserialize the HttpResponse, which returns a JSON object (with a unique ID I can look up, etc.), OR I could use the Task.WhenAny method and experiment with interleaving.
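A first sketch of that next step, pairing each request with its source item so only successes get recorded (names as in MyClass above):

// Sketch: keep each response next to its source item, then filter on 200 OK.
var results = await Task.WhenAll(toUpload.Select(async p =>
    new { Item = p, Response = await client.PutAsync(p.uri, p.jsonContent) }));

foreach (var r in results.Where(r => r.Response.IsSuccessStatusCode))
{
    // mark r.Item as synced in the database here
}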
My goal is to authenticate Web API requests using an AuthorizationFilter or DelegatingHandler. I want to look for the client ID and authentication token in a few places, including the request body. At first it seemed like this would be easy; I could do something like this:
var task = _message.Content.ReadAsAsync<Credentials>();
task.Wait();
if (task.Result != null)
{
    // check if credentials are valid
}
The problem is that the HttpContent can only be read once. If I do this in a Handler or a Filter, then the content isn't available for me in my action method. I found a few answers here on Stack Overflow, like this one, Read HttpContent in WebApi controller, which explains that it is intentionally this way, but not WHY. This seems like a pretty severe limitation that blocks me from using any of the cool Web API content-parsing code in Filters or Handlers.
Is it a technical limitation? Is it trying to keep me from doing a VERY BAD THING(tm) that I'm not seeing?
POSTMORTEM:
I took a look at the source like Filip suggested. ReadAsStreamAsync returns the internal stream, and there's nothing stopping you from calling Seek if the stream supports it. In my tests, if I called ReadAsAsync and then did this:
message.Content.ReadAsStreamAsync().ContinueWith(t => t.Result.Seek(0, SeekOrigin.Begin)).Wait();
The automatic model binding process would work fine when it hit my action method. I didn't use this, though; I opted for something more direct:
var buffer = new MemoryStream(_message.Content.ReadAsByteArrayAsync().WaitFor());
var formatters = _message.GetConfiguration().Formatters;
var reader = formatters.FindReader(typeof(Credentials), _message.Content.Headers.ContentType);
var credentials = reader.ReadFromStreamAsync(typeof(Credentials), buffer, _message.Content, null).WaitFor() as Credentials;
With an extension method (I'm on .NET 4.0 with no await keyword):
public static class TaskExtensions
{
    public static T WaitFor<T>(this Task<T> task)
    {
        task.Wait();
        if (task.IsCanceled) { throw new ApplicationException(); }
        if (task.IsFaulted) { throw task.Exception; }
        return task.Result;
    }
}
One last catch, HttpContent has a hard-coded max buffer size:
internal const int DefaultMaxBufferSize = 65536;
So if your content is going to be bigger than that you'll need to manually call LoadIntoBufferAsync with a larger size before you try to call ReadAsByteArrayAsync.
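For example (a sketch; the 10 MB cap here is an arbitrary number I picked):

// Sketch: raise the buffer cap before the buffered read; without this,
// bodies over 64 KB will fail. The 10 MB limit is arbitrary.
_message.Content.LoadIntoBufferAsync(10 * 1024 * 1024).Wait();
var buffer = new MemoryStream(_message.Content.ReadAsByteArrayAsync().WaitFor());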
The answer you pointed to is not entirely accurate.
You can always read the content as a string (ReadAsStringAsync) or as a byte[] (ReadAsByteArrayAsync), as these buffer the request internally.
For example, the dummy handler below:
public class MyHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var body = await request.Content.ReadAsStringAsync();
        // deserialize from string, i.e. using JSON.NET
        return await base.SendAsync(request, cancellationToken);
    }
}
The same applies to byte[]:
public class MessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var requestMessage = await request.Content.ReadAsByteArrayAsync();
        // do something with requestMessage - but you will have to deserialize from byte[]
        return await base.SendAsync(request, cancellationToken);
    }
}
Neither will cause the posted content to be null when it reaches the controller.
I'd put the client ID and the authentication key in the headers rather than the content.
That way, you can read them as many times as you like!
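For example, in a handler (the header names here are made up for illustration):

// Sketch: headers, unlike content, can be read any number of times.
IEnumerable<string> ids;
if (request.Headers.TryGetValues("X-Client-Id", out ids))
{
    var clientId = ids.FirstOrDefault();
    // validate clientId plus the token from "X-Auth-Token" here
}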