We are converting a Flex application to use some REST APIs.
When we added the mx.rpc.http.HTTPService class to the code, the SWF output grew from 175 KB to 260 KB. That is an unacceptable hit for us.
Is there any better way to make lightweight REST calls from a Flex app? Would we be better off using ExternalInterface to make the calls from JavaScript instead?
flash.net.URLLoader is built into the runtime and won't increase the file size at all. I've used it as a JSON client before, so you shouldn't have any trouble with it.
Below is a very simple example. See the documentation for HTTP_STATUS and HTTP_RESPONSE_STATUS for information on their restrictions.
import flash.events.Event;
import flash.events.HTTPStatusEvent;
import flash.events.IOErrorEvent;
import flash.events.SecurityErrorEvent;
import flash.net.URLLoader;
import flash.net.URLRequest;
import com.adobe.serialization.json.JSON; // as3corelib; on Flash Player 11+ the native JSON.stringify()/parse() work too

var request:URLRequest = new URLRequest("http://tempuri.org/service/json");
request.method = "POST";
request.contentType = "application/json";
request.data = JSON.encode(jsonObject);

var loader:URLLoader = new URLLoader();
// Only supported by some browsers
loader.addEventListener(HTTPStatusEvent.HTTP_STATUS, statusCodeReceived);
// AIR only
loader.addEventListener(HTTPStatusEvent.HTTP_RESPONSE_STATUS, statusCodeReceived);
loader.addEventListener(Event.COMPLETE, function(ev:Event):void
{
    // The response body lives on the loader, not the request
    var responseJson:String = loader.data as String;
    var responseJsonObject:Object = JSON.decode(responseJson);
});
loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, errorHandler);
loader.addEventListener(IOErrorEvent.IO_ERROR, errorHandler);
// Attach the listeners before starting the load
loader.load(request);
I've always thought a good approach to this would be to create a small ExternalInterface bridge to the browser's JavaScript HTTP API, XMLHttpRequest. I've never tried it, but from what I've looked into, it seems fairly straightforward.
This would have the added benefit of working around Flash Player's security restrictions, which make its HTTP support terribly crippled.
I am working with a legacy ASP.NET Framework MVC application that is experiencing memory problems: occasional bursts of OutOfMemoryException across a variety of operations. The application frequently builds large lists of objects (sometimes tens to hundreds of megabytes) and then serializes them to JSON to return to the client. We are not completely sure what the source of the exceptions is, but a likely candidate is memory fragmentation caused by too many large objects landing on the Large Object Heap (LOH).
We think a quick win would be to refactor some of the controller endpoints to serialize their JSON content with a stream writer (as outlined in the JSON.NET documentation) and stream the content back to the browser. This won't eliminate the memory load of the data lists prior to serialization, but in theory it should cut down the amount of data going onto the LOH.
The code is written to send the results in chunks of less than 85 KB:
public async Task<ActionResult> MyControllerMethod()
{
    var data = GetData();

    // Disable response buffering so chunks reach the client as they are flushed
    Response.BufferOutput = false;
    Response.ContentType = "application/json";

    var serializer = JsonSerializer.Create();
    // Buffer size chosen with the 85,000-byte LOH threshold in mind
    using (var sw = new StreamWriter(Response.OutputStream, Encoding.UTF8, 84999))
    {
        sw.AutoFlush = false;
        serializer.Serialize(sw, data);
        await sw.FlushAsync();
    }
    return new EmptyResult();
}
I am aware of a few downsides with this approach, but don't consider them showstoppers:
More complex to unit test, due to the EmptyResult returned by the controller.
I have read there is a small overhead from a P/Invoke call whenever data is flushed (in practice I haven't noticed this).
Cannot post-process the content using e.g. an HttpHandler.
Cannot set a Content-Length header, which may be useful for the client in some cases.
What other downsides or potential problems exist with this approach?
I'm deserializing a huge JSON file (1.4 GB) via a stream, because I don't want to load the whole content into memory just to parse it. That works fine, but it takes ~80 seconds, so I want to display progress.
public JObject DeserializeViaStream(string filename)
{
    object obj;
    var serializer = new JsonSerializer();
    using (var sr = new StreamReader(new FileStream(filename, FileMode.Open)))
    using (var jsonTextReader = new JsonTextReader(sr))
    {
        obj = serializer.Deserialize(jsonTextReader);
    }
    return (JObject) obj;
}
I have one idea I have not yet tried: I could implement my own stream wrapper that keeps track of the bytes read and compares that count to the file length.
Is there a built-in option or an easier way to do this?
I ended up using the idea I had. Luckily there is already a ProgressStream by Mel Green (archive.org); the original URL is no longer available.
Please note:
this approach may not work in all situations or with every library, because Seek() provides random access: a consumer could jump around or read parts of the file more than once, which throws off a progress estimate based purely on bytes read.
I can't post the source code here, because it was released under an unclear license.
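Since I can't share that code, here is a minimal, independent sketch of the same idea (my own illustration, not Mel Green's code): a pass-through Stream that reports how far the underlying stream has been read. The class name and callback shape are just illustrative.

using System;
using System.IO;

public class ProgressStream : Stream
{
    private readonly Stream _inner;
    private readonly Action<long, long> _onProgress; // (position, length)

    public ProgressStream(Stream inner, Action<long, long> onProgress)
    {
        _inner = inner;
        _onProgress = onProgress;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int read = _inner.Read(buffer, offset, count);
        // Report current position relative to the total length after each read
        _onProgress(_inner.Position, _inner.Length);
        return read;
    }

    // Pass-through plumbing. Note that Seek is still allowed, which is exactly
    // why position-based progress can mislead if the consumer seeks around.
    public override bool CanRead { get { return _inner.CanRead; } }
    public override bool CanSeek { get { return _inner.CanSeek; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { return _inner.Length; } }
    public override long Position
    {
        get { return _inner.Position; }
        set { _inner.Position = value; }
    }
    public override void Flush() { _inner.Flush(); }
    public override long Seek(long offset, SeekOrigin origin) { return _inner.Seek(offset, origin); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}

Wiring it into the deserialization code is then a one-line change, e.g. new StreamReader(new ProgressStream(File.OpenRead(filename), (pos, len) => Console.Write("\r{0:P0}", (double)pos / len))).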
I make my SignalR connection in JavaScript like so:
$.connection.hub.start().done(function () {
    //do something
});
With raw WebSockets I can set the binaryType property like so:
var wsUri = "ws://localhost:8080/whiteboard/websocket";
var websocket = new WebSocket(wsUri);
websocket.binaryType = "blob";
or
websocket.binaryType = "arraybuffer";
Can I set this property for SignalR, and if so, how?
No, you can't, at least up to the latest officially released version (2.x). SignalR is a set of layered abstractions over a physical connection, where all those abstractions must work with different transport strategies (WebSockets being just one of them), so they can only expose a common subset of features.
Anything related to the required type of data transmission/serialization is handled by SignalR automatically, except for certain portions when using a PersistentConnection, where only strings can be used. There is some room for changing behaviors by injecting custom implementations, but I think this one would be very hard to do and would probably conflict with SignalR's general goals.
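If you do need to move binary payloads at that level, the usual workaround is to shuttle them as base64-encoded strings and decode on the other side. A rough server-side sketch (the class name and processing step are illustrative):

using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Sketch: a PersistentConnection that receives binary data encoded as a
// base64 string, decodes it, and broadcasts it back as base64.
public class WhiteboardConnection : PersistentConnection
{
    protected override Task OnReceived(IRequest request, string connectionId, string data)
    {
        byte[] payload = Convert.FromBase64String(data); // client sends base64
        // ... process the binary payload ...
        return Connection.Broadcast(Convert.ToBase64String(payload));
    }
}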
I've run across some code in an ASP.NET app, and I'm wondering whether there is any practical difference between the following two snippets. Note that we are trying to send a request to a third-party endpoint and then use the response in what's rendered on the page.
Dim asyncResult As IAsyncResult = request.BeginGetResponse(Nothing, Nothing)
asyncResult.AsyncWaitHandle.WaitOne()
Using webResponse As WebResponse = request.EndGetResponse(asyncResult)
    Using rd As StreamReader = New StreamReader(webResponse.GetResponseStream())
        'code here
    End Using
End Using
and this synchronous version:
Using webResponse As WebResponse = request.GetResponse()
    Using rd As StreamReader = New StreamReader(webResponse.GetResponseStream())
        'code here
    End Using
End Using
According to this answer to another question, WaitOne blocks the thread. If so, is there really any advantage to doing that versus just using the synchronous method above? Am I correct in assuming that the thread processing the page will not be available to serve other requests until the method finishes either way?
That is a common anti-pattern. You get the worst of both worlds: no thread is unblocked, and you add overhead.
Probably the responsible person heard that using async APIs makes an app more scalable. But if that pattern delivered scalability, why wouldn't GetResponse simply be implemented in terms of the Begin/End methods and be scalable by default?
Async is all the rage at the moment, it is misused all the time, and even when used correctly it is often a waste of effort on the server. Don't be surprised to see stuff like this.
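For contrast, here is a sketch of the version that actually frees the thread while the request is in flight: wrap the Begin/End pair in a Task and await it from an async action (C# for brevity; the controller, action name, and URL are placeholders):

using System.IO;
using System.Net;
using System.Threading.Tasks;
using System.Web.Mvc;

public class DemoController : Controller
{
    // Awaiting the wrapped Begin/End pair returns the request thread to the
    // pool while the HTTP call is outstanding, unlike WaitOne().
    public async Task<ActionResult> MyAction()
    {
        var request = WebRequest.Create("http://example.com/endpoint"); // placeholder URL
        using (var webResponse = await Task.Factory.FromAsync(
            request.BeginGetResponse, request.EndGetResponse, null))
        using (var rd = new StreamReader(webResponse.GetResponseStream()))
        {
            string body = await rd.ReadToEndAsync();
            // use the response in what's rendered on the page
            return Content(body);
        }
    }
}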
I'm fairly new to BizTalk and am creating a custom pipeline component. I have seen code in examples similar to the following:
public void Disassemble(IPipelineContext pContext, IBaseMessage pInMsg)
{
    Stream originalDataStream = pInMsg.BodyPart.GetOriginalDataStream();
    StreamReader strReader = new StreamReader(originalDataStream);
    string strOriginalData = strReader.ReadToEnd();
    byte[] bufferOriginalMessage = new byte[strOriginalData.Length];
    bufferOriginalMessage = ASCIIEncoding.Default.GetBytes(strOriginalData);
    Stream ms = new MemoryStream();
    ms.Write(bufferOriginalMessage, 0, strOriginalData.Length);
    //other stuff here
    ms.Seek(0, SeekOrigin.Begin);
    pInMsg.BodyPart.Data = ms;
}
But nowhere in the method is the StreamReader closed or disposed; the method simply exits.
Normally when using StreamReader and similar classes, it is best practice to wrap them in a using statement so the reader (and its underlying stream) is disposed automatically.
Is there a particular reason, perhaps specific to BizTalk, why you wouldn't dispose this StreamReader? I have not found any information on this point. Can anyone help?
In general, yes, it's good practice to close readers and streams you no longer need. That said, it might not be 100% required every time: closing the reader would normally close the underlying stream as well, and chances are something else is already aware of that stream and will close it at the right time on its own.
What is good practice, however, is to register any streams you use in a pipeline component whose lifetime should match that of the message with the pipeline context's resource tracker, so that BizTalk can dispose of them automatically when pipeline execution finishes and the message has been processed.
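A minimal sketch of that pattern, following the names in the question's snippet (assumes the Microsoft.BizTalk interop assemblies are referenced):

using System.IO;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public void Disassemble(IPipelineContext pContext, IBaseMessage pInMsg)
{
    Stream originalDataStream = pInMsg.BodyPart.GetOriginalDataStream();

    var ms = new MemoryStream();
    originalDataStream.CopyTo(ms);
    ms.Seek(0, SeekOrigin.Begin);
    pInMsg.BodyPart.Data = ms;

    // Hand the stream to BizTalk's resource tracker so it is disposed when the
    // pipeline finishes with the message, rather than closing it here while
    // downstream components may still need to read it.
    pContext.ResourceTracker.AddResource(ms);
}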