Sending an HTTP request with multiple parameters having the same name - http

I need to send an HTTP request (and get an XML response) from Flash that looks similar to the following:
http://example.com/somepath?data=1&data=2&data=3
That is, several parameters share the same name but have different values.
Until now I have used the following code to make HTTP requests:
var resp:XML = new XML();
resp.onLoad = function(success:Boolean) {/*...*/};
resp.ignoreWhite = true;
var req:LoadVars = new LoadVars();
req["someParam1"] = 3;
req["someParam2"] = 12;
req.sendAndLoad("http://example.com/somepath", resp, "GET");
This will not do in my case: only one parameter will be sent, carrying the last value.
What are my options? I'm using ActionScript 2.
Added
I guess I could do something like this:
var url:String = myCustomFunctionForBuildingRequestString();
var resp:XML = new XML();
resp.onLoad = function(success:Boolean) {/*...*/};
resp.load(url);
But in that case I lose the ability to do POST requests. Any alternatives?
Changing the request format is not an option.

The standard HTTP way of sending array data is
http://example.com/?data[0]=1&data[1]=2
But this isn't wrong either (added from comment):
http://example.com/?data[]=1&data[]=2
Sending several parameters with the same name, as you're doing, in practice means that all but the last value get ignored: when reading the variables, the server overwrites (in memory) any earlier item stored under the same name, because silently renaming a variable isn't good practice and never was.
I don't know much ActionScript (none, actually :p), but on the server you'd access it as a list, an array, or whatever data structure the platform provides.

Although POST can carry multiple values for the same key, I'd be cautious using it, since some servers can't properly handle that, which is probably why this isn't supported ... if you convert "duplicate" parameters to a list, the whole thing might start to choke when a parameter comes in only once and you suddenly wind up with a plain string instead ... but I guess you know what you're doing ...
I am sorry to say so, but what you want to do is not possible in pure AS2 ... the only two classes available for HTTP are LoadVars and XML ... technically there's also loadVariables, but it simply copies properties from the passed object into the request, which doesn't change your problem, since property names are unique ...
If you want to stick to AS2, you need an intermediary tier:
Use a server to forward your calls. If you have access to the server, create a new endpoint for AS2 clients, which decodes those requests and passes them on to the normal endpoint.
Use JavaScript. With flash.external::ExternalInterface you can call JavaScript code. You need to define a callback for when the operation is done, as well as a JavaScript function that you can call (there are other ways, but this should suffice). Build the request string inside Flash, hand it to JavaScript, and let JavaScript send it to the server in a POST request and return the response to Flash through the callback (a rough sketch follows below).
It's up to you to decide which one is more work ...
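To make the JavaScript option a bit more concrete, here is a rough, untested AS2 sketch; sendRawRequest and onRawResponse are hypothetical names for the JavaScript function and the Flash callback, and the JavaScript side (an XMLHttpRequest POST) is only described in the comments:
import flash.external.ExternalInterface;

// Let JavaScript hand the raw response text back to Flash.
ExternalInterface.addCallback("onRawResponse", null, function(responseText:String):Void {
    var resp:XML = new XML();
    resp.ignoreWhite = true;
    resp.parseXML(responseText);
    // ... handle the parsed response ...
});

// Build the body with repeated parameter names ourselves, then let the page's
// JavaScript function sendRawRequest(url, body) POST it via XMLHttpRequest
// and call back into the SWF with the response text.
var body:String = "data=1&data=2&data=3";
ExternalInterface.call("sendRawRequest", "http://example.com/somepath", body);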
Side note: in AS3, you'd use flash.net::URLLoader with dataFormat set to flash.net::URLLoaderDataFormat.TEXT, and then again encode the parameters into a string yourself and send that.
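For reference, a minimal, hedged AS3 sketch of that side note (URL and parameter values are placeholders):
import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLLoaderDataFormat;
import flash.net.URLRequest;
import flash.net.URLRequestMethod;

var request:URLRequest = new URLRequest("http://example.com/somepath");
request.method = URLRequestMethod.POST;
request.contentType = "application/x-www-form-urlencoded";
request.data = "data=1&data=2&data=3"; // repeated names survive, since no name/value map is involved

var loader:URLLoader = new URLLoader();
loader.dataFormat = URLLoaderDataFormat.TEXT; // the response comes back as plain text
loader.addEventListener(Event.COMPLETE, function(e:Event):void {
    var resp:XML = new XML(loader.data);
    // ... handle the XML response ...
});
loader.load(request);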

Disclaimer: I've never used ActionScript and have no means of testing this.
Putting the same variable name with several values on the query string is the standard way of sending multi-value variables (for example form checkboxes) to web servers. If LoadVars is capable of sending multiple values then it seems plausible that the values should be stored in an array:
req["someParam1"] = ["foo","bar","bas"];
There also seems to be a decode function on LoadVars; what happens if you try to import the query string you want into the object?
req.decode("someParam1=foo&someParam1=bar&someParam1=bas");

You cannot use LoadVars like this, because data can be either 1 or 2 or 3, not all of them at the same time.
You can either pass it as a comma separated list:
var req:LoadVars = new LoadVars();
req["data"] = "1,2,3";
or as an XML string and parse it at the server. I am not familiar with manipulating XML in AS2, but this is how you'd do it in AS3:
var xml:XML = <root/>;
xml.appendChild(<data>1</data>);
xml.appendChild(<data>2</data>);
xml.appendChild(<data>3</data>);
// now pass it to LoadVars
req["data"] = xml.toXMLString();
The string you send is:
<root>
<data>1</data>
<data>2</data>
<data>3</data>
</root>

Related

Generically forwarding a GRPC call

I have a GRPC API where, following a refactor, a few packages were renamed. This includes the package declaration in one of our proto files that defines the API. Something like this:
package foo;
service BazApi {
rpc FooEventStream(stream Ack) returns (stream FooEvent);
}
which was changed to
package bar;
service BazApi {
rpc FooEventStream(stream Ack) returns (stream FooEvent);
}
The server side is implemented using grpc-java, with Scala and Monix on top.
This all works fine for clients that use the new proto files, but for old clients that were built on top of the old proto files, this causes problems: UNIMPLEMENTED: Method not found: foo.BazApi/FooEventStream.
The actual data format of the messages passed over the GRPC API has not changed, only the package.
Since we need to keep backwards compatibility, I've been looking into a way to make the old clients work while keeping the name change.
I was hoping to make this work with a generic ServerInterceptor which would be able to inspect an incoming call, see that it's from an old client (we have the client version in the headers) and redirect/forward it to the renamed service. (Since it's just the package name that changed, this is easy to figure out e.g. foo.BazApi/FooEventStream -> bar.BazApi/FooEventStream)
However, there doesn't seem to be an elegant way to do this. I think it's possible by starting a new ClientCall to the correct endpoint, and then handling the ServerCall within the interceptor by delegating to the ClientCall, but that will require a bunch of plumbing code to properly handle unary/clientStreaming/serverStreaming/bidiStreaming calls.
Is there a better way to do this?
If you can easily change the server, you can have it support both names simultaneously. You can consider a solution where you register your service twice, with two different descriptors.
Every service has a bindService() method that returns a ServerServiceDefinition. You can pass the definition to the server via the normal serverBuilder.addService().
So you could take the normal ServerServiceDefinition, rewrite its methods to the old name, and then register that definition as well.
BazApiImpl service = new BazApiImpl();
serverBuilder.addService(service); // registers the service as "bar.BazApi"
ServerServiceDefinition barDef = service.bindService();
ServerServiceDefinition.Builder fooDefBuilder = ServerServiceDefinition.builder("foo.BazApi");
for (ServerMethodDefinition<?,?> barMethodDef : barDef.getMethods()) {
    MethodDescriptor desc = barMethodDef.getMethodDescriptor();
    String newName = desc.getFullMethodName().replace("bar.BazApi/", "foo.BazApi/");
    desc = desc.toBuilder().setFullMethodName(newName).build();
    fooDefBuilder.addMethod(desc, barMethodDef.getServerCallHandler());
}
serverBuilder.addService(fooDefBuilder.build()); // also registers it as "foo.BazApi"
Using the lower-level "channel" API you can make a proxy without too much work. You mainly just proxy events from a ServerCall.Listener to a ClientCall and the ClientCall.Listener to a ServerCall. You get to learn about the lower-level MethodDescriptor and the rarely-used HandlerRegistry. There's also some complexity to handle flow control (isReady() and request()).
I made an example a while back, but never spent the time to merge it to grpc-java itself. It is currently available on my random branch. You should be able to get it working just by changing localhost:8980 and by re-writing the MethodDescriptor passed to channel.newCall(...). Something akin to:
MethodDescriptor<ReqT, RespT> desc = serverCall.getMethodDescriptor();
if (desc.getFullMethodName().startsWith("foo.BazApi/")) {
    String newName = desc.getFullMethodName().replace("foo.BazApi/", "bar.BazApi/");
    desc = desc.toBuilder().setFullMethodName(newName).build();
}
ClientCall<ReqT, RespT> clientCall = channel.newCall(desc, CallOptions.DEFAULT);

Is there a tangible benefit to using wrapper requests over plain messages in grpc service calls?

Let's say we have a message containing the ID of some record in the database:
message Record {
  uint64 id = 1;
}
We also have an rpc call that returns all of the rows from table DATA that said record is mentioned in.
rpc GetDataForRecord(Record) returns (Data) {}
If we, for example, wrap Record in
message RqData {
  Record id = 1;
}
then once we need to return only "active" data, for example, we won't need to add a
GetActiveDataForRecord
method; instead we could extend RqData as:
message RqData {
  Record id = 1;
  bool use_active = 2;
}
and use
rpc GetDataForRecord(RqData) returns (Data) {}
and clients that know about this new functionality will be able to use it, while older clients will keep calling it as before, passing only the Record part within the RqData wrapper without specifying whether to filter for active data.
Here's the question: is there really a reason to use this kind of wrapping of everything into a separate request, or am I overthinking things and just passing plain structures will do?
I am trying to plan for the future, but I'm not sure whether I am overcomplicating things.
In general, making a method-specific request and response is a Good Thing™ and is encouraged. For a Foo method you'd have FooRequest and FooResponse. Having specialized messages for the method allows you to add new "arguments," as you mentioned.
But for some cases it turns out fine to break the pattern and avoid the wrapping; it's a judgement call. Although you're asking from a different perspective, you may be interested in this answer about related methods.
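For illustration, a hedged sketch of that convention applied to the example above; the service name and the *Request/*Response message names here are hypothetical, and only the shape of the wrapping matters:
service DataApi {
  rpc GetDataForRecord(GetDataForRecordRequest) returns (GetDataForRecordResponse) {}
}

message GetDataForRecordRequest {
  Record record = 1;
  // new "arguments" can be appended later without breaking older clients
  bool use_active = 2;
}

message GetDataForRecordResponse {
  Data data = 1;
}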

Read Request Body in ASP.NET

How does one read the request body in ASP.NET? I'm using the REST Client add-on for Firefox to form a GET request for a resource on a site I'm hosting locally, and in the Request Body I'm just putting the string "test" to try to read it on the server.
In the server code (which is a very simple MVC action) I have this:
var reader = new StreamReader(Request.InputStream);
var inputString = reader.ReadToEnd();
But when I debug into it, inputString is always empty. I'm not sure how else (such as in FireBug) to confirm that the request body is indeed being sent properly, I guess I'm just assuming that the add-on is doing that correctly. Maybe I'm reading the value incorrectly?
Maybe I'm misremembering my schooling, but I think GET requests don't actually have a body. This page states:
The HTML specifications technically define the difference between "GET" and "POST" so that the former means that form data is to be encoded (by a browser) into a URL, while the latter means that the form data is to appear within a message body.
So maybe you're doing things correctly, but you have to POST data in order to have a message body?
Update
In response to your comment, the most "correct" RESTful way would be to send each of the values as its own parameter:
site.com/MyController/MyAction?id=1&id=2&id=3...
Then your action will auto-bind these if you give it an array parameter by the same name:
public ActionResult MyAction(int[] id) {...}
Or if you're a masochist you can maybe try pulling the values out of Request.QueryString one at a time.
I was recently reminded of this old question, and wanted to add another answer for completeness based on more recent implementations in my own work.
For reference, I've blogged on the subject recently.
Essentially, the heart of this question was, "How can I pass larger and more complex search criteria to a resource to GET a filtered list of objects?" And it ended up boiling down to two choices:
A bunch of GET query string parameters
A POST with a DTO in the request body
The first option isn't ideal, because the implementation is ugly and the URL will likely exceed the maximum length at some point. The second option, while functional, just didn't sit right with me in a "RESTful" sense. After all, I'm GETting data, right?
However, keep in mind that I'm not just GETting data. I'm creating a list of objects. Each object already exists, but the list itself doesn't. It's a brand new thing, created by issuing search/filter criteria to the complete repository of objects on the server. (After all, remember that a collection of objects is still, itself, an object.)
It's a purely semantic difference, but a decidedly important one. Because, at its simplest, it means I can comfortably use POST to issue these search criteria to the server. The response is data which I receive, so I'm "getting" data. But I'm not "GETting" data in the sense that I'm actually performing an act of creation, creating a new instance of a list of objects which happens to be composed of pre-existing elements.
I'll fully admit that the limitation was never technical, it was just semantic. It just never "sat right" with me. A non-technical problem demands a non-technical solution, in this case a semantic one. Looking at the problem from a slightly different semantic viewpoint resulted in a much cleaner solution, which happened to be the solution I ended up using in the first place.
Aside from the GET/POST issue, I did discover that you need to set the Request.InputStream position back to the start, thanks to this answer I found.
Specifically the comment
Request.InputStream // make sure to reset the Position after reading or later reads may fail
Which I translated into
Request.InputStream.Seek(0,0)
I would try using HttpClient (available via NuGet) for this type of thing. It's so much easier than the System.Net objects.
Reading directly from Request.InputStream is dangerous, because a re-read will return nothing even if the data exists. This is verified in practice.
Reliable reading is performed as follows:
/* Returns a string with the content of the body of the HTTP request. */
public static string GetFromBodyString(this HttpRequestBase request)
{
    string result = string.Empty;
    if (request == null || request.InputStream == null)
        return result;

    request.InputStream.Position = 0;
    /* Copy the original stream into memory, since the body of the
       current HTTP request may need to be read more than once. */
    using (MemoryStream memoryStream = new MemoryStream())
    {
        request.InputStream.CopyToMemoryStream(memoryStream);
        memoryStream.Position = 0; // rewind the copy before reading it
        using (StreamReader streamReader = new StreamReader(memoryStream))
        {
            result = streamReader.ReadToEnd();
        }
    }
    return result;
}

/* Copies bytes from the given stream and writes them to a MemoryStream. */
public static void CopyToMemoryStream(this Stream source, MemoryStream destination)
{
    if (source.CanSeek)
    {
        int pos = (int)destination.Position;
        int length = (int)(source.Length - source.Position) + pos;
        destination.SetLength(length);
        while (pos < length)
            pos += source.Read(destination.GetBuffer(), pos, length - pos);
    }
    else
    {
        source.CopyTo(destination);
    }
}

Can I pass an array to a function using the ...rest construction?

I'm making multiple similar calls with similar results to one remote object. Because these calls are so similar and very changeable, I've been keeping the name of the remote method in a config file, and when I need to make the call I use getOperation() on the remote object, and call send() on the operation object. However, the requirements have changed so that not all of the calls will have the same number of parameters.
Because send() uses ..., will I be able to continue using the same approach and pass an array, or will send() treat that as passing one argument of type Array?
The Operation class also has an "arguments" property that you can use, so you can prefill it before calling send(). The send() method then requires no extra arguments.
var operation:Operation = Operation(remoteObject.getOperation(methodName));
operation.arguments = parameters;
var token:AsyncToken = operation.send();
var responder:Responder = new Responder(resultHandler, faultHandler);
token.addResponder(responder);
You can use ...rest.
That will give you an array with a bunch of objects. I would recommend that you make the first item [0] always the ID. This ID should identify either the sender or the type of object being passed. You can easily do a switch/case for each type of item. You could also handle this in a more sophisticated way, but this should work. A small sketch of how an array interacts with a rest parameter follows below.
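For what it's worth, a hedged AS3 sketch of the behaviour in question, assuming remoteObject, methodName and parameters exist as described in the question:
import mx.rpc.AbstractOperation;
import mx.rpc.AsyncToken;

var operation:AbstractOperation = remoteObject.getOperation(methodName);

// Passing the array directly: the ...rest parameter of send() receives
// ONE argument, whose type is Array.
// var token:AsyncToken = operation.send(parameters);

// Spreading the array with Function.apply so each element becomes its own argument:
var token:AsyncToken = operation.send.apply(operation, parameters);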

Difficult synchronization problem with Flex commands (in Cairngorm)

My problem, simplified:
I have a dataGrid with a dataProvider "documents"
A column of the datagrid has a labelFunction that gets the project_id field of the document, and returns the project name, from a bindable variable "projects"
Now, I dispatch the events to download the documents and the projects from the server, but if the documents get downloaded before the projects, the label function gives an error (no "projects" variable).
Therefore, I must serialize the commands being executed: the getDocuments command must execute only after the getProjects command.
In the real world, though, I have dozens of resources being downloaded, and those commands are not always grouped together (so I can't, for example, execute the second command from the onSuccess() method of the first, because they don't always have to be executed together).
I need a simple solution... I need an idea...
If I understand you correctly, you need to serialize the replies from the server. I have done that by using AsyncToken.
The approach: Before you call the remote function, add a "token" to it. For instance, an id. The reply from the server for that particular call will then include that token. That way you can keep several calls separate and create chains of remote calls.
It's quite cool actually:
var service:RemoteObject;
// ..
var call:AsyncToken = service.theMethod.send();
call.myToken = "serialization id";
private function onResult(event:ResultEvent):void
{
// Fetch the serialization id and do something with it
var serId:String = event.token.myToken;
}
