Background
We have an app that handles a considerable number of requests per second. This app needs to notify an external service by making a GET call via HTTPS to one of our servers.
Objective
The objective here is to use HTTPoison to make async GET requests. I don't really care about the response of the requests; all I care about is knowing whether they failed or not, so I can write any errors into a logger.
If a request succeeds, I don't want to do anything.
Research
I have checked the official documentation for HTTPoison and I see that they support async requests:
https://hexdocs.pm/httpoison/readme.html#usage
However, I have 2 issues with this approach:
They use flush to show the request was completed. I can't log into the app and manually flush to see how the requests are going; that would be insane.
They don't show any notification mechanism for when we get the responses or errors.
So, I have a simple question:
How do I get asynchronously notified that my request failed or succeeded?
I assume that the default HTTPoison.get is synchronous, as shown in the documentation.
This could be achieved by spawning a new process per request. Consider something like:
notify = fn response ->
  # Any handling logic - write to DB? Send a message to another process?
  # Here, I'll just print the result
  IO.inspect(response)
end

spawn(fn ->
  resp = HTTPoison.get("http://google.com")
  notify.(resp)
end) # spawn will not block, so it will attempt to execute the next spawn straight away

spawn(fn ->
  resp = HTTPoison.get("http://yahoo.com")
  notify.(resp)
end) # This will be executed immediately after the previous `spawn`
Please take a look at the documentation of spawn/1 I've pointed out here.
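As a small variation on the above, here is a minimal sketch that pattern matches on HTTPoison's result tuples so failures end up in Logger (the URL and log messages are placeholders, not part of the original answer):

require Logger

notify = fn
  {:ok, %HTTPoison.Response{status_code: code}} ->
    Logger.debug("request finished with status #{code}")

  {:error, %HTTPoison.Error{reason: reason}} ->
    Logger.error("request failed: #{inspect(reason)}")
end

# Task.start/1 runs the request in its own process without blocking the caller
Task.start(fn ->
  "https://example.com/notify"
  |> HTTPoison.get()
  |> notify.()
end)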
Hope that helps!
Related
I want to implement a chat system.
I am stuck at the point where a user sends multiple messages really fast. All the messages reach the server, but in no particular order.
So I thought of implementing a queue where each message shall:
1. First be placed in the queue
2. Wait for its turn
3. Make the POST request on its turn
4. Wait around 5 seconds for the response from the server
5. If the response arrives within the time frame and the status is OK, the message was sent; otherwise sending failed
In either case of point 5, the message shall be dequeued and the next message shall be given its chance.
Now, the major problem is that there could be multiple queues, one for each chat head (the user we are talking to). How will I implement this? I am really new to Dart and Flutter. Please help. Thanks!
It sounds like you are describing a Stream - a series of asynchronous events which are ordered.
https://www.dartlang.org/guides/language/language-tour#handling-streams
https://www.dartlang.org/guides/libraries/library-tour#stream
Create a StreamController, and add messages to it as they come in:
import 'dart:async';

var controller = StreamController<String>();
// whenever you have a message
controller.add(message);
Listen on that stream and upload the messages:
await for (var message in controller.stream) {
  await uploadMessage(message);
}
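For the "multiple queues, one per chat head" part of the question, a hypothetical sketch could keep one StreamController per chat id (chatId, uploadMessage and the 5-second timeout here are assumptions, not part of the answer above):

import 'dart:async';

// One controller (i.e. one ordered queue) per chat head, keyed by chat id.
final controllers = <String, StreamController<String>>{};

void sendMessage(String chatId, String message) {
  final controller = controllers.putIfAbsent(chatId, () {
    final c = StreamController<String>();
    // Start a single listener per chat: the next message is only uploaded
    // after the previous await completes, so order is preserved.
    () async {
      await for (final msg in c.stream) {
        try {
          await uploadMessage(chatId, msg).timeout(const Duration(seconds: 5));
        } catch (e) {
          print('sending failed for $chatId: $e');
        }
      }
    }();
    return c;
  });
  controller.add(message);
}

// Placeholder for the actual POST request.
Future<void> uploadMessage(String chatId, String message) async {}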
There are multiple executions happening interleaved/concurrently in Firebase. The execution id in the log changes when a new execution happens, and the old execution id is forgotten, so the execution id only moves forward. When the old function resumes, it uses the new execution id. Is there a way to keep the old execution id for the old function and the new execution id for the new function?
Workflow:
Let's say Function1 and Function2 are different triggers of the same function.
1. Function1 does some DB reads and makes an HTTP request, which returns a promise. This takes some time (maybe a few ms). Let's assume its execution-id in the log is 154690519665944.
2. Function2 gets triggered while Function1 is waiting. Function2 gets execution-id 154690574405903, does the same thing, and waits for its HTTP response.
3. Function1 resumes once it gets its HTTP response, but while logging it uses another execution-id, 154694739233261.
What happened to execution-id 154690519665944?
Since there are multiple triggers happening simultaneously, the only way to find out whether a function completed successfully is to check the logs. By using the execution-id as the filter, I could have found whether the function executed successfully or not. But because Firebase changes the execution-id seemingly at random, I guess I have to find another solution.
PS: There's an update call which will trigger the same function. Does that change the parent function's execution-id?
Without seeing your code or complete logs, I don't think a definitive answer can be provided.
However, it sounds like the question you're asking is how you can keep data in memory across asynchronous transactions. Firebase or not, you basically have two options:
commit that data to the database during the first part of the transaction and then retrieve it in the second part
pass the data from the first part to the second part so that it already has it
You seem to be relying on the execution-id, so I would recommend taking the latter approach: pass that id along as part of the input to your HTTP request and have the server you're calling return it in its response.
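One hypothetical way to do that in a Cloud Function (the endpoint, header name and id format are assumptions, not Firebase features) is to generate a correlation id you control, send it with the request, and log it on both sides of the await:

const functions = require('firebase-functions');
const fetch = require('node-fetch');

exports.notify = functions.https.onRequest(async (req, res) => {
  // Use an id you control instead of the platform's execution-id.
  const requestId = `${Date.now()}-${Math.random().toString(36).slice(2)}`;
  console.log(`[${requestId}] starting`);

  const upstream = await fetch('https://example.com/api', {
    method: 'POST',
    headers: { 'X-Request-Id': requestId },
    body: JSON.stringify(req.body),
  });

  // The same id is logged after the await, so both halves of the work
  // can be correlated in the logs even if other executions interleave.
  console.log(`[${requestId}] finished with status ${upstream.status}`);
  res.status(200).send(requestId);
});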
I need to run a query that imports modules from a pod.
Without importing modules, if I run a simple query with the Database Id as below, it works.
let $queryParam := fn:concat("?query=", xdmp:url-encode($query), "&amp;eval=", $dataBaseId, ":123")
let $url := fn:concat($hostcqport,"/eval.xqy",$queryParam)
let $response := xdmp:http-post($url, $options)[2]
If I have import module statements, then it throws an error (File Not Found).
So I tried getting the app-server id and passing that instead of the database-id, as below:
let $queryParam := fn:concat("?query=", xdmp:url-encode($query), "&amp;eval=", $serverId, ":123")
let $url := fn:concat($hostcqport,"/eval.xqy",$queryParam)
let $response := xdmp:http-post($url, $options)[2]
How do I pass the server-id so that the query executes against a particular app server?
Is this MarkLogic 8 or earlier? (I ask because rewrite options on 8 allow for dynamic switching of module databases before execution, among lots of other amazing goodies.) This may be what you want, because you can look at the query parameters at this point and build logic into the rewrite rules.
Otherwise, can you explain in more detail what you are trying to accomplish in the end? By the time your code runs, it is already executing in the context of a particular app server, so asking to execute against another app server by analysing the query parameters is a bit too late (because you are already using an app server).
[edit] The following is in response to the comments provided since. This is a messy response because the actual ticket and comments still do not paint a completely clear picture, but if you stitch them together, a problem statement now exists that I can respond to.
The original author of the question confirmed via comments that they are "trying to hit an app server on a different node than the one that you actually posted to"
OK. This is the response to that clarification:
That is not possible. Your request is already being processed by a thread on the node that you hit with your HTTP request. MarkLogic is a cluster, but it does not share threads (or anything else, for that matter). The choices are:
a redirect to the proper node
possibly use the current node to make the request on your behalf.
But that ties up the first thread and the thread on the other node and has the HTTP communication overhead - and you need to have an app server listening for this purpose.
If this is a fire-and-forget type of situation, then you can hit any node and save the data/request as a document in the DB, using a URI naming convention that indicates which app server it is for; insert triggers (with a URI prefix for their server id) can then pick the request up from the DB and process it.
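A hypothetical sketch of the insert side of that fire-and-forget approach (the app-server name, URI convention and element names are assumptions, not from the answer above):

(: save the request as a document whose URI encodes the target app server;
   a trigger on that URI prefix would pick it up and process it :)
let $server-id := xdmp:server("target-app-server")
let $uri := fn:concat("/queued-requests/", $server-id, "/", xdmp:random(), ".xml")
return
  xdmp:document-insert($uri, <request><query>fn:current-dateTime()</query></request>)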
I am working with:
let callTheAPI = async {
    printfn "\t\t\tMAKING REQUEST at %s..." (System.DateTime.Now.ToString("yyyy-MM-ddTHH:mm:ss"))
    let! response = Http.AsyncRequestStream(url,query,headers,httpMethod,requestBody)
    printfn "\t\t\t\tREQUEST MADE."
}
And
let cts = new System.Threading.CancellationTokenSource()
let timeout = 1000 * 60 * 4 // 4 minutes (4 mins no grace)
cts.CancelAfter(timeout)
Async.RunSynchronously(callTheAPI, timeout, cts.Token)
use respStrm = response.ResponseStream
respStrm.Flush()
writeLinesTo output (responseLines respStrm)
to call a web API (REST). The let! response = Http.AsyncRequestStream(url,query,headers,httpMethod,requestBody) just hangs on certain queries, particularly ones that take a long time (>4 minutes). This is why I have made it async and added a 4-minute timeout. (I collect the calls that time out and retry them with smaller time-range parameters.)
I started with Http.RequestStream from FSharp.Data first, but I couldn't add a timeout to it, so the script would just 'hang'.
I have looked at the API's IIS server and the application pool Worker Process active requests in IIS manager and I can see the requests come in and go again. They then 'vanish' and the F# script hangs. I can't find an error message anywhere on the script side or server side.
I included the Flush() and removed the timeout, and it still hung (removing the Async in the process).
Additional:
Successful calls are made, and failed calls can be followed by successful calls. However, it seems to get to a point where all the calls time out, and they do so without even reaching the server any more (Worker Process Active Requests doesn't show the query).
Update:
I made the Fsx script output the queries and ran them through IRM with no issues (I have a timeout and it never locks up). I have a suspicion that there is an issue with FSharp.Data.Http.
Async.RunSynchronously blocks. Read the remarks section in the docs: RunSynchronously. Instead, use Async.AwaitTask.
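A minimal sketch of that direction, assuming FSharp.Data is referenced (the URL, the 4-minute timeout and the stream handling are placeholders, not the original code): start the request as a Task tied to a cancellation token, then await it with Async.AwaitTask instead of blocking.

open FSharp.Data

let callTheApi (url: string) =
    async {
        let! response = Http.AsyncRequestStream(url)
        use reader = new System.IO.StreamReader(response.ResponseStream)
        return reader.ReadToEnd()
    }

let run () =
    async {
        use cts = new System.Threading.CancellationTokenSource(4 * 60 * 1000) // 4-minute timeout
        // Start the request as a Task so it honours the cancellation token,
        // then await it without blocking the calling thread.
        let task = Async.StartAsTask(callTheApi "https://example.com/api", cancellationToken = cts.Token)
        let! body = Async.AwaitTask task
        printfn "got %d characters" body.Length
    }

// Block only once, at the very top level of the script.
run () |> Async.RunSynchronously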
I have an application that does the following:
1. After the app receives a GET request, it reads the client's cookies for identification.
2. It stores the identification information in a PostgreSQL DB.
3. It sends the appropriate response and finishes the handling process.
But this way the client is also waiting for me to store the data in PostgreSQL. I don't want this; what I want is:
1. After the app receives a GET request, it reads the client's cookies for identification.
2. It sends the appropriate response and finishes the handling process.
3. It stores the identification information in a PostgreSQL DB.
In the second version, the storing happens after the client has received the response, so they won't have to wait for it. I've searched for a solution but haven't found anything thus far. I believe I'm searching with the wrong keywords, because this seems like a common problem.
Any feedback is appreciated.
You should add a callback to the IOLoop, via some code like this:
from functools import partial
from tornado import ioloop

def somefunction(*args):
    # call the DB
    ...

# ... now in your get() or post() handler ...

io_loop = ioloop.IOLoop.instance()
io_loop.add_callback(partial(somefunction, arg, arg2))
# ... rest of your handler ...
self.finish()
This will get called on the next iteration through the event loop, after the response is returned to the user, invoking your DB processor somefunction.
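Put together as a runnable sketch (the handler, cookie name and save_identification are placeholders for your own code):

from functools import partial

from tornado import ioloop, web

def save_identification(user_id):
    # placeholder for the actual PostgreSQL write
    print("saving", user_id)

class MainHandler(web.RequestHandler):
    def get(self):
        user_id = self.get_cookie("user_id", "anonymous")
        self.write("ok")
        self.finish()  # the client gets its response here
        # runs on the next IOLoop iteration, after the response has been sent
        ioloop.IOLoop.current().add_callback(partial(save_identification, user_id))

if __name__ == "__main__":
    app = web.Application([(r"/", MainHandler)])
    app.listen(8888)
    ioloop.IOLoop.current().start()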
If you don't want to wait for Postgres to respond, you could try:
1) An async Postgres driver
2) Putting the DB jobs on a queue and letting the queue handle the DB write; try RabbitMQ (a simplified in-process sketch follows below)
Remember that because you return to the user before you write to the DB, you have to think about how to handle write errors.
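As a simplified, in-process illustration of option 2 (using the standard library's queue and a worker thread rather than RabbitMQ; save_to_db is a placeholder):

import queue
import threading

db_jobs = queue.Queue()

def save_to_db(identification):
    # placeholder for the actual PostgreSQL insert
    pass

def db_worker():
    while True:
        identification = db_jobs.get()
        try:
            save_to_db(identification)
        except Exception as exc:
            # the client already has its response, so errors can only be logged or retried
            print("DB write failed:", exc)
        finally:
            db_jobs.task_done()

threading.Thread(target=db_worker, daemon=True).start()

# in the request handler, after the response has been sent:
db_jobs.put({"user_id": "example"})
db_jobs.join()  # only for this demo: wait for the worker to drain the queue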