Akka non-blocking options when an HTTP response is required

I understand how to make a message-based, non-blocking application in Akka, and can easily mock up examples that perform
concurrent operations and pass back the aggregated results in a message. Where I have difficulty is understanding what my
non-blocking options are when my application has to respond to an HTTP request. The goal is to receive a request and
immediately hand it over to a local or remote actor to do the work, which in turn hands it off to get a result that
could take some time. Unfortunately, under this model I don't understand how I could express this with a non-blocking
series of "tells" rather than blocking "asks". If at any point in the chain I use a tell, I no longer have a future to
use as the eventual response content (required by the HTTP framework interface, which in this case is Finagle, but that is not
important). I understand the request is on its own thread, and my example is quite contrived, but I am just trying to
understand my design options.
In summary, if my contrived example below can be reworked to block less, I would very much love to understand how. This is my
first use of Akka since some light exploration a year or more ago, and every article, document, and talk I have viewed says
not to block for services.
Conceptual answers may be helpful, but may also be the same as what I have already read. Working through or editing my example
would likely be key to my understanding of the exact problem I am attempting to solve. If the current example is generally
what needs to be done, that confirmation is helpful too, so I don't search for magic that does not exist.
Note the following aliases: import com.twitter.util.{Future => TwitterFuture, Await => TwitterAwait}
object Server {
  val system = ActorSystem("Example-System")
  implicit val timeout = Timeout(1 seconds)

  implicit def scalaFuture2twitterFuture[T](scFuture: Future[T]): TwitterFuture[T] = {
    val promise = TwitterPromise[T]
    scFuture onComplete {
      case Success(result) ⇒ promise.setValue(result)
      case Failure(failure) ⇒ promise.setException(failure)
    }
    promise
  }

  val service = new Service[HttpRequest, HttpResponse] {
    def apply(req: HttpRequest): TwitterFuture[HttpResponse] = req.getUri match {
      case "/a/b/c" =>
        val w1 = system.actorOf(Props(new Worker1))
        val r = w1 ? "take work"
        val response: Future[HttpResponse] = r.mapTo[String].map { c =>
          val resp = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK)
          resp.setContent(ChannelBuffers.copiedBuffer(c, CharsetUtil.UTF_8))
          resp
        }
        response
    }
  }
  //val server = Http.serve(":8080", service); TwitterAwait.ready(server)

  class Worker1 extends Actor with ActorLogging {
    def receive = {
      case "take work" =>
        val w2 = context.actorOf(Props(new Worker2))
        pipe (w2 ? "do work") to sender
    }
  }

  class Worker2 extends Actor with ActorLogging {
    def receive = {
      case "do work" =>
        //Long operation...
        sender ! "The Work"
    }
  }

  def main(args: Array[String]) {
    val r = service.apply(
      com.twitter.finagle.http.Request("/a/b/c")
    )
    println(TwitterAwait.result(r).getContent.toString(CharsetUtil.UTF_8)) // prints The Work
  }
}
Thanks in advance for any guidance offered!

You can avoid sending a future as a message by using the pipe pattern—i.e., in Worker1 you'd write:
pipe(w2 ? "do work") to sender
Instead of:
sender ! (w2 ? "do work")
Now r will be a Future[String] instead of a Future[Future[String]].
Update: the pipe solution above is a general way to avoid having your actor respond with a future. As Viktor points out in a comment below, in this case you can take your Worker1 out of the loop entirely by telling Worker2 to respond directly to the actor that it (Worker1) got the message from:
w2.tell("do work", sender)
This won't be an option if Worker1 is responsible for operating on the response from Worker2 in some way (by using map on w2 ? "do work", combining multiple futures with flatMap or a for-comprehension, etc.), but if that's not necessary, this version is cleaner and more efficient.
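If Worker1 did need to transform the reply, a rough, untested sketch (assuming the akka.pattern ask and pipe imports and an implicit Timeout in scope, as in your Server object) might look like this:
// Sketch only: Worker1 post-processes Worker2's reply before piping it back.
class Worker1 extends Actor with ActorLogging {
  import akka.pattern.{ask, pipe}
  import context.dispatcher // ExecutionContext for mapTo/map/pipe

  def receive = {
    case "take work" =>
      val w2 = context.actorOf(Props(new Worker2))
      // Transform the reply, then pipe the resulting future to the original sender.
      val transformed = (w2 ? "do work").mapTo[String].map(work => "processed: " + work)
      pipe(transformed) to sender
  }
}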
Any of these non-blocking variants kills one Await.result. You can get rid of the other by writing something like the following:
val response: Future[HttpResponse] = r.mapTo[String].map { c =>
  val resp = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK)
  resp.setContent(ChannelBuffers.copiedBuffer(c, CharsetUtil.UTF_8))
  resp
}
Now you just need to turn this Future into a TwitterFuture. I can't tell you off the top of my head exactly how to do this, but it should be fairly trivial, and definitely doesn't require blocking.
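One way, mirroring the implicit conversion already in your question, is to complete a com.twitter.util.Promise from the Scala future's onComplete callback. A rough sketch, assuming an implicit ExecutionContext is in scope:
import com.twitter.util.{Future => TwitterFuture, Promise => TwitterPromise}
import scala.concurrent.{ExecutionContext, Future}
import scala.util.{Failure, Success}

// Sketch: bridge a Scala Future to a Twitter Future without blocking.
def toTwitterFuture[T](scFuture: Future[T])(implicit ec: ExecutionContext): TwitterFuture[T] = {
  val promise = new TwitterPromise[T]
  scFuture.onComplete {
    case Success(result)  => promise.setValue(result)
    case Failure(failure) => promise.setException(failure)
  }
  promise
}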

You definitely don't have to block at all here. First, update your import for the twitter stuff to:
import com.twitter.util.{Future => TwitterFuture, Await => TwitterAwait, Promise => TwitterPromise}
You will need the twitter Promise as that's the impl of Future you will return from the apply method. Then, follow what Travis Brown said in his answer so your actor is responding in such a way that you do not have nested futures. Once you do that, you should be able to change your apply method to something like this:
def apply(req: HttpRequest): TwitterFuture[HttpResponse] = req.getUri match {
  case "/a/b/c" =>
    val w1 = system.actorOf(Props(new Worker1))
    val r = (w1 ? "take work").mapTo[String]
    val prom = new TwitterPromise[HttpResponse]
    r.map(toResponse) onComplete {
      case Success(resp) => prom.setValue(resp)
      case Failure(ex) => prom.setException(ex)
    }
    prom
}

def toResponse(c: String): HttpResponse = {
  val resp = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK)
  resp.setContent(ChannelBuffers.copiedBuffer(c, CharsetUtil.UTF_8))
  resp
}
This probably needs a little more work. I didn't set it up in my IDE, so I can't guarantee it compiles, but I believe the idea to be sound. What you return from the apply method is a TwitterFuture that is not yet completed. It will be completed when the future from the actor ask (?) is done, and that happens via a non-blocking onComplete callback.

Related

Rust ownership issues

I'm quite new to Rust; I'm mainly a C#, JavaScript and Python developer, so I like to approach things in an OOP way. However, I still can't wrap my head around ownership in Rust, especially when it comes to OOP.
I'm writing a TCP server. I have a struct that contains connections (streams), and I read the sockets asynchronously using the mio crate. I understand what the error is telling me, but I have no clue how to fix it. I tried changing the read_message method into a function (without the reference to self), which worked, but the problem with this is that I'll need to access the connections and whatnot from the struct (to relay messages between sockets, for example), so this workaround won't be viable in later versions. Is there an easy fix for this, or is the design inherently flawed?
Here's a snippet that shows what my problem is:
let sock = self.connections.get_mut(&token).unwrap();
loop {
    match sock.read(&mut msg_type) {
        Ok(_) => {
            self.read_message(msg_type[0], token);
        }
    }
}

fn read_message(&mut self, msg_type: u8, token: Token) {
    let sock = self.connections.get_mut(&token).unwrap();
    let msg_type = num::FromPrimitive::from_u8(msg_type);
    match msg_type {
        Some(MsgType::RequestIps) => {
            let decoded: MsgTypes::Announce = bincode::deserialize_from(sock).unwrap();
            println!("Public Key: {}", decoded.public_key);
        }
        _ => unreachable!()
    }
}
And the error I'm getting says that self cannot be borrowed as mutable more than once at a time.
You are holding a mutable borrow on sock, which is part of self, at the moment you try to call self.read_message. Since you indicated that read_message needs mutable access to all of self, you need to make sure you don't have a mutable borrow on sock anymore at that point.
Fortunately, thanks to non-lexical lifetimes in Rust 2018, that's not hard to do; simply fetch sock inside the loop:
loop {
    let sock = self.connections.get_mut(&token).unwrap();
    match sock.read(&mut msg_type) {
        Ok(_) => {
            self.read_message(msg_type[0], token);
        }
    }
}
Assuming sock.read doesn't return anything that holds a borrow on sock, this should let the mutable borrow on sock be released before calling self.read_message. It needs to be re-acquired in the next iteration, but seeing as you're doing network I/O, the relative performance penalty of a single HashMap (?) access should be negligible.
(Due to lack of a minimal, compilable example, I wasn't able to test this.)

Dart: client.post() method in the http.dart package is hanging and isn't returning a future

I am currently implementing a simple ONVIF client program in dart and am having a bit of trouble with handling futures in the http.dart package.
In my code below, the client.post() method returns a Future<Response> (containing body/header/status code and so on...), and in my case I would need to receive this before the if/else statement, which is why I have used await. The trouble is that the program just hangs and doesn't proceed past the client.post() line.
I know I might need to do a client.close() somewhere, but I've tried lots of different ways and nothing works. Here is my current code with some comments to try and explain it a bit:
// The Variables:
// reqSysDateAndTime is a soap message we are sending to the device.
// onvifDev is just a place where the device details are stored.
// probeMatch is a class that stores important info from the ws-discovery stage.
Future<String> checkXaddrsAndGetTime(Device onvifDev, ProbeMatch probeMatch) async {
  // Set up the client and uri.
  Uri uri = Uri.parse(probeMatch.xaddrs);
  http.Client client = http.Client();

  // Send the POST request, with full SOAP envelope as the request body
  print('[Setup]: Listening for Date/Time response...');
  Response response = await client.post(uri, body: reqSysDateAndTime);
  print("${response.body}");

  // Determine if the address is usable or not.
  if (response != null) {
    // Set this address as 'working'
    onvifDev.xAddrs = probeMatch.xaddrs;
    return response.body;
  } else {
    return null; // The address does not work
  }
}
I also know that this isn't an issue with the actual body of the request because if I do...
client.post(uri, body: reqSysDateAndTime).then((onValue) => print(onValue.body));
...instead, it will print out the response which I'm expecting.
I understand that this is probably a small fix that I'm missing but any help would be much appreciated.
Cheers.
To briefly answer my own question, it turns out it was a silly error on my part: the address I was using was link-local, hence why it was hanging at client.post(). There is a nice isLinkLocal property on the InternetAddress class (in dart:io), amongst some other useful methods, which checks whether the address is link-local or not.

Is Retrofit fast?

I am now using Retrofit.
In addition, I use the following libraries:
=================
gson-2.8.2.jar
gson-2.8.2-javadoc.jar
hamcrest-core-1.3.jar
junit-4.12.jar
okhttp-3.9.1.jar
okio-1.13.0.jar
retrofit-2.3.0.jar
================
Q: Can Retrofit really be fast?
In my testing, it is too slow.
Retrofit average speed: 2500 ms; my own code's average speed: 900 ms.
Did I use it properly? (Kotlin)
Below is a code that uses Retrofit.
interface ApiService {
    @GET("/lol/summoner/v3/summoners/by-name/{name}")
    fun getSummonerByName(@Path("name") name: String, @Query("api_key") apiKey: String): Call<SummonerDTO>
}

fun getSummonerByName(summonerName: String, apiKey: String): SummonerDTO? {
    var retrofit = Retrofit.Builder().baseUrl("https://" + HOST + "").addConverterFactory(GsonConverterFactory.create()).build()
    var service = retrofit.create(ApiService::class.java)
    var repos = service.getSummonerByName(summonerName, apiKey)
    val response = repos.execute()
    if (response.isSuccessful) {
        return response.body()
    }
    return null
}
This is an advisory answer to this question.
Retrofit requests are normally executed in the background and the result is delivered through a callback function, so timing a single synchronous call this way is the wrong test method. Use HttpsURLConnection if you only want one quick connection without considering any software architecture or pattern; it is concise and short. However, if you want asynchronous processing, use Retrofit. It is easy and powerful.

How to wait for a free Akka actor while processing a stream of data using Play's Iteratee

I have an infinite stream of messages represented as a Play Enumerator, to which I apply an Iteratee. Each message is then processed by an Akka actor (the number of actors is limited to 10).
Now I would like the code in the Iteratee to asynchronously wait for a free actor if all 10 actors are busy, rather than sending them more messages, which leads to the exception Ask timed out on ....
How can I achieve such functionality? Is there a better way to process an infinite stream with 10 actors without awaiting?
The code I was talking about could look like this:
val workers = context.actorOf(Props[MyWorker].withRouter(RoundRobinRouter(10)))
val it = Iteratee.foreach[Msg] { msg =>
  workers ? msg
}
msgEnumerator.apply(it)
Using Iteratee.foldM with the actor ask pattern you have here seems like the right approach, assuming you don't want your actors to build up large mailboxes (if you don't care about large mailboxes, just use tell and Iteratee.foreach instead of ask). This will require some specialized routing logic: since the API for making a custom Akka router doesn't support asynchrony, you will need a custom actor to handle the logic of distributing just one piece of work to each actor in your pool at a time.
I imagine it working something like:
class WorkDistributor extends Actor {
  final val NUM_WORKERS = 10
  val workers = context.actorOf(Props[MyWorker].withRouter(RoundRobinRouter(NUM_WORKERS)))
  var numActiveWorkers = 0
  var queuedWork: Option[Work] = None

  def receive = {
    case IterateeWork(work) if numActiveWorkers < NUM_WORKERS =>
      workers ! work; numActiveWorkers += 1; sender ! SendMeMoreWork
    case IterateeWork(work) =>
      queuedWork = Some(work)
    case ActorFinishedWork if queuedWork.isDefined =>
      queuedWork.foreach(workers ! _); queuedWork = None
    case ActorFinishedWork =>
      numActiveWorkers -= 1; sender ! SendMeMoreWork
  }
}
Where the IterateeWork message is sent by the iteratee and the ActorFinishedWork message is sent by the actors in the actor pool.
Looking at this thing I wrote, it should really be rewritten to use become to change the behavior when the actor pool is full (rather than the if guards on each case); a sketch of that variant follows.
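A rough, untested sketch of that become-based version, using the same placeholder MyWorker, Work, IterateeWork, ActorFinishedWork and SendMeMoreWork names, might look like this:
class WorkDistributor extends Actor {
  final val NUM_WORKERS = 10
  val workers = context.actorOf(Props[MyWorker].withRouter(RoundRobinRouter(NUM_WORKERS)))

  def receive = accepting(numActiveWorkers = 0)

  // Pool still has spare capacity: hand work straight to the router.
  def accepting(numActiveWorkers: Int): Receive = {
    case IterateeWork(work) =>
      workers ! work
      sender ! SendMeMoreWork
      if (numActiveWorkers + 1 == NUM_WORKERS) context.become(full(queuedWork = None))
      else context.become(accepting(numActiveWorkers + 1))
    case ActorFinishedWork =>
      sender ! SendMeMoreWork
      context.become(accepting(numActiveWorkers - 1))
  }

  // Pool is saturated: buffer at most one piece of work, as above.
  def full(queuedWork: Option[Work]): Receive = {
    case IterateeWork(work) =>
      context.become(full(Some(work)))
    case ActorFinishedWork if queuedWork.isDefined =>
      queuedWork.foreach(workers ! _)
      context.become(full(queuedWork = None))
    case ActorFinishedWork =>
      sender ! SendMeMoreWork
      context.become(accepting(NUM_WORKERS - 1))
  }
}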
Then your Iteratee will look like
Iteratee.foldM[Work, SendMeMoreWork.type](SendMeMoreWork) {
  case (_, work) => (workDistributor ? IterateeWork(work)).mapTo[SendMeMoreWork.type]
}
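For reference, the message types used above are not defined in this answer; a minimal sketch of what they might look like (the names are just the placeholders used above) is:
// Hypothetical message definitions matching the placeholder names above.
case class Work(payload: String)    // stand-in for whatever your stream elements are
case class IterateeWork(work: Work) // sent by the Iteratee to the WorkDistributor
case object ActorFinishedWork       // sent by a pool worker when it completes a unit of work
case object SendMeMoreWork          // the fold state; signals the Iteratee to push the next element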

Generic reply from agent/mailboxprocessor?

I currently have an agent that does heavy data processing by constantly posting "work" messages to itself.
Sometimes clients of this agent want to interrupt this processing to access the data in a safe manner.
For this, I thought it would be nice to post an async to the agent that the agent can run whenever it's in a safe state. This works fine, and the message looks like this:
type Message = | Sync of Async<unit> * AsyncReplyChannel<unit>
And the agent processing simply becomes:
match mailbox.Receive () with
| Sync (async, reply) -> async |> Async.RunSynchronously |> reply.Reply
This works great as long as clients don't need to return some value from the async, as I've constrained the async/reply to be of type unit and I cannot use a generic type in the discriminated union.
My best attempts to solve this have involved wrapper asyncs and wait handles, but this seems messy and not as elegant as I've come to expect from F#. I'm also new to async workflows in F#, so it's very possible that I've missed or misunderstood some concepts here.
So the question is: how can I return generic types in an agent response?
The thing that makes this difficult is that, in your current version, the agent would somehow have to calculate the value and then pass it to the channel, without knowing what the type of the value is. Doing that in a statically typed way in F# is tricky.
If you make the message generic, then it will work, but the agent will only be able to handle messages of one type (the type T in Message<T>).
An alternative is to simply pass Async<unit> to the agent and let the caller do the value passing for each specific type. So, you can write message & agent just like this:
type Message = | Sync of Async<unit>

let agent = MailboxProcessor.Start(fun inbox -> async {
    while true do
        let! msg = inbox.Receive ()
        match msg with
        | Sync (work) -> do! work })
When you use PostAndReply, you get access to the reply channel - rather than passing the channel to the agent, you can just use it in the local async block:
let num = agent.PostAndReply(fun chan -> Sync(async {
    let ret = 42
    chan.Reply(ret) }))

let str = agent.PostAndReply(fun chan -> Sync(async {
    let ret = "hi"
    chan.Reply(ret) }))
