Is Retrofit fast?

I am currently using Retrofit, along with the following libraries:
=================
gson-2.8.2.jar
gson-2.8.2-javadoc.jar
hamcrest-core-1.3.jar
junit-4.12.jar
okhttp-3.9.1.jar
okio-1.13.0.jar
retrofit-2.3.0.jar
================
Q: Can Retrofit really be fast?
In my testing, it is too slow.
Retrofit average time: 2500 ms; my own code's average time: 900 ms.
Am I using it properly? (Kotlin)
Below is the code that uses Retrofit.
interface ApiService {
    @GET("/lol/summoner/v3/summoners/by-name/{name}")
    fun getSummonerByName(@Path("name") name: String, @Query("api_key") apiKey: String): Call<SummonerDTO>
}
fun getSummonerByName(summonerName: String, apiKey: String): SummonerDTO? {
    val retrofit = Retrofit.Builder()
        .baseUrl("https://" + HOST)
        .addConverterFactory(GsonConverterFactory.create())
        .build()
    val service = retrofit.create(ApiService::class.java)
    val repos = service.getSummonerByName(summonerName, apiKey)
    val response = repos.execute()
    if (response.isSuccessful) {
        return response.body()
    }
    return null
}

This is an advisory answer to this (admittedly naive) question.
Retrofit requests are normally executed in the background and their results delivered through a callback.
So the test method is wrong. If you only want a single quick connection, without regard to any software architecture or pattern, use HttpsURLConnection; it is short and concise. If you want asynchronous processing, however, use Retrofit: it is easy and powerful.
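For reference, here is a minimal sketch of that asynchronous style, reusing the ApiService, SummonerDTO and HOST names from the question (the wrapper function getSummonerByNameAsync and its onResult callback are illustrative, not part of the original code). The Retrofit instance is built once and reused, and enqueue() delivers the result through a callback instead of blocking the calling thread with execute():

import retrofit2.Call
import retrofit2.Callback
import retrofit2.Response
import retrofit2.Retrofit
import retrofit2.converter.gson.GsonConverterFactory

// Build the Retrofit instance once and reuse it; recreating it and the Gson converter
// on every request adds avoidable setup cost.
val retrofit: Retrofit = Retrofit.Builder()
    .baseUrl("https://" + HOST)
    .addConverterFactory(GsonConverterFactory.create())
    .build()

val service: ApiService = retrofit.create(ApiService::class.java)

fun getSummonerByNameAsync(summonerName: String, apiKey: String, onResult: (SummonerDTO?) -> Unit) {
    service.getSummonerByName(summonerName, apiKey).enqueue(object : Callback<SummonerDTO> {
        override fun onResponse(call: Call<SummonerDTO>, response: Response<SummonerDTO>) {
            // Invoked when the response arrives; the calling thread was never blocked.
            onResult(if (response.isSuccessful) response.body() else null)
        }

        override fun onFailure(call: Call<SummonerDTO>, t: Throwable) {
            // Network or conversion failure.
            onResult(null)
        }
    })
}

The network round trip still takes the same wall-clock time either way; the difference is that the caller is free immediately, which is exactly what a timing comparison of single blocking calls does not capture.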


Converting a collection of Awaitables to a time-ordered AsyncGenerator in Hack

I am trying to implement Rx stream/observable merging with Hack async, and a core step is described by the title. A code version of this step would look something like this:
<?hh // strict
async function foo<T>(Iterable<Awaitable<T>> $collection): Awaitable<void> {
  $ordered_generator = async_collection_to_gen($collection); // (**)
  foreach ($ordered_generator await as $v) {
    // do something with each awaited value, in the time-order they are resolved
  }
}
However, after mulling it over, I don't think I can write the starred (**) function. I've found that at some point or another, the implementations I've tried require functionality akin to JS's Promise.race, which resolves when the first of a collection of Promises resolves/rejects. However, all of Hack's Awaitable collection helpers create an Awaitable of a fully resolved collection. Furthermore, Hack does not permit leaving async calls un-awaited inside async functions, which I've also found to be necessary.
Is this possible, to anyone's knowledge?
This is actually possible! I dug around and stumbled upon a fork of asio-utilities by @jano implementing an AsyncPoll class. See the PR for usage. It does exactly what I hoped.
So it turns out, there is an Awaitable called ConditionWaitHandle with succeed and fail methods* that can be invoked by any context (so long as the underlying WaitHandle hasn't expired yet), forcing the ConditionWaitHandle to resolve with the passed values.
I gave the code a hard look, and underneath it all, it works by successive Awaitable races, which ConditionWaitHandle permits. More specifically, the collection of Awaitables is compressed via AwaitAllWaitHandle (aka \HH\Asio\v), which resolves only when the slowest Awaitable does, and is then nested within a ConditionWaitHandle. Each Awaitable is awaited in an async function that triggers the common ConditionWaitHandle, concluding the race. This is repeated until the Awaitables have all resolved.
Here's a more compact implementation of a race using the same philosophy:
<?hh
function wait(int $i): Awaitable<void> {
  return Race::wrap(async { await HH\Asio\usleep($i); return $i; });
}

// $wait_handle = null;
class Race {
  public static ?ConditionWaitHandle $handle = null;

  public static async function wrap<T>(Awaitable<T> $v): Awaitable<void> {
    $ret = await $v;
    $handle = self::$handle;
    invariant(!is_null($handle), '');
    $handle->succeed($ret);
  }
}

Race::$handle = ConditionWaitHandle::create(
  \HH\Asio\v(
    Vector {
      (wait(1))->getWaitHandle(),
      (wait(1000000))->getWaitHandle()
    }
  )
);

printf("%d microsecond `wait` wins!", \HH\Asio\join(Race::$handle));
Very elegant solution, thanks @jano!
*(the resemblance to promises/deferreds intensifies)
I am curious how premature completion via ConditionWaitHandle meshes with the philosophy that all Awaitables should run to completion.

The use of async in WebAPI with many async calls

Sorry for the beginner's question; I have read a lot of posts here and on the web and there is something fundamental I can't understand.
As I understand it, async actions in WebAPI are used mainly for scalability, so that an incoming request is handed to a worker instead of tying up a thread, and more requests can therefore be served.
My project consists of several large actions that read/insert/update the DB via EF6 many times. An action looks like this:
public async Task<HttpResponseMessage> Get(int id)
{
    PolicyModel response = await policyRepository.GetPolicyAsync(id);
    return Request.CreateResponse<PolicyModel>(HttpStatusCode.OK, response);
}
and GetPolicyAsync(int id) looks like this:
public async Task<PolicyModel> GetPolicyAsync(int id)
{
    PolicyModel response = new PolicyModel();
    User currentUser = await Repositories.User.GetUserDetailsAsync(id);
    if (currentUser.IsNew)
    {
        IEnumerable<Delivery> deliveries = await Repositories.Delivery.GetAvailableDeliveries();
        if (deliveries == null || deliveries.Count() == 0)
        {
            throw new Exception("no deliveries available");
        }
        response.Deliveries = deliveries;
        IEnumerable<Line> lines = await Repositories.Delivery.GetActiveLinesAsync();
        lines.AsParallel().ForAll(async line => {
            await Repositories.Delivery.AssignLineAsync(line, currentUser);
        });
        ...
    }
    return response;
}
I haven't posted the entire code; it goes on quite a bit and is also broken into several methods, but that is the spirit of it.
Now, the question I have is: is it good practice to use so many awaits in one method? I have seen that it is more difficult to debug. Is it hard to maintain the thread context, and for the sake of worker assignment, shouldn't I just use Task.Factory.StartNew() or maybe call a simple Task.Delay() so that the request is immediately diverted to a worker?
I know that this is not good practice ("async all the way"), so maybe just one async method at the end/beginning of the GetPolicyAsync(int id) method?
EDIT:
As I understood the mechanics of async methods in .NET: for every async method, the compiler looks for a free thread and lets it deal with the method; the thread looks for a free worker, assigns the method to it, and then reports back that it is free. So if we have 10 threads, and for every thread there are 10 workers, the program can deal with 100 concurrent async methods.
Back to web development: IIS assigns x threads to each app pool, 10 for instance. That means an async WebAPI method can handle 100 requests, but if there is another async method inside, the number of requests that can be dealt with drops to 50, and so on. Am I right?
And as I understood it, I must call an async method in order to make the WebAPI method truly async, and since it is bad practice to use Task.Factory.StartNew(), I must at least use Task.Delay().
What I really want to gain is the scalability of async WebAPI methods together with the context awareness of synchronous methods.
All the examples I've seen so far show only very simple code, but in real life methods are far more complex.
Thanks
There's nothing wrong with having many awaits in a single method. If the method becomes too complicated for you, you can split it up into several methods:
public async Task<PolicyModel> GetPolicyAsync(int id)
{
    PolicyModel response = new PolicyModel();
    User currentUser = await Repositories.User.GetUserDetailsAsync(id);
    if (currentUser.IsNew)
    {
        await HandleDeliveriesAsync(response, currentUser, await Repositories.Delivery.GetAvailableDeliveries());
    }
    ...
    return response;
}

public async Task HandleDeliveriesAsync(PolicyModel response, User currentUser, IEnumerable<Delivery> deliveries)
{
    if (deliveries == null || deliveries.Count() == 0)
    {
        throw new Exception("no deliveries available");
    }
    response.Deliveries = deliveries;
    IEnumerable<Line> lines = await Repositories.Delivery.GetActiveLinesAsync();
    lines.AsParallel().ForAll(async line => {
        await Repositories.Delivery.AssignLineAsync(line, currentUser);
    });
}
Don't use Task.Factory.StartNew or Task.Delay, as that just offloads the same work to a different thread. It doesn't add any value in most cases and actually hurts in ASP.NET.
After much searching, I came across this article:
https://msdn.microsoft.com/en-us/magazine/dn802603.aspx
It explains what @i3arnon said in the comment: there are no workers at all; the threads do all the work.
In a nutshell, when the thread handling a web request reaches an operation that is designed to be done asynchronously down in the driver stack, it creates a request and passes it to the driver. The driver marks it as pending and reports "done" to the thread, which goes back to the thread pool to pick up another assignment. The actual job is not done by threads but by the driver, which borrows CPU time from all threads and leaves them free to attend to their own business. When the operation is done, the driver signals it, and an available thread comes along to continue.
So, from that I learned that I should look into each and every async method and check that it actually performs an operation that the drivers handle asynchronously.
Point to consider: the scalability of your website depends on the number of operations that are truly done asynchronously; if you don't have any of them, your website won't scale.
Moderators, I'm a real newbie; if I got it all wrong, please correct me.
Thanks!

ServiceStack async method (v4)

At the moment I am developing Android DB access against a ServiceStack web API.
I need to show a "wait please..." message while the user's DB interaction runs, and I read some documentation:
Calling from client side an async method
But I cannot find how the async method must be implemented server-side in the web API.
Can you please share a link to the documentation, or a simple working sample?
I have this, before trying to convert it to an async call:
var response = jsonClient.Send(new UsuarioLogin { Cuenta = txtCuenta.Text, P = txtPass.Text});
After some reading, the code above is converted into this:
var response = await jsonClient.SendAsync(new UsuarioLogin { Cuenta = txtCuenta.Text, P = txtPass.Text});
So, on the server side I have this (just an extract):
public object Post(UsuarioLogin request)
{
    var _usuario = usuarioRepo.Login(_cuenta, _password);
    if (_usuario != null)
    {
        if (_usuario.UsuarioPerfilId != 2)
        {
            return new UsuarioLoginResponse { Ret = -3 };
        }
How can I convert it to an async method?
Thanks in advance.
Server-side and client-side async are opaque and unrelated, i.e. an async call on the client works the same way regardless of whether it's calling a sync or an async service, and likewise there's no difference for a sync client. Essentially, how the server is implemented has no effect on how it's called from the client.
To make an async service on the server you just need to return a Task; you can look at the AsyncTaskTests for different examples of how to create an async Service.
There are a lot of benefits to making non-blocking async calls on the client, since they can be issued from the UI thread without blocking the UI. But there's less value in server-side async, which adds a lot of hidden artificial complexity, is easy to deadlock, requires rewriting I/O operations to be async, and in many cases won't improve performance unless I/O is the bottleneck; e.g. async doesn't help when calling a single RDBMS. You're going to get a lot more performance benefit from using caching instead.

Akka non-blocking options when an HTTP response is required

I understand how to make a message-based, non-blocking application in Akka, and can easily mock up examples that perform concurrent operations and pass back the aggregated results in a message. Where I have difficulty is understanding what my non-blocking options are when my application has to respond to an HTTP request. The goal is to receive a request and immediately hand it over to a local or remote actor to do the work, which in turn hands it off to get a result that could take some time. Unfortunately, under this model I don't understand how I could express this with a non-blocking series of "tells" rather than blocking "asks". If at any point in the chain I use a tell, I no longer have a future to use as the eventual response content (required by the HTTP framework interface, which in this case is Finagle, but that is not important). I understand the request is on its own thread, and my example is quite contrived, but I am just trying to understand my design options.
In summary, if my contrived example below can be reworked to block less, I would very much love to understand how. This is my first use of Akka since some light exploration a year+ ago, and every article, document, and talk I have viewed says not to block for services.
Conceptual answers may be helpful but may also be the same as what I have already read. Working/editing my example would likely be key to my understanding of the exact problem I am attempting to solve. If the current example is generally what needs to be done, that confirmation is helpful too, so I don't search for magic that does not exist.
Note the following aliases: import com.twitter.util.{Future => TwitterFuture, Await => TwitterAwait}
object Server {
  val system = ActorSystem("Example-System")
  implicit val timeout = Timeout(1 seconds)

  implicit def scalaFuture2twitterFuture[T](scFuture: Future[T]): TwitterFuture[T] = {
    val promise = TwitterPromise[T]
    scFuture onComplete {
      case Success(result) ⇒ promise.setValue(result)
      case Failure(failure) ⇒ promise.setException(failure)
    }
    promise
  }

  val service = new Service[HttpRequest, HttpResponse] {
    def apply(req: HttpRequest): TwitterFuture[HttpResponse] = req.getUri match {
      case "/a/b/c" =>
        val w1 = system.actorOf(Props(new Worker1))
        val r = w1 ? "take work"
        val response: Future[HttpResponse] = r.mapTo[String].map { c =>
          val resp = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK)
          resp.setContent(ChannelBuffers.copiedBuffer(c, CharsetUtil.UTF_8))
          resp
        }
        response
    }
  }

  //val server = Http.serve(":8080", service); TwitterAwait.ready(server)

  class Worker1 extends Actor with ActorLogging {
    def receive = {
      case "take work" =>
        val w2 = context.actorOf(Props(new Worker2))
        pipe (w2 ? "do work") to sender
    }
  }

  class Worker2 extends Actor with ActorLogging {
    def receive = {
      case "do work" =>
        //Long operation...
        sender ! "The Work"
    }
  }

  def main(args: Array[String]) {
    val r = service.apply(
      com.twitter.finagle.http.Request("/a/b/c")
    )
    println(TwitterAwait.result(r).getContent.toString(CharsetUtil.UTF_8)) // prints The Work
  }
}
Thanks in advance for any guidance offered!
You can avoid sending a future as a message by using the pipe pattern—i.e., in Worker1 you'd write:
pipe(w2 ? "do work") to sender
Instead of:
sender ! (w2 ? "do work")
Now r will be a Future[String] instead of a Future[Future[String]].
Update: the pipe solution above is a general way to avoid having your actor respond with a future. As Viktor points out in a comment below, in this case you can take your Worker1 out of the loop entirely by telling Worker2 to respond directly to the actor that it (Worker1) got the message from:
w2.tell("do work", sender)
This won't be an option if Worker1 is responsible for operating on the response from Worker2 in some way (by using map on w2 ? "do work", combining multiple futures with flatMap or a for-comprehension, etc.), but if that's not necessary, this version is cleaner and more efficient.
That kills one Await.result. You can get rid of the other by writing something like the following:
val response: Future[HttpResponse] = r.mapTo[String].map { c =>
  val resp = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK)
  resp.setContent(ChannelBuffers.copiedBuffer(c, CharsetUtil.UTF_8))
  resp
}
Now you just need to turn this Future into a TwitterFuture. I can't tell you off the top of my head exactly how to do this, but it should be fairly trivial, and definitely doesn't require blocking.
You definitely don't have to block at all here. First, update your import for the twitter stuff to:
import com.twitter.util.{Future => TwitterFuture, Await => TwitterAwait, Promise => TwitterPromise}
You will need the twitter Promise as that's the impl of Future you will return from the apply method. Then, follow what Travis Brown said in his answer so your actor is responding in such a way that you do not have nested futures. Once you do that, you should be able to change your apply method to something like this:
def apply(req: HttpRequest): TwitterFuture[HttpResponse] = req.getUri match {
  case "/a/b/c" =>
    val w1 = system.actorOf(Props(new Worker1))
    val r = (w1 ? "take work").mapTo[String]
    val prom = new TwitterPromise[HttpResponse]
    r.map(toResponse) onComplete {
      case Success(resp) => prom.setValue(resp)
      case Failure(ex) => prom.setException(ex)
    }
    prom
}

def toResponse(c: String): HttpResponse = {
  val resp = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK)
  resp.setContent(ChannelBuffers.copiedBuffer(c, CharsetUtil.UTF_8))
  resp
}
This probably needs a little more work. I didn't set it up in my IDE, so I can't guarantee you it compiles, but I believe the idea to be sound. What you return from the apply method is a TwitterFuture that is not yet completed. It will be completed when the future from the actor ask (?) is done, and that happens via a non-blocking onComplete callback.

How to work with async code in Mongoose virtual properties?

I'm trying to work with associated documents in different collections (not embedded documents), and while there is an open issue for that in Mongoose, I'm trying to work around it for now by lazy-loading the associated document with a virtual property, as documented on the Mongoose website.
The problem is that the getter for a virtual takes a function as an argument and uses its return value for the virtual property. This is great when the virtual doesn't require any async calls to calculate its value, but it doesn't work when I need to make an async call to load the other document. Here's the sample code I'm working with:
TransactionSchema.virtual('notebook')
  .get(function() { // <-- the return value of this function is used as the property value
    Notebook.findById(this.notebookId, function(err, notebook) {
      return notebook; // I can't use this value, since the outer function returns before we get to this code
    });
    // undefined is returned here as the property's value
  });
This doesn't work since the function returns before the async call is finished. Is there a way I could use a flow control library to make this work, or could I modify the first function so that I pass the findById call to the getter instead of an anonymous function?
You can define a method on the schema instead, for which you can provide a callback.
Using your example:
TransactionSchema.method('getNotebook', function(cb) {
  Notebook.findById(this.notebookId, function(err, notebook) {
    cb(notebook);
  });
});
And while the sole commenter appears to be one of those pedantic types, you also should not be afraid of embedding documents. It's one of Mongo's strong points, from what I understand.
One uses the above code like so:
instance.getNotebook(function(notebook) {
  // hey man, I have my notebook and stuff
});
While this addresses the broader problem rather than the specific question, I still thought it was worth submitting:
You can easily load an associated document from another collection (with a result nearly identical to defining a virtual) by using Mongoose's query populate function. Using the above example, this requires specifying the ref of the ObjectId in the Transaction schema (to point to the Notebook collection), then calling populate('notebookId') while constructing the query. The linked Mongoose documentation addresses this pretty thoroughly.
I'm not familiar with Mongoose's history, but I'm guessing populate did not exist when these earlier answers were submitted.
Josh's approach works great for single document look-ups, but my situation was a little more complex. I needed to do a look-up on a nested property for an entire array of objects. For example, my model looked more like this:
var TransactionSchema = new Schema({
  ...
  , notebooks: {type: [Notebook]}
});

var NotebookSchema = new Schema({
  ...
  , authorName: String // this should not necessarily persist to db because it may get stale
  , authorId: String
});

var AuthorSchema = new Schema({
  firstName: String
  , lastName: String
});
Then, in my application code (I'm using Express), when I get a Transaction, I want all of the notebooks with the authors' last names:
...
TransactionSchema.findById(someTransactionId, function(err, trans) {
  ...
  if (trans) {
    var authorIds = trans.notebooks.map(function(notebook) {
      return notebook.authorId;
    });
    Author.find({_id: {$in: authorIds}}, [], function(err2, authors) {
      for (var a in authors) {
        for (var n in trans.notebooks) {
          if (authors[a].id == trans.notebooks[n].authorId) {
            trans.notebooks[n].authorLastName = authors[a].lastName;
            break;
          }
        }
      }
      ...
    });
  }
});
This seems wildly inefficient and hacky, but I could not figure out another way to accomplish this. Lastly, I am new to node.js, mongoose, and stackoverflow so forgive me if this is not the most appropriate place to extend this discussion. It's just that Josh's solution was the most helpful in my eventual "solution."
As this is an old question, I figured it might use an update.
To achieve asynchronous virtual fields, you can use mongoose-fill, as stated in mongoose's github issue: https://github.com/Automattic/mongoose/issues/1894
