Is Retrofit of any use if I am already using OkHttp3, Moshi and RxJava in my project?

I have done some R&D on the above libraries and have used some of them in my project. I am using Moshi for JSON parsing, the OkHttp3 library for HTTP connections, and RxJava for asynchronous and event-based programming. Now that I have looked at Retrofit, I feel it is of no use to me, as I have already wired up the main components of Retrofit myself.
I just want to know whether I am thinking in the right direction or not.
Edit: From my point of view, Retrofit only provides a clean interface over the HTTP client, where one can customize requests, headers, etc. with annotations.

This is a good choice of libraries, from my point of view. The first three are developed by Square and they work very well together. However, the key difference is that each library works at a different layer:
OkHttp: the transport layer. Deals with the HTTP protocol and performs the actual networking.
Moshi: the JSON parser. Transforms the bytes coming from OkHttp into Java objects.
Retrofit: the REST layer. Transforms HTTP logic (URLs, headers, status codes) into REST logic, exposed as a typed, annotated interface.
RxJava: provides the tools to write reactive code instead of imperative code.
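To make that concrete, here is a minimal sketch of what Retrofit adds on top of the other three. It is only an illustration: the base URL, the User type and the users/{id} endpoint are invented, while the converter and call-adapter factories are the standard Retrofit ones.
import io.reactivex.Single;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory;
import retrofit2.converter.moshi.MoshiConverterFactory;
import retrofit2.http.GET;
import retrofit2.http.Header;
import retrofit2.http.Path;

// Hypothetical resource returned by the API.
class User {
    String id;
    String name;
}

// Retrofit's contribution: the endpoint described as an annotated interface.
interface UserService {
    @GET("users/{id}")
    Single<User> getUser(@Path("id") String id, @Header("Authorization") String token);
}

class ApiFactory {
    static UserService create(OkHttpClient okHttpClient) {
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("https://api.example.com/")                        // hypothetical base URL
                .client(okHttpClient)                                       // OkHttp performs the networking
                .addConverterFactory(MoshiConverterFactory.create())        // Moshi handles the JSON
                .addCallAdapterFactory(RxJava2CallAdapterFactory.create())  // RxJava wraps the result
                .build();
        return retrofit.create(UserService.class);                          // Retrofit generates the implementation
    }
}
Without Retrofit you write the request building, path and header handling, status-code checks and Moshi adapter lookups yourself for every call; Retrofit is the glue that turns that boilerplate into a typed interface, which is exactly the "clean interface with annotations" mentioned in the edit above.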

Related

Microservice to Microservice Architecture using gRPC : .NET Core

So I have this microservice architecture where there is an API gateway and two microservices, i.e., Configuration.API and API-1. Configuration.API is mainly responsible for parsing the JSON request, accessing the DB and updating the status tables; it also fetches the required data, adds more values to the JSON request and sends it to API-1. API-1 is responsible just for generating a report based on the JSON passed to it.
Yes, I could merge Configuration.API into API-1 and make it a single service/container, but the requirement is not to merge them and instead to have two different components: one component purely for fetching the data and updating the status, while the other just generates the reports.
So here is my question:
Should I use gRPC for Configuration.API, or is there a better way to achieve this?
Thank you.
RPC is synchronous communication, so you have to come up with a strong reason to use it for service-to-service communication. It brings fast, performant communication to the table, but it also couples the services. If you insist on using RPC, it is better to use MassTransit to implement it in a less coupled way. However, in most cases asynchronous, event-based communication is recommended to avoid coupling (in that case, look at the CAP theorem, sagas and circuit breakers).
Since you said:
"but the requirement is not to merge and create two different components"
and that is your reason, and also based on the fact that:
"also to fetch required data, it even adds up more values to the JSON request and send it to the API-1"
I think the second option makes more sense here. However, I can't understand why you would change where the database sits, since you said the configuration service is responsible for that.
If your report service needs to request huge amounts of data to generate a report, you have to think about the design. There is not enough detail about your domain here, so there cannot be an absolute answer, but consider reducing the data at insertion or request time, doing some sort of pre-calculation if you can, and also caching responses.
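To illustrate the coupling argument without pulling in real gRPC or MassTransit code, here is a self-contained sketch; the ReportClient and EventBus interfaces and the topic name are invented stand-ins for a generated gRPC stub and a message broker respectively.
// Invented stand-ins: ReportClient plays the role of a generated gRPC stub,
// EventBus plays the role of a broker such as the one MassTransit sits on.
interface ReportClient {
    String generateReport(String payloadJson); // caller blocks until API-1 answers
}

interface EventBus {
    void publish(String topic, String payloadJson); // caller continues immediately
}

class ConfigurationService {
    private final ReportClient rpc;
    private final EventBus bus;

    ConfigurationService(ReportClient rpc, EventBus bus) {
        this.rpc = rpc;
        this.bus = bus;
    }

    void handleWithRpc(String enrichedJson) {
        // Synchronous: Configuration.API is coupled to API-1's availability and latency.
        String report = rpc.generateReport(enrichedJson);
        // ... update status tables with the report ...
    }

    void handleWithEvents(String enrichedJson) {
        // Asynchronous: Configuration.API only knows about the broker, not about API-1.
        bus.publish("report-requested", enrichedJson);
    }
}
The question is about .NET Core, but the trade-off reads the same in any language: the RPC path ties the two services' uptime and latency together, while the event path leaves API-1 free to pick the work up when it can.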

Flex 4.5 remoting objects

I am very new to remoting in Flex. I am using Flex 4.5 and talking to a web application built by someone else on the team using AMF. They have used Zend_AMF to serialize and unserialize the data.
One of the main issues I am facing at the moment is that I will need to talk to a lot of services (about 60 or so).
From examples on remoting I have seen online and from adobe, it seems that I need to define a remoting object for EACH service:
<mx:RemoteObject id="testservice" fault="testservice_faultHandler(event)" showBusyCursor="true" destination="account"/>
With so many services, I think I might have to define about 60 of those, which I don't think is very elegant.
At the same time, I have been playing with Pinta to test out the AMF endpoint. Pinta seems to allow one to define an arbitrary number of services, methods and parameters without any of these limitations. Digging through the source, I found that they have actually drilled down deep into the remoting and are handling a lot of low-level stuff.
So, the question is: is there a way to approach this problem without having to define loads of RemoteObjects and without having to go too deep and start handling low-level remoting events ourselves?
Cheers
It seems unusual for an application to require that many RemoteObjects. I've worked on extremely large applications, and we typically end up with no more than ~6-10 RemoteObject declarations.
Although you don't give a lot of specifics in your post about the variations of RemoteObjects, I suspect you may be confusing RemoteObject with Operation.
You typically declare a RemoteObject instance for every endpoint in your application. However, that endpoint can (and normally does) expose many different methods to be invoked. Each of these server-side methods results in a client-side Operation.
You can explicitly declare these if you wish; however, the RemoteObject builds the Operations for you if you don't declare them:
// The destination matches the one declared in the MXML above.
var remoteObject:RemoteObject = new RemoteObject("account");
// Creates an Operation for the saveAccount RPC call, invokes it,
// and returns the AsyncToken.
var token:AsyncToken = remoteObject.saveAccount(account);
token.addResponder(this); // "this" must implement mx.rpc.IResponder
//... etc
If you're interacting with a single server layer, you can often get away with a single RemoteObject, pointing to a single destination on the API, which exposes many methods. This approach is often referred to as an API Façade, and it can be very useful if backed by a solid dependency-injection discipline on the API.
Another common approach is to segregate your API methods by logical business area, e.g., AccountService, ShoppingCartService, etc. This has the benefit of letting you mix and match protocols between services (e.g., AccountService may run over HTTPS).
How you choose to split up these RemoteObjects is up to you. However, 60 in a single application sounds a bit suspect to me.

How do you use Restful APIs?

When you are using another web application's RESTful API, do you usually use a wrapper library, or do you use the RESTful API directly with an HTTP client?
This is a hard question to answer, primarily because in the current state of the industry, the right answer is very impractical.
The correct answer, in my opinion, is you should not use a wrapper API. The uniform interface is HTTP and that should be the programming model the client works against.
Having said that, there is nothing wrong with having helper classes that parse media types to generate strongly typed versions. Something like this:
var response = HttpClient.Get(linkToFoo);
if (response.ContentType == "application/foo") {
    var strongFoo = FooHelper.Parse(response);
    HandleFoo(strongFoo);
}
Unfortunately, the vast majority of APIs that claim to be RESTful are not. They do not respect the self-description and hypermedia constraints, and therefore they make it very difficult to interact with them in a RESTful way. They require you to do client-side URI construction and to have prior knowledge of the types that will be returned from endpoints.
The sad fact is that with many APIs you have little choice but to use a provided client proxy library. This should not be the case.
If you are evaluating a client library, just make sure it is a stateless one. The client library should not start managing the lifetimes of the returned objects. HTTP caching does that, so unless it is an uber-smart library that can invalidate object references when the cached representation expires, avoid any stateful client libraries.
e.g.
var foo = remoteLibrary.GetFoo(233);
var bar = foo.bar;      // if this causes an HTTP request
var barcopy2 = foo.bar; // and this doesn't, because it already has a reference to bar,
                        // I would be very leery of using this library
It is almost certainly going to be better in the long run to use a library. This allows you to abstract away the RESTful-ness of the design and ignore the whole HTTP business.
If there is not an existing library for whatever API you're using, then you should consider writing your own.
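If you do end up writing your own, keeping it as thin and stateless as the earlier answer recommends can look roughly like this. The example.com base URL, the /foos/{id} path and the FooApiClient name are invented; the JDK's built-in java.net.http client (Java 11+) is used only to keep the sketch dependency-free.
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A deliberately thin, stateless wrapper: it issues requests and returns the raw
// representation, and it never caches or "manages" the returned objects itself.
public class FooApiClient {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    public FooApiClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    public String getFoo(int id) throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/foos/" + id))
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IOException("Unexpected status " + response.statusCode());
        }
        return response.body(); // parsing into a strongly typed object is the caller's choice
    }
}
Usage would be something like new FooApiClient("https://api.example.com").getFoo(233); nothing is held onto between calls, which is the statelessness the first answer asks for.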

Disappointed with BlazeDS... are there ways around these disadvantages?

I used to use SOAP web services for transferring chart data to my Flex app, but I recently switched over to using BlazeDS because of performance, convenient typing, etc.
I'm considering switching over to using JSON (as I do in other parts of the app) for these reasons:
Proliferation of DTOs for communicating with Flex.* (With JSON, I just use JsonConfig to exclude properties as desired.)
Difficult to debug (whereas JSON is good ol' plaintext).
Problems with load balancing without sticky sessions.
Anyone else run into these problems with BlazeDS? Is BlazeDS worth the hassle?
* I could use the Externalizable interface instead of distinct DTOs, but it's also a pain.
I wouldn't give up on using remoting. The performance of remoting will be much better than JSON. Remember that ActionScript doesn't have a built-in method to decode JSON, so you'd need to use an AS library, which will be slower than anything built into the player. You'd be better off using XML than JSON.
You should be able to exclude specific properties as desired by marking them as transient. ActionScript has [Transient] metadata and the idea came from Java. The C# library we use for remoting has Transient support. I'm sure BlazeDS does too.
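As a rough sketch of that idea on the Java side of a BlazeDS setup: the DTO and its fields below are invented, and whether the AMF serializer actually skips transient members is an assumption you should verify against your own configuration.
import java.io.Serializable;

// Hypothetical chart DTO sent to Flex over AMF.
public class ChartPointDTO implements Serializable {
    public String label;
    public double value;

    // The idea from the answer above: mark members you do not want on the wire as
    // transient instead of maintaining a parallel, trimmed-down DTO.
    // (Assumption: the serializer honors transient here; confirm for your setup.)
    public transient String internalCacheKey;
}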
Debugging is easy with the right tools. You should get Charles. It provides very nice views of AMF request and response messages (assuming you're using HTTP and not RTMP, I don't know about RTMP debugging).
http://www.charlesproxy.com/
You also seem to be choosing between BlazeDS and anything-not-remoting. You have more options. BlazeDS is just one remoting implementation that Adobe made available. They also have a commercial one. There are also many open-source remoting projects available. We use a wonderful one for C# called Fluorine. Open-source Java options are Red5 and OpenAMF, but I think there are others as well.
http://red5.org/
http://openamf.com/
There's also a distinction between RTMP and HTTP remoting. You can get data into Flex through either of these protocols, and each has its advantages and disadvantages. I personally prefer HTTP remoting unless you absolutely need the functionality RTMP provides (push, streaming). HTTP will be easier to debug and should not have problems with a load balancer; it's just HTTP calls where the content happens to be binary.

Why are Mediators coupled to Proxies in Flex PureMVC?

I've just recently learned the PureMVC framework, and am a little confused as to the coupling between Proxy and Mediator objects. The links on this page connect to some documents describing the framework. (Please note, the links on the aforementioned page open PDFs.)
The diagrams and examples of PureMVC I've examined often show a direct coupling between a Mediator and Proxy. When the proxy's state is updated, rather than sending a new Notification, the Mediator (which retrieves a reference to the Proxy from the Facade) has its state updated.
This certainly seems to simplify the logic of the code, but it also directly couples two seemingly disparate components together. To my understanding, a Mediator's purpose is to translate Events from a view into PureMVC Notifications. Proxies are meant to perform some function to gather data and relay it back to the view. These two components seem to exist in different layers of the application, and perhaps shouldn't necessarily be coupled together.
Wouldn't it make more sense to have the Proxy objects send their own Notifications when their state updates, which are forwarded to the interested Mediator by the Facade?
Even if you update the Mediator through Notifications, it would still be coupled to the Proxy, but that's OK; it has to be.
As long as you don't couple the Proxy to the Mediator, I would say it's fine.
Juan
Wouldn't it make more sense to have the Proxy objects send their own Notifications when their state updates, which are forwarded to the interested Mediator by the Facade?
Yes, this is exactly what needs to happen. PureMVC is simply a Notifier/Observer pattern implementation with a lightweight MVC structure linked by a Facade. I strongly advise against coupling Proxies to Mediators; only allow Mediators to respond to Notifications that are sent by the Proxy when its data state changes. This allows those classes to be fully decoupled.
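In code, the decoupled flow described here looks roughly like the following. This is a self-contained simplification rather than the real PureMVC classes, and the DataProxy, DataMediator and DATA_LOADED names are invented for illustration.
import java.util.List;
import java.util.function.Consumer;

// Simplified stand-in for a PureMVC Notification.
class Notification {
    final String name;
    final Object body;
    Notification(String name, Object body) { this.name = name; this.body = body; }
}

class DataProxy {
    private final Consumer<Notification> sendNotification; // stand-in for the Facade

    DataProxy(Consumer<Notification> sendNotification) { this.sendNotification = sendNotification; }

    void setData(Object data) {
        // The Proxy knows nothing about Mediators; it only announces that its state changed.
        sendNotification.accept(new Notification("DATA_LOADED", data));
    }
}

class DataMediator {
    // The Mediator declares interest in Notification names, never in a concrete Proxy.
    List<String> listNotificationInterests() {
        return List.of("DATA_LOADED");
    }

    void handleNotification(Notification note) {
        if ("DATA_LOADED".equals(note.name)) {
            // update the view component from note.body
        }
    }
}
The only thing the two classes share is the Notification name, which is the full extent of the coupling this answer argues for.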
If you are using Flex, I would recommend HydraMVC / HydraFramework, which is a Flex-specific port of PureMVC MultiCore. It mimics the API of PureMVC but is much less verbose, and it includes a way to decouple server interaction from Proxies via a DelegateRegistry. (Full disclosure: I am the principal developer on this project; however, it is fully OSS and free to use and contribute to.) Regardless of which MVC framework you implement, I still strongly recommend fully decoupling Proxies and Mediators via Notifications.
