SSE implementation in Spark REST - server-sent-events

Looking for a simple example of a GET REST API that produces text/event-stream. How do I keep track of listeners, and how do I use an emitter in Spark? Basically, how do I achieve SSE using the Spark framework (server-side code)?

Related

Microservice to Microservice Architecture using gRPC: .NET Core

So I have this microservice architecture where there is an ApiGateway and 2 microservices, i.e., Configurations.API and API-1. The Configurations.API is mainly responsible for parsing the JSON request, accessing the DB and updating the status tables; it also fetches the required data, adds more values to the JSON request, and sends it to API-1. API-1 is responsible only for generating a report based on the JSON passed to it.
Yes, I could merge the Configurations.API into API-1 and make it a single service/container, but the requirement is not to merge them and instead to keep two different components: one component purely for fetching the data and updating the status, while the other just generates the reports.
So here are some questions:
Should I use gRPC for the Configurations.API, or is there a better way to achieve this?
Thank you.
RPC is synchronous communication, so you have to come up with a strong reason to use it for service-to-service communication. It brings fast and performant communication to the table, but it also couples the services. If you insist on using RPC, it is better to use MassTransit to implement it in a less coupled way. However, in most cases asynchronous event-based communication is recommended to avoid coupling (in that case, look at the CAP theorem, the saga pattern, and circuit breakers).
Since you said:
but the requirement is not to merge and create two different components
and that is your reason, and also based on the fact that:
also to fetch required data, it even adds up more values to the JSON request and send it to the API-1
I think the second one makes more sense. However, I can't understand why you would change the database's position, since you said the configuration service is responsible for that.
If your report service needs to request huge amounts of data to generate a report, you have to think about the design. There is no more detail about your domain, so there cannot be an absolute answer to this, but consider reducing data at insertion or request time, some sort of pre-calculation if you can, and also caching responses.
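To make the asynchronous, event-based option more concrete, here is a minimal sketch using MassTransit over RabbitMQ. The ReportRequested contract, the "report-requests" queue name and the host address are illustrative assumptions, and the exact bus configuration API varies a little between MassTransit versions.

// Configurations.API publishes an event with the enriched JSON instead of
// calling API-1 directly; API-1 hosts a consumer that generates the report.
using System;
using System.Threading.Tasks;
using MassTransit;

// Hypothetical message contract shared by both services.
public record ReportRequested(Guid CorrelationId, string EnrichedJson);

// Hosted in API-1: reacts to the event and generates the report.
public class ReportRequestedConsumer : IConsumer<ReportRequested>
{
    public Task Consume(ConsumeContext<ReportRequested> context)
    {
        Console.WriteLine($"Generating report for {context.Message.CorrelationId}");
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static async Task Main()
    {
        var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Host("rabbitmq://localhost");
            cfg.ReceiveEndpoint("report-requests", e => e.Consumer<ReportRequestedConsumer>());
        });

        await bus.StartAsync();

        // Configurations.API side: fire-and-forget publish keeps the services decoupled.
        await bus.Publish(new ReportRequested(Guid.NewGuid(), "{\"status\":\"ready\"}"));

        await Task.Delay(1000); // give the demo consumer a moment before shutting down
        await bus.StopAsync();
    }
}

Compared to a synchronous gRPC call, the publisher neither waits for API-1 nor needs to know where it runs, which is the reduced coupling referred to above.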

DiagnosticSource - DiagnosticListener - For frameworks only?

There are a few new classes in .NET Core for tracing and distributed tracing. See the markdown docs in here:
https://github.com/dotnet/corefx/tree/master/src/System.Diagnostics.DiagnosticSource/src
As an application developer, should we be instrumenting events in our code, such as sales or inventory depletion etc. using DiagnosticListener instances and then either subscribe and route messages to some metrics store or allow tools like Application Insights to automatically subscribe and push these events to the AI cloud?
OR
Should we create our own metrics collecting abstraction and inject/flow it down the stack "as per normal" and pretend I never saw DiagnosticListener?
I have a similar need to publish "health events" to Service Fabric which I could also solve (abstract) using DiagnosticListener instances sprinkled around.
DiagnosticListener is intended to decouple a library/app from the tracing system: i.e. any library can use DiagnosticSource to notify any consumer about interesting operations.
A tracing system can dynamically subscribe to such events and get extensive information about the operation.
If you develop an application and use a tracing system that supports DiagnosticListener, e.g. Application Insights, you may either use DiagnosticListener to decouple your code from the tracing system or use its API directly. The latter is more efficient, as there is no extra adapter that converts your DiagnosticSource events into AppInsights/other tracing systems' events. You can also fine-tune these events more easily.
The former is better if you actually want this layer of abstraction.
You can configure AI to use any DiagnosticListener (by specifying includedDiagnosticSourceActivities).
If you write a library and want to rely on something available on the platform so that any app can use it without bringing new extra dependencies - DiagnosticListener is your best choice.
Also consider that tracing and metrics collection are different: tracing is much heavier and does not assume any aggregation. If you just want custom metrics/events without in-proc/out-of-proc correlation, I'd recommend using the tracing system's APIs directly.
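As a rough illustration of the first option (decoupling your code through DiagnosticListener), here is a minimal sketch. The listener name "MyCompany.Sales", the "SaleCompleted" event and the payload shape are made-up examples; in practice a tracing system such as Application Insights plays the role of the observer.

using System;
using System.Collections.Generic;
using System.Diagnostics;

// Observer for the events of one particular DiagnosticListener.
internal sealed class SalesEventObserver : IObserver<KeyValuePair<string, object>>
{
    public void OnNext(KeyValuePair<string, object> evt) =>
        Console.WriteLine($"{evt.Key}: {evt.Value}");
    public void OnCompleted() { }
    public void OnError(Exception error) { }
}

// Watches all listeners created in the process and subscribes to the one it cares about.
internal sealed class AllListenersObserver : IObserver<DiagnosticListener>
{
    public void OnNext(DiagnosticListener listener)
    {
        if (listener.Name == "MyCompany.Sales")
        {
            listener.Subscribe(new SalesEventObserver());
        }
    }
    public void OnCompleted() { }
    public void OnError(Exception error) { }
}

public static class SalesInstrumentation
{
    private static readonly DiagnosticListener Listener = new DiagnosticListener("MyCompany.Sales");

    public static void RecordSale(string sku, decimal amount)
    {
        // IsEnabled avoids building the payload when nobody is listening.
        if (Listener.IsEnabled("SaleCompleted"))
        {
            Listener.Write("SaleCompleted", new { Sku = sku, Amount = amount });
        }
    }
}

public static class Program
{
    public static void Main()
    {
        DiagnosticListener.AllListeners.Subscribe(new AllListenersObserver());
        SalesInstrumentation.RecordSale("SKU-42", 19.99m);
    }
}

The "use its API directly" alternative would instead call the tracing SDK (for Application Insights, a TelemetryClient) at the same call sites, skipping the adapter layer.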

Is Retrofit of any use if I am already using OkHttp3, Moshi and RxJava in my project?

I have done some R&D on the above libraries and have used some in my project. I am using Moshi for JSON parsing, the OkHttp3 library for HTTP connections, and RxJava for asynchronous and event-based programming. Now when I looked at Retrofit, I felt it's of no use, as I have already used the main components of Retrofit myself.
I just want to know people's opinions on whether I am thinking in the right direction or not.
Edit: From my point of view, Retrofit only provides a clean interface over the HTTP client where one can customize requests, headers, etc. with annotations.
This is a good choice of libraries from my point of view. The first three are developed by Square and they work very well together. However, the main difference is that each library works at a different layer.
OkHttp: transport layer. Deals with the HTTP protocol. Performs networking.
Moshi: JSON parser. Transforms bytes from OkHttp into Java objects.
Retrofit: REST layer. Transforms HTTP logic (status codes) into REST logic.
RxJava: provides tools to write reactive code instead of imperative code.

Can I use Event Hub as a backplane for SignalR

Right now I'm using a Service Bus topic as a backplane for SignalR, but Event Hub is much cheaper than topics. Therefore, I want to use Event Hub as a backplane for SignalR.
Can I do this now or in the near future?
Not sure this is a real answer, but it looks like one to me. I am not sure whether Event Hub is suitable for your goal, but assuming you have verified that it is, you can write your own backplane supporting it. As an exercise I wrote one that used SignalR itself (doh) as the backplane system, and it was not terribly difficult. You can take inspiration from the source code of the Service Bus backplane and roll your own. In particular, most of your logic will go into implementing your version of a ScaleoutMessageBus-derived class.
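To give an idea of the shape of such a backplane, here is a rough, untested skeleton of a ScaleoutMessageBus-derived class for ASP.NET SignalR 2.x that sends through Microsoft.Azure.EventHubs. The class and configuration names are assumptions, and the receive side (an EventProcessorHost or PartitionReceiver feeding OnReceived) is only indicated in a comment; that is where most of the real work would go.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Messaging;
using Microsoft.Azure.EventHubs;

// Hypothetical configuration carrying the Event Hub connection string.
public class EventHubScaleoutConfiguration : ScaleoutConfiguration
{
    public string ConnectionString { get; set; }
}

public class EventHubMessageBus : ScaleoutMessageBus
{
    private readonly EventHubClient _client;

    public EventHubMessageBus(IDependencyResolver resolver, EventHubScaleoutConfiguration configuration)
        : base(resolver, configuration)
    {
        _client = EventHubClient.CreateFromConnectionString(configuration.ConnectionString);

        // With a single Event Hub we expose a single scaleout stream (index 0).
        Open(0);

        // TODO: start an EventProcessorHost / PartitionReceiver here and, for every
        // event received from the hub, call:
        //   OnReceived(0, sequenceId, ScaleoutMessage.FromBytes(eventBody));
        // so that messages published by other servers reach this node.
    }

    protected override Task Send(int streamIndex, IList<Message> messages)
    {
        // Pack the SignalR messages into the same wire format the built-in
        // backplanes use and push them to the Event Hub.
        var payload = new ScaloutPayload(messages);
        return _client.SendAsync(new EventData(new ScaleoutMessage(messages).ToBytes()));
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            _client.CloseAsync().GetAwaiter().GetResult();
        }
        base.Dispose(disposing);
    }
}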

What is the advantage of using OData with Web API?

I am already using the standard Web API and returning JSON objects to my client. Now I saw an application that returned OData.
Can someone explain if there is any reason for me to use OData if I do not want to query my data from anything other than my own client running in the browser? Are there advantages that I could get through using OData?
If you are only using your data in your own browser application, there are only a few advantages to using OData in your situation:
OData is able to provide metadata about your service interface that can be used to generate client code to access the service. So if you have lots of client classes that you need to create, this could speed up your process. On the other hand, if you can share your classes between the server and an ASP.NET based client or if you only have a few classes, this might not be relevant in your situation.
Another - bigger - advantage in your situation is the support for generic queries against the service data. OData supports IQueryable, so you can decide on the client side how to filter the data that the service provides. So you do not have to implement various actions or use query parameters to provide filtered data. This also means that if you need a new filter for your client, it is very likely that you do not have to change the server and can just build the query on the client side. Possible filters include $filter expressions to filter the data, but also operations like $skip and $top that are useful when paging data. For details on OData and queries, see this link.
For a complete overview about OData and Web API see this link.
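As a small sketch of the IQueryable support described above (ASP.NET Web API with the Microsoft.AspNet.OData package; the Product entity and the controller are made up for illustration), an action marked with [EnableQuery] lets the client apply $filter, $top and $skip without any extra server code. Depending on the package version, OData query support may also need to be enabled in the Web API configuration.

using System.Linq;
using System.Web.Http;
using Microsoft.AspNet.OData;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductsController : ApiController
{
    private static readonly IQueryable<Product> Products = new[]
    {
        new Product { Id = 1, Name = "Keyboard", Price = 49m },
        new Product { Id = 2, Name = "Monitor",  Price = 199m },
    }.AsQueryable();

    // GET /api/products?$filter=Price gt 100&$top=10&$skip=0
    // [EnableQuery] applies the OData query options to the returned IQueryable,
    // so no dedicated filter actions or query parameters are needed on the server.
    [EnableQuery]
    public IQueryable<Product> Get() => Products;
}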
Here are a few advantages of OData.
OData is an open protocol started by Microsoft. It is based on REST services, so we can get data based on a URL.
It supports various formats like Atom/AtomPub over HTTP and also supports the JSON format.
No need to create proxy classes, which we used to do for web services.
You will be able to write your own custom methods.
It is very lightweight, so the interaction between client and server will be fast compared to web services and other technologies.
Very simple to use.
Here are a few reference links.
http://sandippatilprogrammer.wordpress.com/2013/12/03/what-is-odata-advantages-and-disadvantages/
http://geekswithblogs.net/venknar/archive/2010/07/08/introduction-odata.aspx
http://www.zdnet.com/blog/microsoft/why-microsofts-open-data-protocol-matters/12700
I agree with the answers already posted, but as an additional insight...
You mentioned that:
... if I do not want to query my data from anything other than my own
client running in the browser...
You may not wish to run it normally through anything but your own client, but using OData you could use other querying tools for debugging. For example, LINQPad allows you to use OData endpoints (such as the one provided by Stack Overflow).
It's probably not a good enough reason to implement OData if you don't have another reason to do so, but it's an added bonus.
