Understanding Microservices and gRPC

I have a Python website and a gRPC service that's going to be written in Go.
I am looking to introduce microservices to the website.
The gRPC chat server will be written in Go. The gRPC client will be written in Python, as my website is Python. In the future I would like a Swift gRPC stub to communicate with this Go server as well.
Is this the correct architecture?
My website's current gRPC client flow:
chat submit button -> Python gRPC client -> Go gRPC server -> DB -> Go gRPC server -> Python gRPC client -> UpdateChat
A request from any future mobile gRPC client (in any language):
UpdateChatRequest -> any gRPC client -> Go gRPC server -> DB -> Go gRPC server -> any gRPC client -> UpdateChat

If you are intending to build multiple of what you might call "gRPC clients" that talk to the Go gRPC server to perform CRUD operations for chat, this looks like a decent model. I will say that nowadays, a lot of gRPC implementations are used for inter-microservice communication more than for standard (client -> gateway -> microservice cluster) client-server comms. In my opinion, this is largely because gRPC-Web cannot fully operate over HTTP/2. That said, I honestly don't see a problem with implementing gRPC on thick clients (like native mobile or desktop), because gRPC can be fully supported in those scenarios, just not in the browser yet.
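For what it's worth, the Python side of that flow ends up being very little code. Here's a rough sketch, assuming a hypothetical chat.proto compiled with grpcio-tools; the service, message and host names (ChatService, SubmitMessageRequest, chat-service:50051) are made up for illustration, not taken from your question:

    import grpc

    # chat_pb2 / chat_pb2_grpc would be generated from a chat.proto with
    # grpcio-tools; ChatService / SubmitMessageRequest are placeholder names.
    import chat_pb2
    import chat_pb2_grpc

    def submit_chat_message(user_id, text):
        # Reuse a single channel per process in real code; created here for brevity.
        with grpc.insecure_channel("chat-service:50051") as channel:
            stub = chat_pb2_grpc.ChatServiceStub(channel)
            request = chat_pb2.SubmitMessageRequest(user_id=user_id, text=text)
            # The Go server writes to the DB and returns the updated chat state.
            return stub.SubmitMessage(request, timeout=5.0)

A future Swift or mobile client would generate its own stub from the same chat.proto and make the same call against the same Go server.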
If you are not using gRPC-Web on your front end, it might be worth looking into. gRPC-Web requires a proxy anyway, so you're effectively operating over HTTP/1.1 whether you use gRPC or not, until browsers support coupling more directly with HTTP/2.
It is a little unclear, but your architecture isn't necessarily screaming microservices. In reality, it is better not to view the "microservices" concept as an overnight transition. Your architecture is a monolith right now. Using technologies like gRPC doesn't automatically make your architecture a microservices architecture, but it does better prepare you for the gradual transition you should take toward a more distributed, service-based model. Of course, the term "microservice" is often a buzzword with many different scopes of meaning, so it is important to know what context you are thinking in.

Related

Using RabbitMQ over HTTP

I have to connect an old but critical software to RabbitMQ. The software doesn't support AMQP, but it can do HTTP Requests.
Does RabbitMQ support plain HTTP? Or should I use a "proxy" or "app" that actively transforms the HTTP requests to AMQP 1.0 and pushes them to the RabbitMQ server?
https://www.rabbitmq.com/management.html
The management plugin supports a simple HTTP API to send and receive messages. This is primarily intended for diagnostic purposes but can be used for low volume messaging without reliable delivery.
As mentioned, it's designed for very low loads, but it may be usable. If you need higher loads, then by all means cast around for a library that does the job and create a proxy. Most languages will have something. I've personally created a lightweight API using Lumen and https://github.com/bschmitt/laravel-amqp to tie a few disparate services together in the past, and it seems to work very well.
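For completeness, publishing through the management API is just an authenticated HTTP POST. A rough sketch in Python with requests (assuming a local broker with the management plugin enabled on port 15672 and the default guest/guest credentials, which only work from localhost; in the HTTP API the default exchange is addressed as amq.default):

    import requests

    def publish_via_http(queue_name, payload):
        # %2f is the URL-encoded default vhost "/"
        url = "http://localhost:15672/api/exchanges/%2f/amq.default/publish"
        body = {
            "properties": {},
            "routing_key": queue_name,   # the default exchange routes on queue name
            "payload": payload,
            "payload_encoding": "string",
        }
        resp = requests.post(url, json=body, auth=("guest", "guest"), timeout=5)
        resp.raise_for_status()
        # The API reports whether the message was actually routed to a queue.
        return resp.json().get("routed", False)

The old software could make an equivalent POST directly, with no AMQP library involved, which is exactly the low-volume, no-guaranteed-delivery scenario the docs describe.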
It is possible, but not really recommended, depending on load. You really have three options, two of which are WebSocket-based and one that seems like what you're looking for. I'd suggest starting with the RabbitMQ docs.

Difference between SignalR and Pusher

I want to create a web app using React as the front end technology. A requirement for the app is that the server will be able to update all the clients with information about changes (it doesn't have to be exact real time, but it should update after no more than 10 seconds).
Solutions like clients requesting updates from the server every several seconds are out of the question.
Requirements:
1) The server should be implemented with either .NET or Node.js.
2) The connection MUST be secured via port 443 of the IIS.
I read a bit about Microsoft's SignalR and about Pusher Channels, which seem to provide exactly the kind of service I require.
Could you please elaborate about what exactly are the differences between them? When should I choose each? Which of them got more community support? Which is easier to implement? Stuff like that...
Both SignalR and Pusher Channels ultimately use WebSockets to deliver messages to clients, so both should meet your requirement to deliver messages to clients in realtime.
1) Both offerings meet your requirements for library support:
SignalR supports .NET:
https://dotnet.microsoft.com/apps/aspnet/signalr
Pusher Channels has server support for both Node.js and .NET:
https://github.com/pusher/pusher-http-node
https://github.com/pusher/pusher-http-dotnet
2) Both offerings also meet your requirements for sending messages over TLS/WSS:
SignalR:
https://kimsereyblog.blogspot.com/2018/07/signalr-with-asp-net-core.html
Pusher Channels:
Securing Pusher's messages
In terms of the differences between them, this depends on your implementation: if you just run SignalR on your own IIS server, then it will be down to you to manage all of the WebSocket connections and all of the scaling challenges that come with this.
However, similar to how Channels works, SignalR also has a managed WebSocket service (the Azure SignalR Service), so you do not need to manage the connections or scaling. You just make an API request with the message you want to send to either Channels or SignalR, and this message is then broadcast to the interested clients connected over WebSockets. In this scenario you do not manage the WebSocket connections yourself.
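To make that "just make an API request" point concrete, here is a minimal sketch using the official pusher Python server library (Python only to keep the examples in this thread in one language; the Node.js and .NET libraries linked above follow the same pattern, and the credentials and channel/event names are placeholders):

    import pusher

    channels_client = pusher.Pusher(
        app_id="APP_ID",
        key="APP_KEY",
        secret="APP_SECRET",
        cluster="eu",
        ssl=True,  # traffic to the Channels API goes over TLS
    )

    # Broadcasts to every client subscribed to the "chat" channel; your server
    # never manages the websocket connections itself.
    channels_client.trigger("chat", "new-message", {"text": "hello"})
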
However, in terms of pricing, Channels appears to be far more competitive (especially the free offering), so if you are looking at the managed offering, Channels looks to be the better value proposition:
https://azure.microsoft.com/en-gb/pricing/details/signalr-service/
https://pusher.com/channels/pricing
Both offerings look fairly similar in terms of implementation (assuming you are using the managed service). The complexity would increase if you implement SignalR on IIS yourself:
https://learn.microsoft.com/en-us/aspnet/core/signalr/scale?view=aspnetcore-2.2
In terms of support Pusher has a free application support offering:
https://support.pusher.com/hc/en-us
Hope this helps!
This presentation has some answers: A 10 Minute Guide to Choosing a Realtime Framework

easy server and client communication

I want to create a program for my desktop and an app for my android. Both of them will do the same, just on those different devices. They will be something like personal assistants, so I want to put a lot of data into them ( for example contacts, notes and a huge lot of other stuff). All of this data should be saved on a server (at least for the beginning I will use my own Ubuntu server at home).
For the Android app I will obviously use Java, and the database on the server will be a MySQL database, because that's the database I have used for everything. The Windows program will most likely be written in one of these languages: Java, C#, or C++, as these are the languages I am able to use quite well.
Now to the problem/question: The server should have a good backend which will be communicating with the apps/programs and read/write data in the database, manage the users and all that stuff. But I am not sure how I should approach programming the backend and the "network communication" itself. I would really like to have some relatively easy way to send secured messages between server and clients, but I have no experience in that matter. I do have programming experience in general, but not with backend and network programming.
side notes:
I would like to "scale big". At first this system will only be used by me, but it may be opened to more people or even sold.
Also I would really like to have a (partly) self-programmed backend on the server, because I could very well use it for a lot of other stuff, like some automation features in my house, which will be implemented.
EDIT: I would like to be able to scale big. I don't need support for hundreds of people at the beginning ;)
You need to research socket programming. Sockets provide relatively easy network communication (security can be layered on top, for example with TLS). Essentially, you will create some sort of socket listener on your server. The clients will create sockets, initialize them to connect to a certain IP address and port number, and then connect. Once the server receives these connections, the server creates a socket for that specific connection, and the two sockets can communicate back and forth.
If you want your server to be able to handle multiple clients, I suggest creating a new Thread every time the server receives a connection, and that Thread will be dedicated to that specific client connection. Having a multi-threaded server where each client has its own dedicated Thread is a good starting point for an efficient server.
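A bare-bones sketch of that thread-per-connection pattern (in Python here, just to keep the examples in one language; the C# samples linked below show the same idea). Note that plain sockets are not encrypted; for security you would wrap them with the ssl module or equivalent:

    import socket
    import threading

    def handle_client(conn, addr):
        # Dedicated thread per client: read until the client disconnects.
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                conn.sendall(data)  # echo; a real server would parse a protocol here

    def serve(host="0.0.0.0", port=9000):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
            listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            listener.bind((host, port))
            listener.listen()
            while True:
                conn, addr = listener.accept()
                # One dedicated thread per client connection, as described above.
                threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()

    if __name__ == "__main__":
        serve()
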
Here are some good C# examples of Socket clients and servers: https://msdn.microsoft.com/en-us/library/w89fhyex(v=vs.110).aspx
As a side note, you can also write Android apps in C# with Xamarin. If you did your desktop program and Android app both in C#, you'd be able to write most of the code once and share it between the two apps easily.
I suggest you start learning socket programming by creating very simple client and server applications in order to grasp how they will be communicating in your larger project. Once you can grasp the communication procedures well enough, start designing your larger project.
But I am not sure how I should approach programming the backend and the "network communication" itself.
Traditionally, a server for your case would be a web server exposing a REST API (JSON). All clients do HTTP requests and render/parse JSON. The REST API is mapped to database calls and exposes some data model. If it were in Java, that might mean the Jetty web server and the Jackson JSON parser.
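As a rough illustration of "REST API mapped to database calls", here is a minimal sketch using Flask and sqlite3 as stand-ins (you mention MySQL and I mention Jetty/Jackson for Java, but the shape is the same; the table and route names are made up):

    import sqlite3
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def db():
        conn = sqlite3.connect("assistant.db")
        conn.row_factory = sqlite3.Row
        return conn

    @app.route("/contacts", methods=["GET"])
    def list_contacts():
        # REST endpoint mapped straight onto a database query.
        rows = db().execute("SELECT id, name, phone FROM contacts").fetchall()
        return jsonify([dict(row) for row in rows])

    @app.route("/contacts", methods=["POST"])
    def create_contact():
        payload = request.get_json()
        with db() as conn:
            cur = conn.execute(
                "INSERT INTO contacts (name, phone) VALUES (?, ?)",
                (payload["name"], payload["phone"]),
            )
        return jsonify({"id": cur.lastrowid}), 201

Both the desktop program and the Android app would then just issue HTTP requests against endpoints like these.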
I would really like to have some relatively easy way to send secured messages between server and clients,
Sending HTTP requests is probably the easiest way to communicate with a service. Making it secure is a matter of enabling HTTPS on the server side and implementing some user authentication and action authorization. Enabling HTTPS with Jetty for Java requires a few lines of code. Authentication is usually done via the OAuth2 technique, and authorization could be based on ACLs. You may go beyond this and enable encryption of data at rest and employ other practices.
I would like to "scale big". At first this system will only be used by
me, but it may be opened to more people or even sold.
I would like to be able to scale big. I don't need support for
hundreds of people at the beginning
I anticipate scalability can become the main challenge. Depending on how far you want to scale, you may need to go to distributed (Big Data) databases and distributed serving and messaging layers.
Also I would really like to have a (partly) self-programmed backend on the server, because I could very well use it for a lot of other stuff, like some automation features in my house, which will be implemented.
I am not sure what you mean by self-programmed. Usually a backend encapsulates some application-specific business logic.
It could be a piece of logic between your database and the HTTP transport layer.
In a more complicated scenario your logic can be put into an asynchronous service behind the backend, so the service can do its job without blocking clients' requests.
And in the most (probably) complicated scenario your backend may do machine learning (for example, if you would like your software stack to learn your household habits and automate the house according to your expectations without actually coding this automation).
but I have no experience in that matter. I do have programming experience in general, but not with backend and network programming.
If you can code, writing a backend is not a very hard problem, and there are a lot of resources. However, you would need time (or money) to learn and to do it, which may distract you from the development of your applications (or you may enjoy it).
The alternative to an in-house developed backend could be a Backend-as-a-Service (BaaS), in the cloud or on premises. There are a number of products in this market. A BaaS will allow you to eliminate the development of the backend entirely (or close to it). At minimum it should provide:
a REST API to data storage with a configurable data model,
security,
scalability,
custom business logic
Disclaimer: I am a member of the webintrinsics.io team, which is a Backend-as-a-Service. Check our website and contact us if you need to; we will be able to work with you and help you either with BaaS or with guiding you towards some useful resources.
Good luck with your work!

Service Oriented Architecture - AMQP or HTTP

A little background.
Very big monolithic Django application. All components use the same database. We need to separate services so we can independently upgrade some parts of the system without affecting the rest.
We use RabbitMQ as a broker to Celery.
Right now we have two options:
HTTP Services using a REST interface.
JSONRPC over AMQP to an event loop service
My team is leaning towards HTTP because that's what they are familiar with, but I think the advantages of using RPC over AMQP far outweigh those of HTTP.
AMQP provides us with the capabilities to easily add in load balancing, and high availability, with guaranteed message deliveries.
Whereas with HTTP we have to create client HTTP wrappers to work with the REST interfaces, put in a load balancer, and set up that infrastructure in order to have HA, etc.
With AMQP I can just spawn another instance of the service, it will connect to the same queue as the other instances and bam, HA and load balancing.
Am I missing something with my thoughts on AMQP?
First:
REST and RPC are architecture patterns; AMQP is a wire-level protocol and HTTP is an application protocol, both running on top of TCP/IP.
AMQP is a specialized protocol while HTTP is a general-purpose protocol; thus, HTTP has a damn high overhead compared to AMQP.
AMQP is asynchronous in nature, whereas HTTP is synchronous.
Both REST and RPC use data serialization; which format is up to you, and it depends on your infrastructure. If you are using Python everywhere, I think you can use Python's native serialization, pickle, which should be faster than JSON or other formats.
Both HTTP+REST and AMQP+RPC can run in heterogeneous and/or distributed environments.
So if you are choosing between HTTP+REST and AMQP+RPC, the answer is really a matter of infrastructure complexity and resource usage. Without any specific requirements both solutions will work fine, but I would rather build some abstraction to be able to switch between them transparently.
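To illustrate what I mean by an abstraction that lets you switch transparently, something along these lines (requests and pika are just example libraries here, and the class, queue and endpoint names are made up):

    import json
    from abc import ABC, abstractmethod

    import pika
    import requests

    class Transport(ABC):
        @abstractmethod
        def send(self, service, payload):
            ...

    class HttpTransport(Transport):
        def __init__(self, base_url):
            self.base_url = base_url

        def send(self, service, payload):
            # Fire a plain HTTP POST at the target service's REST endpoint.
            resp = requests.post("%s/%s" % (self.base_url, service), json=payload, timeout=5)
            resp.raise_for_status()

    class AmqpTransport(Transport):
        def __init__(self, host="localhost"):
            self.connection = pika.BlockingConnection(pika.ConnectionParameters(host))
            self.channel = self.connection.channel()

        def send(self, service, payload):
            # Publish to a queue named after the service via the default exchange.
            self.channel.queue_declare(queue=service, durable=True)
            self.channel.basic_publish(exchange="", routing_key=service, body=json.dumps(payload))

    # Calling code depends only on Transport, so swapping HTTP for AMQP
    # becomes a configuration change, not a code change.
    def notify_billing(transport):
        transport.send("billing", {"event": "invoice_created", "id": 42})
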
You said that your team is familiar with HTTP but not with AMQP. If development time is an important factor, you have your answer.
If you want to build an HA infrastructure with minimal complexity, I guess the AMQP protocol is what you want.
I have had experience with both of them, and the advantages of RESTful services are:
they map well onto a web interface
people are familiar with them
easy to debug (due to the general purpose of HTTP)
easy to provide an API to third-party services.
Advantages of AMQP-based solution:
damn fast
flexible
cost-effective (in terms of resource usage)
Note that you can provide a RESTful API to third-party services on top of your AMQP-based API, since REST is not a protocol but rather a paradigm; you should keep this in mind while building your AMQP RPC API. I have done it this way to provide an API to external third-party services, and to give access to the API from those parts of the infrastructure which run on an old codebase or where it is not possible to add AMQP support.
If I am right, your question is about how to better organize communication between different parts of your software, not how to provide an API to end-users.
If you have a high-load project, RabbitMQ is a damn good piece of software, and you can easily add any number of workers which run on different machines. Also it has mirroring and clustering out of the box. And one more thing: RabbitMQ is built on top of Erlang/OTP, which is a highly reliable, stable platform ... (bla-bla-bla); it is good not only for marketing but for engineers too. I had an issue with RabbitMQ only once, when nginx logs took up all the disk space on the same partition where RabbitMQ was running.
UPD (May 2018):
Saurabh Bhoomkar posted a link to the MQ vs. HTTP article written by Arnold Shoon on June 7th, 2012; here's a copy of it:
I was going through my old files and came across my notes on MQ and thought I’d share some reasons to use MQ vs. HTTP:
If your consumer processes at a fixed rate (i.e. can’t handle floods to the HTTP server [bursts]) then using MQ provides the flexibility for the service to buffer the other requests vs. bogging it down.
Time independent processing and messaging exchange patterns — if the thread is performing a fire-and-forget, then MQ is better suited for that pattern vs. HTTP.
Long-lived processes are better suited for MQ, as you can send a request and have a separate thread listening for responses (note that WS-Addressing allows HTTP to process in this manner but requires both endpoints to support that capability).
Loose coupling where one process can continue to do work even if the other process is not available vs. HTTP having to retry.
Request prioritization where more important messages can jump to the front of the queue.
XA transactions – MQ is fully XA compliant – HTTP is not.
Fault tolerance – MQ messages survive server or network failures – HTTP does not.
MQ provides for ‘assured’ delivery of messages once and only once; HTTP does not.
MQ provides the ability to do message segmentation and message grouping for large messages – HTTP does not have that ability, as it treats each transaction separately.
MQ provides a pub/sub interface where-as HTTP is point-to-point.
UPD (Dec 2018):
As noticed by @Kevin in comments below, it's questionable that RabbitMQ scales better than RESTful services. My original answer was based on simply adding more workers, which is just a part of scaling, and as long as a single AMQP broker's capacity is not exceeded it is true, though after that it requires more advanced techniques like Highly Available (Mirrored) Queues, which leave both HTTP- and AMQP-based services with some non-trivial complexity to scale at the infrastructure level.
After careful thinking I also removed the claim that maintaining an AMQP broker (RabbitMQ) is simpler than any HTTP server: the original answer was written in June 2013 and a lot has changed since then, but the main change is that I have gained more insight into both approaches, so the best I can say now is "your mileage may vary".
Also note that comparing HTTP and AMQP is apples to oranges to some extent, so please do not interpret this answer as the ultimate guidance to base your decision on, but rather take it as one source or a reference for your further research to find out what exact solution will match your particular case.
The irony of the solution the OP had to accept is that AMQP or other MQ solutions are often used to insulate callers from the inherent unreliability of HTTP-only services -- to provide some level of timeout and retry logic and message persistence so the caller doesn't have to implement its own HTTP insulation code. A very thin HTTP gateway or adapter layer over a reliable AMQP core, with the option to go straight to AMQP using a more reliable client protocol like JSONRPC, would often be the best solution for this scenario.
Your thoughts on AMQP are spot on!
Furthermore, since you are transitioning from a monolithic to a more distributed architecture, adopting AMQP for communication between the services is a better fit for your use case. Here is why…
Communication via a REST interface, and by extension HTTP, is synchronous in nature — this synchronous nature of HTTP makes it a not-so-great option as a pattern of communication in a distributed architecture like the one you describe. Why?
Imagine you have two services, service A and service B, in your Django application that communicate via REST API calls. These API calls usually play out this way: service A makes an HTTP request to service B, waits idly for the response, and only proceeds to the next task after getting a response from service B. In essence, service A is blocked until it receives a response from service B.
This is problematic because one of the goals of microservices is to build small autonomous services that would always be available even if one or more services are down: no single point of failure. The fact that service A connects directly to service B and, in fact, waits for some response introduces a level of coupling that detracts from the intended autonomy of each service.
AMQP on the other hand is asynchronous in nature — this asynchronous nature of AMQP makes it great for use in your scenario and other like it.
If you go down the AMQP route, instead of service A making requests to service B directly, you can introduce an AMQP based MQ between these two services. Service A will add requests to the Message Queue. Service B then picks up the request and processes it at its own pace.
This approach decouples the two services and, by extension, makes them autonomous. This is true because:
If service B fails unexpectedly, service A will keep accepting requests and adding them to the queue as though nothing happened. The requests would always be in the queue for service B to process them when it’s back online.
If service A experiences a spike in traffic, service B won’t even notice because it only picks up requests from the Message Queues at its own pace
This approach also has the added benefit of being easy to scale— you can add more queues or create copies of service B to process more requests.
Lastly, service A does not have to wait for a response from service B, the end users don’t also have to wait for long— this leads to improved performance and, by extension, a better user experience.
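As a rough sketch of the "service B picks up requests at its own pace" side, here is what the consumer could look like with pika against a local RabbitMQ (the queue name and message format are made up for illustration); running several copies of this consumer gives you the competing-consumers scaling mentioned above:

    import json

    import pika

    def process(request):
        print("processed", request)  # placeholder for service B's real work

    def handle_request(channel, method, properties, body):
        process(json.loads(body))
        # Ack only after processing, so an unexpected crash re-queues the message.
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="service_b_requests", durable=True)
    channel.basic_qos(prefetch_count=1)  # take one message at a time, i.e. "at its own pace"
    channel.basic_consume(queue="service_b_requests", on_message_callback=handle_request)
    channel.start_consuming()

Service A, meanwhile, only ever publishes to the queue and never waits on service B directly.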
Just in case you are considering moving from HTTP to AMQP in your distributed architecture and you are just not sure how to go about it, you can check out this 7-part beginner guide on message queues and microservices. It shows you how to use a message queue in a distributed architecture by walking you through a demo project.

Internet connectivity for a playn multiplayer action game

The short story: a friend and I are making a multiplayer action game and we thought PlayN would be great for this. Android, Java and HTML5 support are the most important ones, but we don't want to cut out the others if not necessary.
The problem is now that we want to implement the networking part of it. We have implemented our own capable server and thought we would use long-polling HTTP requests for communication. We estimate we need some way to have one thread running for the communication that uses messages, plus two thread-safe queues: one queue for incoming messages that the update() part can consume from and one queue for outgoing messages to the server.
Is there any way to implement this without losing platform support? Or any other idea how we can implement this?
PlayN currently has no cross-platform support for persistent socket connections to a server. You will need to implement your own cross-platform abstraction. You can use WebSockets for the HTML5 backend, and you can look for a WebSockets library for Android and whatever other platforms you intend to support.
You can also use the Nexus library, which is designed to work with PlayN and provide client/server communication. However, it raises the level of abstraction substantially beyond passing simple messages between the client and server, so it might be easier to just implement your own simple WebSockets based communication than to learn how Nexus works.
