Does Azure Service Fabric do the same thing as Docker?

My thinking is that people use Docker to be sure that the local environment is the same as production, and so that they can stop thinking about where their apps are physically running; balancing mechanisms should just allocate apps to the best places for that moment.
I'm 100% web based and I'm going to move to the cloud together with our databases, and what cannot be moved will be seamlessly bridged so that the corporate network and the cloud become one subnetwork.
And so I'm wondering: maybe Service Fabric already does the same thing that Docker does, plus it gives us an address translation service (fabric://, which acts a bit like DNS for the processes in fabric space), plus (important for some) it encourages on-demand worker allocation, a huge scalability perk.
Can Service Fabric successfully replace Docker?
Is it gaining audience and acceptance? Because otherwise even the greatest invention can fail.

It's confusing since Docker (the company) is trying to stake claims in everything cloud.
Docker Engine (what most people call "Docker") is a containerization technology. It can give you
Process isolation
Network isolation
Consistent application environment
Docker Hub is an image registry. It stores Docker images so you can download them as part of your deployment.
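To make the Engine/Hub split concrete, here is a minimal sketch using the Docker SDK for Python (the docker package); the image name and port mapping are just examples:

    import docker

    # Connect to the local Docker Engine (assumes the daemon is running).
    client = docker.from_env()

    # Pull an image from Docker Hub, the default registry.
    client.images.pull("nginx:1.25")

    # Run it as an isolated container: its own process tree, network
    # namespace and filesystem, identical wherever the image runs.
    container = client.containers.run(
        "nginx:1.25",
        detach=True,
        ports={"80/tcp": 8080},  # map container port 80 to host port 8080
    )
    print(container.name, container.status)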
Docker Cloud is an orchestration system for Docker. It can
Scale your applications up and down
Connect your applications to each other
Run CI testing, integrated with Docker Hub (this isn't part of orchestration, just another thing it does)
Service Fabric is an orchestration system. It can orchestrate Docker containers, but it can also integrate more tightly with your services if you build specifically for Fabric. (Docker is completely agnostic about what runs inside a container.)
So Service Fabric is mostly comparable to Docker Cloud, though it's not an exact match. There are some other Docker-based orchestration solutions (Kubernetes is probably the biggest) and there are other cloud-based micro-service solutions (Heroku is probably the best-known).
The primary disadvantage of Service Fabric is that it's a Microsoft technology and so you're going to be tied to Azure to a greater degree than if you were running Docker. The other is that Docker has a broader range of choices for building your stack: all three Docker-things I listed above have at least one open-source alternative (this is also a big disadvantage of Docker, since nobody's laying out a single Best Practices For You document).
If you love Microsoft and if cobbling systems together is not something that's important to you, then Service Fabric should be a fine alternative to the Docker ecosystem. (And you can still run Docker containers under it.)

The key similarities between Service Fabric and Docker containerization:
Both Docker and SF are capable of creating an immutable image out of your micro-service implementation, on both platforms, Linux and Windows.
Both Docker and SF are capable of orchestrating your containerized application within a cluster of VMs. These VMs can be anywhere: public cloud, private cloud or your own data center. Please note that both of them are cloud-platform agnostic, that is, neither has a strong affinity to any particular cloud service. So as long as you are not using any cloud-specific feature within your micro-service, you should be fine.
Both Docker and SF are capable of exhibiting the essential capabilities of an orchestration platform: service discovery, service-level load balancing, network-level isolation among services, failover handling, replication control, etc.
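For instance, service discovery can be exercised through Service Fabric's HTTP management gateway. A minimal sketch in Python, assuming a cluster gateway on localhost:19080 and an example service name (partitioned services need extra partition-key query parameters):

    import requests

    CLUSTER = "http://localhost:19080"  # default SF HTTP gateway port

    # Resolve the current endpoints of a service addressed by its
    # fabric:/ name ("MyApp~MyService" below is just an example).
    resp = requests.get(
        f"{CLUSTER}/Services/MyApp~MyService/$/ResolvePartition",
        params={"api-version": "6.0"},
    )
    resp.raise_for_status()
    print(resp.json()["Endpoints"])  # addresses of the service replicas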
The key differences between Service Fabric and Docker containerization:
A Docker container is essentially a deployment/packaging construct. Docker doesn't dictate what you package within a container as part of your service implementation, nor does it provide any programming construct with which to implement your kind of service. Service Fabric, on the other hand, provides programming constructs in the form of base types/interfaces from which your service implementation can start with a certain kind of service declared: stateful service, stateless service, or virtual actor.
In the Docker world, everything is a container; your minimum deployment/orchestration unit is a container, so it doesn't recognize or support an individual process. In SF, by contrast, a micro-service derived from a stateless/stateful service can be orchestrated and governed as a plain process. SF also supports container orchestration the way Docker does, and the latest version of SF allows packaging your stateful/stateless service within a container.
With the above facts in mind, please note that SF doesn't have a strong affinity to any cloud provider either. It can run equally well on any public cloud, Azure, AWS or GCP, as long as you are able to create VMs with the desired platform.

It is not comparable at all. With Service Fabric, you get health monitoring, code integration with the fabric, logging, monitoring, load balancing, and other intelligent features. Your application can even execute shutdown code. Service Fabric is not just for Microsoft technologies: even Docker can reside inside SF, as can rkt or a Unix OS. Security and networking features (in line with web apps) are another plus. Reliable Collections are simply brilliant. And a roadmap to better application building and performance is guaranteed for the companies adopting it (history says so).
This question heavily favors the 'greatest invention', Docker. This comparison may do good for Docker marketing, but no one will replace SF with Docker. Docker is just a tiny OS copy (nothing to do with services, applications or intelligence). Docker has nothing to do with application development; that wasn't the intention. It's just that people have started to feel the need for isolation and sharing, and that is what Docker is all about.

Related

Monolithic to Microservices architecture

My company has a .NET project which serves the following use cases:
It listens on a WebSocket Port with certificate based client authentication.
It listens on another WebSocket Port with authorization header based client authentication.
It listens on a TCP Port with certificate based client authentication.
On the above three ports different set of client devices are connected.
Now my company wants to convert this application to .NET Core so that it can be deployed on Linux servers to save deployment cost. As an architect, I am thinking in the direction of adopting a microservices architecture along with migrating the application to .NET Core. So I am thinking that the above application can be broken down into three microservices based on the above use cases.
AFAIK, microservices architecture means breaking down your application into smaller services, each serving a particular use case. So is breaking this complete application into three different microservices correct or not?
My organization is very new to microservices architecture. I just want to know whether this thinking is correct, architecture-wise.
Thanks in advance for your help.
Generally I'd try to break things down based on business domains (or business capabilities) instead of technical reasons.
A good place to start might be reading about domain-driven design and bounded contexts - see here; there's some good further reading at the end of that link.
Yes, you are thinking in the right direction.
Here are my suggestions:
You should go for .NET Core and Docker to implement your microservices in a better way.
There are multiple cases where you will want Docker containers in this scenario (see the sketch right after this list):
1: Run the same image in multiple containers
2: Manage different containers
3: Run the same image in multiple environments
4: Tag and run images with different versions
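A minimal sketch of cases 1 and 3 with the Docker SDK for Python; the image name and environment variables are hypothetical:

    import docker

    client = docker.from_env()

    # Run the same image in multiple containers, each configured for a
    # different environment via environment variables.
    for env_name in ("Staging", "Production"):
        client.containers.run(
            "mycompany/websocket-service:1.0",  # hypothetical image
            detach=True,
            name=f"ws-service-{env_name.lower()}",
            environment={"ASPNETCORE_ENVIRONMENT": env_name},
        )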
And other reasons to go for microservices with Docker:
Microservices are smaller in size
Microservices are easier to develop, deploy, and debug, because a fix only needs to be deployed onto the microservice with the bug, instead of across the board
Microservices can be scaled quickly and can be reused among different projects
Microservices work well with containers like Docker
Microservices are independent of each other, meaning that if one of the microservices goes down, there is little risk of the full application shutting down.
Do some more research along these lines, and you can comfortably go for a microservices architecture.
This may not be answering your question, but I thought it could be useful, especially in light of the fact that your organization is very new to micro-services.
I would recommend carefully evaluating the advantages and especially the disadvantages (complexity) that a microservices architecture introduces.
Just a few examples of things that you will need to think about are log aggregation, communication between services (sync vs async), E2E and integration tests, eventual consistency, etc. Obviously you may end up not having to deal with some of these, but all of them do become a lot more complicated with micro-services.
There should be good business justification to take on the additional complexity (=cost).
Microservices shouldn't be measured by how small they are but by how autonomous and independent they are. Microservices are best designed around business and domain context, as discussed in detail in Identifying domain boundaries.
Since you are starting to build microservices in .NET Core, why not consider serverless microservices? You have plenty of options in the major clouds (AWS, Azure) to build serverless microservices. Serverless is quicker to build, you get a generous free tier, and you don't have to manage clusters. Is there a specific reason you would want to use Kubernetes? You can read more about cloud native and serverless here: Design Cloud native and Serverless
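To give a feel for how small a serverless microservice can be, here is a minimal AWS Lambda handler sketch in Python; the event shape assumes an API Gateway HTTP trigger, and the field names are illustrative:

    import json

    def handler(event, context):
        # A whole "microservice": one function behind an API Gateway route.
        device_id = (event.get("pathParameters") or {}).get("deviceId")
        return {
            "statusCode": 200,
            "body": json.dumps({"deviceId": device_id, "status": "ok"}),
        }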

Spring Cloud Data Flow and Airflow

We have Airflow as a workflow management tool to schedule/monitor tasks, and we also have some applications using Spring Cloud Data Flow (SCDF) for loose coupling across processes, with producers and consumers talking over the Kafka message bus and Grafana dashboards for the UI (ETL). Kubernetes and AWS (EKS) are options for deployments.
We are starting to create data pipelines which will have sources (files on S3, a server, or databases), processors (custom applications, AI/ML pipelines) and destinations (Kafka, S3, databases, ES). I am planning to use Airflow for the overall management of pipelines, with the tasks within a pipeline handled by SCDF-based applications or future applications written in Python as the AI/ML piece expands. Is this the correct approach, or can I let go of one in favour of the other?
Based on your requirements, SCDF would fit and provide options to manage your streaming data pipelines.
While you can still research other possible approaches, I can provide some more hints on what SCDF provides to meet some of your requirements.
SCDF provides out-of-the-box apps which you can extend/customize. These apps include an S3 source and sink which you can use out of the box. For a complete list of out-of-the-box apps, you can refer to the page here.
SCDF has a Kubernetes deployer, which works on any Kubernetes-based platform. You can configure your K8s-specific properties as a set of Kubernetes deployer properties when you deploy the applications.
You can embed a Python-based application as a processor/transformer in the streaming data pipeline. You can check this recipe from the SCDF site to learn more about this.
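A minimal sketch of such a Python processor using the kafka-python library; in a real SCDF deployment the broker and topic names are injected by the platform, so the environment variable names below are assumptions:

    import os
    from kafka import KafkaConsumer, KafkaProducer

    BROKERS = os.environ.get("KAFKA_BROKERS", "localhost:9092")
    INPUT = os.environ.get("INPUT_TOPIC", "input")
    OUTPUT = os.environ.get("OUTPUT_TOPIC", "output")

    consumer = KafkaConsumer(INPUT, bootstrap_servers=BROKERS)
    producer = KafkaProducer(bootstrap_servers=BROKERS)

    # Consume, transform, produce: the shape of any SCDF processor.
    for message in consumer:
        transformed = message.value.upper()  # stand-in for the real AI/ML step
        producer.send(OUTPUT, transformed)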
You can also embed a TensorFlow application as a processor inside a pipeline.
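For the split you describe (Airflow for the overall orchestration, SCDF for the pipelines themselves), one pattern is an Airflow task that launches an SCDF task over SCDF's REST API. A hedged sketch, where the server URL and task name are assumptions:

    from datetime import datetime

    import requests
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    SCDF_URL = "http://scdf-server:9393"  # hypothetical SCDF server

    def launch_scdf_task():
        # SCDF exposes task launching via POST /tasks/executions.
        resp = requests.post(
            f"{SCDF_URL}/tasks/executions",
            params={"name": "my-etl-task"},  # hypothetical task name
        )
        resp.raise_for_status()

    with DAG(
        dag_id="pipeline_orchestration",
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(task_id="launch_etl", python_callable=launch_scdf_task)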

What is the recommended way to interface with a CorDapp via an API?

I am designing a CorDapp, which would require user input as well as API integration, and I am considering various approaches to expose flows and vault queries to the outside world.
The default option seems to be to use Corda RPC. Unless I missed something, there are only Java bindings for it, which effectively restricts clients to being JVM-based. This is somewhat inconvenient, and ideally I would like something like OpenAPI to make it more open and implementation-agnostic.
Another option is to use some kind of Corda RPC to OpenAPI proxy. I know about Braid, and I'm sure there are others. Braid seems to support deployment as a Corda service packed together with the flows into the CorDapp itself, effectively making it run embedded in the Corda JVM.
Braid can be deployed as a standalone proxy too, which I suppose is option three.
Instinctively I find the embedded mode more attractive, as it reduces the number of moving parts compared to a standalone mode. However, I am concerned that such a model may in fact become discouraged at some point, either because Corda developers consider it a misuse of the services facility, or because some organisations will not be keen to deploy such code onto their nodes, especially when they may be running multiple CorDapps. I would imagine anything deployed as part of the Corda JVM would at least require more scrutiny due to its potential impact on other things running there, which in turn would reduce agility.
I wonder what approach to integrate with a CorDapp is actually recommended?
Edit 1: I know it is technically possible to embed a webserver into the node and expose a REST API from there, at least in the current version of Corda (4.3 at the time of writing). The question is more about whether it is a good idea to do so, or not, and why.
Take a look at the question I asked on Stack Overflow regarding a front end for a CorDapp; it might be of some help. Here is the link:
"Corda: Can we develop Dapps that will be run by IIS webserver to talk to Corda platform?"
You can use any front-end technology you want.
As of Corda 3, your backend must be JVM-based, for two reasons:
You need to load various flow, state and other class definitions onto the classpath to pass as arguments to flows, retrieve objects from the vault, etc.
You need to use the CordaRPCClient library to create an RPC connection to the node.
If you really need to write your back-end in another language, there are a few workarounds:
Create a thin Java webserver that sits between your main webserver and the node. The Java webserver translates HTTP requests from the main webserver into RPC calls to the node, and RPC responses from the node into HTTP responses to the main webserver. This is the approach taken by libraries such as Braid.
Use a library such as GraalVM to compile non-JVM languages to JVM bytecode. An example of writing a JVM webserver in Javascript using GraalVM is available here: https://github.com/nitesh7sid/cordapp-example-nodejs-server-graalvm
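To make the proxy option concrete: once something like Braid (or the thin Java webserver above) exposes your flows over HTTP, a non-JVM client is just an HTTP client. A sketch in Python, where the URL, path and payload are purely hypothetical and depend on how the proxy is configured:

    import requests

    # Hypothetical REST endpoint exposed by a Braid-style proxy
    # sitting in front of the Corda node.
    PROXY_URL = "http://localhost:8080/api/flows/issue-obligation"

    resp = requests.post(
        PROXY_URL,
        json={"counterparty": "O=PartyB,L=London,C=GB", "amount": 100},
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. the transaction id returned by the flow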

In GCP share a VPN gateway with other projects

I'm in the process of starting the design of the networks (VPC, subnetworks and such) as part of moving a rather complex on-premise organization structure to the cloud.
The chosen provider is GCP, and I have read up and taken the courses to become an associate engineer. However, the courses I've followed don't go into the technical details of doing something like this; they just present you with the possible options.
My background is that of a senior backend, then fullstack, developer, so unfortunately I lack some of the very interesting and useful knowledge of a sysadmin.
Our case is as follows:
On premise VMs on several racks, reachable only inside a VPN
Several projects on the GCP Cloud
Two of them need to connect to the on-premise VPN but there could be more
Some projects see each other's resources (VMs, SQL, etc.) using VPC Peering
Gradually we will abandon the on-premise, unless we find some legacy application that really is messed up
Now, I could just create a new VPN connection for every project from Hybrid Connectivity -> VPN, but I'd rather create a project dedicated to the VPN gateway setup and allow other projects to use that resource.
Is this a possible configuration? Is it a valid design? As far as I have explored the VPN creation, it seems that I'll have to create a VM that exposes an IP acting as the gateway; if that's the case, I was thinking of using VPC peering to allow other projects to exit into the on-premise VPN. No idea if I'm talking gibberish here. I'm still waiting for some information (IKE shared key, etc.) before attempting anything, so I'm rather lost at this point.
You have to take several aspects into consideration:
Cost: if you set up a VPN in each project, and if you have to double your connectivity for HA, it will be expensive. If you have only one gateway project, it's cheaper.
Cheaper implies a trade-off, though. A VPN tunnel has limited bandwidth: 3 Gbps (Cloud Interconnect too, although higher and more expensive). If all your projects use the same VPN thanks to mutualization, watch out for this bottleneck.
If you want to mutualize, at least for DEV/UAT projects, I recommend you use VPC Peering; I mean one VPN project, and the others connected with VPC peering (a sketch follows below). Take care with the IP ranges assigned for peering. If you are interested, I wrote an article on this.
It's also possible to use a Shared VPC, which is great! But there is less compatibility with several products (for example, the serverless VPC connector for Cloud Functions and App Engine isn't yet compatible with Shared VPC).
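To sketch the hub-and-spoke peering in code (Python, google-api-python-client); the project and network names are placeholders, and exporting/importing custom routes is what lets the spokes reach on-premise through the hub's VPN:

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")  # application default credentials

    # Hub side: peer with the spoke and export the VPN's dynamic routes.
    compute.networks().addPeering(
        project="vpn-hub-project", network="hub-vpc",
        body={"networkPeering": {
            "name": "hub-to-spoke1",
            "network": "projects/spoke-project-1/global/networks/spoke-vpc",
            "exchangeSubnetRoutes": True,
            "exportCustomRoutes": True,
        }},
    ).execute()

    # Spoke side: the mirror peering, importing the hub's custom routes.
    compute.networks().addPeering(
        project="spoke-project-1", network="spoke-vpc",
        body={"networkPeering": {
            "name": "spoke1-to-hub",
            "network": "projects/vpn-hub-project/global/networks/hub-vpc",
            "exchangeSubnetRoutes": True,
            "importCustomRoutes": True,
        }},
    ).execute()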

Migrate from legacy network in GCE

Long story short - I need to use networking between projects to have separate billing for them.
I'd like to reach all the VMs in different projects from a single point that I will use for provisioning systems (let's call it coordinator node).
It looks like VPC network peering is a perfect solution to this. But unfortunately one of the existing networks is "legacy". Here's what google docs state about legacy networks.
About legacy networks
Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks.
OK, naturally the question arises: how do you migrate out of a legacy network? The documentation does not address this topic. Is it not possible?
I have a bunch of VMs, and I'd be able to shut them down one by one:
shutdown
change something
restart
Unfortunately, it does not seem possible to change the network even when a VM is down.
EDIT:
it has been suggested to recreate the VMs keeping the same disks. I would still need a way to bridge the legacy network with the new VPC network to make the migration fluid. Any thoughts on how to do that using the GCE toolset?
One possible solution - for each VM in the legacy network:
Get VM parameters (API get method)
Delete VM without deleting PD (persistent disk)
Create VM in the new VPC network using parameters from step 1 (and existing persistent disk)
This way stop-change-start is not so different from delete-recreate-with-changes. It's possible to write a script to fully automate this (migration of a whole network). I wouldn't be surprised if someone already did that.
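A hedged sketch of those three steps with google-api-python-client; the project, zone and network names are placeholders, and a real migration needs more care (waiting for operations to finish, static IPs, tags, pruning output-only fields):

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")
    PROJECT, ZONE, REGION = "my-project", "us-central1-b", "us-central1"

    def migrate(name, new_network, new_subnet):
        # 1. Get the VM's current configuration.
        inst = compute.instances().get(
            project=PROJECT, zone=ZONE, instance=name).execute()

        # Make sure the disks survive the deletion.
        for disk in inst["disks"]:
            compute.instances().setDiskAutoDelete(
                project=PROJECT, zone=ZONE, instance=name,
                deviceName=disk["deviceName"], autoDelete=False).execute()

        # 2. Delete the VM; the persistent disks are kept.
        compute.instances().delete(
            project=PROJECT, zone=ZONE, instance=name).execute()
        # (Poll the returned operation until DONE before continuing.)

        # 3. Recreate the VM in the new VPC network with the same disks.
        inst["networkInterfaces"] = [{
            "network": f"projects/{PROJECT}/global/networks/{new_network}",
            "subnetwork": f"regions/{REGION}/subnetworks/{new_subnet}",
        }]
        for key in ("id", "selfLink", "creationTimestamp", "status"):
            inst.pop(key, None)  # strip output-only fields
        compute.instances().insert(
            project=PROJECT, zone=ZONE, body=inst).execute()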
UPDATE
The https://github.com/googleinterns/vm-network-migration tool automates the above process, and it also supports migration of a whole instance group, load balancer, etc. Check it out.
