I need to call one service from the other, so I want to be sure it's ready before I use it. In my case, both microservices are Foxx services running in the same database.
In the startup script, I would like to wait until the dependency is operational and abort if it isn't ready after a specified time. So this leads to the questions:
How do I detect whether another microservice is started?
What is the best approach to wait for another microservice to start?
If it doesn't start, what is the right way to abort starting the current microservice?
Thanks.
Looks like it's actually easy. I set up the "provides" and "dependencies" in the respective Foxx manifests. This prevents the dependent service from being mounted before the "provider" is ready to accept requests.
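For anyone looking for the shape of this, here is a minimal sketch of the two manifest.json files based on the documented Foxx manifest format; the service names, the "other-api" alias and the version numbers are placeholders, not taken from my actual services. The provider declares what it provides:

    {
      "name": "provider-service",
      "version": "1.0.0",
      "provides": { "other-api": "1.0.0" }
    }

and the dependent service declares a required dependency on it:

    {
      "name": "dependent-service",
      "version": "1.0.0",
      "dependencies": {
        "otherApi": { "name": "other-api", "version": "^1.0.0", "required": true }
      }
    }

With "required": true, ArangoDB won't serve the dependent service until the dependency is assigned and the provider is mounted.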
I have a .NET Core application that consists of some background tasks (hosted services) and Web APIs (which control and get the statuses of those background tasks). Other applications (e.g. clients) communicate with this service through these Web API endpoints. We want this service to be highly available, i.e. if a service crashes, another instance should start doing the work automatically. Also, the client applications should be able to switch to the next service automatically (clients should call the APIs of the new instance instead of the old one).
The other important requirement is that the task (computation) this service performs in the background can't be shared between two instances. We have to make sure only one instance does this task at a given time.
What I have done so far: I run two instances of the same service and use a SQL Server-based distributed locking mechanism (SqlDistributedLock) to acquire a lock. If a service acquires the lock, it goes and does the operation while the other node waits to acquire the lock. If one service crashes, the next node can acquire the lock. On the client side, I used a Polly-based retry mechanism to switch the calling URL to the next node to find the working node.
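For context, the pattern described above looks roughly like this, assuming SqlDistributedLock comes from the DistributedLock (Medallion.Threading) library; the lock name and connection string are placeholders:

    using System;
    using Medallion.Threading.Sql;

    class Worker
    {
        static void Main()
        {
            // Both instances contend for the same named lock in SQL Server.
            var distributedLock = new SqlDistributedLock(
                "BackgroundTaskLeader",                                // placeholder lock name
                "Server=.;Database=Locks;Trusted_Connection=True");    // placeholder connection

            // Blocks until this instance holds the lock; the other instance
            // waits here until the holder releases it or dies.
            using (distributedLock.Acquire())
            {
                DoBackgroundWork(); // placeholder for the actual task
            }
        }

        static void DoBackgroundWork() { }
    }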
But this design has an issue: if the node that acquired the lock loses connectivity to the SQL server, the second service manages to acquire the lock and starts doing the work while the first service is still in the middle of doing the same.
I think I need some sort of leader election (it seems I have done it wrongly). Can anyone help me with a better solution for this kind of problem?
This problem is not specific to .NET or any other framework, so please make your question more general to make it more accessible. Generally, the solution to this kind of problem lies in the domain of Enterprise Integration Patterns, so consult those references, as the status quo may change.
At first sight, and based on my own experience developing distributed systems, I suggest two solutions:
use a load balancer or gateway to distribute requests between your service instances.
use a shared message queue broker to put requests in and let each service instance dequeue a request for processing.
Either is fine, and I use both in my own designs.
I have a process which I will be invoking manually for the first time in the prod environment. The thing is, the process stops when the server is down or when the server is stopped. In this scenario, I will not be able to invoke the process manually every time, since it will be in a production environment and that is not feasible either. So I need to know: how can I invoke a process automatically once the server is up?
I've heard that one way is to write a custom component to start the process using a LiveCycle implementation class.
Please let me know how to go about it.
Any help regarding this is much appreciated!
Thanks
There are at least two ways you can do this.
First is the custom component route. You invoke the process on component life-cycle start to ensure that the invocation happens every time your component is deployed.
Second is the servlet route. You invoke the process on the initialisation of the servlet, making sure that the server has started.
The servlet implementation is a better fit for this purpose; the only downside is that you need to package and deploy it separately, as it won't be part of the LCAs.
You can find code samples on how to invoke LC processes using the APIs in the Adobe docs. You can use the Java API, the WS API or REST, whichever you are more comfortable with.
http://help.adobe.com/en_US/livecycle/9.0/programLC/help/index.htm
Team:
I need to invoke a WF activity (XAML) from a WF service (XAMLX) asynchronously. I am already referencing the Microsoft.Activities.Extensions framework and I'm running on the Platform Update 1 for the state machine -- so if the solution is already in one of those libraries I'm ready!
Now, I need to invoke that activity (XAML) asynchronously -- but it has an output parameter that needs to set a variable in the service (XAMLX). Can somebody please provide me a solution to this?
Thanks!
* UPDATE *
Now I can post pictures, *I think*, because I have enough reputation! Let me put a couple out here and try to better explain my problem. The first picture is the WF Service that has the two entry points for the workflow -- the second is the workflow itself.
This workflow is an orchestration mechanism that constantly restarts itself, and has some failover mechanisms (e.g. exit on error threshold and soft exit) so that we can manage our queue of durable transactions using WF!
Now, we had this workflow working great when it was all one WF Service, because we could call the service, get a response back and send the value of that response back into another entry point in a trigger to issue a soft exit. However, a new requirement has arisen asking us to make the workflow itself a WF activity in another project and have the Receive/Send-Reply sequences in the WF Service Application project.
However, we need to be able to start up this workflow and forget about it -- then let it know somehow that a soft exit is necessary later on down the road -- but since WF executes on a single thread this has become a bit challenging at best.
Strictly speaking, in XAML activities, Parallel and ParallelForEach are how you achieve asynchrony.
The workflow scheduler only uses a single thread (much like a UI), so any activity that is running will typically run on that same thread, unless it implements AsyncCodeActivity, in which case you are simply handing the scheduler thread back to the runtime while waiting for a callback from whatever async code your AsyncCodeActivity implementation is calling.
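For illustration, a minimal sketch of the AsyncCodeActivity pattern just described; the workload itself is a placeholder:

    using System;
    using System.Activities;

    // Sketch: BeginExecute hands the scheduler thread back to the runtime;
    // EndExecute runs when the async work calls back.
    public sealed class DelayedWork : AsyncCodeActivity<string>
    {
        protected override IAsyncResult BeginExecute(
            AsyncCodeActivityContext context, AsyncCallback callback, object state)
        {
            Func<string> work = DoWork;      // placeholder workload, runs off the workflow thread
            context.UserState = work;
            return work.BeginInvoke(callback, state);
        }

        protected override string EndExecute(
            AsyncCodeActivityContext context, IAsyncResult result)
        {
            return ((Func<string>)context.UserState).EndInvoke(result);
        }

        private static string DoWork() { return "done"; }
    }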
Therefore, are you sure this is what you want to achieve? Do you mean you want to run it after you have sent your initial response? In that case, place your activity after the Send Reply.
Please provide more info if these suggestions don't answer your question.
Update:
The original requirement posed (separating the implementation from the service Receive/Send activities) may actually be solved by hosting the target activity as a service. See the following link:
http://blog.petegoo.com/index.php/2011/09/02/building-an-enterprise-workflow-system-with-wf4/
Kind of an open question that I run into once in a while: if you have an EJB stateful or stateless bean, or possibly a direct servlet process, that may with the wrong parameters start running long on a production system, how could you effectively add a manual 'kill switch' for an administrator to kill that specific thread/process?
You can't, or at least you shouldn't, interfere with application server threads directly. So a "kill switch" looks decidedly inappropriate to me in a Java EE environment.
I do, however, understand the problem you have, but would rather suggest taking an asynchronous approach where you split your job into smaller work units.
I did that using EJB Timers and was happy with the result: an initial timer is created for the first work unit. When the app server executes the timer, it then registers a second one that corresponds to the 2nd work unit, etc. Information can be passed from one work unit to the next because EJB Timers support the storage of custom information. Also, timer execution and registration are transactional, which works well with a database. You can even shut down and restart the application server with this approach. Before each work unit ran, we checked in the database whether the job had been cancelled in the meantime.
Problem: Some 300 candidates take a test using Flex. A test consists of some 100 exercises. After each exercise, a .NET service is called to store the result. When a candidate finishes a test, all the data of his/her test is denormalized by ASP.NET. This denormalization is CPU-intensive and can take 5 to 10 seconds. Most of the time, some of the candidates finish their test earlier than the rest, but some 200 of them wait until their time is up. At that moment, 200 candidates finish their test and 200 sessions are denormalized at the same time. At this point, server load (CPU) gets too high and causes calls to the web server to fail. Instead of all these sessions being denormalized concurrently, I would like to add them to a queue using MSMQ.
Question:
How do you process the Queue?
Do you start a separate thread in the Application_Start of global.asax that listens to the queue? If there are messages, they are dealt with one at a time.
Is it necessary to do this in a separate thread? What if in the global.asax you just call a singleton, for instance, that starts listening to the queue? In what thread will this singleton run? (What's the thread that calls global.asax?)
What are best practices to implement this? Links? Resources? Tutorials? Examples?
I don't like the idea, but could you put an exe on the root of your website, an exe that starts a process listening to the queue...
If you get a message out of the queue, do you remove it when you pull it out or do you remove it if denormalization for this session was successful? If you remove it when you pull it out and something goes wrong...
I could also create my own queue in memory, but restarting the web server would empty the queue, and a lot of sessions would end up not being denormalized, so I guess this is really a bad idea.
Is MSMQ a good choice or are there better alternatives?
You could consider using a WCF service with MSMQ transport. I used this approach in an application that calculates commissions:
User completes an ASP.NET wizard configuring the calculation parameters
The calculation job is sent to the WCF service using MSMQ transport
The service transaction completes as soon as the job has entered MSMQ
A new transaction scope is created for processing the job instances
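To make the shape concrete, here is a minimal sketch of such a one-way MSMQ contract; the contract name, payload and queue address are placeholders, not from the original application:

    using System.ServiceModel;

    [ServiceContract]
    public interface ICalculationJobService
    {
        // MSMQ transport requires one-way operations; the client's
        // transaction completes once the message has entered the queue.
        [OperationContract(IsOneWay = true)]
        void SubmitJob(string jobXml); // placeholder payload
    }

    // Client side (sketch): a NetMsmqBinding endpoint on a private queue.
    // var binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
    // var address = new EndpointAddress("net.msmq://localhost/private/calcjobs");
    // var channel = new ChannelFactory<ICalculationJobService>(binding, address)
    //                   .CreateChannel();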
One drawback is that the transaction will require MSDTC, which adds some overhead when targeting MS SQL Server, and even more when dealing with Oracle.
IDesign provides a lot of useful samples and best practices on WCF queueing.
Personally, I use a service bus for scenarios like that. I know this sounds like overkill, but I think the .NET service buses are so good that they require the least amount of code written by you, because it's not easy to create a good scheduler for background processes without disturbing the threads of the application pool the web app is running in. NServiceBus and MassTransit are both good and well-documented service buses for your scenario. With a service bus, you have a framework that writes to MSMQ and listens to MSMQ in several apps connected by the message queue. The bus makes it easy for you to create a separate app that runs as a background service and is connected to your web app by the message queue. When you use Topshelf (included with NServiceBus and MassTransit), an installer/uninstaller for the separate apps is automatically generated by the service bus.
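As a rough sketch of what the handler side looks like with one of these buses, here is a consumer in the style of recent MassTransit versions; the message type and the denormalization call are placeholders. The web app would publish one message per finished session, and this class would run in the separate background app:

    using System.Threading.Tasks;
    using MassTransit;

    // Hypothetical message published by the web app when a candidate finishes.
    public class SessionCompleted
    {
        public int SessionId { get; set; }
    }

    // Consumer hosted in the background app; the bus pulls messages off the
    // queue and invokes Consume once per message.
    public class SessionCompletedConsumer : IConsumer<SessionCompleted>
    {
        public Task Consume(ConsumeContext<SessionCompleted> context)
        {
            Denormalize(context.Message.SessionId); // placeholder for the heavy work
            return Task.CompletedTask;
        }

        private static void Denormalize(int sessionId) { }
    }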
Question: Why don't you like the idea of having a separate exe?
How do you process the Queue?
Do you start a separate thread in the Application_Start of global.asax that listens to the queue? If there are messages, they are dealt with one at a time.
Is it necessary to do this in a separate thread? What if in the global.asax you just call a singleton, for instance, that starts listening to the queue? In what thread will this singleton run? (What's the thread that calls global.asax?)
[skip]
I don't like the idea, but could you put an exe on the root of your website, an exe that starts a process listening to the queue...
Normally another program processes the queue - not ASP.NET. Either a Windows service or an executable that you run under a scheduler (and there's no reason to put it in the root of your website).
If you get a message out of the queue, do you remove it when you pull it out or do you remove it if denormalization for this session was successful? If you remove it when you pull it out and something goes wrong...
For critical work, you perform a transactional read. Items aren't removed from the queue until you commit your read operation, but while the transaction is open, no other process can get the item.
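A minimal sketch of such a transactional read with System.Messaging; the queue path and the processing call are placeholders, and the queue must have been created as a transactional queue:

    using System;
    using System.Messaging;

    class QueueWorker
    {
        static void ProcessOne()
        {
            using (var queue = new MessageQueue(@".\private$\denormalize")) // placeholder path
            using (var tx = new MessageQueueTransaction())
            {
                tx.Begin();
                try
                {
                    Message msg = queue.Receive(TimeSpan.FromSeconds(5), tx);
                    Denormalize(msg);  // placeholder processing
                    tx.Commit();       // only now is the message removed
                }
                catch
                {
                    tx.Abort();        // message returns to the queue
                    throw;
                }
            }
        }

        static void Denormalize(Message msg) { }
    }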
What are best practices to implement this? Links? Resources? Tutorials? Examples?
This tutorial is a good introduction, and John Breakwell's blog is excellent and offers a lot of good links (including the ones in his easy-to-find sidebar "MSMQ Documentation").