I am working on a Service Bus topic trigger function, and we are on the Consumption plan. I have observed that the function goes to sleep after a period of inactivity and does not wake up when a new message arrives on the topic.
The function is deployed through a pipeline using ARM templates, and we make no changes after deployment.
The same issue comes up in multiple discussions:
https://github.com/Azure/Azure-Functions/issues/229
A probable solution is described here:
https://github.com/Azure/Azure-Functions/issues/210
But I am not sure what I, as the developer, need to do here. Can someone please help?
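In case it helps while this is open: a commonly cited cause of this behaviour on the Consumption plan is that the trigger metadata the scale controller uses to wake the app does not get synced when the app is deployed by external tooling such as ARM templates. If that is what the linked issues describe, syncing the triggers after deployment should fix it; a sketch using the Azure CLI, with the resource group and app name as placeholders:

    az resource invoke-action \
        --resource-group <resource-group> \
        --resource-type Microsoft.Web/sites \
        --name <function-app-name> \
        --action syncfunctiontriggers

Running this as the last step of the deployment pipeline means the scale controller is told about the Service Bus topic trigger on every deployment.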
I have a couple of IoT devices hosted on Thinger.IO, and as part of their code execution they try from time to time to invoke Thinger.IO endpoints. This is basically the platform's way of letting you connect to your business back-end services and handle IoT device events.
It basically looks something like this:
As you can see there, at step 3 we make a reference to Thinger.IO's input resources. This basically lets your back-end invoke functions on your IoT device. The issue that I am facing right now is related to step 2.
My endpoints just stopped getting invoked. When I try to test the endpoint using their embedded client:
I get an error saying:
I don't really understand that. The last time an endpoint was invoked was on the 27th of February (5 days ago) and since then I've had my device completely turned off.
SIDE NOTE: The problem is not with my back-end because we can successfully invoke the endpoint using Postman.
The free cloud (Community version) of Thinger.io has some rate limiters to throttle requests per user. However, it seems that you are not reaching those limits, so it should be a bug introduced in the latest release, 2.9.9, of the Community version. Will look into it. Thanks for reporting.
Edit: It should be fixed now in version 2.9.91. Consider using a private cloud instance if you are connecting a couple of devices ;)
I'm currently experimenting with Pact and have stumbled over a problem with the workflow that I can't find a satisfying solution to. Hopefully someone can help me. :-)
First of all, here is my current workflow for changes on the consumer side:
1. The consumer changes are pushed to GitHub in a feature branch.
2. Pact tests are run on the CI system and the resulting pact is uploaded to the Pact Broker with the tags [feature-branch-name] and verify_feature (currently I only use the latter one).
3. The consumer waits for the verification, using the can-i-deploy tool (see the sketch after this list).
4. The Pact Broker triggers the provider via webhook (trigger: contract_content_changed).
5. The provider runs pact-verify for the latest version tagged verify_feature and uploads the result.
6. The consumer checks the result; if the verification was successful, the branch can be merged.
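The can-i-deploy check in step 3 looks roughly like this; the pacticipant name and broker URL are placeholders, and the retry values are just what I picked:

    pact-broker can-i-deploy \
        --pacticipant my-consumer --version $GIT_COMMIT \
        --broker-base-url https://my-broker.example.com \
        --retry-while-unknown 12 --retry-interval 10

The retry flags make it poll until a verification result for that consumer version exists, which is what makes the consumer build wait.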
So far so good. The problem arises when the consumer introduces breaking changes:
After the consumer pushes the changes to GitHub, the process described above is executed, the provider verification fails, and that in turn causes the consumer build to fail, as expected.
Now the necessary changes are made on the provider side. The provider runs pact-verify against the consumer version of the testing stage and, if successful, the new provider version is merged and deployed.
Now the new consumer version should be mergeable as well. Alas, it is not, because this version has not been verified again. And when I restart the CI job, the pact is re-uploaded to the Pact Broker, but since the pact content does not change, no webhook is triggered. The consumer version is never verified.
So, what am I doing wrong?
You need the new WIP pacts feature. It's currently under development and could be available for beta testing in pact-js (and other languages that wrap the pact-ruby-standalone) within days if you want to try it out. If you're using pact-jvm, you'll have to wait a little longer, but we may be able to provide a workaround. I've written a blog post on WIP pacts but haven't published it yet because the feature is not ready for public release. You can find the post here: http://blog.pact.io/p/51906e22-ccce-486b-9993-c21794e557d4/ I'd love to get your feedback on it.
Hop on to slack.pact.io and @ me if you'd like to chat further about this.
Can someone explain how IIB (IBM Integration Bus) and Business Monitor are connected together?
I've defined monitoring events in my flow.
I then exported the monitoring information to create a model.
I used the model to create an EAR file, which I installed in Business Monitor as a model.
I used the mqsichangeflowmonitoring command to activate monitoring on all flows.
But when I run my flow, nothing happens: no events are recorded and nothing shows up in Business Space.
So I think some crucial link between the two systems is missing, but I can't figure out what it is.
I've already read something about creating topics, but that information wasn't clear to me.
If someone could shed some light on this it would be greatly appreciated.
You must configure the database for monitoring first:
Create the DataCaptureStore and DataCaptureSource.
Define the topic and subscription objects.
Define the backout queue.
For each terminal you want to record events from, define the data you want to export with the event.
For more details about the steps above, this link helps: View Monitoring events Via IBM Integration Bus Web UI
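A hedged sketch of what the DataCaptureStore and DataCaptureSource steps can look like using the mqsi commands; the broker name (IB9NODE), database (MONDB), schema, queues, and execution group here are assumptions you would replace with your own values:

    mqsicreateconfigurableservice IB9NODE -c DataCaptureStore -o monitorStore \
        -n dataSourceName,schema,queue,backoutQueue \
        -v MONDB,MONSCHEMA,SYSTEM.BROKER.DC.RECORD,SYSTEM.BROKER.DC.BACKOUT

    mqsicreateconfigurableservice IB9NODE -c DataCaptureSource -o monitorSource \
        -n dataCaptureStore,topic \
        -v monitorStore,'$SYS/Broker/IB9NODE/Monitoring/default/#'

The topic in the second command subscribes the capture source to the monitoring events that mqsichangeflowmonitoring switched on for the flows in that execution group.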
You may have already done this, but you did not mention it...
You need to install a message-driven bean (MDB) which subscribes to the topic and forwards the IIB events to Business Monitor as CBEs. See http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/ac60392_.htm
I'm trying to work out, using New Relic, why response times on some of my application servers have crept up over 1s. We're using WebApi 2.0 and MVC5.
As you can see below, the bulk of the time is spent under 'WebTransaction'. The throughput figures aren't particularly high. What could be causing this, and what steps can I take to reduce it?
Thanks
EDIT: I added transaction tracing to this function to get some further analysis; see below:
Over 1 second waiting in System.Web.HttpApplication.BeginRequest().
Any insight into this would be appreciated.
OK, I have now solved the issue.
Cause
One of my logging handlers, which syncs its data to cloud storage, was initializing every time it was instantiated, and that initialization involved a call to Azure Table Storage. As the handler was injected into the controller in question, every call to the API resulted in this instantiation.
It was a blocking call, so it added ~1s to every request. Once I configured this initialization to happen once per server life-cycle, the extra latency disappeared.
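A minimal sketch of the change, assuming Unity as the container; ILogHandler and CloudLogHandler are hypothetical stand-ins for my real logging types:

    using Microsoft.Practices.Unity;

    // Hypothetical types standing in for the real logging handler.
    public interface ILogHandler { void Write(string message); }

    public class CloudLogHandler : ILogHandler
    {
        public CloudLogHandler()
        {
            // Imagine the blocking (~1s) Azure Table Storage set-up call here.
        }

        public void Write(string message) { /* sync to cloud storage */ }
    }

    public static class ContainerConfig
    {
        public static IUnityContainer Build()
        {
            var container = new UnityContainer();

            // Before: the default transient lifetime meant a new CloudLogHandler,
            // and therefore the blocking set-up call, on every request.
            // container.RegisterType<ILogHandler, CloudLogHandler>();

            // After: a container-controlled (singleton) lifetime, so the handler
            // is built once per application life-cycle.
            container.RegisterType<ILogHandler, CloudLogHandler>(
                new ContainerControlledLifetimeManager());

            return container;
        }
    }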
Observations
As the blocking call was made at the time the controller was built (due to Unity resolving the dependencies at that point), New Relic reports this as
System.Web.HttpApplication.BeginRequest()
Although I would love to see this at a more granular level, as we can see from the transaction trace above, it was in fact the 7 calls to table storage (still not quite sure why it was 7) that led me down this path.
Nice tool; my New Relic subscription is starting to pay for itself.
It appears that the bulk of the time is being spent in Account.NewSession, but it is difficult to say without drilling down into your data. If you need some more insight into a block of code, you may want to consider adding Custom Instrumentation (see the sketch below).
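One way to do that with the .NET agent is the [Trace] attribute from the NewRelic.Api.Agent package; the class and method below are hypothetical stand-ins for whatever block you want timed as its own segment:

    using NewRelic.Api.Agent;

    public class AccountService
    {
        // [Trace] asks the New Relic .NET agent to record this method as its
        // own segment inside the surrounding web transaction, so it shows up
        // separately in transaction traces.
        [Trace]
        public void NewSession(string accountId)
        {
            // ... existing session-creation logic ...
        }
    }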
If you would like us to investigate this in more depth, please reach out to us at support.newrelic.com, where we will have your account information on hand.
I have a web application which fetches information from a web service. It works fine in our development environment.
Now I'm trying to get it to work in a customer's environment instead, where the web service is from a third party. The problem is that the first time the application tries to fetch information it cannot connect to the web service. When it tries again just seconds later it works fine. If I wait a couple of hours and try again, the problem occurs again.
I'm having a hard time believing this is a programming error, as our customer and the maker of the web service think. I believe it has to do with IIS or some security mechanism in the network. But I don't have much to go on and can't reproduce the error in our development environment.
Is it failing with a TimeoutException the first time you try to connect? If yes, this could be a result of the start-up time of the service.
I have a rule: "Always assume it's your fault until you can demonstrate otherwise". After over 20 years, I still stick to it.
So there are therefore two cases:
1. The code is broken
2. There is a specific issue with the live environment
Since you want to demonstrate that the problem is (2), you need to test calls to the service, from the live environment, using something other than your application. Exactly what will depend on the nature of the web service, but we've found SoapUI to be helpful.
The other thing that's not clear is whether you are making calls to the live service from your development environment - if, in testing, you're not communicating with the same instance of the service then that's an additional variable that will need to be considered (and I appreciate that you're not always given the option).
Lastly, @Krishna is right: there may be a spin-up issue with the remote service (hence my question about whether you're talking to the same service from your dev environment) and, horrible as it is, the solution in the first instance may simply be to find a way to allow for this!
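If you do end up allowing for it in code, a minimal sketch of a first-call retry; the attempt count and delays are illustrative assumptions:

    using System;
    using System.Net;
    using System.Threading;

    public static class WarmupClient
    {
        // Retries the call a few times to ride out a remote service that is
        // slow to respond after sitting idle.
        public static string FetchWithRetry(string url, int maxAttempts = 3)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    using (var client = new WebClient())
                    {
                        return client.DownloadString(url);
                    }
                }
                catch (WebException) when (attempt < maxAttempts)
                {
                    // Give the remote service time to spin up before retrying.
                    Thread.Sleep(TimeSpan.FromSeconds(5 * attempt));
                }
            }
        }
    }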
The error was in the third-party web service. The test stub we had been given to develop against was made in C# and returned only dummy answers, while the web service in the customer environment actually connected to a COM object. The first communication with the COM object after a longer idle period took almost a minute.
Good for me that the third-party developers left the source code on the customer's servers...