Problems on our organization ID (not happening with another organization) - Apigee

We are not able to call any of our APIs on our organization; we receive classification_failure on every API call, even the test ones that call weather.yahoo.com etc.
Each time we open the API proxy, the following message is displayed: (No server entry found with ID 1c53328c-57b6-4f69-bdf0-ce4b39e64ef3).
We had an increase in traffic on our APIs. Is it possible that Apigee closed the API because they monitored a high volume of traffic?
I created another organization and it is all working fine.

I passed this along to our support team. It may be a hardware problem. (I just saw this problem with a colleague and he got it resolved; check yours to see if it was related.)
You can also email help@apigee.com with details about your org and the traffic spike.

Related

Application insights | Sometimes End-to-end transaction details do not show all telemetry

I have a .NET Core app deployed on Azure with Application Insights enabled.
Sometimes the Azure Application Insights end-to-end transaction details do not display all telemetry.
Here it only logs the error and not the request, or maybe the request is logged but the two do not display together (difficult to find out because many people use it).
It should be like:
Sometimes the request is logged but with no error log.
What could be the reason for this happening? Do I need to look into an Application Insights specific setup/feature?
Edit:
As suggested here, I tried disabling the sampling feature, but it still does not work. There is an open question about this as well.
This usually happens due to sampling. By default, adaptive sampling is enabled in ApplicationInsights.config, which basically means that only a certain percentage of each telemetry item type (Event, Request, Dependency, Exception, etc.) is sent to Application Insights. In your example, one part of the end-to-end transaction probably got sent to the server while another part got sampled out. If you want, you can turn off sampling for specific types, or completely remove the AdaptiveSamplingTelemetryProcessor from the config, which disables sampling entirely. Bear in mind that this leads to higher ingestion traffic and higher costs.
You can also configure sampling in the code itself, if you prefer.
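As a minimal sketch of that code-based approach for a .NET Core app (assuming the Microsoft.ApplicationInsights.AspNetCore package; this is illustrative, not the asker's actual setup), adaptive sampling can be switched off when the telemetry service is registered:

    using Microsoft.ApplicationInsights.AspNetCore.Extensions;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // Disable adaptive sampling so every telemetry item is ingested;
            // note this raises ingestion volume and therefore cost.
            services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions
            {
                EnableAdaptiveSampling = false
            });
        }
    }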
The Application Insights sampling documentation gives a good overview of how sampling works and how it can be configured.
This may be related to:
When using SDK 2.x, you have to track all events and send the telemetry to Application Insights yourself.
When using auto-instrumentation with the 3.x agent, the agent automatically collects the traffic, logs, etc., and you have to pay attention to the sampling file applicationinsights.json, where you can filter the events (see the sketch after this list).
If you are using Java, these are the accepted logging libraries:
-java.util.logging
-Log4j, which includes MDC properties
-SLF4J/Logback, which includes MDC properties
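For the 3.x agent route mentioned above, here is a hedged sketch of the relevant part of applicationinsights.json (the file sits next to the agent jar; the value is illustrative):

    {
      "sampling": {
        "percentage": 100
      }
    }

A percentage of 100 keeps every trace; lower values send only that share of telemetry, which is exactly how one half of an end-to-end transaction can go missing.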

Thinger.IO endpoints return "rate limit reached" without any further explanation

I have a couple of IoT devices hosted on Thinger.IO, and as part of their code execution they try to invoke Thinger.IO endpoints from time to time. This is basically their way of letting you connect with your business back-end services and handle IoT device events.
It basically looks something like this: (diagram omitted; step 3 is where we reference Thinger.IO's input resources). Input resources basically let your back-end invoke functions on your IoT device. The issue I am facing right now is related to step 2.
My endpoints just stopped getting invoked. When I try to test the endpoint using their embedded client, I get an error saying "rate limit reached" (as in the title) without any further explanation.
I don't really understand that. The last time an endpoint was invoked was on the 27th of February (5 days ago), and since then I've had my device completely turned off.
SIDE NOTE: The problem is not with my back-end because we can successfully invoke the endpoint using Postman.
The free cloud (Community Version) of Thinger.io has some rate limiters to throttle requests per user. However, it seems that you are not reaching those limits, so it should be a bug introduced in the latest release, 2.9.9, of the Community Version. Will look into it. Thanks for reporting.
Edit: It should be fixed now in version 2.9.91. Consider using a private cloud instance if you are connecting a couple of devices ;)

Secured WCF service timing out on 2nd invocation of client channel

We have a secured & authenticated WCF service which cannot use service references. Thus, we provide the interface for the contracts and open a client channel manually.
We have found that as long as we open it once, everything works fine. We can call several methods several times. However, if the channel is closed or just set to a new instance, Login() (which happens to be the required first step prior to using the service) times out.
To make matters even more mysterious, this only happens on our production server. If I run the same project locally, I am able to log in as many times as I want. Consuming the methods from a web browser (even on a code-behind ASPX page) does not have this problem, even against the production server. ONLY when a .NET client tries to open a client channel against the production server do we have this problem.
We are not even sure where to start looking. Any advice would be greatly appreciated.
UPDATE:
As per @Rene's suggestion, we turned on logging on both sides. In the client's log there is a record of an error, which is basically the same timeout error we already got via the exception. Nothing meaningful. In the server's logs there are records of service methods being invoked successfully even after the 2nd Login(), and from the server's POV the requests are served.
Additionally, I discovered that I could not reproduce this issue on my machine using the same test project; it reproduces on my developer's machine. I verified that we were on the same versions of the .NET Framework and Visual Studio. It surely has to be a client-side problem. What could it be?
In case anyone else is looking for the answer, we finally found it -- the issue was due to the need to set System.Net.ServicePointManager.DefaultConnectionLimit on the client's side to some higher value. The default value is 2, but in reality this allows only one proxy to be created and be usable. Setting it to 3 would allow 2 proxies to be created and used.
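As a minimal sketch of the fix (the contract, binding, and address below are hypothetical stand-ins, not our real service), the limit must be raised before any channel is opened:

    using System;
    using System.Net;
    using System.ServiceModel;

    [ServiceContract]
    public interface IMyService              // stand-in for the real contract interface
    {
        [OperationContract]
        string Login();
    }

    class Program
    {
        static void Main()
        {
            // Raise the per-endpoint connection limit before opening any channels.
            // With the default of 2, only one proxy was effectively usable,
            // so Login() on a fresh channel timed out.
            ServicePointManager.DefaultConnectionLimit = 10;

            var factory = new ChannelFactory<IMyService>(
                new WSHttpBinding(SecurityMode.TransportWithMessageCredential),
                new EndpointAddress("https://example.com/MyService.svc"));

            IMyService proxy = factory.CreateChannel();
            proxy.Login();                   // no longer times out on a recreated channel
            ((IClientChannel)proxy).Close();
        }
    }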

google-analytics measurement endpoint is ignoring the x-forwarded-for header! any solution?

Recently, I began playing with the GA Measurement Protocol; it has huge potential for custom-made apps, especially for event tracking in webapps.
The problem I'm facing is:
GA is always using the requester's IP as the source IP!
even though the GA docs say:
"IP Address – Is implicitly sent in the HTTP request and is used to
compute all the geo / network dimensions in Google Analytics."
That's a big problem! Why?
As in my case:
I'm proxying different tracking calls through one backend hosted on Heroku.
And funny enough, all tracked calls appear to be from the US (Heroku) in that case....
There should be a better solution!
Has anyone dealt with a similar problem, and is there a suggested solution to tackle it?
This isn't supported yet (as of Sept 2013). There is a Google Group thread tracking this feature request. A Google team member has said they are considering it.
https://groups.google.com/forum/#!topic/google-analytics-measurement-protocol/8TAp7_I1uTk
Update on Feb 24, 2014:
This feature has been added to Google Universal Analytics. The parameter name is 'uip'. It should be a valid IP address, and it will always be anonymized just as though aip (anonymize IP) had been used.
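As a hedged sketch of forwarding the original client IP via uip from a proxy like the one described above (the tracking ID, client ID, and IP are placeholders), a Measurement Protocol hit is just a form-encoded POST:

    using System.Collections.Generic;
    using System.Net.Http;

    // Placeholders: substitute your real property ID, a stable client ID,
    // and the end user's IP taken from your proxy's X-Forwarded-For header.
    var payload = new Dictionary<string, string>
    {
        ["v"]   = "1",              // Measurement Protocol version
        ["tid"] = "UA-XXXXX-Y",     // tracking/property ID (placeholder)
        ["cid"] = "555",            // anonymous client ID (placeholder)
        ["t"]   = "pageview",       // hit type
        ["dp"]  = "/home",          // document path
        ["uip"] = "203.0.113.7"     // end user's IP; overrides the requester's IP for geo
    };

    using var http = new HttpClient();
    await http.PostAsync("https://www.google-analytics.com/collect",
                         new FormUrlEncodedContent(payload));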
I'm working on the same issue. Here is a possible solution I've come up with to track geolocation: when making the call to GA, add the requester's IP to a custom variable [1]. You can then export all your GA data, translate that IP (from your custom variable) to a location (MaxMind's [2] databases seem good, and are under CC), and display it yourself.
1: https://developers.google.com/analytics/devguides/collection/gajs/gaTrackingCustomVariables
2: http://dev.maxmind.com/geoip/geolite#IP_Geolocation-1

Duplicate Email notifications on Mercury Pressflow (drupal)

We're running into an issue with duplicate notifications being sent to our users by the Notifications module on our Mercury Pressflow implementation. The duplicate messages are identical save one thing: the [node-url] token is being replaced with 'default' in one of the messages. All the other tokens in the message are replaced correctly.
The duplicate emails do not happen consistently, maybe on 10-15% of the notifications sent out; however, a duplicate pair always consists of one message with the proper URL and one with the 'default' URL.
The only major modification we've made to Mercury was spinning MySQL off to its own server and adding replication. We currently have reads set up to round-robin between the 2 MySQL instances.
I have done the following troubleshooting based on similar issues I found:
made sure the cron job is calling the correct URL
replaced all configurations named ‘default’ with the site name (Memcached, Varnish, and Apache configs)
disabled caching in an init_hook in the notifications module
Has anyone out there experienced anything similar with Notifications and Mercury? Any and all advice is greatly appreciated.
The "Mercury" stack is external to Drupal and doesn't affect how email is queued or sent. Something within your messaging/notifications configuration or use is causing multiple messages to be created.
If you have any custom code here, I would look at that and try to trace the token variance.
