I'm integrating the Nokia HERE REST API into my application and am wondering why the usage statistics are always empty.
I'm currently on the "Consumer Free" plan and my app downloads roughly 250-750 tiles per day during development, but they aren't showing up there (in the statistics).
Is this simply too little data to show in the statistics?
https://developer.here.com/applications/{app_id}/usage
From our earlier reply
"The transactions are recorded for the production environment, kindly make sure the requests are being made to Production and not CIT environment. CIT urls have the domain *.cit.api.here.com"
I am using WordPress and WP Rocket is installed on my website.
My issue is that I am getting a 98 performance score, but my Core Web Vitals Assessment still shows Failed.
Any idea how to pass Core Web Vitals? LCP is showing 2.9 s. Do I need to work on this?
Core Web Vitals are measured over field data, not under the fixed, repeatable conditions that lab-based tools like Lighthouse use to analyse your website. See this article for a good discussion of the two.
Often Lighthouse is set too strictly, and people complain that it shows worse performance than the site's real users see, but it is just as easy to have the opposite, as you see here. PageSpeed Insights (PSI) tries to use settings that are broadly applicable to all sites to give you "insights" into how to improve your performance, but the results should be calibrated against the real-user data you see at the top of the audit.
In your case, I can see from your screenshots that your real-user data shows a high Time to First Byte (TTFB) of 1.9 seconds. That makes passing the 2.5-second LCP threshold quite tough, as it leaves only 0.6 seconds for everything else.
The question is why you are seeing such a long TTFB in the field when you don't see the same in your lab-based results, where the LCP time - including TTFB - is 1.1 seconds. There could be a number of reasons, and several potential options to resolve it:
Your users are further away from your data centre, whereas PSI is close by. Are you using a CDN?
Your users are predominantly on poorer network conditions than Lighthouse simulates. Do you just need to serve less to them in these cases? For example, hold back images for those on slower connections using the Effective Connection Type API and only load them on demand, so the LCP element is text by default (see the sketch after this list). Or don't use web fonts for these users. Or other forms of progressive enhancement.
Your page visits often jump through several redirect steps, all of which add to TTFB, but in PSI you enter the final URL directly and so miss this in the analysis. This can be out of your control if the referrer uses a link shortener (e.g. Twitter does).
Your page visits are often to uncached pages that take time to generate, whereas when using PSI you run the test a few times and so benefit from the page already being cached and served quickly. Can you optimise your back-end server code, or improve your caching?
Your pages are not eligible for the super-fast in-memory back/forward cache (bfcache) for repeat visits when navigating back and forth through the site, which is otherwise a free web-performance win (a quick way to check this is sketched at the end of this answer).
Your pages often suffer from contention when lots of people visit at once, and that wasn't apparent in the PSI tests.
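As a rough illustration of the Effective Connection Type suggestion above, here is a minimal sketch in plain JavaScript. The `deferred-img` class and `data-src` attribute are made-up names for this example, and the Network Information API (`navigator.connection`) isn't available in every browser, so the code simply loads images normally when it's missing:

```javascript
// Minimal sketch: keep heavy images out of the critical path on slow connections,
// so the LCP element is more likely to be text.
const connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
const isSlow = !!connection && ["slow-2g", "2g", "3g"].includes(connection.effectiveType);

document.querySelectorAll("img.deferred-img").forEach((img) => {
  if (isSlow) {
    img.loading = "lazy"; // defer offscreen images on slow connections
  }
  img.src = img.dataset.src; // load (eagerly or lazily) from the data-src placeholder
});
```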
Those are some of the more common reasons for a slow TTFB, but you probably understand your site, your infrastructure, and your users better, and so can pin down the main reason. Once you solve that, you should see your LCP times reduce and hopefully pass CWV.
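And as a postscript on the bfcache point in the list above: a quick way to see whether back/forward navigations are actually being restored from the bfcache is to log the `persisted` flag on the `pageshow` event (a small diagnostic sketch, not a fix in itself):

```javascript
// Logs whether a navigation was served from the back/forward cache (bfcache).
window.addEventListener("pageshow", (event) => {
  if (event.persisted) {
    console.log("Restored from bfcache - effectively instant for the user.");
  } else {
    console.log("Full page load - in Chrome, check DevTools > Application > Back/forward cache for blockers.");
  }
});
```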
We are currently monitoring a web API using the data in the Performance page of Application Insights, to give us the number of requests received per operation.
The architecture of our API solution uses APIM as the front end and an App Service as the back end. Both have App Insights enabled, yet we don't see a reasonable correlation between the number of requests to APIM and the requests to the App Service. This is most noticeable in only a couple of operations.
For example,
Apim-GetUsers operation has a count of 60,000 requests per day (APIM's AI instance)
APIM App Insights Performance Page
AS-GetUsers operation has a count of 3,000,000 requests per day (App Service's AI instance)
App Service App Insights Performance Page
Apim-GetUsers routes the request to AS-GetUsers and Apim-GetUsers is the only operation that can call AS-GetUsers.
Given this, I would expect to see ~60,000 requests on the App Service's AI performance page for that operation, instead we see that huge number.
I looked into this issue a little and found out about sampling, and that some App Insights features use the itemCount property to work out the exact number of requests. In summary:
Is my expectation correct, and if so what could cause this? Also, would disabling adaptive sampling and using a fixed sampling rate give me the expected result?
Is my expectation wrong, and if so, what is a good way to get the expected result? Should I not use the Performance page for that metric?
I haven't tried a whole lot yet, as I don't have access to play with the settings until I can propose a viable solution, but I looked into sampling and the itemCount property as mentioned above. APIM sampling is set to 100%.
I ran a query in Log Analytics against the requests table: when I simply counted the request rows, I got a number close to the one I see in APIM, but when I summed itemCount, as suggested by some MS docs, I got the huge number shown on the Performance page.
List of NuGet packages and version that you are using:
Microsoft.Extensions.Logging.ApplicationInsights 2.14.0
Microsoft.ApplicationInsights.AspNetCore 2.14.0
Runtime version (e.g. net461, net48, netcoreapp2.1, netcoreapp3.1, etc. You can find this information from the *.csproj file):
netcoreapp3.1
Hosting environment (e.g. Azure Web App, App Service on Linux, Windows, Ubuntu, etc.):
App Service on Windows
Edit 1: Picture of operation_Id and itemCount
So we are using some RUM metrics on our site now, and one error that has started cropping up is as follows:
XHR error GET https://l9.dca0.com/srv-id/?uid=a1729baf-1b2b-1c5c-b50a-bfb5d1bf04e8
Failed to load
Additionally, here's a screenshot of our RUM metrics showing a series of these errors:
I've touched base with everyone on my team, and we do not know what dca0.com is or why multiple different subdomains of it are being called. I did a fair amount of googling and was not able to find anything on that URL beyond some WHOIS lookups that yielded no useful info.
Does anyone know what this URL is and what it's used for? As best I can tell, this error only comes from devices running Apple operating systems, either iOS or macOS. Is this perhaps some kind of Mac functionality that I'm unfamiliar with?
Any help is appreciated, even just a thread to pull on, as I'm at a wall on this topic!
After some intensive debugging I found this is related to one of our marketing services: AdRoll. I would check whether you have the same or similar retargeting services.
I was able to confirm that this is an XHR request made after the site loads, which is why it is tricky to find via normal methods. RUM metrics do a nice job of capturing it.
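For anyone else trying to track down a mystery request like this, one rough approach (using the standard Resource Timing API; the dca0.com domain is just the one from this question) is to dump any resource entries matching the suspicious domain, including late XHRs:

```javascript
// Rough sketch: surface requests to a suspicious domain so you can see
// their initiator type (e.g. "xmlhttprequest") and when they fired.
const SUSPECT_DOMAIN = "dca0.com";

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name.includes(SUSPECT_DOMAIN)) {
      console.log(entry.initiatorType, entry.name, `${Math.round(entry.startTime)} ms`);
    }
  }
}).observe({ type: "resource", buffered: true });
```

This won't tell you which script fired the request; for that, the Initiator column in the browser's Network panel is the quicker route.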
From the findings, this looks to be an event tracker, likely a collector feeding an ad network. A WHOIS lookup for the domain returns a private registration, and tracing the IP leads back to various AWS endpoints in Oregon and California (it can be routed through many more). This is typical of this type of tracker.
We apply unit tests and integration tests, and we practise test-driven and behaviour-driven development.
We also monitor our applications and servers from the outside (with dedicated software in our network).
What is missing is some standard for live monitoring inside the application.
To give an example:
There should be a cron-like process inside the application that regularly checks the structural health of our data structures.
We need to monitor that the regular things users do have not endangered the health of the applications (there are some actions and inputs that we cannot prevent them from making).
My question is: what is the correct name for this, so I can research it further in the literature? I did a lot of searching, but I almost always find the xUnit and BDD / integration-test material that I already have.
So what is this called, and what is the standard in professional application development? I would like to know whether there is some standard structure like xUnit, or whether xUnit libraries could even be used for it. I could not even find appropriate tags for this question, so if you read this and know better ones, please add them and remove the ones that don't fit.
I need this for applications written in Python, Erlang or JavaScript; they are mostly server-side applications, web applications or daemons.
What we are already doing is exposing HTTP gateways from inside the applications that report some health data, and these are monitored by our Nagios infrastructure.
I have no problem rolling my own cron-like self-health-check scheme inside the applications, but I am interested in knowing whether there is a professional, standardized way of doing it.
I found this article, it already comes close: Link
It looks like you are asking about approaches to monitoring your application. In general, one can distinguish between active monitoring and passive monitoring.
In active monitoring, you create artificial user load that mimics real user behaviour and monitor your application based on its responses to this synthetic traffic (active = you actively cause traffic to your application). Imagine you have a web application that returns the weather forecast for a given city. For active monitoring, you deploy another application that calls your web application with a predefined request ("get weather for Seattle") every N hours. If your application does not respond within the specified time interval, you trigger an alert.
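As an illustration only, here is a minimal sketch of such an active probe in Node.js (18+, which provides a global `fetch`). The target URL, interval and timeout are made-up placeholders, and `sendAlert` stands in for whatever alerting channel you already have (e.g. the Nagios gateway mentioned in the question):

```javascript
// Minimal active-monitoring probe: call a known endpoint on a schedule
// and alert when it fails or responds too slowly.
const TARGET_URL = "https://example.com/weather?city=Seattle"; // placeholder
const INTERVAL_MS = 60 * 60 * 1000; // probe once an hour
const TIMEOUT_MS = 5000;            // maximum acceptable response time

function sendAlert(message) {
  // Placeholder: forward to Nagios, email, chat, etc.
  console.error("ALERT:", message);
}

async function probe() {
  const started = Date.now();
  try {
    const res = await fetch(TARGET_URL, { signal: AbortSignal.timeout(TIMEOUT_MS) });
    if (!res.ok) {
      sendAlert(`Unexpected status ${res.status} from ${TARGET_URL}`);
    }
  } catch (err) {
    sendAlert(`No response from ${TARGET_URL} within ${TIMEOUT_MS} ms: ${err.message}`);
  }
  console.log(`Probe finished in ${Date.now() - started} ms`);
}

setInterval(probe, INTERVAL_MS);
probe(); // also run once at startup
```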
In passive monitoring, you observe real user behaviour over time. You can parse logs to count (un)successful requests and responses, or inject code into your application that updates a value in a database whenever a successful or unsuccessful response is returned (passive = you only watch real users' traffic). Then you can graph those values and check for significant deviations in user traffic. For example, if at the same time of day one week ago your application served 1,000 requests and today you only get 200, that may indicate a problem with your software.
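And a minimal sketch of the passive side, assuming a Node.js/Express application (the persistence step is left as a comment; in practice you would write the counters to a database or metrics system so a separate job can compare them against previous weeks):

```javascript
// Minimal passive-monitoring sketch: count successful and failed responses
// per hour so deviations from the usual traffic pattern can be detected.
const counters = new Map(); // hour bucket -> { ok, failed }

function passiveMonitor(req, res, next) {
  res.on("finish", () => {
    const bucket = new Date().toISOString().slice(0, 13); // e.g. "2024-01-15T14"
    const entry = counters.get(bucket) || { ok: 0, failed: 0 };
    if (res.statusCode < 500) entry.ok += 1;
    else entry.failed += 1;
    counters.set(bucket, entry);
    // Persist `counters` somewhere durable (database, StatsD, etc.) for alerting.
  });
  next();
}

// Hypothetical usage in an Express app: app.use(passiveMonitor);
```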
I had a meeting with a local newspaper company's owner. They are planning a newly designed website. Their current website is static and doesn't use any kind of database, but their weekly pageview figure is around 317k, and that figure will surely increase in the future.
The question is: if I build a WordPress site for them, will it run smoothly with the new functionality (news, galleries maybe)? It is not necessary to use lots of plugins. Can their current server support a WordPress installation without any upgrade?
Or should I consider building the site in plain PHP instead?
Yes - so long as the machinery for it is adequate and you configure it properly.
If the company uses a CDN (like Akamai), ask them if this site can piggyback on their account, then make them do it anyway when they throw up a political barrier. Then stop sweating it, turn keepalive on, and ignore everything below this line. Otherwise:
If this is on a VPS, make sure it has guaranteed memory and I/O resources - otherwise host it on a physical machine. If you're paranoid, something with a 10k RPM drive and 2-3 GB of RAM will do (memory so Apache and MySQL have breathing room, and the drive to absorb unexpected swap-file use).
Make sure the 317k/week figure is accurate:
If it comes from GA/Omniture/another vendor tracking suite, increase the figure by about 33-50% to account for robots that they can't track.
If the number comes from in-house stats/httpd logs, assume it's 10-20% less (since robots don't typically hit you up for stylesheets and images).
If it comes from combined reports by an analyst whose job it is to report on their own traffic performance, scratch your head and flip a coin.
Apache: news sites in America get lunchtime and workday wind-down traffic bursts at around 11 am and 4 pm, so you may want to turn keepalive off (having it on will improve things during slow traffic periods, but during burst times the machine can spin into an unrecoverable state).
PHP: make sure some kind of opcode caching is enabled on the hosting machine (either APC or eAccelerator). With opcode caching, the memory footprint drops significantly and the machine doesn't have to borrow as much from the swap file on the hard drive.
WP: make sure you use WP 3.4, as ticket http://core.trac.wordpress.org/ticket/10964 was closed in favor of the fix in http://core.trac.wordpress.org/ticket/18536. Both longstanding issues concern query performance on large-volume sites, but the improvements help everywhere else too.
Secondly, make sure to use something like the WP Super Cache plugin and configure it appropriately. If the volume of content on this site is going to stay small, you shouldn't have to take any special precautions - otherwise you may want to alter the plugin/rules so that older content is permanently archived into static files. There is no reason why two-year-old content should be constantly re-spidered at full resource cost.
Robots.txt: prepare and properly register a dynamic sitemap with Google/Bing/etc. If you expect posts to be unnecessarily peppered with a bunch of tags and categories by people who don't understand what they actually do, you may want to Disallow /page/*, /category/* and /tag/* (see the example below). Otherwise, when spider robots swarm the site, each post will be hit once more for every tag/category it carries. And then some.
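For example, a robots.txt along those lines might look like the following (the sitemap URL is a placeholder; adjust the paths to match the site's actual permalink structure):

```
User-agent: *
Disallow: /page/
Disallow: /category/
Disallow: /tag/

Sitemap: https://example.com/sitemap.xml
```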
For several years The Baltimore Sun hosted their reader reward, sports and editorial database projects directly off a single colocated machine. The combined traffic volume was many times larger than what you mention, but it was adequately met.
Here's a video of httpd status w/keepalive on during a slow hour, at about 30 req./sec: http://www.youtube.com/watch?v=NAHz4GRY0WM#t=09
I would not rule out WordPress for this project based only on a weekly pageview figure of under a million. I have hosted WordPress sites that receive much, much more traffic and are still very functional. Whether WordPress is the best solution for this project based on your other criteria, though, is completely up to you.
Best of luck and happy coding!
WP is capable of handling huge traffic. See this list of companies using WordPress VIP services:
Time, Dow Jones, NBC Sports, CNN and many more.
Visit the WordPress VIP site: http://vip.wordpress.com/clients/