Is that possible? I know I can extract some timing data using the Navigation Timing API, but that's not enough. I also know that I can use a proxy like BrowserMob Proxy, but I was wondering whether I could do the same with client-side code (JS) alone.
Perhaps something using the Resource Timing API might be what you are looking for? I am looking for an answer to a similar issue, and I found this article, which has been useful:
http://calendar.perfplanet.com/2012/an-introduction-to-the-resource-timing-api/
You can see the code that implements the API here:
https://github.com/andydavies/waterfall/blob/master/waterfall.js
I hope this is helpful.
I wish to make a "simple" HTTPS request from my particle photon - I don't care about the response, it's just a trigger.
I'm not too good with the Arduino language, but I found this library, which I included in my code (via the Particle Build Platform).
A link to a tutorial or docs using this lib would be highly appreciated, since my googling didn't give me anything I could figure out how to use.
I found the answer in this tutorial (and would like to share it): https://docs.particle.io/tutorials/projects/maker-kit/#tutorial-2-next-bus-alert
This is the way to do it:
1. Create a webhook via https://console.particle.io/
2. Configure the webhook's parameters (event name, target URL, request type, etc.)
3. Trigger the webhook from your code in https://build.particle.io
...
// HTTPS REQUEST
// Publishing this event triggers the webhook configured in the console,
// which then performs the actual HTTPS request.
String data = String(1); // payload; just a placeholder value here
Particle.publish("RequestName", data, PRIVATE); // "RequestName" must match the webhook's event name
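For reference, the webhook created in the console corresponds to a JSON definition roughly like this (the event name must match the one published from the device; the target URL is a placeholder):

```json
{
  "event": "RequestName",
  "url": "https://example.com/trigger",
  "requestType": "GET",
  "mydevices": true
}
```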
This is something I have never seen before, and I do not know if my Google search skills are lacking, but I cannot find anything saying it is an actual way of specifying the HTTP verb for a request.
To give some context on where I have encountered this: I am working on a project to create a very basic LRS to capture Statements from an Articulate Story.
I had Fiddler running to monitor the requests and noticed the Articulate Story tries to POST to a specified endpoint like so: 'endpoint/statements?method=PUT'
Anybody know what is up with this?
Upon further reading of the xAPI specification and the Articulate documentation, this is something Articulate does. See this link: [Implementing Tin Can API to Support Articulate Content][1]
[1]: https://articulate.com/de-DE/support/article/Implementing-Tin-Can-API-to-Support-Articulate-Content
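This pattern is the xAPI spec's "alternate request syntax": clients that cannot issue a PUT directly tunnel everything through a POST, carrying the intended verb in the query string and the real headers and body as form fields. A minimal sketch of what that looks like (the endpoint URL and statement contents are illustrative, not from your project):

```javascript
// Sketch of the xAPI "alternate request syntax": the intended verb is
// carried in the query string, and everything is tunneled through a POST.
function buildTunneledRequest(endpoint, verb, statement) {
  const url = `${endpoint}/statements?method=${verb}`;
  // Headers and body travel as form-encoded fields in the POST body.
  const body = new URLSearchParams({
    'Content-Type': 'application/json',
    'X-Experience-API-Version': '1.0.3',
    content: JSON.stringify(statement),
  });
  return { url, options: { method: 'POST', body } };
}

const req = buildTunneledRequest('https://lrs.example.com/xapi', 'PUT', {
  actor: { mbox: 'mailto:learner@example.com' },
  verb: { id: 'http://adlnet.gov/expapi/verbs/completed' },
  object: { id: 'http://example.com/course/1' },
});
console.log(req.url); // https://lrs.example.com/xapi/statements?method=PUT
```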
Earlier today, I was able to send snapshots to the Face API and get responses including faceAttributes describing emotion.
I'm using JavaScript via XMLHttpRequest.
Now, though I've not changed the code, I get 200 OK from the API calls, but the responseText and response properties are both "[]".
I'd like to troubleshoot to see what I'm doing wrong, but it seems like the only information available in the cognitive services portal relates to quota.
Where should I look for further analytics?
You'll get an empty response if the API does not detect a face in the image or if the image file is too large (>4MB). You can confirm by testing with an image you know previously worked. To get the best results, make sure the face is well-lit and all features are reasonably visible.
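A small sketch of what that check can look like around an XMLHttpRequest call (the endpoint, subscription key, and attribute list below are placeholders, not your actual values):

```javascript
// Interprets the detect response body: HTTP 200 with "[]" means the
// request succeeded but no face was detected -- it is not an error.
function interpretDetectResponse(responseText) {
  const faces = JSON.parse(responseText);
  return faces.length === 0 ? null : faces;
}

// Illustrative XHR wiring; the URL and subscription key are placeholders.
function detectFaces(imageBlob, onResult) {
  const url = 'https://westus.api.cognitive.microsoft.com/face/v1.0/detect'
            + '?returnFaceAttributes=emotion';
  const xhr = new XMLHttpRequest();
  xhr.open('POST', url);
  xhr.setRequestHeader('Content-Type', 'application/octet-stream');
  xhr.setRequestHeader('Ocp-Apim-Subscription-Key', 'YOUR_KEY_HERE');
  xhr.onload = () => onResult(interpretDetectResponse(xhr.responseText));
  xhr.send(imageBlob);
}
```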
Hello from Cognitive Services - Face API Team,
I wonder whether the problem occurs with one specific image or with all API calls?
For a quick check, you can try the image on the online demo [1].
[1] https://azure.microsoft.com/en-us/services/cognitive-services/face/
Unfortunately, troubleshooting from the external perspective is quite difficult, since you don't get any logs. The most common steps are to try to repro your problem using either the testing console (https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) or a tool such as curl or Fiddler, so that you can see the raw REST request and response.
With one of those tools you can try to change up your request, try to call a different API, make sure there are no additional details being returned in the body or response headers, etc.
If all else fails please open a support incident from the Azure management portal and we can work with you.
We are also working to improve the logging and troubleshooting capabilities, but it may be some time before improvements appear in this area.
An interface has a requirement that we do not include an Expect: 100-continue header. (The documentation assumes I will be using C# or PHP code to talk to it, and includes the code to not send the Expect: 100-continue.)
I quickly googled around a bit and found many topics on how to disable this when not using BizTalk, and multiple posts that lead me to believe BizTalk sends an Expect: 100-continue by default as well (BizTalk Data Services: Extended to bring management functions through IUpdatable, and Adding Custom HTTP Headers to messages sent via HTTP Adapter). I have had trouble finding anyone trying to disable it.
Since I have found the C# code to disable it, would a solution be to create a custom pipeline component that disables this?
This is not something I would worry about. I don't recall ever seeing Expect: 100-continue in any trace to or from BizTalk using WCF.
I will say that it is very strange that they would have a dependency on not seeing this. Either way, if WCF is sending it, you should be able to remove it with a Behavior.
You'll have to set this all up to even see if it's a problem. Here's where I say just try it and see what happens.
I need an nginx server that receives an HTTP request and sends back the response from a Redis store, and this should be non-blocking. After googling and going through forums, I came across the nginx_redis2_module. I tried going through the code but was not able to understand how it works. How have they achieved non-blocking operation? Have they achieved this by adding events to nginx's event loop? Is there any document or sample code showing how this is done?
source : https://github.com/openresty/redis2-nginx-module
The essence of nginx is non-blocking modules. It is a complex area. In short, yes: the redis2 module talks to Redis through nginx's upstream machinery, registering read/write events in the worker's event loop rather than blocking on the socket.
Here you may find some starting points: how to write an Nginx module?
FYI:
When used in conjunction with lua-nginx-module, it is recommended to use the lua-resty-redis library instead of this module, because the former is much more flexible and memory-efficient.
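For a concrete starting point, a minimal location using the module's documented directives looks roughly like this (the Redis address and the location path are assumptions):

```nginx
# Minimal sketch: answer GET /get?key=foo with the value stored in Redis.
location /get {
    redis2_query get $arg_key;   # build a Redis GET command from the query arg
    redis2_pass 127.0.0.1:6379;  # upstream Redis address (assumed local default)
}
```

nginx parks the request while the Redis reply is pending, so the worker stays free to serve other connections.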