This is something I have never seen before and I do not know if my Google search skills are lacking, but I cannot find anything saying it is an actual way of specifying the HTTP verb for a request.
To give some context on where I have encountered this: I am working on a project to create a very basic LRS to capture Statements from an Articulate Story.
I had Fiddler running to monitor the requests and noticed the Articulate Story tries to POST to a specified endpoint like so: 'endpoint/statements?method=PUT'
Anybody know what is up with this?
Upon further reading of the xAPI specification and the Articulate Documentation, this is something Articulate does... See this link [Implementation of the Tin Can API for articulate content][1]
[1]: https://articulate.com/de-DE/support/article/Implementing-Tin-Can-API-to-Support-Articulate-Content
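If you're building your own LRS and need to accept these requests, one option is to honour the method query parameter as a method override on the server side. Here's a rough sketch (Node/Express, hypothetical handler; the 204/200 status codes follow my reading of the xAPI statements resource, so double-check against the spec):

    // Sketch: treat POST /statements?method=PUT as if the client had sent a real PUT.
    const express = require('express');
    const app = express();

    app.use(express.json());

    app.post('/statements', (req, res) => {
      // Articulate content sends POST with ?method=PUT instead of an actual PUT
      const effectiveMethod = (req.query.method || 'POST').toUpperCase();

      if (effectiveMethod === 'PUT') {
        // Store the single statement from the body here...
        return res.status(204).end(); // xAPI PUT /statements returns 204 No Content
      }

      // Plain POST: store one or more statements and return their ids
      res.status(200).json([]);
    });

    app.listen(3000);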
TL;DR
Is there a tool that can record all the network activity as I visit a website and create a mock server that responds to those requests with the same responses?
I'm investigating ways of mocking the complex backend for our React application. We're currently developing against the real backend (plus test/staging environments). I've looked around a bit and it looks like there are a number of tools for mocking individual endpoints/features and sending the rest through to the real API (Mirage is leading the pack at the moment).
However, the Platonic ideal would be to mock the entire server so that a front end dev can work without an internet connection (again: Platonic ideal). It's a crazy lofty goal, I know. And of course it would require mocking not only our backend but also requests to any 3rd-party data sources. And of course the data would be thin and dumb and stale. But this is just for ultra-speedy front end development, it's just mocking. The data doesn't need to be rich; it'll be up to us to make it as useful/realistic as we need it to be.
Probably the quickest way would be to recreate the responses the backend is already sending, and then modify them as needed for new features or features under test, etc.
To do this, we might go into Chrome DevTools and recreate everything on the network tab: mock every request that was made by hardcoding the response that was returned. From there, we could do smarter things like use URL pattern matching to return a simple placeholder image for any request for a user's avatar.
What I want to know is: is there any tool out there that does this automatically? That can watch as I load the site, click a bunch of stuff, take a bunch of actions, and spit out or set up a mock that recreates all the responses? And then we could edit any of them as we saw fit to simplify.
Does something like this exist? Maybe it's a browser tool. Maybe it's webpack middleware. Maybe it's a magic rooster.
PS. I imagine this may not be a specific, actionable enough question for SO. I'll understand if it's closed, but I'd really appreciate being directed somewhere where such questions/discussions would fit? I'm new enough to this world that SO is all I know!
There is a practice called service virtualization - a subset of the test double family.
Wikipedia has a list of tools you can use to do that. Here are a couple of examples from that list:
Open source WireMock will let you record the mocks and edit the responses programmatically.
Commercial Traffic Parrot will let you record the mocks and edit the responses via a UI and/or programmatically.
https://mswjs.io/ can mock all the requests for you. It intercepts all your client's requests and returns your defined mock data.
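For example, a couple of handlers might look roughly like this (msw v1-style API; the endpoints, payloads, and placeholder avatar are invented for illustration):

    // Rough sketch with msw: hardcode captured responses and pattern-match URLs.
    import { setupWorker, rest } from 'msw';

    const worker = setupWorker(
      // A response copied from the real backend, hardcoded
      rest.get('/api/session', (req, res, ctx) =>
        res(ctx.status(200), ctx.json({ user: 'dev', roles: ['admin'] }))
      ),

      // URL pattern matching: every avatar request gets the same placeholder
      rest.get('/api/users/:id/avatar', (req, res, ctx) =>
        res(
          ctx.status(200),
          ctx.set('Content-Type', 'image/svg+xml'),
          ctx.body('<svg xmlns="http://www.w3.org/2000/svg" width="32" height="32"/>')
        )
      )
    );

    // Start intercepting the React app's requests in the browser
    worker.start();

The handlers live in source control, so you can edit them as your mocks need to evolve.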
I am using PageSpeed Insights API to grab speed metrics of different websites and integrate the data in a tool I'm creating.
If I try a query using the API test tool (https://developers.google.com/speed/docs/insights/v4/reference/pagespeedapi/runpagespeed), then everything is fine and I get the info I need.
However, when I perform the very same query (as far as I can see) from my server, the response json does not include the same information. Some information is just missing.
Basically, other than the 'initial_url', all the metrics information that should be included in the 'loadingExperience' branch is missing. No info on 'FIRST_CONTENTFUL_PAINT_MS' or 'DOM_CONTENT_LOADED_EVENT_FIRED_MS'.
On the other hand, I can't seem to find a way to request info on USABILITY and SECURITY under the 'ruleGroups' branch. According to the API reference, this branch should feature information on these aspects too, but nothing like that is returned after the query. Just the SPEED branch info is returned.
This is the URL I use to query the API:
https://www.googleapis.com/pagespeedonline/v4/runPagespeed?url=https://stackoverflow.com&strategy=mobile&screenshot=true&locale=en&key=XXXXXXXXmyAPIKeyXXXXXXXX
Am I missing anything? I have checked the API documentation and Google'd for more info on this, but I can't seem to find any parameter to force request this information.
(By the way, this is my first question at StackOverFlow, so I hope I have shared all the necessary information. And apologies if my english is bad. I do my best.)
I'm having the same issue with some websites. The problem is related to the website. Some websites are providing the userExperience.metrics object and some are not. I have no idea what is causing this.
However, you can try using the strategy=desktop parameter to get the userExperience.metrics object in version 5 of the API. This worked for me.
Working URL: https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://stackoverflow.com&strategy=desktop&key=[SetYourApiKeyHere]
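To see exactly what comes back, you can also call the v5 endpoint from code and inspect the loadingExperience branch; as far as I know it is only populated when Google has enough real-user (Chrome UX Report) data for the URL, which would explain why some sites return it and some don't. A rough sketch (field names as I recall them from the v5 reference; replace the key placeholder):

    // Sketch: call PageSpeed Insights v5 and log the field-data metrics, if present.
    const url = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed'
      + '?url=https://stackoverflow.com&strategy=desktop&key=YOUR_API_KEY';

    fetch(url)
      .then((res) => res.json())
      .then((data) => {
        const metrics = data.loadingExperience && data.loadingExperience.metrics;
        if (metrics) {
          console.log(metrics.FIRST_CONTENTFUL_PAINT_MS);
        } else {
          console.log('No field data available for this URL');
        }
      });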
Earlier today, I was able to send snapshots to the Face API and get responses including faceAttributes describing emotion.
I'm using JavaScript via XMLHttpRequest.
Now, though I've not changed the code, I get 200 OK from the API calls, but the responseText and response properties are both "[]".
I'd like to troubleshoot to see what I'm doing wrong, but it seems like the only information available in the cognitive services portal relates to quota.
Where should I look for further analytics?
You'll get an empty response if the API does not detect a face in the image or if the image file is too large (>4MB). You can confirm by testing with an image you know previously worked. To get the best results, make sure the face is well-lit and all features are reasonably visible.
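If you want to confirm this from code rather than the portal, one quick check is to log the raw body and branch on the empty-array case. A rough sketch (region, key, and image URL are placeholders):

    // Sketch: detect faces with emotion attributes and handle the "[]" response.
    const endpoint =
      'https://westus.api.cognitive.microsoft.com/face/v1.0/detect'
      + '?returnFaceAttributes=emotion';

    fetch(endpoint, {
      method: 'POST',
      headers: {
        'Ocp-Apim-Subscription-Key': 'YOUR_KEY',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ url: 'https://example.com/snapshot.jpg' }),
    })
      .then((res) => res.json())
      .then((faces) => {
        if (faces.length === 0) {
          console.log('200 OK, but no face was detected (or the image is too large)');
        } else {
          console.log(faces[0].faceAttributes.emotion);
        }
      });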
Hello from Cognitive Services - Face API Team,
I wonder whether the problem occurs with one specific image or with all API calls?
For a quick check, you can try the image on the online demo [1].
[1] https://azure.microsoft.com/en-us/services/cognitive-services/face/
Unfortunately doing the troubleshooting from the external perspective is quite difficult since you don't get any logs. The most common steps are to try to repro your problem using either the testing console (https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) or a tool such as curl or Fiddler so that you can see the raw REST request and response.
With one of those tools you can try to change up your request, try to call a different API, make sure there are no additional details being returned in the body or response headers, etc.
If all else fails please open a support incident from the Azure management portal and we can work with you.
We are also working to improve the logging and troubleshooting capabilities, but it may be some time before improvements land in this area.
While reading some articles about writing web servers using Twisted, I came across this page that includes the following statement:
While it's convenient for this example, it's often not a good idea to make a resource that POSTs to itself; this isn't about Twisted Web, but the nature of HTTP in general; if you do this, make sure you understand the possible negative consequences.
In the example discussed in the article, the resource is a web resource retrieved using a GET request.
My question is, what are the possible negative consequences that can arise from having a resource POST to itself? I am only concerned about the aspects related to the HTTP protocol, so please ignore the fact that I mentioned Twisted.
The POST verb is used for making a new resource in a collection.
This means that POSTing to a resource has no direct meaning (POST endpoints should always be collections, not resources).
If you want to update your resource, you should PUT to it.
Sometimes, you do not know if you want to update or create the resource (maybe you've created it locally and want to create-or-update it). I think that in that case, the PUT verb is more appropriate because POST really means "I want to create something new".
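In fetch terms, the distinction being drawn here looks roughly like this (URLs and payloads are invented for illustration):

    // POST to the collection: "create something new", the server picks the id.
    fetch('/articles', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ title: 'Draft' }),
    });

    // PUT to the resource itself: create-or-replace at a known id. Sending it
    // twice leaves the same state, which is what makes it idempotent.
    fetch('/articles/42', {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ title: 'Final title' }),
    });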
There's nothing inherently wrong with a page POSTing back to itself - in fact, many of the widely-used frameworks (ASP.NET, etc.) use that method to handle various events that happen on the client - some data is posted back to the same page where the server processes it and sends a new response.
I know that using non-GET methods (POST, PUT, DELETE) to modify server data is The Right Way to do things. I can find multiple resources claiming that GET requests should not change resources on the server.
However, if a client were to come up to me today and say "I don't care what The Right Way to do things is, it's easier for us to use your API if we can just call URLs and get some XML back - we don't want to have to build HTTP requests and POST/PUT XML," what business-conducive reasons could I give to convince them otherwise?
Are there caching implications? Security issues? I'm kind of looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous."
Edit:
Thanks for the answers so far regarding prefetching. I'm not as concerned with prefetching since this is mostly about internal network API use and not visitable HTML pages that would have links that could be prefetched by a browser.
Prefetch: A lot of web browsers will use prefetching, which means they will load a page before you click on the link, anticipating that you will click on it later.
Bots: There are several bots that scan and index the internet for information. They will only issue GET requests. You don't want to delete something from a GET request for this reason.
Caching: GET HTTP requests should not change state and they should be idempotent. Idempotent means that issuing a request once, or issuing it multiple times gives the same result. I.e. there are no side effects. For this reason GET HTTP requests are tightly tied to caching.
HTTP standard says so: The HTTP standard says what each HTTP method is for. Several programs are built to use the HTTP standard, and they assume that you will use it the way you are supposed to. So you will have undefined behavior from a slew of random programs if you don't follow it.
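To make the caching and bot points concrete, here is a rough sketch (Express, invented routes) of keeping reads on GET and destructive operations on other verbs:

    const express = require('express');
    const app = express();

    // Safe and idempotent: crawlers, prefetchers, and caches may hit this freely.
    app.get('/articles/:id', (req, res) => {
      res.set('Cache-Control', 'public, max-age=300');
      res.json({ id: req.params.id, title: 'Example' });
    });

    // State-changing: must NOT be reachable via a plain GET link, or a
    // prefetching browser or indexing bot could delete data just by following it.
    app.delete('/articles/:id', (req, res) => {
      // ...remove the record here...
      res.status(204).end();
    });

    app.listen(3000);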
How about Google finding a link to that page with all the GET parameters in the URL and revisiting it every now and then? That could lead to a disaster.
There's a funny article about this on The Daily WTF.
GETs can be forced on a user and result in Cross-site Request Forgery (CSRF). For instance, if you have a logout function at http://example.com/logout.php, which changes the server state of the user, a malicious person could place an image tag on any site that uses the above URL as its source: http://example.com/logout.php. Loading this code would cause the user to get logged out. Not a big deal in the example given, but if that was a command to transfer funds out of an account, it would be a big deal.
Good reasons to do it the right way...
They are industry standard, well documented, and easy to secure. While you fully support making life as easy as possible for the client, you don't want to implement something that's easier in the short term in preference to something that's not quite so easy for them but offers long-term benefits.
One of my favourite quotes
Quick and Dirty... long after the Quick has departed, the Dirty remains.
For you this one is a "A stitch in time saves nine" ;)
Security:
CSRF is so much easier in GET requests.
Using POST won't protect you by itself, but GET can lead to easier exploitation and mass exploitation via forums and other places that accept image tags.
Depending on what you do server-side, using GET can help an attacker launch a DoS (Denial of Service) attack. An attacker can spam thousands of websites with your expensive GET request in an image tag, and every single visitor of those websites will carry out this expensive GET request against your web server, costing you a lot of CPU cycles.
I'm aware that some pages are heavy anyway and this is always a risk, but it's a bigger risk if you add 10 big records in every single GET request.
Security for one. What happens if a web crawler comes across a delete link, or a user is tricked into clicking a hyperlink? A user should know what they're doing before they actually do it.
I'm kind of looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous."
...
I don't care what The Right Way to do things is, it's easier for us
Tell them to think of the worst API they've ever used. Can they not imagine how that was caused by a quick hack that got extended?
It will be easier (and cheaper) in 2 months if you start with something that makes sense semantically. We call it the "Right Way" because it makes things easier, not because we want to torture you.