I tried to fetch the data from https://m.jetstar.com/Ink.API/api/flightAvailability?LocaleKey=en_AU&ChildPaxCount=0&DepartureDate=2016-03-21T00%3A00%3A00&ModeSaleCode=&Destination=NGO&CurrencyCode=TWD&AdultPaxCount=1&ReturnDate=&InfantPaxCount=0&Origin=TPE
but it couldn't be done with curl -vv https://m.jetstar.com/Ink.API/api/flightAvailability?LocaleKey=en_AU&ChildPaxCount=0&DepartureDate=2016-03-21T00%3A00%3A00&ModeSaleCode=&Destination=NGO&CurrencyCode=TWD&AdultPaxCount=1&ReturnDate=&InfantPaxCount=0&Origin=TPE; it returns nothing.
However, the browser can fetch the whole response.
What's wrong with that?
It seems to me that "m.jetstar.com" is filtering out requests that don't include the headers a browser would send. Your curl command needs to fully emulate a browser to get the data. One way to see what I mean: open the developer tools in Google Chrome, select the Network tab, load the URL in the browser, then right-click the row for that call and copy the request as a curl command. Paste it into a text editor and you'll see all the additional headers you need. That copied curl command should also work as-is.
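On top of the headers, there is a shell-level problem with the original command: the URL is unquoted, and the shell treats each & as a background operator, so curl only ever receives the URL up to the first &. A quick demonstration of the effect, plus the quoted fix:

```shell
# Unquoted, the shell backgrounds the command at the first "&", so the
# rest of the query string never reaches the program:
seen=$(bash -c 'echo https://m.jetstar.com/Ink.API/api/flightAvailability?LocaleKey=en_AU&ChildPaxCount=0' 2>/dev/null)
echo "$seen"   # -> https://m.jetstar.com/Ink.API/api/flightAvailability?LocaleKey=en_AU

# Quote the whole URL so curl sees every parameter:
# curl -vv 'https://m.jetstar.com/Ink.API/api/flightAvailability?LocaleKey=en_AU&ChildPaxCount=0&DepartureDate=2016-03-21T00%3A00%3A00&ModeSaleCode=&Destination=NGO&CurrencyCode=TWD&AdultPaxCount=1&ReturnDate=&InfantPaxCount=0&Origin=TPE'
```

With the URL quoted and the copied browser headers added, the request should match what Chrome sends.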
Check whether you have set any environment variable for proxy settings. Verify by calling curl in verbose mode: curl -v.
I had set up a proxy variable earlier, and when I checked the curl output in verbose mode it showed the request going to the proxy address. Once I deleted the variable from the advanced system settings, it started working. Hope it helps.
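A quick way to check for this yourself, sketched below (http_proxy/https_proxy are the conventional variable names; your system may use others):

```shell
# Show any proxy-related environment variables that are set:
env | grep -i proxy || true

# Clear the usual suspects for the current shell session:
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY

# Or bypass any configured proxy for a single request:
# curl -v --noproxy '*' https://m.jetstar.com/
```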
Related
I'm troubleshooting an issue that I think may be related to request filtering. Specifically, it seems every connection to a site made with a blank user agent string is shown a 403 error. I can generate other 403 errors on the server by doing things like trying to browse a directory with no default document while directory browsing is turned off. I can also generate a 403 error by using the Modify Headers extension for Google Chrome to set my user agent string to the Baidu spider string, which I know has been blocked.
What I can't seem to do is generate a request with a BLANK user agent string to try that. The extensions I've looked at require something in that field. Is there a tool or method I can use to make a GET or POST request to a website with a blank user agent string?
I recommend trying a CLI tool like cURL or a UI tool like Postman. You can carefully craft each header, parameter, and value that you place in your HTTP request and fully trace the end-to-end request-response result.
This example, straight from the cURL docs on user agents, shows how you can play around with setting the user agent via the CLI:
curl --user-agent "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)" [URL]
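For a genuinely blank user agent, which the browser extensions mentioned can't produce, curl distinguishes between sending an empty header value and omitting the header entirely; both forms are documented in the curl manual (example.com stands in for your site):

```shell
# Trailing semicolon: send the User-Agent header with an empty value.
curl -s -o /dev/null -H 'User-Agent;' 'https://example.com/' || true

# Bare colon: strip the User-Agent header from the request entirely.
curl -s -o /dev/null -H 'User-Agent:' 'https://example.com/' || true
```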
In Postman it's just as easy: tinker with the headers and params as needed. You can also click the "Code" link on the right-hand side and view the request as HTTP when you want to see what will actually be sent.
You can also use a heap of other HTTP tools, such as Paw and Insomnia, all of which are well suited to the task at hand.
One last tip: in the Chrome debugging tools, you can right-click a specific request in the Network tab and copy it as cURL. You can then paste the cURL command and modify it as needed. In Postman you can import a request, paste it from raw text, and Postman will interpret the cURL command for you, which is particularly handy.
How do I run a cURL command like this:
curl -X GET http://www.in.com/
I am using Windows 8.
How do I run this inside a browser, say using the Network tab or any other tab?
I know there is an option in the Network tab to copy a request as a curl command, but I want to execute it right there in Firefox, not in cmd/terminal.
What do I do to modify that command and execute it in the Network tab itself?
And no, I am not asking about sites like http://onlinecurl.com/ which allow you to execute curl commands online.
Is it possible?
I have tried Firebug in Firefox, but according to my research it does not have this option.
If yes, please tell me how.
Thanks in advance!
Why not just put the URL in the address field for the browser? The browser is going to do a "GET" request on this URL and return the results. This is what curl is doing. If you are trying to do a "PUT" or a "POST" request, then you would need to do something different, but for "GET", it should just work.
I am getting a valid response when making a curl request:
bin/gremlin-server.bat conf/gremlin-server-rest-modern.yaml
curl "http://localhost:8182?gremlin=100-1"
curl "http://localhost:8182?gremlin=g.V()"
But via the browser I am getting the message below:
{"message":"no gremlin script supplied"}
I also tried the below, but with no result:
http://localhost:8182/gremlin?script=g.V()
http://localhost:8182/graphs/tinkergraph/tp/gremlin?script=g.traversal().V()
http://localhost:8182/graphs/tinkergraph/tp/gremlin?script=g.V()
Any suggestions on the valid way of passing a script via the browser?
I'm not sure this is a "bug" exactly, but Gremlin Server doesn't respect very complex Accept headers. For example, when I try to resolve one of your first two URLs in Chrome, I get:
{
message: "no serializer for requested Accept header: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
}
By default, Gremlin Server doesn't support that Accept header. If the browser had requested application/json or simply */*, it would work. Note that */* is present with quality 0.8, but Gremlin Server wasn't parsing the header that way, so it couldn't find a serializer to properly deal with it.
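From curl you can sidestep the problem by asking for a type Gremlin Server does have a serializer for (this assumes a local server on 8182, as in the question):

```shell
# Request JSON explicitly instead of the browser's complex Accept header:
curl -s -H 'Accept: application/json' 'http://localhost:8182/?gremlin=100-1' || true
```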
There is no workaround for the browser that I know of. I've created an issue to get this fixed:
https://issues.apache.org/jira/browse/TINKERPOP3-752
I've also seen this error when I have forgotten to start cassandra before starting gremlin-server.
My problem was due to the presence of spaces in the request.
This worked:
curl http://localhost:8182/?gremlin="g.V().has('name','foody')"
but this didn't:
curl http://localhost:8182/?gremlin="g.V().has('name', 'foody')"
Try removing them from yours and it should work.
I found the answer thanks to your question, so I will pitch in with the right answer for #stacey-morgan:
You queried on the CLI:
curl "http://localhost:8182?gremlin=100-1"
Then you may have queried (it is not clear from your question):
http://localhost:8182/gremlin?script=100-1
Or the others you tried, just as I was doing:
http://localhost:8182/gremlin?script=g.V()
You would get the error message.
The correct way of doing it is to just paste the content of the quotes from the curl command. So:
http://localhost:8182?gremlin=100-1
Then similarly for your other queries:
http://localhost:8182/?gremlin=g.V()
http://localhost:8182/?gremlin=g.traversal().V()
Note: the trailing slash should be there, though it works without it in my Firefox. That's just HTTP.
Using: Ubuntu & Titan1.0.0-hadoop1.
I'm trying to automate the login to a website and submission of a form.
Is there a browser plugin (for firefox or Chrome) that allows you to record HTTP GET and POST requests in a form that allows them to be played back at a later point? I'm looking for something that will be possible to automate from a script e.g. via curl or wget.
I've tried using the Chrome developer tools to capture POST form data, but I get errors when trying to replicate the request with wget, which suggests I'm missing some cookies or other parameters. Ideally there would be a nice automated way of doing this rather than lots of trial and error.
For a simple interaction, you don't really need a tool like Selenium that will record and playback requests.
You only need the tools you've already mentioned:
Chrome already comes with the Developer Tools that you need: use the Network tab. No plugin to download. I don't know if Safari will work -- I don't see a "Network" tab in its Developer Tools.
Both curl and wget support cookies and POST data, but I've only tried curl for automation.
There are several key steps that need to be done properly (this takes some experience):
The sequence of pages that are requested needs to model real user interaction. This is important because you have no idea exactly how the backend handles forms or authentication. This is where the Network tab of Chrome's Developer Tools comes in. (Note that there is a "Record" button that will prevent the clearing of the log.) When you prepare to log a real user interaction for your analysis, don't forget to clear your cookies at the beginning of each session.
You need to use all the proper options of curl and wget that will ensure that cookies and redirects are properly processed.
All POST form fields will likely need to be sent (you'll often see fields with nonce values to prevent CSRF attacks).
Here's a sample of 3 curl calls from an automation script that I wrote to download broadband usage from my ISP:
curl \
--silent \
--location \
--user-agent "$USER_AGENT" \
--cookie-jar "$COOKIES_PATH.txt" \
'https://idp.optusnet.com.au/idp/optus/Authn/Service?spEntityID=https%3A%2F%2Fwww.optuszoo.com.au%2Fshibboleth&j_principal_type=ISP' >$USAGE_PATH-1.html 2>&1 && sleep 3 &&
# --location because the previous request returns with a series of redirects "302 Moved Temporarily" or "302 Found"
curl \
--silent \
--location \
--user-agent "$USER_AGENT" \
--cookie "$COOKIES_PATH.txt" \
--cookie-jar "$COOKIES_PATH.txt" \
--referer 'https://idp.optusnet.com.au/idp/optus/Authn/Service?spEntityID=https%3A%2F%2Fwww.optuszoo.com.au%2Fshibboleth&j_principal_type=ISP' \
--data "spEntityID=https://www.optuszoo.com.au/shibboleth&j_principal_type=ISP&j_username=$OPTUS_USERNAME&j_password=$OPTUS_PASSWORD&j_security_check=true" \
'https://idp.optusnet.com.au/idp/optus/Authn/Service' >$USAGE_PATH-2.html 2>&1 && sleep 1 &&
curl \
--silent \
--location \
--user-agent "$USER_AGENT" \
--cookie "$COOKIES_PATH.txt" \
--cookie-jar "$COOKIES_PATH.txt" \
--referer 'https://www.optuszoo.com.au/' \
'https://www.optuszoo.com.au//r/ffmu' >$USAGE_PATH-3.html 2>/dev/null
Note the careful use of --cookie-jar, --cookie, and --location. The sleeps, --user-agent, and --referer may not be necessary (the backend may not check) but they're simple enough that I include them to minimize the chance of errors.
In this example, I was lucky that there were no dynamic POST fields, e.g. anti-CSRF nonce fields, that I would have had to extract and pass on to a subsequent request. That's because this automation is for authentication. For automating other types of web interactions, after the user's already logged in, you're likely to run into more of these dynamically-generated fields.
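When a form does carry such a field, the usual pattern is: fetch the page, pull the hidden value out, and echo it back in the POST. A minimal sketch; the field name csrf_token and the login URL here are made up for illustration:

```shell
# In a real run this HTML would come from:
#   curl -s --cookie-jar cookies.txt <login URL>
html='<form><input type="hidden" name="csrf_token" value="abc123"></form>'

# Extract the hidden field's value with sed:
token=$(printf '%s' "$html" | sed -n 's/.*name="csrf_token" value="\([^"]*\)".*/\1/p')
echo "$token"   # -> abc123

# Then pass it along with the credentials:
# curl --cookie cookies.txt --cookie-jar cookies.txt \
#      --data "csrf_token=$token&j_username=$USER&j_password=$PASS" \
#      'https://example.com/login'
```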
Not exactly a browser plugin, but Fiddler can capture all the HTTP data passing back and forth; with FiddlerScript or FiddlerCore, it is then simple to export that into a text file - and pass that into cURL as request headers and request body.
In Firefox, turn on the Persist option in Firebug to be sure to capture the POST. Then install and use the "Bookmark POST" add-on to bookmark the POST request for later use.
Firefox Firebug already has a feature which allows you to copy a web request as a curl request, so you see all the various elements of the request on the command line.
Turn on Firebug, right-click a request in the Net panel, and pick Copy as cURL. Then use it on the curl command line.
https://hacks.mozilla.org/2013/08/firebug-1-12-new-features/#copyAsCURL
Have you tried Selenium?
There are way too many methods for you to choose from.
1. Use Firefox and Selenium IDE. It can record your browser actions.
2. Use Selenium WebDriver. It can simulate different browser actions via scripts you write in Ruby or Java.
3. Use a macro plugin for Firefox to simulate absolute clicks and keypresses.
4. Use an OS-level macro application and do the same as 3.
5. Write a script (e.g. in PHP) to simulate the actual form POST or cookie interactions.
No. 1 is common and easy to use.
No. 4 can be powerful, but you need time to polish the automation.
No. 3 is in the middle of No. 4 and No. 1.
No. 2 can also serve as a tool for environment tests and stress tests.
No. 5 seems the most flexible and resource-saving.
Request Maker chrome plugin does that.
https://chrome.google.com/webstore/detail/request-maker/kajfghlhfkcocafkcjlajldicbikpgnp?hl=en
The Safari developer tools and Firebug are sufficient for your needs.
Recently I came across this beautiful Chrome extension, which does what you ask:
Katalon Recorder
Katalon Recorder will make your test automation work a lot easier.
Record, play, debug with speed control, pause/resume, breakpoints capabilities.
Enjoy the fastest execution speed compared to other extensions, thanks to the Selenium 3 core engine.
Make use of multiple locator types including XPath & CSS.
Use original Selenium IDE commands (Selenese), plus block statement if...elseIf...else...endIf and while...endWhile. Testing file input control is supported.
Import test data from CSV files for data-driven testing.
Report easily with logs, screenshots capturing, with historical data and analytics from Katalon Analytics.
Compose & organize test cases in suites. Never get your work lost with autosave feature.
Import original Selenium IDE (Firefox extension) tests.
Export to Selenium WebDriver scripts in these frameworks: C# (MSTest and NUnit), Java (TestNG and JUnit), Ruby (RSpec), Python (unittest), Groovy (Katalon Studio), Robot Framework, and XML.
Is it possible to send an HTTP DELETE request from the shell and, if so, how?
curl -X DELETE URL
see (google cache, server seems slow) reference.
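A fuller form, with -w printing the response status so you can see whether the server accepted the DELETE (the URL is a placeholder):

```shell
# HTTP method names are case-sensitive, so spell DELETE in uppercase:
curl -s -o /dev/null -w '%{http_code}\n' -X DELETE 'http://example.com/resource/42' || true
```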
There are probably a lot of tools for this, but the easiest, if you don't want to make sure those tools are available, might be to manually create your DELETE request and send it to the server via telnet. I think the syntax is pretty much like this, although I've never used the DELETE method manually myself, only GET and POST.
DELETE /my/path/to/file HTTP/1.1
Host: www.example.com
Connection: close
There should be an extra newline (blank line) after the request to terminate it, but markdown won't let me show that. Store the request in a file (or paste it into the console if you don't want to use a script), then simply do:
telnet www.example.com 80 < myRequest.txt
Of course, you can use a here-document as well.
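For example, a here-document version of the same request (note the blank line that terminates the header block):

```shell
# Build the raw request; quoting 'EOF' prevents shell expansion inside:
cat > myRequest.txt <<'EOF'
DELETE /my/path/to/file HTTP/1.1
Host: www.example.com
Connection: close

EOF

# Then send it:
# telnet www.example.com 80 < myRequest.txt
```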