I am getting a valid response when making curl requests:
bin/gremlin-server.bat conf/gremlin-server-rest-modern.yaml
curl "http://localhost:8182?gremlin=100-1"
curl "http://localhost:8182?gremlin=g.V()"
But via the browser I am getting the message below:
{"message":"no gremlin script supplied"}
I also tried the following, but with no result:
http://localhost:8182/gremlin?script=g.V()
http://localhost:8182/graphs/tinkergraph/tp/gremlin?script=g.traversal().V()
http://localhost:8182/graphs/tinkergraph/tp/gremlin?script=g.V()
Any suggestions on the valid way of passing a script via the browser?
I'm not sure this is a "bug" exactly, but Gremlin Server didn't respect very complex Accept headers. For example, when I try to resolve one of your first two URLs in Chrome, I get:
{
message: "no serializer for requested Accept header: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
}
By default, Gremlin Server doesn't support that Accept header. If the browser had requested application/json or simply */*, it would work. Note that */* is present with quality 0.8, but Gremlin Server wasn't parsing the header that way to determine that. As a result, it couldn't find a serializer to properly deal with the request.
There is no workaround for the browser that I know of. I've created an issue to get this fixed:
https://issues.apache.org/jira/browse/TINKERPOP3-752
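To see the q-value point concretely, here is a rough sketch (not Gremlin Server's actual parsing code) of how the media types in Chrome's Accept header rank once the q parameters are split out; */* really is acceptable at quality 0.8:

```shell
# Split the Accept header on commas and show each media type's q value.
# Per RFC 7231, a type without an explicit q parameter defaults to q=1.
accept='text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
echo "$accept" | tr ',' '\n' | awk -F';q=' '{q=(NF>1)?$2:"1"; print $1, "q=" q}'
```

A server that parsed the header this way would see */* at q=0.8 and could fall back to its JSON serializer. From the command line you can also sidestep the problem by sending an explicit header, e.g. curl -H "Accept: application/json" "http://localhost:8182?gremlin=100-1".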
I've also seen this error when I have forgotten to start cassandra before starting gremlin-server.
My problem was due to the presence of spaces in the request.
This worked:
curl http://localhost:8182/?gremlin="g.V().has('name','foody')"
but this didn't:
curl http://localhost:8182/?gremlin="g.V().has('name', 'foody')"
Try removing the spaces from yours and your queries should work.
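If you do need spaces in the script, percent-encoding them should also work. A minimal sketch that only handles spaces (not full RFC 3986 encoding):

```shell
# Encode spaces as %20 so they survive URL parsing:
query="g.V().has('name', 'foody')"
encoded=$(printf '%s' "$query" | sed 's/ /%20/g')
echo "$encoded"
# Then: curl "http://localhost:8182/?gremlin=$encoded"
```

Alternatively, curl can do the encoding for you: curl -G --data-urlencode "gremlin=g.V().has('name', 'foody')" http://localhost:8182/ appends the parameter to the URL already encoded.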
I found the answer thanks to your question, so I'll pitch in with the right answer for @stacey-morgan:
You queried on the CLI:
curl "http://localhost:8182?gremlin=100-1"
Then you may have queried (it is not clear from your question):
http://localhost:8182/gremlin?script=100-1
Or the others you have done, just as I was doing:
http://localhost:8182/gremlin?script=g.V()
You would get the error message.
The correct way of doing it is to just paste the content of the quotes ("") from the curl command into the browser's address bar. So:
http://localhost:8182?gremlin=100-1
Then similarly for your other queries:
http://localhost:8182/?gremlin=g.V()
http://localhost:8182/?gremlin=g.traversal().V()
Note: the trailing slash should be there, though it works without it in my Firefox. That is HTTP.
Using: Ubuntu & Titan1.0.0-hadoop1.
OK, this is weird, and I can fix it by getting rid of spaces, but what is going on?
I have two files on my website, AAAy H.mp3 and AAAy L.mp3. I can browse to them just fine.
When I do:
curl "http://mikehelland.com/omg/AAAy L.mp3"
I get the mp3 file.
When I do:
curl "http://mikehelland.com/omg/AAAy H.mp3"
I get 400, bad request. Also doing:
curl "http://mikehelland.com/omg/AAAY H.mp3"
yields a 400.
Change the H to an L or A or M or anything else seems to work fine. What's going on?
This is because of how the server interprets the space in the file name; try replacing it with %20 (which represents a space in a URL), like this:
curl "http://mikehelland.com/omg/AAAy%20H.mp3"
If you access your file with a browser and open the developer console, you will find that the browser inserts this %20 into the GET request. That is why you can access the file from the browser, but not from the terminal.
Also, try adding the --verbose option to the curl command. I noticed that when you access some nonexistent file without a space in the name, there is a field in the response, 'Server: Apache/2', but when you add a space it is 'Server: nginx'.
So maybe there is a special case where the server stops handling the request because it can't work out what to do with the first line:
GET /omg/AAAy H.mp3 HTTP/1.1
It expects HTTP/1.1 after /omg/AAAy, not the unexpected H.mp3, and perhaps the server looks at the first symbol of "H.mp3" when parsing for "HTTP", which breaks things. So I think the reason "/omg/AAAy H.mp3" doesn't work while "/omg/AAAy L.mp3" does comes down to the server's parsing mechanism. Of course, without %20, all variants are forbidden by the standard.
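The request-line theory is easy to illustrate: an HTTP request line has exactly three space-separated parts (method, request-target, version), and an unencoded space in the path adds a fourth:

```shell
# Split the malformed request line on spaces, the way a naive parser might:
line='GET /omg/AAAy H.mp3 HTTP/1.1'
set -- $line      # deliberately unquoted so the shell word-splits it
echo "$# parts"   # prints "4 parts"; a valid request line has 3
```

With %20 in place of the space, the request-target stays a single token and the line parses cleanly.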
I tried to fetch the data from https://m.jetstar.com/Ink.API/api/flightAvailability?LocaleKey=en_AU&ChildPaxCount=0&DepartureDate=2016-03-21T00%3A00%3A00&ModeSaleCode=&Destination=NGO&CurrencyCode=TWD&AdultPaxCount=1&ReturnDate=&InfantPaxCount=0&Origin=TPE
It couldn't be done with:
curl -vv https://m.jetstar.com/Ink.API/api/flightAvailability?LocaleKey=en_AU&ChildPaxCount=0&DepartureDate=2016-03-21T00%3A00%3A00&ModeSaleCode=&Destination=NGO&CurrencyCode=TWD&AdultPaxCount=1&ReturnDate=&InfantPaxCount=0&Origin=TPE
It returns nothing. However, the browser can fetch the whole data set.
What's wrong?
It seems to me that m.jetstar.com is filtering out requests that don't include the headers a browser would send. Your curl statement needs to fully emulate a browser to get the data. One way to see what I mean: open developer tools in Google Chrome, select the Network tab, load the URL in the browser, then go to the row for that call, right-click, and copy the request as a curl statement. Paste it into a notepad and you'll see all the additional headers you need. That curl statement should work.
Check whether you have set any HTTP_REQUEST variable for proxy settings. Verify by calling the curl command in verbose mode: curl -v.
I had set up such a variable earlier, and when I checked the curl output in verbose mode it told me the request was going to the proxy address. Once I deleted the HTTP_REQUEST variable from the advanced system settings, it started working. Hope this helps.
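One more thing worth ruling out, going by the command as posted: in a shell, an unquoted & is a control operator that ends the command and runs it in the background, so curl would only ever see the URL up to the first &. Quoting the URL keeps it intact (the URL below is shortened for the example):

```shell
# Unquoted, the shell would cut this URL at the first '&';
# quoted, it reaches curl as a single argument:
url='https://m.jetstar.com/Ink.API/api/flightAvailability?LocaleKey=en_AU&ChildPaxCount=0&Origin=TPE'
printf '%s\n' "$url"
# Usage: curl -vv "$url"
```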
I'm attempting to make a request using Paw, and I'm getting this mysterious error:
kCFStreamErrorDomainSSL error -9841
Attempts to execute the same request using cURL, other OS X REST clients, etc. all work with no problem at all. I've searched for references to the -9841 instance of this error and have turned up nothing.
As mentioned by Micha Mazaheri in a previous comment, the best way to solve this problem is to go to Paw's preferences and choose a different client library within the HTTP tab. I was not aware this was an option.
I'm using the Google URL Shortener from an ASP.NET website. It works
fine from my localhost, but on the test server I get the following
error:
System.Net.WebException: The remote server returned an error: (403)
Forbidden.
at System.Net.HttpWebRequest.GetResponse()
at GoogleUrlShortnerApi.Shorten(String url)
I'm using the exact code that is shown here:
http://www.jphellemons.nl/post/Google-URL-shortener-API-%28googl%29-C-sharp-class-C.aspx
Could it be that the key works only on my local computer, and not any other computer? I have obtained another key (using another Google account), but this one gives me the same error (403) both on my local computer, and on the test server.
I doubt very much that the API is linked to a particular PC. You need to check the requests - both the URL and the headers - that your program is sending out; they must differ in some way. Is your server behind some kind of proxy, e.g. Apache? If it is not configured right, this might also be mangling the request. Also make sure your requests are encoded correctly.
I made a few modifications, according to a tutorial by Scott Mitchell, and I change the following lines of code:
First, Instead of:
string post = "{\"longUrl\": \"" + url + "\"}";
I used:
string post = string.Format(@"{{""longUrl"": ""{0}""}}", url);
Second, I commented out these 2 lines:
request.ServicePoint.Expect100Continue = false;
request.Headers.Add("Cache-Control", "no-cache");
I don't know why, but it suddenly started working. I wanted to see which of the 3 things I did fixed the problem, so I put each one back, and - TADA - it still works, even with all 3 back in place! So I really don't know what caused the problem, but since the code works with the other modification and without those 2 commented-out lines, I am leaving it that way.
I hope this answer will help someone sometime...
Is there any way to determine if a POST endpoint exists without actually sending a POST request?
For GET endpoints, it's no problem to check for 404s, but I'd like to check POST endpoints without triggering whatever action resides at the remote URL.
Sending an OPTIONS request may work
It may not be widely implemented, but the standard way to do this is via the OPTIONS verb.
WARNING: This should be idempotent but a non-compliant server may do very bad things
OPTIONS
Returns the HTTP methods that the server supports for specified URL. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.
More information here
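As a sketch, with curl that looks like the following; the endpoint is hypothetical, and the Allow header is a canned example of what a compliant server might return:

```shell
# Send OPTIONS and inspect the Allow header (requires a live server):
#   curl -s -i -X OPTIONS http://example.com/api/things | grep -i '^allow:'
# Given a response header like this, check whether POST is listed:
allow='Allow: GET, POST, OPTIONS'
case "$allow" in
  *POST*) echo "POST supported" ;;
  *)      echo "POST not listed" ;;
esac
```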
This is not possible by definition.
The URL that you're posting to could be run by anything, and there is no requirement that the server behave consistently.
The best you could do is to send a GET and see what happens; however, this will result in both false positives and false negatives.
You could send a HEAD request, if the server you are calling supports it (with curl, use curl -I); the response will typically be far smaller than a GET's.
Does endpoint = script? It was a little confusing.
I would first ask: why would you be POSTing somewhere if it doesn't exist? It seems a little silly.
Anyway, if there is really some element of uncertainty about your POST URL, you can use cURL and then set the header option on the cURL response. I would suggest that if you do this, you save all validated POST URLs if it's likely they will be used again.
You can send your entire POST at the same time as doing the cURL, then check whether it errored out.
I think you probably answered this question yourself in the tags of your question: cURL.