Dealing with a 300 HTTP status code

So I am trying to access the 'set-cookie' header from the response I get from post(). However, I am getting status code 300 and, consequently, a KeyError: 'set-cookie'.
When I read about code 300, I learned it means "Multiple Choices" and that I should get "a list of representation metadata and URI reference(s) from which [I] can choose the one most preferred." I wanted to read about how to do that and where it should be done, but couldn't find any sources. Where is that list, and how can I choose from it? How can I redirect?
Note: I have never dealt with HTTP requests before.

You've "never dealt with http request before" and at your first attempt you get this. Unlucky.
The first point of order is that you've provided no details of what is implementing the HTTP. Is it php, asp, ruby, serverside JavaScript, Java.... before you ask another question (or ask this one again, since it's likely to be closed) you use the search here to find similar questions. You're not just looking for answers but the level of information which questioners provide and how well received this is.
Fortunately I have a crystal ball and studied hard during my formative years at Hogwarts so I know you are using python.
The 300 status code is a dinosaur which has been ignored by browser makers since....well forever really. I'm guessing it's being thrown up here because your python binding wants to flag an error. Really it should be returning a 5xx code as it seems rather broken. Anyway, if you had looked you might have found this.
This might be related to the "keyerror" or that might be a further problem with your server-side code/config
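A minimal sketch of defensive handling, assuming the requests library (the question's post() call suggests it, though the binding was never named; the URL and payload here are made up):

import requests

# Hypothetical endpoint and payload; substitute your own.
resp = requests.post("https://example.com/login", data={"user": "me"})

if resp.status_code == 300:
    # A 300 "Multiple Choices" response may advertise the server's
    # preferred variant in the Location header; the full list of
    # alternatives, if any, is server-defined and lives in the body.
    print("Server suggests:", resp.headers.get("Location"))
    print("Body listing the choices:", resp.text)
else:
    # Use .get() so a missing header yields None instead of a KeyError.
    print("Set-Cookie:", resp.headers.get("Set-Cookie"))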

Related

GameLift Matchmaking times out after match found

Hoping to get some insight into the behavior I am seeing while trying to use GameLift Matchmaking.
I have my configuration set up so that it does not require player acceptance:
GameLiftMatchmakingConfiguration:
  Type: AWS::GameLift::MatchmakingConfiguration
  Properties:
    AcceptanceRequired: false
    ...
When I go to the GameLift console and into the configuration, I see that it is indeed correctly set to not require acceptance.
This is where I am confused, because now I have it working to the point where it places 2 users in PotentialMatchCreated and I get this event from GameLift. Then, 30 seconds later, I get more events stating that these placements timed out and that it is searching again.
The configuration documentation states that AcceptanceTimeoutSeconds is only required if AcceptanceRequired is true, which it is not for me.
The acceptance documentation states that you only call this in one situation: "When FlexMatch builds a match, all the matchmaking tickets involved in the proposed match are placed into status REQUIRES_ACCEPTANCE."
Which mine is not; it is in PotentialMatchCreated.
So my question is: what do I have to do to confirm a placement once GameLift places 2 users into a match? I am a bit surprised, because I thought the fact that it doesn't have to be accepted would mean that it is automatically an accepted match.
Also, there's very little documentation I found regarding what to do in this situation. Given that this service is not as well known as others, I totally expected that, but I'm really hoping someone can help me on what to do next.
Any insight or help is greatly appreciated.
UPDATE1:
Additional information: I do not need to utilize GameLift fleets or builds at all. We have a browser game we are building and just want to utilize the matchmaking feature. So we don't have any game servers or anything like that; it's just on our website where they would play the game, using our APIs/WebSockets to run the matchmaking on the server and notify the client, with all the subsequent details, when a match has been found.
UPDATE2:
To confirm my suspicions, I decided to actually try to use the accept-match endpoint and see what happens. Just as the documentation states, you can only accept a match if it requires acceptance: I get an error stating that I cannot accept a match that is not in REQUIRES_ACCEPTANCE status. I'm guessing this is a bug on AWS's side; I don't see any other endpoints that I can hit for a ticket in the PotentialMatchCreated state.
Figured out the issue. It has to do with the FlexMatchMode on the GameLiftMatchmakingConfiguration. For my use case, needing just matchmaking, STANDALONE is the correct setting, because we aren't having GameLift actually create game servers/sessions for us. I had mine using WITH_QUEUE, which is why I believe I was having issues. Seemingly working correctly now.
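For anyone who hits the same wall, here is a minimal sketch of a matchmaking-only setup, assuming boto3 rather than CloudFormation (the configuration name and rule set below are hypothetical):

import boto3

gamelift = boto3.client("gamelift")

# STANDALONE tells FlexMatch to only form matches and emit events;
# WITH_QUEUE would additionally hand the match to a GameLift queue
# to spin up a game session, which a browser-only game doesn't need.
gamelift.create_matchmaking_configuration(
    Name="browser-game-matchmaking",   # hypothetical
    RuleSetName="my-rule-set",         # hypothetical
    RequestTimeoutSeconds=120,
    AcceptanceRequired=False,
    FlexMatchMode="STANDALONE",
)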

Hampering website parsing by adding useless data inside actual data

I want to prevent or hamper the parsing of the classifieds website that I'm improving.
The website uses an API with JSON responses. As a solution, I want to add useless data in between my real data, since programmers will probably parse by ID, and to give no clue about it in either the JSON response body or the headers, so they won't be able to distinguish it without close inspection.
To prevent users from seeing it, I won't serve that "useless data" to my users unless they request it explicitly by ID. From an SEO perspective, I know that Google won't parse a page with useless data if there isn't any internal or external link to it.
How reliable would that technique be? And what problems/disadvantages/drawbacks do you think could occur in terms of user experience or SEO? Any ideas or suggestions will be very much appreciated.
P.S. I'm also rate-limiting large numbers of requests made in a short time, but it doesn't help. That's why I'm thinking of this technique.
I think banning parsers won't work better, because they can change IPs, etc.
Maybe I can get a better solution by requiring a login to access more than 50 item details, for example (and that might work against Selenium?). Registering will make it harder. Even if they do it, I can see these users and slow their response times, etc.
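A minimal sketch of the decoy idea, purely to make the question concrete (all names and the decoy ratio are made up): shape fake records exactly like real ones and interleave them server-side.

import random

def make_decoy():
    # Decoy records must be indistinguishable from real ones in shape.
    return {"id": random.randint(10**8, 10**9), "title": "decoy", "price": 0}

def add_decoys(items, ratio=0.4):
    # Return a copy of items with fake records inserted at random positions.
    result = list(items)
    for _ in range(int(len(items) * ratio)):
        result.insert(random.randrange(len(result) + 1), make_decoy())
    return result

listings = [{"id": i, "title": "item %d" % i, "price": 10 * i} for i in range(1, 6)]
print(add_decoys(listings))  # five real items with two decoys mixed in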

How can I tell the difference between a post from a browser, and someone trying to post programmatically

Is there a way to determine if the request coming to a handler (let's assume the handler responds to GET and POST) is being performed by a real browser versus a programmatic client?
I already know that it is easy to spoof things like the User-Agent and the Referer, but are there other headers that are more difficult to spoof? Maybe headers that are not commonly available in classes like .NET's HttpWebRequest?
The other path that I looked at is maybe using the encrypted ViewState to send a value to the browser that gets validated on the server side, though couldn't that value simply be scraped from the previous response and added as a POST parameter to the next request?
Any help would be much appreciated,
Cheers,
There is no easy way to differentiate, because in the end a programmatic POST looks the same to the server as a POST by a user from a browser.
As mentioned, CAPTCHAs can be used to control posting, but they are not perfect (as it is very hard, but not impossible, for a computer to solve them). They can also annoy users.
Another route is only allowing authenticated users to post, but this can also still be done programmatically.
If you want to get a good feel for how people are going to try to abuse your site, then you may want to look at http://seleniumhq.org/
This is very similar to the famous Halting Problem in computer science. See more on the proof and Alan Turing here: http://webcache.googleusercontent.com/search?q=cache:HZ7CMq6XAGwJ:www-inst.eecs.berkeley.edu/~cs70/fa06/lectures/computability/lec30.ps+alan+turing+infinite+loop+compiler&cd=1&hl=en&ct=clnk&gl=us
The most common way is using CAPTCHAs. Of course, CAPTCHAs have their own issues (users don't really care for them), but they do make it much more difficult to programmatically post data. They don't really help with GETs, though you can force users to solve a CAPTCHA before delivering content.
There are many ways to do this, like dynamically generated XHR requests whose parameters can only be produced by a human completing a task.
Here's a great article on NP-Hard problems. I can see a huge possibility here:
http://www.i-programmer.info/news/112-theory/3896-classic-nintendo-games-are-np-hard.html
One way: you could use some tricky JS to handle tokens on click (a minimal sketch follows below). Your server issues token IDs to elements on the page during the back-end render phase and logs them in a database or data file. Then, when users click around and submit, you can compare the IDs sent via the onclick() handler. There are plenty of ways around this, but you could apply some heuristics to determine whether posts are too fast to be human; that is, even if they scripted the hijacking of the token IDs and auto-submitted, you could check whether the time between click events appears automated. Signed up for a Twitter account lately? They use passive human detection that, while not 100% foolproof, is slower and more difficult to break. Many, if not all, of the spam accounts there had to be opened by humans.
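A minimal sketch of the token-timing heuristic described above (all names hypothetical; a real deployment would persist tokens in a database, not an in-memory dict):

import secrets
import time

issued_tokens = {}  # token -> issue timestamp

def issue_token():
    # Called during the back-end render phase; embed the token in the page.
    token = secrets.token_urlsafe(16)
    issued_tokens[token] = time.monotonic()
    return token

def looks_human(token, min_seconds=2.0):
    # Reject unknown/replayed tokens and submissions that arrive
    # faster than a human could plausibly click and submit.
    issued_at = issued_tokens.pop(token, None)  # single use
    if issued_at is None:
        return False
    return (time.monotonic() - issued_at) >= min_seconds

tok = issue_token()
print(looks_human(tok))  # False: an instant submit looks automated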
Another Way: http://areyouahuman.com/
As long as you are using encrypted methods, verifying humanity without a crappy CAPTCHA is possible. I mean, don't ignore your headers either. These are complementary approaches.
The key is to have enough complexity to make the challenge an NP-complete problem, so that the number of ways to solve the total set of problems is extraordinary. http://en.wikipedia.org/wiki/NP-complete
When the day comes that AI can solve multiple complex human problems on its own, we will have other things to worry about than request tampering.
http://louisville.academia.edu/RomanYampolskiy/Papers/1467394/AI-Complete_AI-Hard_or_AI-Easy_Classification_of_Problems_in_Artificial
Another company doing interesting research is http://www.vouchsafe.com/play-games; they actually use games that double as reverse Turing tests (RTTs), training the RTT to be solvable only by humans!

Is there anything not good about using POST instead of GET?

I know the difference between POST and GET; however, if I used POST instead of GET, would there be anything not good besides not being up to W3C standards?
Any inefficiency, insecurity, or anything else?
See the answer from deceze:
POST requests can't be bookmarked.
In all the interviews I've done, all the teaching I've done, this is the best place to start. There's a lot more, but start with this.
Ignore anything anyone says about security. A good hacker can change POST to GET easily.
If you get this far, know that POST changes data (adds a membership, or charges a credit card), whereas GET only fetches data (searches for red shirts). Browser makers make their browsers behave differently for the results of POST vs. GET: the results of a POST have side effects that you may not want to repeat (such as adding another membership or double-charging a credit card).
If you understand THIS, then read about the POST-Redirect-GET pattern, and understand it well. (Then know that GET has a URL length limit, and that you may need to resort to POST in this case.)
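A minimal sketch of Post-Redirect-Get, assuming Flask purely for illustration (the thread names no framework; the routes are made up):

from flask import Flask, redirect, url_for

app = Flask(__name__)

@app.route("/membership", methods=["POST"])
def create_membership():
    # ...add the membership / charge the card exactly once...
    # 303 tells the browser to follow up with a GET, so refreshing
    # the result page cannot repeat the side effect.
    return redirect(url_for("membership_created"), code=303)

@app.route("/membership/created", methods=["GET"])
def membership_created():
    return "Membership created."  # safe to refresh or bookmark

if __name__ == "__main__":
    app.run()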
Never use POST requests for normal view-only pages. POST requests can't be bookmarked, sent in an email, or otherwise reused. They screw up proper navigation using the browser's back/forward buttons. Only ever use them for sending data to the server in one unique operation, and (usually) have the server answer with a redirect.
Other than that, they're not more or less efficient or secure than GET requests; they're just for a different purpose.

What is the difference between GET and POST in the context of creating an AJAX request?

I have an AJAX request that sends a GET: 'getPendingList'. This request should return a JSON string containing a list of pending requests that need to be approved. I'm a little confused about whether I should be using a GET or a POST here.
From this website:
GET requests can be cached
GET requests can remain in the browser history
GET requests can be bookmarked
GET requests can be distributed & shared
GET requests can be hacked (ask Jakob!)
So I'm thinking: I don't want the results of this GET to be cached because the pending list could change. On the other hand, using POST doesn't seem to make much sense either.
How should I think about GET and POST? I've been told that GET is the same as a 'read': it doesn't (or shouldn't) change anything on the server side. This makes sense. What doesn't make sense is the caching part; it wouldn't work for me if someone else cached my GET request, because I'm expecting the data to change.
Yahoo's best practices might be worth reading over. They recommend using GET primarily for retrieving information and using POST for updating information. In a separate item, they also recommend that you make AJAX requests cacheable where it makes sense. Check it out; it's a good read.
In short, GET requests should be idempotent; POST requests need not be.
If you are altering state, use POST - otherwise use GET.
And don't forget, when talking about caching with GET/POST, that this is browser caching.
Nothing is stopping you from caching the data server-side.
Also, in general, JSON calls should be POST (here's why).
So, after some IRC'ing, it looks like the best way to do this is to use GET (in this particular instance), but to prevent caching. There are two ways to do this:
1) Append a random string to your GET request.
This seems like a hacky way to do it, but it sounds like it might be the only solution for IE: Prevent browser caching of jQuery AJAX call result.
2) In your response from the server, set the headers to no-cache.
It's not clear what the definitive behavior is on this. Some folks (see the previous link) claim that IE doesn't respect the no-cache directives. Other folks seem to think that this works: Internet Explorer 7 Ajax links only load once.
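A minimal sketch of option 2, assuming a Flask server purely for illustration (option 1 is usually handled client-side, e.g. jQuery's cache: false setting appends the random query-string parameter for you):

import json
from flask import Flask, Response

app = Flask(__name__)

@app.route("/getPendingList", methods=["GET"])
def get_pending_list():
    body = json.dumps({"pending": []})  # hypothetical payload
    resp = Response(body, mimetype="application/json")
    # Belt and braces for older browsers, IE included.
    resp.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
    resp.headers["Pragma"] = "no-cache"
    resp.headers["Expires"] = "0"
    return resp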
