Invalid HTTP Request - linkedin.com/oauth/v2/authorization - linkedin

Suddenly LinkedIn OAuth2 stopped working! As per the instructions found here:
https://developer.linkedin.com/docs/oauth2
When invoking this:
https://www.linkedin.com/oauth/v2/authorization?response_type=code&client_id=75jdo0an3ktnbx&redirect_uri=https://app.myapp.com/account/linkedin_login&state=fregfdgfasd&scope=r_basicprofile%20r_emailaddress
Instead of a valid response I get a 400 error:
LinkedIn
Invalid HTTP Request
Could not process this client request HTTP method request for URL. Please double-check the URL (address) you used, or contact us if you feel you have reached this page in error.

I am experiencing the same problem in Chrome, but not in Edge or Firefox. I contacted LinkedIn; the reply was that they are working on it, with no estimate of when it will be solved. The new profile update seems to be botched in Chrome, OK in Edge, and the new look has still not rolled out if you are using Firefox.
LinkedIn has problems far deeper than poor coding: they have forgotten the meaning of being social in networking, and the site is becoming a pile of stale resumes, non-existent debates and bad-quality networking.

I am not fluent enough in OAuth to tell you why, but they have two different systems: OAuth and OAuth legacy.
I personally couldn't find a way to retrieve a valid token with OAuth, but I could with OAuth legacy. The main difference is the URL and the authorization window.
You are currently using https://www.linkedin.com/oauth/v2 for your API calls.
OAuth legacy uses https://www.linkedin.com/uas/oauth2.
The whole process is the same, so you won't have to change your code, just the URL.
See the OAuth legacy doc: linkedin.com/docs/oauth2-legacy
The downside is the authorization window: the user literally has to log in (email + password) before clicking the 'Authorized' button and being redirected to your callback URL.
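To make the URL swap concrete, here is a minimal sketch (Java; the client_id, redirect_uri and state values are the ones from the question, and I am assuming the legacy authorization path is /uas/oauth2/authorization, mirroring /oauth/v2/authorization -- check the legacy doc) of building the legacy authorization URL:

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class LinkedInLegacyAuthUrl {
    public static void main(String[] args) {
        // Legacy base instead of https://www.linkedin.com/oauth/v2 (assumed path, see the legacy doc)
        String base = "https://www.linkedin.com/uas/oauth2/authorization";
        String url = base
                + "?response_type=code"
                + "&client_id=75jdo0an3ktnbx"
                + "&redirect_uri=" + URLEncoder.encode("https://app.myapp.com/account/linkedin_login", StandardCharsets.UTF_8)
                + "&state=fregfdgfasd"
                + "&scope=" + URLEncoder.encode("r_basicprofile r_emailaddress", StandardCharsets.UTF_8);
        System.out.println(url);  // send the user's browser here; everything after the redirect stays the same
    }
}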
I agree, this website has something buggy. When visited from France (browser language set to fr-FR and an IP geolocated in France), the whole interface is written in Dutch...
Anyway, I hope it helps.

Related

Scraping websites via Google Cached Pages has been blocked

I'm trying to create a service that scrapes websites by using Google Cached Pages.
Example:
https://webcache.googleusercontent.com/search?q=cache:nike.com
The Response that I get is the HTML from Google cache, which is an older version of the Nike site.
It works fine as long as I run it locally on my computer, but when I deploy it to Google Cloud Platform, where I use a proxy server, I get a 403 error saying that I cannot access the information through the proxy server.
Example of response from proxy server
403. That's an error. Your client does not have permission to get URL /search?q=cache:http://nike.com from this server. (Client IP address: XX.XXX.XX.XXX)
Please see Google's Terms of Service posted at https://policies.google.com/terms. If you believe that you have received this response in error, please report your problem. However, please make sure to take a look at our Terms of Service (http://www.google.com/terms_of_service.html). In your email, please send us the entire code displayed below. Please also send us any information you may know about how you are performing your Google searches -- for example, "I'm using the Opera browser on Linux to do searches from home. My Internet access is through a dial-up account I have with the FooCorp ISP." or "I'm using the Konqueror browser on Linux to search from my job at myFoo.com. My machine's IP address is 10.20.30.40, but all of myFoo's web traffic goes through some kind of proxy server whose IP address is 10.11.12.13." (If you don't know any information like this, that's OK. But this kind of information can help us track down problems, so please tell us what you can.)
We will use all this information to diagnose the problem, and we'll hopefully have you back up and searching with Google again quickly! Please note that although we read all the email we receive, we are not always able to send a personal response to each and every email. So don't despair if you don't hear back from us! Also note that if you do not send us the entire code below, we will not be able to help you.
Best wishes, The Google Team
An article that talks about the problem: https://proxyserver.com/web-scraping-crawling/scraping-websites-via-google-cached-pages/
How can I solve this problem, and run requests from the cloud as well without being blocked? Add parameters?
Thanks :)
I guess that you should add a User-Agent property to the headers of your HTTP request, for example:

// legacy java.net API
URL u = new URL("https://www.google.com/search?q=c");
URLConnection c = u.openConnection();
c.setRequestProperty("User-Agent", "MSIE 7.0");

or

// java.net.http API (Java 11+); note: change the URI to the page you need
HttpRequest request = HttpRequest.newBuilder(new URI("https://www.google.com/search?q=c"))
        .header("User-Agent", "MSIE 7.0")
        .GET()
        .build();

These two examples are in Java, but the same concept applies in any environment, I guess.
Hope that was helpful.
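In case it helps, here is a self-contained sketch of the same idea (Java 11+ java.net.http, using the cached URL from the question; the User-Agent string is only an example, and Google may still block requests coming from data-center IPs regardless of the header):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CachedPageFetch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Request the Google cache copy with a browser-like User-Agent header
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://webcache.googleusercontent.com/search?q=cache:nike.com"))
                .header("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // 200 means the cached page came back; 403 means the request was still blocked
        System.out.println("Status: " + response.statusCode());
        System.out.println(response.body());
    }
}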

Change Basic HTTP Authentication realm and login dialog message

I want to change the message that pops up during implementation of Basic Auth. The current default message is:
Server requires a username and password.
Something that would be more accurate for me is :
Server requires an Email and Password.
My problem is that I can't find, or don't know, where this message is set and whether it can be changed. Most of the questions online are about implementing Basic Auth, but that is not my problem -- I can implement it just fine. I just need a more accurate prompt for the user.
Here is how I force an authentication window using echo:
c.Response().Header().Set(echo.HeaderWWWAuthenticate, `Basic realm="Your Email is your Username"`)
return echo.ErrUnauthorized
NB: Only Firefox shows the realm message. Both Chrome and Opera do not.
This is not related to Go but actually to browser behaviour when receiving that header.
It seems Chrome/Chromium has a known issue with this, related to the feature not being considered secure by the development team, so I don't think you'd be able to fix it on your side unless you resort to some other authentication mechanism.
See here for more details:
https://bugs.chromium.org/p/chromium/issues/detail?id=544244#c32
Thanks for the responses but they were not satisfactory. I had to do some reading on this topic.
The correct answer is that the login prompt/dialog is a response built into the user-agent/browser and cannot be changed by the server. This also explains why some browsers show realm while others don't.
According to the Wikipedia article on Basic access authentication, all the server does is:
When the server wants the user agent to authenticate itself towards the server, it must respond appropriately to unauthenticated requests.
Unauthenticated requests should return a response whose header contains an HTTP 401 Unauthorized status and a WWW-Authenticate field.
The WWW-Authenticate field for basic authentication (used most often) is constructed as following:
WWW-Authenticate: Basic realm="User Visible Realm"
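Putting that together, the echo handler above produces a response along these lines (illustrative, other headers omitted); it is then entirely up to the browser how, or whether, to render the realm text in its login dialog:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Your Email is your Username"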

Google OAuth 1.0 - set scope port to 443 (AuthSub Token has wrong scope)

I have been trying to learn OAuth (1.0) and have been testing my code by trying to access my contacts on Google. This is easy because I don't have to set up a friend/consumer relationship (Google just allows anonymous/anonymous for the consumer token) and because Google has the OAuth Playground to help me along.
So I set my code up as follows to go to
Request Token: https://www.google.com/accounts/OAuthGetRequestToken?scope=https%3A%2F%2Fwww.google.com%2Fm8%2Ffeeds%2F
Authorized Request Token: https://www.google.com/accounts/OAuthAuthorizeToken
Access Token: https://www.google.com/accounts/OAuthGetAccessToken
Everything seemed to be going well - I got the request token alright, authorized it fine, and was able to get an access token. I then tried to make a request to https://www.google.com/m8/feeds/contacts/default/full/
Only problem was, I kept getting this error: "401: AuthSub token has wrong scope"
I was confused by this because when I made the same request with the same consumer information in the OAuth Playground ( http://googlecodesamples.com/oauth_playground/index.php ) everything would work out alright.
Eventually, I found the following question: HTTP/1.1 401 Token invalid - AuthSub token has wrong scope
The top answer led me to my solution - there was code in one of the JARs I was using that was written to always set the port to 443 for https or 80 for http. When I stepped through my code and changed the port to -1, my request worked out fine and I was able to get the information I wanted.
Unfortunately, I'm not able to change the code in the JAR file, so I'm going to have to fix things on my end. In the answer to that question, 'Jonathan' said:
Another workaround would be to include the :443 in the token scope; it just has to match
I tried changing my request token query string to ?scope=https%3A%2F%2Fwww.google.com%3A443%2Fm8%2Ffeeds%2F (adding the encoded :443), and Google just refused to give me a request token - it gave me a 400 error saying Invalid scope: https://www.google.com:443/m8/feeds/. Changing https to http didn't do anything. How would I do what Jonathan (who hasn't been online in almost a year) suggested?
The fact that Google's auth scopes are URLs is basically academic -- they aren't actually serving anything useful (see for yourself), so adding a port just confuses Google. So Jonathan was incorrect in his suggestion.
The only reason they even look like URLs is so that they could be expected to be universally unique (even this is only arguably true).
So don't put the :443 in your auth scope.
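For what it's worth, here is a small sketch (Java, only to illustrate the scope encoding; the usual oauth_* signature parameters are omitted, and the anonymous consumer from the question is assumed) of the request-token URL with the scope left exactly as registered, no port:

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class RequestTokenUrl {
    public static void main(String[] args) {
        // No ":443" here -- the scope string has to match Google's registered scope character for character
        String scope = "https://www.google.com/m8/feeds/";
        String url = "https://www.google.com/accounts/OAuthGetRequestToken?scope="
                + URLEncoder.encode(scope, StandardCharsets.UTF_8);
        System.out.println(url);
    }
}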

Firefox asks for username/password on every HTTP request with Digest Authentication enabled on IIS6

I've recently enabled Digest Authentication on an intranet website/application I am creating for my company in ASP.NET.
The reason I have done so is because Windows Authentication seemed to only work for some users, and not for others. I could not figure out why nor do I know enough about IIS to try and trace the issue. After some trial and error, I found that digest authentication seemed to give me the behaviour that I wanted. That is: allow only users with a valid account on the domain to log in to the website with their credentials.
The problem now is that Firefox (3+) seems to ask the user to authenticate on every HTTP request sent to the server. This does not appear to occur in Internet Explorer (6+) or Chrome.
I've tried searching for solutions but I always arrive at dead-ends. I'll find a discussion about the issue, and every posted solution leads to a dead link...or it's on Experts Exchange and I don't have access to view to solution.
The issue appears to be related (from what I've read) to the way the different browsers send their authentication headers vs how IIS interprets them. I'm not sure what I can do to change this though? One of the solutions I had found mentioned writing an ISAPI filter to fix this, but of course the link to the finished filter was broken and I have no idea how to go about making one myself.
I've tried messing with the NTLM and other auth related strings in about:config to try and force Firefox to trust my server but that doesn't seem to work either.
From a few other sources I've read, it appears that everything should work if I switch back to Windows Authentication, but then I'm back at square one where the authentication would work only for some users and not others.
A solution for either problem would work for me, but I have very little information for the Windows Authentication issue. If someone could guide me through tracing the problem I'd gladly post more information for it as well.
Here are the URLs I've found discussing what seems like the same problem. (Sorry I couldn't make them all links, it wouldn't let me post otherwise)
support.mozilla.com/tiki-view_forum_thread.php?locale=pt-BR&forumId=1&comments_parentId=346851
www.experts-exchange.com/Software/Internet_Email/Web_Browsers/Mozilla/Q_24427378.html
channel9.msdn.com/forums/TechOff/168006-Twin-bugs-in-IIS-IE-unfair-competitive-advantage-EDIT-SOLVED/
www.derkeiler.com/Newsgroups/microsoft.public.inetserver.iis.security/2006-03/msg00141.html
This is a known bug in FF. See "Advanced digest authentication works from Internet Explorer, however we receive multiple authentication prompts on each GET request from Firefox".
IE 6 had the same bug. A potential workaround would be to re-enable "old" Digest in IIS6:
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/1d6e22ac-0215-4d12-81e9-c9262c91b797.mspx?mfr=true
Currently, if the server sends an opaque directive, the IE client will return this directive value as specified in the RFC. Unfortunately, for follow-on requests from the client where the nonce count is incremented (count 2 and beyond), the opaque directive value is not sent. This then fails authentication on the server and a 401 Unauthorized is returned. The IE client then prompts for the username and password for the new challenge and the file is retrieved.
This requires an additional round trip, and the user is prompted for credentials each time.
The RFC states that the opaque must always be sent on requests from the client.
The Digest implementation that IE6 is using is not RFC compliant (http://www.ietf.org/rfc/rfc2617.txt).
3.2.2 The Authorization Request Header
The values of the opaque and algorithm fields must be those supplied
in the WWW-Authenticate response header for the entity being
requested.
3.3 Digest Operation
A client should remember the username, password, nonce, nonce count and
opaque values associated with an authentication session to use to
construct the Authorization header in future requests within that
protection space.
Because the client is required to return the value of the opaque
directive given to it by the server for the duration of a session,
the opaque data may be used to transport authentication session state
information.
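To make the RFC language concrete, an RFC-compliant follow-on request (nonce count 2) still echoes the server's opaque value, along these lines (all values here are illustrative, not from a real capture):

Authorization: Digest username="jdoe", realm="intranet", nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093", uri="/default.aspx", qop=auth, nc=00000002, cnonce="0a4f113b", response="6629fae49393a05397450978507c4ef1", opaque="5ccc069c403ebaf9f0171e9517f40e41"

Dropping the opaque value from the second request onwards is exactly what triggers the fresh 401 and the extra credential prompt described above.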
-------- Edit addition -----
Windows Authentication seemed to only work for some users, and not for others.
How did it fail? Did you enable impersonation?

Can the HTTP response header Authorization be managed from the server?

I'm playing with HTTP Basic Authorization. As we all know, when a client gets a 401 error on requesting a page, the client must collect authorization credentials from the users (typically in the form of a pop-up window).
Subsequent requests for resources under that part of the URL will be accompanied by "Authorization: Basic [credentials]", where [credentials] is the username and password mashed together and Base64-encoded.
What I'm interested in is getting the client not to send the Authorization header, even when requesting a resource that previously asked for it.
Three important questions:
Is this possible?
If possible, does this violate the HTTP/1.1 standard (I'm not sure this case is covered by the spec)?
Which browsers support this?
Thanks for your time, Internet.
UPDATE: Apparently, this is an apache FAQ and I am SOL. Still, if you've got thoughts on this question, I'd love to hear about it. Thanks.
I don't think this is possible. The authenticated session lasts until the user shuts the browser window, and the browser will keep on blindly passing the credentials with each request under the same path.
Is there any specific reason why you want this functionality?
You can set the user and password in the URL:
http://user:password@example.com
If you use this syntax, the browser will generate the header for you.
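With that syntax the browser builds the header itself; for user:password it will send something like:

Authorization: Basic dXNlcjpwYXNzd29yZA==

which is just "user:password" Base64-encoded, exactly as if the credentials had come from the login dialog.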
