I'm using Uber's API to create a WordPress plugin so people can order taxis to their physical location.
I'm looking at integrating the Price Estimates using the endpoint here - https://developer.uber.com/v1/endpoints/#price-estimates - I have a solution in mind, but I'm having a bit of a problem implementing it.
When testing, I'm getting an error in Google Chrome Developer Tools which states "Request header field Access-Control-Allow-Origin is not allowed by Access-Control-Allow-Headers."
I suspect it's due to the fact that the testing server is insecure, as the app Origin ID and Redirect ID begin with https:// (I'm unable to add http:// ones).
Will I be able to access the price estimates over http at all? I'm using the server_token way of authentication, as I feel it's probably the best way to do it.
Any help would be gratefully received :)
The Uber API only supports, and will only ever support, HTTPS.
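As a rough, untested sketch of what the HTTPS call could look like with your server_token (TypeScript on Node 18+ for the built-in fetch; the /v1/estimates/price path and the start/end latitude/longitude parameters are taken from the price-estimates docs you linked, so double-check them there, and getPriceEstimates is just an illustrative helper name):

// Keep the server_token on the server side, never in browser code.
const SERVER_TOKEN = process.env.UBER_SERVER_TOKEN ?? "";

async function getPriceEstimates(
  startLat: number, startLng: number,
  endLat: number, endLng: number
): Promise<unknown> {
  const url = new URL("https://api.uber.com/v1/estimates/price"); // HTTPS only
  url.searchParams.set("start_latitude", String(startLat));
  url.searchParams.set("start_longitude", String(startLng));
  url.searchParams.set("end_latitude", String(endLat));
  url.searchParams.set("end_longitude", String(endLng));

  const res = await fetch(url, {
    headers: { Authorization: `Token ${SERVER_TOKEN}` },
  });
  if (!res.ok) {
    throw new Error(`Uber API returned ${res.status}`);
  }
  return res.json();
}

Making the request from your plugin's server-side code rather than from the browser also keeps the server_token private.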
I am attempting to create a program which uses a user's Spotify data. I've conducted the following steps as per the documentation:
Set up application
Registered redirect URLs on the application dashboard
Obtained Client ID and secret.
The code I'm trying to use to get authentication is below:
client_id <- "<CLIENT_ID>"
redirect_url <- "http://localhost:8888/callback/"
link <- glue::glue('https://accounts.spotify.com/authorize?client_id={client_id}&response_type=code&redirect_uri={redirect_url}&scope=user-top-read playlist-modify-public playlist-modify-private user-read-private user-library-read user-library-modify')
browseURL(link,
browser = getOption("browser"),
encodeIfNeeded = FALSE)
I was able to get it to show an authorization page once; when I tried to approve the application, I received a localhost connection error (Connection Refused). This error now happens as soon as I run the code (no authorization page is generated).
I've gone through all the usual steps to fix this issue (flushing DNS, disabling the firewall, trying different redirect URLs, resetting my router), but nothing seems to work.
Does anyone have any suggestions on what I might be doing wrong?
I think the proper way of doing OAuth 2.0 authentication is via the httr::oauth2.0_* family. The package doesn't show an example for Spotify, but it should be rather straightforward to set the "dance" up with this framework.
Type demo("oauth2-github") (or refer to the code repo on GitHub) for an example using OAuth 2.0 for GitHub and adapt the code for Spotify. Be aware that httr provides a convenience function (oauth_endpoints) for some providers (but not Spotify). Hence, you have to provide the necessary config (mainly the proper URLs) yourself using oauth_endpoint (note the missing s).
If you have particular questions, come back with some code and I am sure we can help.
I'm trying to create a service that scrapes websites by using Google Cached Pages.
Example
https://webcache.googleusercontent.com/search?q=cache:nike.com
The response that I get is the HTML from the Google cache, which is an older version of the Nike site.
It works fine as long as I run it locally on my computer, but when I deploy it to Google Cloud Platform, where I use a proxy server, I get a 403 error saying that I cannot access the information through a proxy server.
Example of the response from the proxy server:
403. That's an error. Your client does not have permission to get URL /search?q=cache:http://nike.com from this server. (Client IP address: XX.XXX.XX.XXX)
Please see Google's Terms of Service posted at https://policies.google.com/terms. If you believe that you have received this response in error, please report your problem. However, please make sure to take a look at our Terms of Service (http://www.google.com/terms_of_service.html). In your email, please send us the entire code displayed below. Please also send us any information you may know about how you are performing your Google searches -- for example, "I'm using the Opera browser on Linux to do searches from home. My Internet access is through a dial-up account I have with the FooCorp ISP." or "I'm using the Konqueror browser on Linux to search from my job at myFoo.com. My machine's IP address is 10.20.30.40, but all of myFoo's web traffic goes through some kind of proxy server whose IP address is 10.11.12.13." (If you don't know any information like this, that's OK. But this kind of information can help us track down problems, so please tell us what you can.) We will use all this information to diagnose the problem, and we'll hopefully have you back up and searching with Google again quickly! Please note that although we read all the email we receive, we are not always able to send a personal response to each and every email. So don't despair if you don't hear back from us! Also note that if you do not send us the entire code below, we will not be able to help you. Best wishes, The Google Team
Here is an article that talks about the problem: https://proxyserver.com/web-scraping-crawling/scraping-websites-via-google-cached-pages/
How can I solve this problem and run requests from the cloud as well without being blocked? Should I add parameters?
Thanks :)
I guess that you should add a property to the header of your HTTP request, for example:
URL u = new URL("https://www.google.com//search?q=c");
URLConnection c = u.openConnection();
c.setRequestProperty("User-Agent", "MSIE 7.0");
or
HttpRequest request = HttpRequest.newBuilder(new URI("https://www.google.com//search?q=c"))
        .header("User-Agent", "MSIE 7.0").GET().build();
// note: change the URI to the page you want to fetch
These two examples are in Java, but I guess the same concept applies in all environments.
Hope that was helpful.
My website's URL is 'secured' with SSL (with httpS://mywebsite.nl).
However, I found out that, for a long time, I have been using http://mywebsite.nl ('non-secured') as the 'Default URL' of my Google Analytics property and view.
I have two questions:
Did I miss data because I used http instead of https in the property and view's Default URL?
Can I CHANGE the http to httpS (in the Google Analytics property/view) without problems, or do I lose historical data because of that? (This probably also depends on the answer to Q1...) Or should I ADD a new property and/or view with an https Default URL?
Thanks!
You didn't.
You don't lose the historical data; feel free to change it.
That "default url" is for your convenience. you can do anything with it. That's just what GA uses to form full URLs from page paths only. Instead of using the hostname dimension there.
Also, GA is gracious enough to warn you whenever you can do significant changes to your core data.
While I was testing my project on localhost it was working fine, using https://cors-anywhere.herokuapp.com/ since I had run into the CORS problem. But once I deployed the site on Netlify, it gave me the error 426 (Upgrade Required), with or without https://cors-anywhere.herokuapp.com/.
These are the messages that appear on my console:
>Failed to load resource: the server responded with a status of 426 (Upgrade Required)
>Error: Request failed with status code 426
at createError.js:16
at settle.js:17
at XMLHttpRequest.<anonymous> (xhr.js:61)
I have been searching, and some people seem to have a similar problem to this. I have seen solutions like having my own server to pass the requests through, but I don't know how to do that and, correct me if I am wrong, wouldn't that be the same as using https://cors-anywhere.herokuapp.com/?
Newsapi changed their pricing model.
You can't make requests from the browser anymore; you'll have to use a backend. I had the same problem, and the easiest way around it was implementing a Node (Express) server (see the sketch at the end of this answer).
I guess the free plan is simply no longer usable in production:
"Requests from the browser are not allowed on the Developer plan, except from localhost."
Here's the updated plan page:
https://newsapi.org/pricing
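To make the Node (Express) suggestion concrete, here is a rough, untested sketch (Node 18+ for the built-in fetch; the /api/headlines route, the port, and the environment-variable name are arbitrary placeholders):

import express from "express";

const app = express();
const NEWS_API_KEY = process.env.NEWS_API_KEY ?? ""; // keep the key on the server

// The browser calls your own backend...
app.get("/api/headlines", async (_req, res) => {
  try {
    // ...and the backend calls NewsAPI, so no browser request ever hits newsapi.org
    const r = await fetch(
      `https://newsapi.org/v2/top-headlines?country=us&apiKey=${NEWS_API_KEY}`
    );
    res.status(r.status).json(await r.json());
  } catch {
    res.status(500).json({ error: "Failed to reach NewsAPI" });
  }
});

app.listen(3000, () => console.log("News proxy listening on port 3000"));

Your front end then fetches /api/headlines from your own domain, which also removes the need for cors-anywhere.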
Actually, the newsapi.org API on the Developer plan no longer works in production, because on the Developer plan CORS is only enabled for localhost:
Developer Plan $0
CORS enabled for localhost.
https://newsapi.org/pricing
But if you want to fetch news in production, there is an alternative that offers 1,000 free requests per month and also works in production:
https://newsapi.in/
Newsapi.in provides an API that has CORS enabled for all origins.
Enjoy...
As others have mentioned, Newsapi no longer allows you to make requests from the browser.
newscatcher has a no-card free tier that allows for 10,000 requests. On top of that, depending on your use case you can even mail them to increase the limits for a short span of time, or to add extra data points.
I have a bunch of programs written in ASP.NET 3.5 and 4. I can load them fine (I'm in England) and so can my England-based colleagues. My American colleagues, however, are suffering redirect loops when trying to load any of the apps. I have tried myself using Hide My Ass and can consistently recreate this issue.
I'm stumped. What could be causing a redirect loop for users in a specific country?!
The apps are hosted on IIS 6 on a dedicated Windows Server 2003. I have restarted IIS with no luck.
Edit
I should have made it clear that, unfortunately, I do not have access to the machines in the US to run Firefox Firebug/Fiddler. The message I get in Chrome is "This webpage has a redirect loop."
When you say "a redirect loop", do you mean a redirect as in an http redirect? Or do you mean you have a TCP/IP routing loop?
A TCP/IP loop can be positively identified by performing a ping from one of the affected client boxes. If you get a "TTL expired" or similar message then this is routing and unlikely to be application related.
If you really meant an HTTP redirect, try running Fiddler, or even better HttpWatch Pro, and looking at both the request headers and the corresponding responses. Better still, try comparing the request/response headers from a working non-US client with those from a failing US one.
You could take a look with Live HTTP Headers in Firefox and see what it's trying to redirect to. It could possibly be trying to redirect to a URL based on the visitor's language/country, or perhaps the DNS is not fully propagated...
If you want to post the URL, I could give you the redirect trace.
What could be causing a redirect loop for users in a specific country?!
Globalization / localization related code
Geo-IP based actions
Using different base URLs in each country, and then redirecting from one to itself. For example, if you used uk.example.com in the UK, and us.example.com in the US, and had us.example.com redirect accidentally to itself for some reason.
Incorrect redirects on 404 Not Found errors.
Spurious meta redirect tags
Incorrect redirects based on authentication errors
Many other reasons
I have tried myself using Hide My Ass and can consistently recreate this issue.
I have restarted IIS with no luck.
I do not have access to the machines in the US to run Firefox Firebug/Fiddler.
The third statement above doesn't make sense in light of the other two. If you can restart IIS or access the sites with a proxy, then you can run Fiddler, since it's a client-side application. Looking at the generated HTML and the corresponding HTTP headers will be the best way to diagnose your problem.