http://api-public.netflix.com/catalog/titles/streaming
doesn't work anymore (it says "inactive account"). Is there a way to download the full catalog? I wanted to create an app for my own use.
Netflix shut down their developer program last year, which was the only way to get the proper credentials to sign a REST request URL and download the catalog... They are not going to be issuing any new developer keys, either. So, if you don't already have one, I am afraid you are out of luck.
If it is saying "account inactive", you are not authenticating your call--you cannot just paste that URL into a browser. There is a lot of information about this on the Netflix developer site here: http://developer.netflix.com/page
I have the same question. I'm able to begin downloading the entire catalog, but due to the size it crashes my browser.
I started reading through this link (http://developer.netflix.com/forum/read/157154) but to be honest I don't really understand it.
Anyone care to enlighten me on how to set up a request, add Accept-Encoding headers, and then parse the output? Sorry for the thread hijack...
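For what it's worth, here's a rough sketch of the general pattern (not the actual Netflix API, which required OAuth-signed URLs and is gone now): ask for a gzip-compressed response via `Accept-Encoding`, then parse the XML incrementally so a huge catalog doesn't blow up memory the way it crashes a browser. The URL, the `parse_catalog_titles` helper, and the XML shape are my own assumptions for illustration.

```python
# Sketch: request a large gzip-compressed XML document and parse it
# incrementally. parse_catalog_titles and the XML layout are assumptions,
# not the real Netflix catalog schema.
import gzip
import io
import xml.etree.ElementTree as ET

def parse_catalog_titles(xml_bytes):
    """Pull title names out of a catalog-style XML document."""
    titles = []
    # iterparse keeps memory bounded even on very large documents
    for _, elem in ET.iterparse(io.BytesIO(xml_bytes)):
        if elem.tag == "title":
            name = elem.get("regular") or (elem.text or "").strip()
            if name:
                titles.append(name)
            elem.clear()  # free parsed elements as we go
    return titles

# The network half would look roughly like this (signed_url assumed):
#
#   import urllib.request
#   req = urllib.request.Request(signed_url,
#                                headers={"Accept-Encoding": "gzip"})
#   with urllib.request.urlopen(req) as resp:
#       body = resp.read()
#       if resp.headers.get("Content-Encoding") == "gzip":
#           body = gzip.decompress(body)
#   titles = parse_catalog_titles(body)

sample = b"""<catalog_titles>
  <catalog_title><title regular="Title One"/></catalog_title>
  <catalog_title><title regular="Title Two"/></catalog_title>
</catalog_titles>"""
print(parse_catalog_titles(sample))  # ['Title One', 'Title Two']
```

The key point is `ET.iterparse` plus `elem.clear()`: you never hold the whole document tree in memory, which is what a browser tab effectively tries to do.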
Having the same issue, 401 for what used to work fine. Seems to be a Netflix issue, not sure what we can do.
I shared a link with someone to a firebase site that I was hosting, and it worked for some time, but then all of a sudden they said they were getting the message:
We're sorry... ... but your computer or network may be sending
automated queries. To protect our users, we can't process your request
right now. See Google Help for more information.
I was also getting it, and started checking my other firebase hosted sites and started getting the message on all of them. I didn't understand. I couldn't find a common link to understand why it was happening. So many sites linked it to a reCAPTCHA problem, but my sites don't use reCAPTCHA...
I found this link:
GitHub forum Link
The user recommended making sure the url started with "https". As soon as I typed my url with "https://" at the start, everything came up. At that point, I tried all the other URLs, and they worked too. This may be a rule for all Google-related sites.
I'm not sure if it's relevant, but Chrome often trims the URL in the "omnibox" (address bar), hiding the protocol, which makes it easy to miss when copying/pasting.
Note, I tried accessing these pages without https (by typing "http://") but my browser now seemed to correct it and force it to be "https://", so I couldn't replicate the problem again.
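If you're handing out links programmatically, a tiny normalization step avoids the problem entirely. This is just a sketch; the `force_https` helper name is mine, not from any library.

```python
# Sketch: normalize a copied/shared URL so it always uses https.
# force_https is a made-up helper name, not a library function.
from urllib.parse import urlsplit, urlunsplit

def force_https(url):
    """Rewrite http:// (or a missing scheme) to https://."""
    if "://" not in url:
        url = "https://" + url          # bare host copied from the omnibox
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(force_https("http://example.web.app/page"))  # https://example.web.app/page
print(force_https("example.web.app"))              # https://example.web.app
```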
I don't know exactly why it started, but I wish I had found this information sooner, because it was very frustrating and the info out there wasn't helpful, except for the link I posted above. Hopefully, when someone like me searches for "firebase" along with the error text "your computer or network may be sending automated queries", they'll see this and be saved a headache.
My Firebase Storage-hosted images keep getting blocked by https://firebasestorage.googleapis.com/robots.txt when crawlers try to index them. There is nothing private in these images, so is there a way to unblock them? I've tried uploading my own robots.txt to the bucket root, but that doesn't seem to work either.
I assume you're trying to use something like the Twitterbot? Would be interested to hear more about the use case.
The good news is that we just removed our robots.txt file and will deploy this change in the next backend release, so bots will be allowed to crawl your bucket soon. Happy to update this thread once things are in production :)
that's great news! In my case it's Twitterbot that's been unable to follow Firebase Storage image links, therefore unable to display my CMS' preview images in shared Twitter Cards. I think you answered a question about that on the Twitter forums (I'm commenting here because that thread's been closed). Thanks also for saying you'll report back here when the change has been rolled out; any chance you can give a rough estimate, though? Like, is it likely to be months rather than weeks (or days!)?
Cheers.
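For anyone debugging this kind of thing, you can check what a given crawler is allowed to fetch without waiting on the bot itself. The rules below mirror the kind of blanket `Disallow` I'm assuming was being served; the real file's contents may have differed.

```python
# Sketch: check whether a robots.txt would block a crawler like Twitterbot.
# The rules string is an assumption about what the bucket used to serve.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# With a blanket "Disallow: /", no bot may fetch anything:
print(rp.can_fetch("Twitterbot",
                   "https://firebasestorage.googleapis.com/v0/b/app/o/img.png"))

# Once the robots.txt is removed (i.e. empty), everything is allowed:
rp_open = RobotFileParser()
rp_open.parse([])
print(rp_open.can_fetch("Twitterbot",
                        "https://firebasestorage.googleapis.com/v0/b/app/o/img.png"))
```

Pointing `RobotFileParser.set_url(...)` at the live robots.txt and calling `read()` does the same check against the deployed file.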
I've just bought Paw and, while exploring the app, found a mention of pawprints, which appear to be some sort of saved snippets or requests or something. I registered with the website and it tells me I have no saved pawprints. I've searched all over the help files and documentation and can't actually see how to create a pawprint, or even find a clear definition of what a pawprint actually is.
So my questions are, what are pawprints and how do I use them?
Okay thanks Micha,
From the blog post (which Google couldn't find when I searched):
Last May, we launched Pawprint, a quick way to share the requests you tested in Paw. The idea of getting a short link that you can paste anywhere, sharing exactly what you see on screen, was very appealing and something we wanted to do almost since the beginning of Paw.
That's handy to report bugs to the API provider (often those backend guys sitting on the other side of the room), or to show to the consumers (often the client folks playing with smartphones and web browsers) how your PATCH endpoint works.
In Paw, just hit ⌘/, and a permalink will be copied. Paste it anywhere from Slack and GitHub tickets, to StackOverflow answers.
You'll also get client code generated in many languages, plus cURL or HTTPie command lines, to run the same request from code.
Apparently the Paw website is being updated now to make this clearer.
So one of the many, many tasks I'm faced with daily as a developer is trying to get our support department as much information about the end user's environment as possible.
Browser version, current cookies, plugins, etc. It would be handy to point people to a specific page on our site and say "copy/paste this to support".
In the past I've always written these by hand, and used third party tools (such as BrowserHawk) to get as much info as possible.
How does everyone else deal with getting this information from end users? Is there a nice package I'm unaware of that gives a detailed dump of a user's environment without having to get them to run an app?
Just to clarify I'm not looking at an elmah style reporting (which is very helpful as well!) but this mainly for the client side stuff.
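You can get a decent chunk of this for free server-side, since the browser already sends headers on every request. Here's a rough sketch of that half; `build_support_report` and its fields are my own invention, and purely client-side details (plugins, screen size) would still need a bit of JavaScript on the page.

```python
# Sketch of the server-side half: turn the headers a browser already sends
# into a copy/paste-able support dump. Function and field names are made up
# for illustration.
def build_support_report(headers, cookies):
    """Format request headers and cookie names as a plain-text support dump."""
    lines = ["=== Support info (paste this into your ticket) ==="]
    for key in ("User-Agent", "Accept-Language", "Accept-Encoding", "Referer"):
        lines.append(f"{key}: {headers.get(key, '(not sent)')}")
    # Only cookie names, to avoid pasting session values into tickets
    lines.append("Cookies: " + (", ".join(sorted(cookies)) or "(none)"))
    return "\n".join(lines)

report = build_support_report(
    {"User-Agent": "Mozilla/5.0 (Windows NT 10.0) ...",
     "Accept-Language": "en-GB,en;q=0.9"},
    {"session_id": "abc", "theme": "dark"},
)
print(report)
```

Listing only cookie names rather than values is deliberate: users will paste this text into emails and tickets, so session tokens shouldn't appear in it.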
Some months ago I saw that the Google Ads page had a nice report button. What this button does is capture the page as-is using JavaScript and send you a report with all the details, including an image of the actual page.
So I found this library, http://html2canvas.hertzen.com/, that does the same thing.
Here is an example page with this feedback feature:
http://hertzen.com/experiments/jsfeedback/
So I added this feedback option and ask users to point out the issue and send the feedback; that way, for each page I have a very nice image of what is going wrong.
On top of that, I log and check all errors, and fix them promptly.
I am transferring videos from my web server to YouTube.
Everything was working great.
The authentication was via ClientLogin with my Google username & password, but now, all of a sudden, the URL (https://www.google.com/media/accounts/ClientLogin) is returning a 302:
302 Moved
The document has moved here.
I saw some, but not a lot, of similar issues but nothing that solved this bizarre issue.
At first I thought the authentication method had been deprecated, but that doesn't seem to be the case; it is just advised against.
I'm searching all day long and I'm still clueless.
Thank you all very much.
We have a big pilot in 5 days and it's a really big problem..
BTW
If anyone knows of another method of programmatically uploading videos to YouTube through an API, then ClientLogin is no longer a problem ;)
The URL above, https://www.google.com/media/accounts/ClientLogin, is not valid.
It should be https://www.google.com/accounts/ClientLogin (without the 'media'),
or with a Google service name in the path (like 'youtube' in my case).
Now I know 302 really well
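For the record, here's a sketch of the corrected (long-deprecated, now defunct) ClientLogin call: no '/media/' segment, with the service named in the POST body. The helper name is mine; the field names (`Email`, `Passwd`, `service`, `source`, `accountType`) follow the old ClientLogin protocol.

```python
# Sketch of a ClientLogin POST with the corrected URL. ClientLogin itself
# is long dead; this only illustrates the request shape that used to work.
from urllib.parse import urlencode

CLIENTLOGIN_URL = "https://www.google.com/accounts/ClientLogin"

def build_clientlogin_request(email, password, service="youtube",
                              source="my-uploader"):
    """Return the URL and the form-encoded body for a ClientLogin POST."""
    body = urlencode({
        "Email": email,
        "Passwd": password,
        "service": service,                  # 'youtube' for YouTube API calls
        "source": source,                    # identifies your application
        "accountType": "HOSTED_OR_GOOGLE",
    })
    return CLIENTLOGIN_URL, body

url, body = build_clientlogin_request("me@example.com", "secret")
print(url)
# The actual POST would have been roughly:
#   urllib.request.urlopen(urllib.request.Request(url, data=body.encode()))
```

Anything still on ClientLogin needed to move to OAuth 2.0, which is what the YouTube Data API uses now.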