Is there a way to add a feed or something to a website to show the upcoming football games (who's playing and at what time)?
I was thinking something like this: http://www.bbc.com/sport/football/fixtures
I think they have an RSS feed but I can't find how to utilise it. Is this even the right thing? I've never used any sort of feeds before.
I have found this: market.mashape.com/heisenbug/champions-league-live-scores. I'm not sure whether it displays the upcoming matches, but it's the closest thing I've found. Most of the sports APIs I've found seem to charge quite a lot per month. This one has a free version, but I don't fully understand it. It says 50 free per month, but 50 free what? Requests? If so, is it one 'request' per update (which is every 10 minutes on this plan)? Then it would only last just over 8 hours (50 requests × 10 minutes ≈ 8.3 hours)?
Thanks
I found two web pages that supply the information you need. Since you didn't add source code, I think this answer will suit you better.
First, go to ScoresPro and pick any of the available sports RSS feeds.
For this example, I selected Soccer.
Then go to FeedWind, paste the feed URL, and press ENTER.
This is the result.
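If you'd rather render the fixtures yourself instead of embedding the FeedWind widget, you can also parse the feed directly. A minimal sketch in Python using the feedparser library; the feed URL is a placeholder for whichever ScoresPro feed you picked, and the exact fields available vary by feed:

```python
# pip install feedparser
import feedparser

# Placeholder: paste the soccer feed URL you copied from ScoresPro.
FEED_URL = "PASTE_THE_SCORESPRO_FEED_URL_HERE"

feed = feedparser.parse(FEED_URL)

# Titles usually carry the fixture (who's playing); the published
# date usually carries the kick-off time, but this varies by feed.
for entry in feed.entries:
    print(entry.get("title", "(no title)"), "-", entry.get("published", "(no date)"))
```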
I made an app that uses the Google Maps API. When you make a request for a place, the API returns the 5 most recent reviews (each with its reviews.rating) plus an overall rating. How many reviews is that overall rating calculated from? How can I get this information?
I calculated the average of the 5 most recent reviews' ratings, and it does not match the overall rating, so the rating must be based on more reviews. How can I find out how many reviews it is calculated from? Thanks
Edit: in this question (4 years ago): how to get total number of reviews from google reviews, I tried the user_ratings_total solution, but it doesn't work.
Edit 2: Is it really possible that nobody knows?
It is now possible to get the total number of reviews using a Place Details API call: https://developers.google.com/places/web-service/details#fields
As of Jan 2019, it returns the user_ratings_total field: https://developers.google.com/maps/documentation/javascript/releases#335
which contains the total number of reviews.
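A minimal sketch of such a request in Python using the requests library (the place_id and API key below are placeholders):

```python
# pip install requests
import requests

API_KEY = "YOUR_API_KEY"                   # placeholder
PLACE_ID = "ChIJN1t_tDeuEmsRUsoyG83frY4"   # placeholder place_id

resp = requests.get(
    "https://maps.googleapis.com/maps/api/place/details/json",
    params={
        "place_id": PLACE_ID,
        "fields": "rating,user_ratings_total",
        "key": API_KEY,
    },
)
result = resp.json()["result"]
print(result["rating"], result["user_ratings_total"])
```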
If this isn't a long term project, give my API a shot:
http://reviewsmaker.com/api/google/?business=mumbai%20cafe&api_key=4a2819f3-2874-4eee-9c46-baa7fa17971c
You can just swap the business name; I built it for US businesses, though by the looks of your images you're looking to do this for CA. user_ratings_total was indeed removed from Places, but the GMB API still has access to this data; I just tweaked it a little bit.
Here's a tip on how you can get the data: if you create a custom RSS feed with the URLs for the places (not sure what language you're using), you can parse through the URLs and get the metadata out; or, if you use Google CSE (Custom Search Engine), the PageMap for the 'review' and 'aggregatedreviews' schemas is easy to parse through as well. These are just clever workarounds; it's a shame they omit this data from the official API, as it was very useful.
I have a large set of podcast feed URLs which I'm periodically polling to check for updates. I'm really struggling to find a robust way to detect if a feed has changed that doesn't have any false positives. I'd like to be able to detect not just if there is a new episode, but also if an existing episode was updated.
RSS and Atom feeds provide pubDate, lastBuildDate or updated elements. However, I'm finding these frequently misused so that the feed is actually inserting the current date time into these fields each request. This makes them difficult to rely on to detect changes.
My next thought was to strip all date information from the podcasts, then MD5 hash the feed contents. I can then compare the feed hashes to detect changes to the feeds.
This seems to work for about 90% of the cases. However, there are still hundreds of podcasts that insert dynamic data into their feeds.
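For reference, the normalize-then-hash approach looks roughly like this sketch (the set of date tags to strip is illustrative, not exhaustive):

```python
import hashlib
import re

# Elements whose text tends to change on every request; extend as needed.
DATE_TAGS = ("pubDate", "lastBuildDate", "updated")

def feed_fingerprint(xml_text: str) -> str:
    """Strip volatile date elements, then hash what remains."""
    for tag in DATE_TAGS:
        xml_text = re.sub(rf"<{tag}>.*?</{tag}>", "", xml_text, flags=re.DOTALL)
    return hashlib.md5(xml_text.encode("utf-8")).hexdigest()

# A feed has changed when feed_fingerprint(old_xml) != feed_fingerprint(new_xml).
```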
One podcast has the following as their podcast cover art:
http://erikglassman.hipcast.com/albumart/1000.1439649026.jpg
Where 1439649026 is what I assume is a timestamp. This second number changes with each request of their feed.
This is starting to seem like a losing battle. If I can't reliably trust the date fields of a podcast feed, and if some percentage of podcasts insert dynamic data into their feed text, how can I reliably detect changes to a feed in a robust way?
Everything you say is true, so it's not a good idea to try to detect changes at the feed level; instead, look for them at the item level.
That generally works, and when it doesn't, the feed can't be used by anyone, so the source of the feed is likely to have fixed the problem already. That's why I think this approach works so well.
I've been writing feed readers for as long as they have existed. My current product is called River4; it's available as open source under the MIT License, so you can use it as example code for this and other issues.
This is where it checks if an item is new:
https://github.com/scripting/river4/blob/master/river4.js#L1411
That might move around as the code changes, so look for a routine called getItemGuid. It shows you how to get a value that uniquely identifies the item. I use this code for my podcatcher, http://podcatch.com/, and it seems to catch the new items, and doesn't get false positives.
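In Python terms, the item-level idea amounts to something like this sketch (the fallback chain is a rough approximation of what getItemGuid does, not a literal port):

```python
# pip install feedparser
import feedparser

def item_id(entry):
    """Best-available unique id for a feed item: the guid if present,
    falling back to link, then title (roughly what getItemGuid does)."""
    return entry.get("id") or entry.get("link") or entry.get("title")

def new_items(feed_url, seen_ids):
    """Return entries not seen before, updating seen_ids in place."""
    feed = feedparser.parse(feed_url)
    fresh = [e for e in feed.entries if item_id(e) not in seen_ids]
    seen_ids.update(item_id(e) for e in fresh)
    return fresh
```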
Hope this helps! :-)
I think this question has been answered here before, but I could not find the desired topic. I am a newbie at web scraping. I have to develop a script that will take all the Google search results for a specific name, then grab the related data for that name; if more than one result is found, the data will be grouped according to their names.
All I know is that Google has some kind of restriction on scraping, and they provide a Custom Search API. I have not used that API yet, but I hope to get all the resulting links for a query from it. However, I don't understand what the ideal process would be for scraping the information from those links. Any tutorial link or suggestion is very much appreciated.
You should have provided a bit more about what you have been doing; it does not sound like you even tried to solve it yourself.
Anyway, if you are still on it:
You can scrape Google in two ways: one is allowed, one is not.
a) Use their API; you can get around 2k results a day.
You can raise that to around 3k a day for 2,000 USD/year, and higher still by getting in contact with them directly.
You will not be able to get accurate ranking positions from this method; if you only need a lower number of requests and are mainly interested in getting some websites for a keyword, that's the choice.
Starting point would be here: https://code.google.com/apis/console/
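For completeness, a query against the Custom Search JSON API looks roughly like this sketch (the API key and cx engine id are placeholders you create in that console):

```python
# pip install requests
import requests

API_KEY = "YOUR_API_KEY"      # placeholder: created in the API console
CX = "YOUR_ENGINE_ID"         # placeholder: your Custom Search Engine id

resp = requests.get(
    "https://www.googleapis.com/customsearch/v1",
    params={"key": API_KEY, "cx": CX, "q": "the name to search for"},
)
for item in resp.json().get("items", []):
    print(item["title"], item["link"])
```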
b) You can scrape the real search results
That's the only way to get the true ranking positions, whether for SEO purposes or to track website positions. It also allows you to get a large number of results, if done right.
You can Google for code, the most advanced free (PHP) code I know is at http://scraping.compunect.com
However, there are other projects and code snippets.
You can start off at 300-500 requests per day, and this can be multiplied with multiple IPs. Look at the linked article if you want to go that route; it explains things in more detail and is quite accurate.
That said, if you choose route b) you break Google's terms, so either do not accept them or make sure you are not detected. If Google detects you, your script will be blocked by IP bans or captchas, so not getting detected should be a priority.
Any simple code snippet to GET the number of subscribers to any feed URL?
Thanks!
First off, I'll start by saying there is no easy way to do this. You, however, do have several options.
Option 1: Use FeedBurner. These statistics are not 100% accurate, but it's by far the least painful method; note that it only works going forward, not retroactively, so you can't see how many people are already subscribed.
Option 2: Use Google Webmaster Tools to calculate the number of subscribers.
Option 3: I found this Perl script on rsslib.com that parses your server logs to figure out the number of subscribers.
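The idea behind that kind of script, sketched in Python: count the distinct clients that requested the feed path in your access log. This assumes the common Apache/Nginx combined log format and a feed path of /feed.xml, both of which you'd adjust:

```python
# Count unique clients requesting the feed in an access log.
# Assumes combined log format: the client IP is the first field and
# the request path is the seventh ('IP - - [date] "GET /path HTTP/1.1" ...').
FEED_PATH = "/feed.xml"  # placeholder: your feed's path

subscribers = set()
with open("access.log") as log:
    for line in log:
        parts = line.split()
        if len(parts) > 6 and FEED_PATH in parts[6]:
            subscribers.add(parts[0])  # client IP

print(f"Approximate subscriber count: {len(subscribers)}")
```

IP counting is only an approximation, of course: NAT hides multiple readers behind one address, and aggregators like FeedBurner fetch once on behalf of many subscribers.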
When you are using services like FeedBurner, you can easily see the number of subscribers. If you are simply hosting the RSS feed yourself, it will be pretty hard to identify returning visitors; you would need to include some kind of token identifying each user and match it against your server records.
I'd say you should use something like Feedburner and you are good to go.
Have you used pipes.yahoo.com to quickly and easily do... anything? I've recently created a quick mashup of StackOverflow tags (via rss) so that I can browse through new questions in fields I like to follow.
This has been around for some time, but I've just recently revisited it and I'm completely impressed with its ease of use. It's almost to the point where I could set up a pipe and then give a client privileges to go in and edit feed sources... and I didn't have to write more than a few lines of code.
So, what other practical uses can you think of for pipes?
It's nice for aggregating feeds, yes, but the other handy thing to do is filtering the feeds. A while back, I created a feed for Digg (before Digg fell into the Fark pit of despair). I didn't care about the overwhelming Apple and Ubuntu news, so I filtered those keywords out of Technology, which I then combined with the Science and World & Business feeds.
Anyway, you can do a lot more than just combine things. If you wanted to be smart about it, you could set up per-subfeed and whole-feed filters to give granular or over-arching filtering abilities as the news changes and you get bored with one topic or another.
The one thing I have really used Y! Pipes for (rather than just playing around with it) is to clean up item titles, merge and finally de-dupe the feeds I got from querying multiple blog search engines with the same search term. This is something I've done in several very different contexts, e.g. for my own ego surfing, in another case for the planet site set up by some conference's organisers to keep an eye on their conference's buzz, etc. Highly recommended.
You can do tons of things with pipes. For example for sites like digg or reddit, you can make one to bypass the site and go directly to the linked article (rewriting the RSS).
I also like to filter webcomics' feeds to keep just the comics, and then mix them all into one feed.
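If you want the same merge-and-filter behaviour outside of Pipes, it's easy to reproduce in code. A sketch in Python with the feedparser library (the feed URLs and filter keywords are placeholders):

```python
# pip install feedparser
import feedparser

FEEDS = [
    "http://example.com/comic-a.rss",  # placeholder feed URLs
    "http://example.com/comic-b.rss",
]
BLOCKED = ("apple", "ubuntu")          # placeholder filter keywords

merged = []
for url in FEEDS:
    for entry in feedparser.parse(url).entries:
        title = entry.get("title", "").lower()
        if not any(word in title for word in BLOCKED):
            merged.append(entry)

# Newest first, where a parsed publication date is available.
merged.sort(key=lambda e: tuple(e.get("published_parsed") or ()), reverse=True)
for entry in merged:
    print(entry.get("title"))
```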
I've taken the liberty of copying your pipe and rearranging it a bit so that it's easier to add and remove tags:
Yahoo Pipe: StackOverflow Merge Tags
Tags are now listed in a string builder, so to add a tag you just have to hit the + button on the string builder and type in the tag preceded by a slash.
Well, Pipes is really fast and useful.
Other effective uses might be:
1) combine many feeds into one, then sort, filter and translate it.
2) geocode your favorite feeds and browse the items on an interactive map.
3) power widgets/badges on your web site.
4) grab the output of any Pipes as RSS, JSON, KML, and other formats.
This is by no means a comprehensive list.
One of my favorite things to do with Yahoo! Pipes is to aggregate multiple craigslist feeds into a single feed. You can make a feed out of any category or search criteria on craigslist. I live in a university town and am always on the lookout for tickets to sporting events, for example. I have a half-dozen craigslist searches all being combined into a single feed via Yahoo! Pipes. This works a lot better for me than simply monitoring the entire "Tickets" category; it filters out most of the tickets I am not interested in. Yes, this is another aggregating-feeds example, but the craigslist usage is quite valuable given the ability to aggregate feeds that are themselves based upon searches.
I've used Pipes to translate blogs into English. I would have liked to use it to fetch the full text for blogs which only provide a summary of the content in the feed, but unfortunately they don't provide any input which fetches the content from a parameterizable source :-(.
Just stumbled on this while looking for ways to connect Excel to Pipes. A bit necromancer-ish, but here goes.
One thing I've done, is take an HTML page (science data) which has links to tons of CSV files for a bunch of Army Corps measurement stations. Each station has a big table of datafiles, all organized individually by month and year. I use YQL to parse out and organize the links to the individual CSV files in a way that Pipes can read them. Then, I use that as input into a Pipe, which has a user input for "Station" and "Date."
Using this, I can go to the Pipes page, type in those values and get the values only for a specific station and date, rather than have to find the station on a website, find the year and month in a big table, click the link, open the CSV file, and find the values for a day within that month's worth of data. I can even change the pipe to specify the hour, and the parameter, and then get a single value returned.
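Outside of Pipes, that per-station lookup is only a few lines of code. A sketch (the CSV URL scheme and column names are entirely hypothetical placeholders; the real Army Corps files are organised differently):

```python
# pip install requests
import csv
import io
import requests

# Hypothetical placeholders: the real station file URLs and headers differ.
STATION = "SOME_STATION"
URL = f"http://example.com/data/{STATION}/2015-08.csv"
TARGET_DATE = "2015-08-15"

rows = csv.DictReader(io.StringIO(requests.get(URL).text))
for row in rows:
    if row.get("date") == TARGET_DATE:
        print(row)  # that day's values for this station
```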
Now, I wish I could figure out how to program Excel so that I can use "=yahoo_function(station, datetime)" to place that value automatically into a cell, given the values of other columns!