I just started podcasting and made my first podcast. I recorded it and am using auphonic.com to level out the audio and add the intro and outro; once this is done, it pushes the MP3 output to AWS S3. It sounds amazing, the bucket is open to the public, and the episode can be listened to via the file's public URL.
I need to create an RSS feed (for iTunes and to import podcasts onto my WordPress website via PowerPress). I spent too much time over the weekend researching this - I'm told that this can be done using AWS Lambda, but I can't seem to find any good examples of how this is done.
Can anyone offer good resources on how to create an RSS feed with AWS Lambda, or have other suggestions?
In the meantime, I've uploaded my podcast to anchor.fm, which creates the feed, and I can import it that way. Anchor.fm is "free", but it looks like I'm essentially giving away my content, which is okay for now as I get started, but long term I'd like to create my own RSS feed, as noted above.
Any help is much appreciated - browsing the AWS forums turned up several people asking this question with no answers.
Search engine results mostly surface podcasts by or about AWS, iTunes, etc., which doesn't make finding an explicit answer any easier.
Happily, I remembered enough from this Reddit comment. It may not be complete or direct enough to implement verbatim, but hopefully it helps you a bit, because it got me on the right track.
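For anyone landing here later, here's roughly the shape of what got me working - a hedged sketch, not the exact Reddit recipe: a Python Lambda that lists the MP3s in the bucket and rewrites a feed.xml alongside them. The bucket name, prefix, and feed metadata below are placeholders, and a real iTunes/PowerPress feed will also want the itunes:* channel tags this sketch omits.

```python
import boto3  # available by default in the AWS Lambda Python runtime
from email.utils import format_datetime
from xml.sax.saxutils import escape

# Placeholders - use your own bucket, prefix, and feed metadata.
BUCKET = "my-podcast-bucket"
PREFIX = "episodes/"
FEED_KEY = "feed.xml"

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Rebuild feed.xml whenever a new MP3 lands in the bucket."""
    # list_objects_v2 returns at most 1000 keys per call - plenty for
    # a podcast, but paginate if you somehow outgrow that.
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    episodes = [o for o in resp.get("Contents", []) if o["Key"].endswith(".mp3")]
    episodes.sort(key=lambda o: o["LastModified"], reverse=True)

    items = []
    for obj in episodes:
        url = f"https://{BUCKET}.s3.amazonaws.com/{obj['Key']}"
        # Derive a title from the filename; a real feed would store titles.
        title = escape(obj["Key"].rsplit("/", 1)[-1].removesuffix(".mp3"))
        items.append(
            f'<item><title>{title}</title>'
            f'<enclosure url="{url}" length="{obj["Size"]}" type="audio/mpeg"/>'
            f'<guid>{url}</guid>'
            f'<pubDate>{format_datetime(obj["LastModified"])}</pubDate></item>'
        )

    feed = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<rss version="2.0"><channel>'
        '<title>My Podcast</title>'
        f'<link>https://{BUCKET}.s3.amazonaws.com/</link>'
        '<description>Episode feed</description>'
        + "".join(items) +
        '</channel></rss>'
    )
    s3.put_object(Bucket=BUCKET, Key=FEED_KEY, Body=feed.encode("utf-8"),
                  ContentType="application/rss+xml")
    return {"statusCode": 200, "body": f"wrote {len(items)} items"}
```

Trigger it from an S3 ObjectCreated event (or on a schedule), then point PowerPress and iTunes at the resulting feed.xml URL.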
Related
Does anyone know a web crawler tool for collecting contact details from a website? Say I have www.website/contact.. I want to pull out the address, phone number, etc. There are two tools I've been looking at: crawler4j, an open-source jar for Java, and Scrapy, open source in Python. But I am finding them a bit hard to use for my scenario.
Any suggestions would be great. Thanks
You might google for "simple web crawler" to find a solution that fits you best. There are plenty of pure-Python web crawlers on the net. Starting from skeleton code, you add the database wrap-up. I think the biggest problem would be setting up the database and saving the data into it.
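For example, a bare-bones pure-Python version might look like this (requests and BeautifulSoup assumed installed; the URL and regexes are only illustrative - real phone and address formats vary wildly):

```python
import re
import requests  # assumes the requests package is installed
from bs4 import BeautifulSoup  # assumes beautifulsoup4 is installed

# Naive patterns - tune these for the formats you actually encounter.
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scrape_contacts(url):
    """Fetch one page and pull out anything that looks like contact details."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ")
    return {
        "phones": PHONE_RE.findall(text),
        "emails": EMAIL_RE.findall(text),
    }

# Hypothetical contact page, for illustration only.
print(scrape_contacts("http://www.example.com/contact"))
```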
What if there are millions of websites to crawl? Is there a way to crawl all the websites in my area?
No problem for scripting. Just put the millions of addresses in a file (or files), open it for reading in Python or another scripting language, then take link after link from it and crawl/scrape to your pleasure. You might also want to save the results in a file (CSV, JSON).
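As a sketch, assuming a hypothetical urls.txt with one address per line and reusing the scrape_contacts() function from the earlier example:

```python
import csv

# urls.txt is assumed to hold one site address per line.
with open("urls.txt") as urls, open("contacts.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["url", "phones", "emails"])
    for line in urls:
        url = line.strip()
        if not url:
            continue
        try:
            found = scrape_contacts(url)  # from the earlier sketch
        except Exception as exc:  # dead or slow sites are routine at this scale
            print(f"skipping {url}: {exc}")
            continue
        writer.writerow([url, ";".join(found["phones"]), ";".join(found["emails"])])
```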
I'd also recommend a ready-made simple Python crawler.
I am developing a system that has a database for news headlines from various sources. I have not worked with RSS before, so I am confused about a lot of things. Can anyone please point me to a good tutorial for how to develop such a thing? Thanks
In my mind, I have questions like:
1) How will I get the latest news? Do I have to check the RSS feed link every few minutes and see if it's different from the previous one?
2) Is it good practice to parse the feed XML myself, or should I use a feed-reader library of some kind?
3) Will I have any control over the feed sent to me? E.g., I only need news about Google or Intel.
RSS is a standard format; you can start learning about it at W3Schools.
About your questions.
1) If you can talk with the RSS provider, maybe they can notify you each time something new comes in; they could use, for example, XML-RPC notification. You can also ask the RSS provider how often you should check the feed (in case they cannot provide any kind of notification).
2) I think it's better to develop your own bot. There are lots of frameworks that can deal with the RSS format. In case you are working with C#, you can try the SyndicationFeed class.
3) I'm not sure I'm understanding your problem, but if the provider puts an RSS link at your disposal, you must actively poll that feed. Once you have it, you can work with the metadata to see what's interesting to you, for example by checking the "category" or "channel" nodes.
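If you're not tied to C#, here's a minimal polling loop in Python using the feedparser library - a sketch only: the feed URL and keywords are placeholders, and a real system would persist the seen IDs in your database rather than in memory. It covers questions 1 and 3 together.

```python
import time
import feedparser  # assumes the feedparser package is installed

FEED_URL = "http://news.example.com/rss"  # hypothetical feed URL
SEEN = set()  # in a real system, persist these IDs in your database

def poll(interval=300):
    """Check the feed every few minutes and act only on new, relevant items."""
    while True:
        for entry in feedparser.parse(FEED_URL).entries:
            key = entry.get("id") or entry.get("link")  # dedupe on guid, else link
            if key in SEEN:
                continue
            SEEN.add(key)
            # Filtering is usually client-side: most providers give you no
            # server-side control, so match your keywords yourself.
            if any(k in entry.get("title", "") for k in ("Google", "Intel")):
                print(entry.title, entry.link)
        time.sleep(interval)

poll()
```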
I'm in the early stages of designing an RSS app, and I'd like to include syncing to an online RSS feed service as a feature. Most such apps make use of Google Reader's feed/syncing features, but Google is now moving sync out of its Reader service, and also its API remains undocumented. Are there any alternatives to Google Reader that offer online syncing of feeds with a desktop client, and which have a documented API?
There should be an answer to this question, but I don't think there is.
I think we got lazy. Maybe it's time to roll up our sleeves and get to work.
What about Newsblur?
http://www.newsblur.com/
Don't know anything about them, but they appear to have a reasonable facsimile of a product in this vein.
Here are their API docs. http://www.newsblur.com/api
They are a subscription service, but you can have up to 64 feeds for free.
A couple of suggestions: the original web RSS reader, Bloglines, is still around, though now under new management since MerchantCircle purchased the service late last year.
The APIs may still be functional, or they may be deprecated/turned off; I haven't tried the APIs myself.
If the Bloglines API is no longer around, a better bet is Livedoor Reader (along with its open-source version, called FastLadder).
Livedoor Reader is a Japanese service, but the FastLadder pages and documentation are available in English and Japanese.
Downloadable open-source versions, for running on your own machine be it Windows, Mac OS X, or Linux, are available from here.
There's also a FastLadder Google Code source page.
There are RSS apps for both iOS and Android that sync with Livedoor Reader/FastLadder instances. Just search for LDR in their respective app stores.
I don't think there's a ready answer yet, but I think Brent Simmons has a rough spec of what could be a start:
http://inessential.com/2010/02/08/idea_for_alternative_rss_syncing_system
Basically, imagine a server that manages feed subscription lists and captures annotations for feed items. Those annotations for items would be things like (un)read, starred, shared, saved, deleted, or whatever else an app might want to attach to a feed item. It should stay simple and not fetch or process feeds themselves - other apps and libraries do that fine already.
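To make that concrete, here's a tiny Python sketch of the data model such a server might keep - all names are hypothetical, and the point is only that the server stores subscriptions and flags, never feed content:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    item_id: str                              # the item's guid or URL
    flags: set = field(default_factory=set)   # "read", "starred", "saved", ...

@dataclass
class Account:
    subscriptions: set = field(default_factory=set)  # feed URLs the user follows
    annotations: dict = field(default_factory=dict)  # item_id -> Annotation

def mark(account, item_id, flag):
    """What a client calls after fetching the feed itself - the server never fetches."""
    ann = account.annotations.setdefault(item_id, Annotation(item_id))
    ann.flags.add(flag)

# Usage (illustrative URLs): the app fetched the feed, the user starred an item.
me = Account()
me.subscriptions.add("http://example.com/feed.xml")
mark(me, "http://example.com/post-1", "starred")
```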
Feedlooks looks close too, with no ties to Google Reader - not sure about the API, though.
http://www.feedlooks.com/
Years back, I'd used a self-hosted open-source app called Gregarius - it appears to have gone missing recently.
Here's the Gregarius archive from 2010:
http://web.archive.org/web/20100925221312/http://gregarius.net/
Another contender for the do-it-yourselfer might be SimplePie.org.
Planning to start a small aggregator for a personal project; so far I have a few inquiries on gathering information for the site. I'm still clueless on where to begin. What kind of infrastructure do I need? Where do I get the feeds, and can I sort them depending on the theme of the info requested?
Any feedback is appreciated. Thanks
This is a pretty open-ended question, but here's where I'd start:
Technology for handling feeds -- WCF Syndication. Also, read and understand the RSS and Atom specs.
Infrastructure -- depends on your situation. Is it just for you and a few friends, or are you talking about building the next Google Reader? If it's smaller-scale, then look at hosting solutions like GoDaddy, DiscountASP.NET, etc. (There are hundreds of them.) If you're talking a larger-scale type of solution, look at hosting it in the cloud - Rackspace, Amazon, Windows Azure.
Where do you get feeds? Pretty much anywhere. Personally, if this site is for other users, let the users enter them in (why be in the business of trying to guess what feeds people would want to subscribe to?).
I think you need to provide more requirements in order to get more solid feedback. Start with looking at WCF Syndication and get a feel for that library in terms of how to programmatically handle RSS and Atom feeds (both subscribing and publishing). Once you understand that, I think you'll have a better handle on your next steps.
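If you'd rather prototype outside .NET before committing to WCF Syndication, the same subscribe-then-publish loop is a few lines of Python with the feedparser library - a sketch only, and the source URLs here are placeholders:

```python
import feedparser  # assumes the feedparser package is installed
from xml.sax.saxutils import escape

# Hypothetical source feeds - in practice, let your users supply these.
SOURCES = ["http://example.com/a.rss", "http://example.com/b.rss"]

def aggregate(sources, limit=50):
    """Subscribe side: pull entries from every source, newest first."""
    entries = []
    for url in sources:
        entries.extend(feedparser.parse(url).entries)
    entries.sort(key=lambda e: e.get("published_parsed") or (), reverse=True)
    return entries[:limit]

def publish(entries):
    """Publish side: re-emit the merged entries as a single RSS 2.0 feed."""
    items = "".join(
        f"<item><title>{escape(e.get('title', ''))}</title>"
        f"<link>{e.get('link', '')}</link></item>"
        for e in entries
    )
    return ('<?xml version="1.0"?><rss version="2.0"><channel>'
            "<title>My Aggregator</title><link>http://example.com/</link>"
            "<description>Merged feeds</description>"
            f"{items}</channel></rss>")

print(publish(aggregate(SOURCES)))
```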
Hope this helps.
Hi, I am currently designing a website for a client; the site will be written in ASP.NET with a CMS built in. My client has come back saying he wants to play MP4s on the site, plus be able to embed some other videos from YouTube, Vimeo, etc. in his blog. I have managed to convince my client that playing .flv would be better for obvious reasons (which he has agreed is OK). But when I went back to my coder, he said that because it's a dynamic site, it will take two days to get this working (in terms of creating the mechanics to allow my client to upload his movies, etc.).
Is this correct? My client is under the impression that it should be a simple thing to do, while my coder tells me that it's not that simple.
I am in the middle of all of this - can you help please!!!!
At the end of the day, only the coder you are using knows exactly how much effort is required here. You have to trust them. This is almost certainly not trivial. Make sure you and the coder understand exactly what's being asked for here, and that neither of you is assuming anything about how the client expects it to work.
Is your client a programmer? Non-programmers should never dictate how long a programming task should take.
If you're cowboy coding without testing, "today" would probably suffice, but any sane and professional development shop would never let this happen.
Now let's clarify what your client really told you to do:
Your dev seems to be assuming that he has to support adding/uploading videos from your CMS.
If your dev is going to use a third-party API like YouTube, 2 days sounds reasonable. If you're going to serve the videos on your own site, it'd take at least a week's worth of programming to make sure your site can take such a heavy load of streaming data -- it's stupid, not to mention highly irresponsible, to assume it could be worked out in a day.
Now, if your client is only really talking about embedding videos in blog entries or articles, that's a very trivial task: YouTube, Vimeo, and other video-sharing sites already supply the HTML embed code needed to display a video on a page. In fact it's a zero-effort task, assuming your blog entry editor properly parses the embed code or has an Edit HTML feature.
So, which one is which?
This might be a good occasion to use the <video> tag. It might simplify things, at the cost of only supporting users with recent browsers.
Two days is quite an optimistic estimate for all that you've mentioned. Maybe for embedding YouTube videos only, but upload/storage/streaming of videos on the local server is a different thing entirely.
But if you don't understand programming yourself, then you have to trust the expert you've hired to do the job for you, and you have to tell the client that this is how long it will take. The fact is that these things aren't trivial to write: there's the front-end website management interface that needs creating, and the back-end server software that manages what to do with the uploaded file. Never mind integration, and making sure it's easy for the client to run the workflow of uploading a file, incorporating that video inside some content in the CMS, and so on.
I just recently did this; you need to get VideoLAN: http://www.videolan.org/
It streams almost anything; once you set up a streaming site, it's easy!