I have been trying to scrape all the biography wiki pages for weeks. The problem is I can't find a way to tell whether a page is about a person or about something else.
For instance the following pages:
view-source:https://en.wikipedia.org/wiki/Albert_Einstein
view-source:https://en.wikipedia.org/wiki/Spider
look pretty similar as far as their HTML goes. I am sure there must be some keyword that tells you whether the page is about a person.
Has anyone faced the same problem?
Thanks in advance =)
I'm not sure there is a definite way to tell but you could build up a list of indicators that you think the page might be about a person and then match on these.
For example, on the Albert Einstein page there is a section for "Born" and "Died" in the right-hand pane. If these are present we can be pretty sure the article is about a person (although if you require "Died" you'll only catch dead people). These headings aren't consistent, however, so you would need to match against one or more of them to build up confidence that the article really is about a person. For example, https://en.wikipedia.org/wiki/Lionel_Messi doesn't contain the "Born" heading, but it does contain "Date of birth".
Alternatively, you could do some natural language parsing to try to figure out whether the main text of the page is talking about a person. Lots of mentions of "he" or "she" probably mean the article is about a person.
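To make that concrete, here is a rough Python sketch of both heuristics. It is only an illustration: the label list, the thresholds, and the crude tag stripping are all assumptions you would want to tune.

    import re
    import urllib.request

    # Rough heuristics: look for person-ish infobox labels and count gendered
    # pronouns. The label list and thresholds are guesses that need tuning.
    PERSON_LABELS = ("Born", "Died", "Date of birth", "Occupation", "Spouse")

    def looks_like_a_person(title):
        req = urllib.request.Request(
            "https://en.wikipedia.org/wiki/" + title,
            headers={"User-Agent": "person-page-checker/0.1"})
        html = urllib.request.urlopen(req).read().decode("utf-8", errors="replace")
        text = re.sub(r"<[^>]+>", " ", html)      # crude tag stripping
        label_hits = sum(label in text for label in PERSON_LABELS)
        pronoun_hits = len(re.findall(r"\b(?:he|she|his|her)\b", text, re.I))
        return label_hits >= 2 or pronoun_hits > 50

    print(looks_like_a_person("Albert_Einstein"))   # expected: True
    print(looks_like_a_person("Spider"))            # hopefully: False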
How's it going?
I've found a lot of detailed answers about specific problems with RSS feeds, but I can't really figure out how you actually USE one, basically.
Could someone explain?
I see the RSS feed icon at the top of a lot of Wordpress sites, including my own, but when I click it, it just seems to be a long XML file. I don't know what to do with it, or even why it would be there.
How do you use this? Are you meant to hit it with an API request, or is there a particular kind of software that you use?
Cheers
Before telling you what RSS is, let me describe a common problem that many people have.
Say there are a bunch of sites that you really like, and it's part of your daily routine to go through them. They might be a news site, your friend's blog, but also craigslist because you're currently looking for a new house, and maybe a weather site to know how late you should stay at work :)

The first thing you do when you get to work is open your web browser and open these sites in new tabs. It's not particularly cumbersome because there are just four sites. But think about it: maybe there is a new blog that you're starting to like, and oh, these cartoons are really funny. Maybe there is also a bit of financial info that you're interested in, and the pictures that your brother is posting to Flickr every couple of days: they just had a new baby! Also, as you're trying to buy a house, you'd love a little raise, and you've figured out that your boss really likes it when you tell her that you've read about your company in the news, or when you tell her about a new competing product... There is also StackOverflow. You're desperately trying to get this "expert" badge and boost your reputation: this may help with your boss too, or even when you're looking for a new job.

Opening all these tabs is starting to take a toll and you keep forgetting an important one. You're also slowly getting tired of the different reading experience that all these sites have: small fonts, large fonts, ads all over... etc. Now you have a problem.
Imagine there is a tool that does the following: you can tell it what sites you care about, and then, this tool will look up the new stuff for you. It will show everything in a nice looking format. It should also help you identify what's really worth seeing ASAP or maybe have some kind of "serendipity" mode that you can go into and find interesting stuff that you would have missed otherwise. The tool will obviously send you to the original sites should you need more info about any particular story or classified...
This tool exists. It's usually called a reader, mostly because it lets you read more things online. Oftentimes you'll see them called "RSS readers", because RSS is what they use to get the information from all these sites. RSS is the pipe. As a user you should probably not have to know about it, but it's what the readers depend on. In an ideal world, when you're on a site you like, you would just hit a "follow" button and be redirected to your reader of choice. Later, when new content is added, you'll get it straight in your reader.
To get into a bit more technical detail, RSS (like Atom) is an XML flavor. It's a collection (mostly in reverse chronological order) of entries. Entries have at least a title and a link to the actual story. They should also include a unique identifier and can have other elements like a description, an image, tags, author information... etc.
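To make the structure concrete, here is a tiny made-up feed and a sketch of reading it with Python's standard library (just an illustration, not any particular reader's code):

    import xml.etree.ElementTree as ET

    # A minimal, made-up RSS 2.0 document: a channel containing entries ("items"),
    # each with a title, a link, a unique identifier (guid) and a description.
    rss = """<?xml version="1.0"?>
    <rss version="2.0">
      <channel>
        <title>Example blog</title>
        <link>https://example.com</link>
        <item>
          <title>First post</title>
          <link>https://example.com/first-post</link>
          <guid>https://example.com/first-post</guid>
          <description>A short summary of the entry.</description>
        </item>
      </channel>
    </rss>"""

    root = ET.fromstring(rss)
    for item in root.iter("item"):
        print(item.findtext("title"), "->", item.findtext("link"))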
RSS is great because it's content agnostic. It can be used to represent a lot of different things (as described in the little story above) and it decouples the publishing platform from the subscribing platform: neither even knows the other exists. RSS is their lingua franca.
I wrote a blog post about this very question not long ago. Here's the link if you're interested in reading my personal interpretation. https://www.rss.com/whatisrss
The XML file is the content of a page with none of its presentation markup; it represents the data in its rawest, most descriptive form. Many readers can interpret XML feeds from a variety of places and format all of the data in their own way.
It could be a project well beyond my skills right now, but I've got around one full month to spend on it, so I think I can do it. What I want to build is this: gather news about a specific subject from various sources. Easy, right? Just get the RSS feeds and display them on a page. Well, I want something more advanced: duplicates removed and a customized presentation (that is, being able to define/change the format in which the news headlines are displayed).
I've played a bit with Yahoo Pipes and some other tools and I am facing two big problems:
Some sources don't provide RSS feeds. How do I create one?
What's the best method to find and remove duplicates? I thought about comparing the headlines and checking whether there is a match greater than, say, 50%. Is that good practice, though? (There's a rough sketch of the kind of comparison I mean below.)
Please add any other things (problems, suggestions, whatever) I might not have considered.
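Something like this (Python, with an arbitrary 0.5 cut-off) is the kind of comparison I had in mind:

    from difflib import SequenceMatcher

    # Compare two headlines and call them duplicates if they are more than
    # 50% similar. Both the measure and the threshold are up for debate.
    def is_duplicate(headline_a, headline_b, threshold=0.5):
        ratio = SequenceMatcher(None, headline_a.lower(), headline_b.lower()).ratio()
        return ratio >= threshold

    print(is_duplicate("Apple unveils new iPhone at keynote",
                       "Apple unveils its new iPhone during keynote"))  # True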
Duplication is a nasty issue. What I eventually ended up doing:
1. Strip out all HTML tags except for links (I started with regex and got burned; I eventually moved to custom parsing to remove the tags)
2. Strip out all whitespace
3. Case-desensitize
4. Hash all that with MD5.
Here's why you leave the link in:
A comment might be as simple as "Yes, this sucks". "Yes, this sucks" could be a common comment. BUT if the text "this sucks" is linked to different things, then it is not a duplicate comment.
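If it helps, here is a rough Python sketch of that fingerprinting pipeline; it uses the standard html.parser instead of the custom tag stripping I described, and it keeps link targets in the text so the two "this sucks" comments above hash differently:

    import hashlib
    import re
    from html.parser import HTMLParser

    class TextAndLinks(HTMLParser):
        """Collects text content plus the href of each link, drops all other tags."""
        def __init__(self):
            super().__init__()
            self.parts = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.parts.append(dict(attrs).get("href", ""))
        def handle_data(self, data):
            self.parts.append(data)

    def fingerprint(fragment):
        extractor = TextAndLinks()
        extractor.feed(fragment)                  # 1. keep text + link targets
        text = "".join(extractor.parts)
        text = re.sub(r"\s+", "", text)           # 2. strip out all whitespace
        text = text.lower()                       # 3. case-desensitize
        return hashlib.md5(text.encode("utf-8")).hexdigest()  # 4. hash with MD5

    # Same words, different link targets -> different fingerprints.
    print(fingerprint('Yes, <a href="http://a.example/x">this sucks</a>') ==
          fingerprint('Yes, <a href="http://b.example/y">this sucks</a>'))  # False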
Additionally, you will find that HTML entity escaping is weird in RSS feeds. You would think that a stray < in the content would be double-encoded as &amp;lt;, but it is not: it arrives simply as &lt;. The trouble is that real HTML tags are escaped the same way, so a literal <p> tag also shows up as &lt;p&gt;. I eventually copied the list of HTML tags that Mozilla Firefox recognizes and matched against those by hand to tell actual tags apart from stray angle brackets.
Creating an RSS feed from HTML is quite nasty and I can only point you to services such as Spinn3r, which are fantastic at de-duplication and content extraction. These services typically use probability-based algorithms that are above me. I know of one provider that got away with regexing pages (They had to know that a certain page was MySpace-based or Blogger-based) but they did not perform admirably.
You might want to try to use YQL to scrape a webpage that doesn't provide RSS; a single YQL select statement against its html table can pull the relevant parts out of the page.
About duplicates, take a look at this pipe.
Customized presentation: if you want it truly customized you'll have to manipulate the pipe results yourself, e.g. get the output as JSON and manipulate it with JavaScript, or process it server-side.
I am tasked with writing a program that, given a search term and the HTML source of a page of search results from some unknown search engine (it can really be anything: a blog, a shop, Google, eBay, ...), needs to build a data structure of the results containing "what's in the results": a title for each result, the "details" link, the position within the results, etc. It is not known whether the results page contains any of this data at all, or whether there are any search results. The goal is to feed the data structure into another program that extracts meaning.
What I am looking for is not BeautifulSoup or a regex, but rather some clever ideas or algorithms for interpreting the HTML source. What do I do to find out which part of the page constitutes a single result item? How do I filter out the markup noise to extract the important bits? What would you do? Pointers to fields of research covering what I'm trying to do are also greatly appreciated.
Thanks, Simon
I doubt there exists a silver-bullet algorithm that will just work, without any training, on arbitrary search result output.
However, this task can be solved, and is actually solved in many applications, just with a different approach. First you have to define the general structure of a single search result item based on what you're actually going to do with it (it could be a name, date, link, description snippet, etc.), and then write a number of HTML parsers that extract the necessary fields from the search result output of particular websites.
I know it's not a super sexy solution, but it's probably the only one that works. And it's not rocket science: writing parsers is actually extremely simple, and you can make a dozen per day. If you look at the HTML source of a search results page, you'll notice that the results are typically very structured and marked with specific div sections or class attributes, so they are very easy to find in the document. You don't even have to use a complicated HTML parsing library for that; something grep-like will be enough.
For example, on this particular page your question starts with <div class="post-text"> and ends with </div>. Everything in between is the post text, with some HTML formatting that you may want to remove along with extra spaces and "\n". And this <div class="post-text"> appears on the page only once.
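As a rough illustration of that grep-like approach (the class marker is a placeholder for whatever the target site actually uses, and the end-of-block detection is deliberately naive):

    import re
    import urllib.request

    def extract_blocks(url, marker='class="post-text"'):
        """Pull out every block the site wraps in a known class attribute."""
        html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
        blocks = []
        start = html.find(marker)
        while start != -1:
            end = html.find("</div>", start)            # naive: ignores nested divs
            text = re.sub(r"<[^>]+>", " ", html[start:end])
            blocks.append(re.sub(r"\s+", " ", text).strip())
            start = html.find(marker, end)
        return blocks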
Once your retrieval application goes to a larger scale, you will find that there isn't that big a variety of search engines across different sites, and you will be able to reuse parsers you've already written for sites that use similar search engines.
The only thing you have to remember is built-in self-testing. Sites tend to upgrade and change their design from time to time. If your application is going to live for a while, include in your parsers some logic that checks the validity of their results and notifies you whenever the search output changes and is no longer compatible with your parser. Then you modify that particular parser or write a new one.
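A minimal sketch of what that self-check could look like (the required fields are assumptions; a real system would notify a maintainer rather than just print a warning):

    def check_parser_output(results, site_name):
        """Flag a parser whose output looks broken: empty, or missing key fields."""
        broken = not results or any(
            not r.get("title") or not r.get("link") for r in results)
        if broken:
            print("WARNING: parser for %s may no longer match the site layout" % site_name)
        return not broken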
Hope this helps.
I currently implement a replace function in the page render method which replaces commonly used strings - such as replacing [cfe] with the root of the customer front end. This is because the value may differ depending on the version of the site - for example, the root of the image folder ([imagepath]) is /Images on development and live, but /Test/Images on test.
I have a catalogue of products for which I would like to change [productName] into a link to the catalogue page for that product. I would like to go through the entire page and replace all instances of [someValue] with the relevant link. Currently I do this by looping through all the products in the product database and replacing [productName] with the link to the catalogue page for that product. However, this is limited to products which exist in the database: "links" to products which have been removed won't be replaced, so the raw [someValue] will be displayed to the user. This does not look good.
So you should be able to see my problem from this. Does anyone know of an easy way to achieve what I'm after? I could use regexes, but I don't have much experience with them. If that is the easiest way, using "For Each Match As String In Regex.Matches(blah, blah)", then I'm willing to look into it further.
However, at some point I would like to take this further - for example setting page layouts such as three columns with an image top right using [layout type="3colImageTopRight" imageURL="imageURL"]Content here[/layout]. I think I could roughly do this now, but I can't figure out how to handle the case where the imageURL is itself a token, say [Image:Product01.gif]: using regex.match("[[a-zA-Z]{0,}]") would, I think, match just [layout type="3colImageTopRight" imageURL="[Image:Product01.gif] and never get to the end of the layout tag. Obviously the above wouldn't quite work as written - I haven't included double quotes in the match string or anything - but you should get the general idea of what I'm trying to do.
Does anyone have any ideas or pointers which could help me with this? Also if this is not strictly token replacement then please point me to what it is, so I can further develop this.
Aristos - hope reexplaining this resolves the confusion.
Thanks in advance,
Regards,
Richard Clarke
@RichardClarke - I would go with regular expressions. They're not as terrible to learn as you might think, and with a bit of careful use they'll solve your problem.
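To sketch the idea - the example below is Python purely for illustration; in .NET, the Regex.Replace overload that takes a MatchEvaluator delegate plays the same role of calling a function for each match. The product data and token pattern here are made up:

    import re

    # Replace [productName] tokens with catalogue links in a single pass,
    # falling back to plain text when the product no longer exists.
    products = {"Widget": "/catalogue/widget.aspx", "Gadget": "/catalogue/gadget.aspx"}

    def replace_token(match):
        name = match.group(1)
        if name in products:
            return '<a href="%s">%s</a>' % (products[name], name)
        return name  # removed product: show the name rather than the raw [name]

    page = "Our best sellers are [Widget] and [Gadget], but [Gizmo] is discontinued."
    print(re.sub(r"\[([A-Za-z0-9]+)\]", replace_token, page))

The point is that the replacement is driven by whatever the regex finds in the page, rather than by looping over the whole product table, so missing products can be handled gracefully in one place.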
I've always found this a very useful tool.
http://derekslager.com/blog/posts/2007/09/a-better-dotnet-regular-expression-tester.ashx
goes nicely with a cheat sheet ;-)
http://www.addedbytes.com/cheat-sheets/regular-expressions-cheat-sheet/
Good luck.
I'm venturing into web programming for the first time and would like a nice way to display a frequency indicator of some data, in the form of a tag cloud.
For example, pretend I have some simple data of three types of pets: Dog, Cat, Monkey.
There are 5 Dogs, 27 Cats and 101 Monkeys.
Given this data, what's the best way to make a tag cloud that visually indicates I have way too many monkeys, not as many cats, and that I definitely need to obtain a few more dogs?
Update: It would be great if the solution was actually discussed and answered on stackoverflow. Linking externally is good to help support the answer, but leaving the links as an answer is not necessarily what stackoverflow is about. Anyone can google to find what has been linked. The hope is that stackoverflow will be the place to find the answer. This is just a request to help make stackoverflow better. :)
I don't believe this is the answer you're looking for, but there is a Cloud Control for ASP.NET available at CodeProject:
http://www.codeproject.com/KB/aspnet/cloud.aspx
It looks fairly easy to use.
--
Edit: I should probably credit my source. The link above was found on the following web page:
http://www.technacular.com/2007/04/22/how-to-create-a-tag-cloud/
This page contains some additional general information related to building a Tag Cloud. Best of luck!
You need to first decide your metric (i.e. what you want to measure, in this case the number of pets per type), and second how you map that metric onto a set of classes. These classes are equivalent to the styles you attach to the tags.
A quite simple mapping would be x[i] / sum(x), giving a ratio between 0 and 1. Define subranges on [0, 1], for example four ranges: 0..0.25, 0.25..0.50, and so on. Find the index of the subrange (0, 1, 2, 3) and assign the tag the CSS class "tagX", where X is that index.
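A small sketch of that mapping, using the pet counts from the question (the tag0..tag3 class names are just an assumed convention):

    # Map each tag's share of the total onto one of four CSS classes, tag0..tag3.
    counts = {"Dog": 5, "Cat": 27, "Monkey": 101}
    total = sum(counts.values())
    num_classes = 4

    for name, count in counts.items():
        ratio = count / total                                    # in [0, 1]
        index = min(int(ratio * num_classes), num_classes - 1)   # clamp ratio == 1.0
        print('<span class="tag%d">%s</span>' % (index, name))

Note that dividing by the total tends to push most tags into the lowest bucket when one tag dominates; dividing by the largest count instead spreads the classes out more.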
There are many approaches and techniques...
Clustering Algorithms for Tag Clouds
Design Tips for Building Tag Clouds
I hope this helps.
https://web.archive.org/web/20210616112719/https://aspnet.4guysfromrolla.com/articles/102506-1.aspx