Grab Xbox Live friends list from Bungie - ASP.NET

Hey all, I'm trying to grab and display a friends list from Bungie's LiveFriends.aspx page:
https://www.bungie.net/Stats/LiveFriends.aspx
and display it in a desktop application, in VB or something similar.
How would I be able to do this? Does it have anything to do with ASP? Are there any tutorials that show how to grab and display this kind of information?

If you're really interested in consuming information from Xbox Live, you can apply to the XBL Community Developer program for free here: http://www.xbox.com/en-US/community/developer/
There you'll be given API access that is quicker and more reliable than parsing data from the Bungie site.

You'll need to fetch the data ("scrape" it) through something like a WebRequest. That will give you the raw HTML or whatever it outputs.
I'm sure, without even looking, that it uses some kind of login as well, which you will have to support. I would guess that involves posting the credentials to some page and extracting the cookie that comes back, which you then have to pass along with every subsequent request. Cookies travel as HTTP headers.
The first thing you'll have to do is examine the HTML returned and work out how to process it to get the information you want. I would use Chrome and its excellent developer tools for this, or another browser like Opera or Firefox with similar capabilities. This will also help you figure out how the session cookie is handled.
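A minimal C# sketch of that approach, assuming a login form posted to a hypothetical login.aspx (the real URL, form field names, and parsing logic would have to be discovered with the browser's developer tools):

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class FriendsListScraper
{
    static void Main()
    {
        // Shared cookie container so the session cookie returned by the login
        // request is sent automatically with later requests.
        var cookies = new CookieContainer();

        // Hypothetical login step -- the real URL and form field names
        // must be discovered with the browser's developer tools.
        var login = (HttpWebRequest)WebRequest.Create("https://www.bungie.net/login.aspx");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;
        byte[] body = Encoding.UTF8.GetBytes("user=me&password=secret");
        using (Stream s = login.GetRequestStream())
            s.Write(body, 0, body.Length);
        using (login.GetResponse()) { } // response cookies land in 'cookies'

        // Now fetch the friends list page with the same cookies.
        var page = (HttpWebRequest)WebRequest.Create("https://www.bungie.net/Stats/LiveFriends.aspx");
        page.CookieContainer = cookies;
        using (var response = (HttpWebResponse)page.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string html = reader.ReadToEnd();
            // Parse the raw HTML here (regex or an HTML parser) to pull out friend names.
            Console.WriteLine(html.Length + " characters of HTML received");
        }
    }
}
```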

Maybe 360voice can help? Haven't looked at the API enough to know if it has what you need.
http://360voice.gamerdna.com/forum/topic.asp?TOPIC_ID=3

Related

Fetching data from an API for a ReasonReact app

I'm learning ReasonReact and I would like to fetch data from an API to use in my component. However, the official Reason and ReasonReact sites say nothing about this, and I haven't found anything searching on Google either. How can I do it?
You can use the existing bindings to HTTP client libraries, e.g.:
https://redex.github.io/package/bs-fetch
https://redex.github.io/package/bs-axios
The former works in browser only, the latter works in both browser and Node.
In general, if you're looking for a way to do something, Redex is a great place to look.

Prevent automated tools from accessing the website

The data on our website can easily be scraped. How can we detect whether a human is viewing the site or a tool?
One way might be to measure how long a user stays on a page, but I do not know how to implement that. Can anyone help me detect and prevent automated tools from scraping data from my website?
I use a security image in the login section, but even then a human can log in and then hand things over to an automated tool. And when a reCAPTCHA image appears after some period of time, the user can solve it and again let an automated tool continue scraping.
I developed a tool to scrape another site myself, so I just want to prevent the same thing from happening to my site!
DON'T do it.
It's the web, you will not be able to stop someone from scraping data if they really want it. I've done it many, many times before and got around every restriction they put in place. In fact having a restriction in place motivates me further to try and get the data.
The more you restrict your system, the worse you'll make user experience for legitimate users. Just a bad idea.
It's the web. You need to assume that anything you put out there can be read by human or machine. Even if you can prevent it today, someone will figure out how to bypass it tomorrow. Captchas have been broken for some time now, and sooner or later, so will the alternatives.
However, here are some ideas for the time being.
And here are a few more.
And my favorite: one clever site I've run across has a good one. It asks a question like "On our 'about us' page, what is the street name of our support office?" It takes a human to find the "About Us" page (the link doesn't actually say "about us"; it says something similar that a person would figure out), and then to find the support office address (different from the main corporate office and the several others listed on the page) you have to look through several entries. Current computer technology can't figure that out any more than it can manage true speech recognition or cognition.
A Google search for "captcha alternatives" turns up quite a bit.
This can't be done without risking false positives (and annoying users).
How can we detect whether a human is viewing the site or a tool?
You can't. How would you handle tools that parse the page on behalf of a human, like screen readers and other accessibility tools?
For example, one way is to measure how long a user stays on a page, from which we can detect whether a human is involved. I do not know how to implement that; I'm just thinking about the approach. Can anyone help with detecting and preventing automated tools from scraping data from my website?
You won't detect automated tools, only unusual behavior. And before you can define unusual behavior, you need to know what's usual. People view pages in different orders, browser tabs let them do tasks in parallel, and so on.
I should make a note that if there's a will, then there is a way.
That being said, I thought about what you've asked previously and here are some simple things I came up with:
Simple naive checks might be user-agent filtering and checking. You can find a list of common crawler user agents here: http://www.useragentstring.com/pages/Crawlerlist/ (see the sketch at the end of this answer).
You can always display your data in Flash, though I do not recommend it.
Use a captcha.
Other than that, I'm not really sure there's anything else you can do, but I would be interested in seeing the other answers as well.
EDIT:
Google does something interesting: if you're searching for SSNs, after the 50th page or so they show a captcha. That raises the question of whether you can intelligently time how long a user spends on your site, or, if you introduce pagination into the equation, how long a user spends on a single page.
Building on that, you could enforce a minimum delay before another HTTP request is accepted, and at that point it might be beneficial to "randomly" require a captcha. What I mean by this is that maybe one HTTP request goes through fine, but the next one requires a captcha. You can switch those up as you please.
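A minimal ASP.NET (C#) sketch of the naive user-agent check and the request-counting idea above; the crawler substrings, the 50-request threshold, and the Captcha.aspx page are all made-up placeholders:

```csharp
using System;
using System.Web;

// Naive bot filtering, e.g. called from a page's Page_Load or a handler.
public static class NaiveBotChecks
{
    // A few substrings taken from common crawler user-agent strings.
    static readonly string[] BotAgents = { "bot", "crawler", "spider", "curl", "wget" };

    public static void Check(HttpContext context)
    {
        string agent = (context.Request.UserAgent ?? "").ToLowerInvariant();
        foreach (string marker in BotAgents)
        {
            if (agent.Contains(marker))
            {
                context.Response.StatusCode = 403;   // refuse obvious crawlers
                context.Response.End();
            }
        }

        // Count requests in this session; past a threshold, demand a captcha.
        int count = (int)(context.Session["requestCount"] ?? 0) + 1;
        context.Session["requestCount"] = count;
        if (count > 50)                              // arbitrary threshold
        {
            context.Session["requestCount"] = 0;
            context.Response.Redirect("~/Captcha.aspx");  // hypothetical captcha page
        }
    }
}
```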
Scrapers steal the data from your website by requesting URLs and reading the source code of your pages. The following steps can be taken to at least make scraping a bit more difficult, if not impossible.
Load data via Ajax requests; this makes the data harder to parse and takes extra effort to discover the URLs that actually serve it.
Use cookies even for normal pages that do not require authentication: create a cookie when the user visits the home page and require it for all inner pages (a rough sketch of this check follows this answer). This makes scraping a bit more difficult.
Serve the content in an encoded or encrypted form and decode it at load time with JavaScript. I have seen this on a couple of websites.
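A rough sketch of that cookie check in ASP.NET (C#); the cookie name "visited" and the redirect target are arbitrary choices for illustration:

```csharp
// On the home page (e.g. in Page_Load): set a marker cookie.
Response.Cookies.Add(new HttpCookie("visited", "1"));

// On inner pages: refuse requests that arrive without the marker cookie,
// which scrapers hitting inner URLs directly will not have sent.
if (Request.Cookies["visited"] == null)
{
    Response.Redirect("~/Default.aspx");   // bounce back to the home page
}
```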
I guess the only good solution is to limit the rate at which the data can be accessed. It may not completely prevent scraping, but at least you can limit the speed at which automated scraping tools work, hopefully to below a level that discourages scraping the data.
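One simple way to implement such a limit in ASP.NET (C#), keeping an in-memory last-request timestamp per client IP; the one-second window is an arbitrary choice, and a real deployment would need shared state across servers:

```csharp
using System;
using System.Collections.Concurrent;
using System.Web;

public static class SimpleRateLimiter
{
    // Last request time per client IP (an in-memory map; fine for a sketch,
    // not for a web farm).
    static readonly ConcurrentDictionary<string, DateTime> LastSeen =
        new ConcurrentDictionary<string, DateTime>();

    public static bool Allow(HttpRequest request)
    {
        string ip = request.UserHostAddress ?? "unknown";
        DateTime now = DateTime.UtcNow;
        DateTime last = LastSeen.GetOrAdd(ip, DateTime.MinValue);
        LastSeen[ip] = now;
        // Allow at most one request per second per IP.
        return (now - last) > TimeSpan.FromSeconds(1);
    }
}

// Usage, e.g. at the top of a page or handler:
// if (!SimpleRateLimiter.Allow(Request)) { Response.StatusCode = 429; Response.End(); }
```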

Best practice to implement back functionality in Flex

I'm not using deep linking, so all the pages/states appear as http://site.com
Is it still possible to implement back functionality in this case? It looks like Flex has a browser history feature, but I'm not sure whether it would still work given that all the pages are at site.com.
The other option is that I would save information in the main file itself so I can go to the last page and retrieve whatever data was on it.
Anyone can advise what's generally the best practice way to handle back functionality?
Unfortunately I don't know of any other way to implement browser history (i.e., back/forward) without using deep linking (e.g., example.com/#foo). This is how the Flex browser history components implement it.
However, one thing you could do, if you need to keep the URL static, is stick your Flex application in a frame. That way the outer frame would still show example.com while the inner frame is at, for example, example.com/#widget=42.
Check out the Flex docs on Deep Linking and the Browser Manager.
The best practice and generally the only way to do it is to use deep linking.
It's best because users can add a section of your application to their Favorites, and that's an important feature to have, IMHO.
Do you need to keep your website at just http://site.com, or is that just how it looks now because deep linking is not implemented?
Check out Angela's Accessible Rich Internet Application tutorial, which gives instructions on how to quickly and easily set up deep linking using UrlKit. The added bonus of that site is that the tutorial is delivered in the environment described, i.e. you can view the source of an "in production" example of the implementation.

Is it possible to get the favorites list from the browser?

Is it possible to read a user's favorites list using asp.net?
This would violate the visitor's privacy. You would need a browser component that they install locally to do such a thing.
ASP.NET is technically a server-side technology... it does allow you to output HTML and JavaScript, but to be more precise your question should read:
"Is it possible to read a user's favorites (bookmarks) via JavaScript?"
since you would need a client-side script to do this.
And the answer to that question, unfortunately for you, is no.
God, I hope not. And if so they need to fix that immediately.
I don't think so. ASP.NET is on the server side, not the client side, so I'm sure it's impossible to do without a plugin (I'm not even sure you can do it using JavaScript). On Firefox, some social networks provide a specific plugin.
Is it possible? Yes.
Is it possible without the user having to accept a component install? No.
Is it possible using only ASP.NET without running code on the client? No.
You would have to create something like an ActiveX component or a Java applet that you run in the browser. The component would have to check for different browsers and different versions of each browser to know where the favorites would be stored. Some browsers may store them in such a way that it's practically impossible to read them.

What's the fastest way to get the info I need from MSDN?

In PHP, if I need info on a function I can just type http://php.net/function-name. If the function doesn't exist it performs a search of all functions. The documentation for every function is usually 1 page long and contains all relevant info needed (params, return types, sample code, comments, special cases).
When I search for something on MSDN it usually takes 2-3 clicks before I can even get to what I was looking for.
Since I spend a good amount of time trying to extract very basic information from MSDN, is there a website or service that condenses this information for quicker easier access?
For example, I know for Java there is http://javadocs.org/ which makes it easier to find documentation (http://javadocs.org/Color redirects to http://java.sun.com/j2se/1.5.0/docs/api/java/awt/Color.html)
Does anything like this exist already? Thanks.
Use Google and specify site:msdn.microsoft.com
http://www.google.com/search?q=system.net.mail+site%3Amsdn.microsoft.com
Note: I also use this method to search SO -- Google using site:stackoverflow.com
I assume you use Visual Studio. So if you want to find out something about ClassX, for example, just place the cursor on it and press F1.
If I do this on the FileInfo class in Visual Studio, I get http://msdn.microsoft.com/en-us/library/system.io.fileinfo.aspx.
I find it much easier to use google and just type in something like "msdn [what I am looking for]". It tends to come up with better results than trying to fiddle my way through MSDN's website.
google ==> site:msdn.microsoft.com + keyword :)
There is even a custom Google search for that: MSDN Search
Ask StackOverflow
Use Google
Note that using the search box in MSDN isn't even in most people's answers.
Use an OpenSearch plug-in for your browser, like these ones. IE7, Firefox, and (I think) Chrome use them. Chrome's implementation integrates with the address box, whereas Firefox and IE have a dedicated search box in the top corner.
As others have said, MSDN falls into the category of sites of which it can be said: "Google searches X better than X searches X". Notable peers include Wikipedia and StackOverflow.
To make using Google easier, Google lets you create custom search engines that are not only limited to searching within a specific site, but also allow you to set other requirements. For example, if you click on my name to see my SO user profile, you'll see I have MSDN and StackOverflow search links in the box at top right. I don't have it working just yet, but eventually I'll have the StackOverflow search set up to only return question pages and exclude the user pages and tag pages.
That said, one thing you're missing is that when you're using a Microsoft language, you're probably also using Visual Studio. And if that is the case, the IntelliSense hints have the information you need 90% of the time. So in that sense it's even better than PHP, because you don't even need to open a web browser.
MSDN uses the following URL format for the most recent version of the documentation:
http://msdn.microsoft.com/en-us/library/[Namespace.Class.Etc].aspx
In Firefox, you can create a bookmark of the form:
http://msdn.microsoft.com/en-us/library/%s.aspx
Give it a keyword, e.g. "msdn", and then type in your location bar:
msdn system.web.ui.webcontrols
And FF will take you to:
http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.aspx
Chrome will also let you set up a search like this, and you can also create a custom search provider for Internet Explorer using the test url:
http://msdn.microsoft.com/en-us/library/TEST.aspx
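Since the URL is just the lower-cased, fully qualified type name, you can also build it in code. A small C# sketch that opens the docs for a given type (the helper name is my own; Process.Start is assumed to hand the URL to the default browser, as it does on the .NET Framework):

```csharp
using System;
using System.Diagnostics;

class MsdnLookup
{
    static void OpenDocs(Type type)
    {
        // e.g. System.IO.FileInfo -> http://msdn.microsoft.com/en-us/library/system.io.fileinfo.aspx
        string url = "http://msdn.microsoft.com/en-us/library/"
                     + type.FullName.ToLowerInvariant() + ".aspx";
        Process.Start(url);   // hands the URL to the default browser
    }

    static void Main()
    {
        OpenDocs(typeof(System.IO.FileInfo));
    }
}
```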
The MSDN Developer Library is vast; I agree that it can be cumbersome to find things manually, so I don't bother.
In fact, usually if you just specify the function name, Google will list MSDN among the first two or three results.
There is also the little known ...
http://www.google.com/microsoft.html
... not MSDN specific, but it works. :)
