I have a web application that collects user IP addresses when users log in. I want to draw all collected IP addresses on a map that shows a pin on each country from which one or more users have logged in.
For example, say we have the following IP addresses in a database:
xx.ee.rr.tt = USA
cc.ee.ww.aa = UK
I need a world map with pins on the USA and the UK.
There are a few good examples of mapping IP to country (or Country IP Blocks).
Here is a free RESTful API service you can call to do this - you're limited to 10k queries per hour.
The project is on GitHub in case you want to copy and run the code locally, either to get around the 10k constraint or to speed things up.
Here is a sample output of a query on my IP address (I'll mask it, but you'll get the idea):
IP 198.70.xxx.xxx
Country United States
Region Ohio
City Xxxxx
Latitude and Longitude 4x.xxx -8x.xxx
Area and Metro codes 330 -
Here is a CodeProject article on consuming the WSIP2Country service. (If you go this route, note that the GetCountryCode method was deprecated; it's recommended to use GetCountryCodeAuth instead - see the comments at the end of the article.)
This works if you are only trying to derive location from the IP address - per your comment, you also want to take the next step and pin it to a map. There are some great ways to incorporate Google Maps for this.
This site's IP Mapper method/code has been used quite successfully by many (it boasts: "Asynchronously Geocode IP Addresses on Google Maps"). They have a way to add a list of IP addresses in their example code:
var ipArray = ["111.111.111.111", "222.222.222.222", "123.123.123.123"];
IPMapper.addIPArray(ipArray);
They utilize the Google Maps API, so you should be able to modify their JavaScript code, using the Google Maps API docs as a guide, if you want anything special.
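If you would rather roll it yourself instead of using IPMapper, here is a minimal sketch of the same idea. It assumes a hypothetical geolocation endpoint (geo.example.com, returning latitude/longitude as JSON) - substitute whichever lookup service you actually use; the marker calls are the standard Google Maps JavaScript API:

var map = new google.maps.Map(document.getElementById("map"), {
    center: { lat: 20, lng: 0 }, // rough world-centered view
    zoom: 2
});

function pinIPs(ipArray) {
    ipArray.forEach(function (ip) {
        // Hypothetical lookup service returning { latitude, longitude } as JSON
        fetch("https://geo.example.com/json/" + ip)
            .then(function (res) { return res.json(); })
            .then(function (loc) {
                // Drop a standard Google Maps marker at the resolved location
                new google.maps.Marker({
                    position: { lat: loc.latitude, lng: loc.longitude },
                    map: map,
                    title: ip
                });
            });
    });
}

pinIPs(["111.111.111.111", "222.222.222.222"]);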
I'm working on a project that requires certain statistics from another website, so I've created an HTML scraper that automatically gets this data every 15 minutes. However, I've stopped the bot for now, because their terms of use mention that they do not allow it.
I really want to respect this, especially if there's a law prohibiting me from taking this data. But I've contacted them by email several times without a single answer, so I've come to the conclusion that I'll simply grab the data, provided it is legal.
On certain forums I've read that it IS legal, but I would much rather get a more "precise" answer here on Stack Overflow.
And let's say that this is in fact not illegal: would they have any software to spot my bot making several connections every 15 minutes?
Also, when talking about taking their data, we're talking about a single number for each "team", and I will then convert that number into our own format.
I'll quote Pablo Hoffman's (Scrapinghub co-founder) answer to "What is the legality of web scraping?", which I found on another site:
First things first: I am not a lawyer and these comments are solely based on my experience working at Scrapinghub; please seek legal assistance accordingly.
Here are a few things to consider when scraping public data from websites (note that the following addresses only US law):
As long as they don't crawl at a disruptive rate, scrapers do not breach any contract (in the form of terms of use) or commit a crime (as defined in the Computer Fraud and Abuse Act).
A website's user agreement is not enforceable as a browsewrap agreement, because companies do not provide sufficient notice of the terms to site visitors. Scrapers access website data as a visitor, following paths similar to a search engine's. This can be done without registering as a user (and explicitly accepting any terms).
In Nguyen v. Barnes & Noble, Inc., the court ruled that simply placing a link to terms of use at the bottom of a webpage is not sufficient to "give rise to constructive notice." In other words, there is nothing on a public page implying that merely accessing the information is subject to any contractual terms. Scrapers give neither explicit nor implicit assent to any agreement, and therefore breach no contract.
Social networks, for example, assign the value of becoming a user (based on the call-to-action on the public page) as the ability to: i) gain access to full profiles, ii) identify common friends/connections, iii) get introduced to others, and iv) contact members directly. As long as scrapers make no attempt to perform any of these actions, they do not gain "unauthorized access" to the service and thus do not violate the CFAA.
A thorough evaluation of the legal issues involved can be seen here: http://www.bna.com/legal-issues-raised-by-the-use-of-web-crawling-and-scraping-tools-for-analytics-purposes
There should be a robots.txt file in the root folder of the site.
It specifies which paths scrapers are forbidden to crawl and which are allowed (along with any acceptable crawl delays).
If that file doesn't exist, anything is allowed, and you take no responsibility for the website owner's failure to provide that info.
Also, here you can find some explanation of the robots exclusion standard.
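As a rough illustration, here is a minimal Node.js sketch of such a check - it only handles Disallow rules for the wildcard user agent, so treat it as a starting point rather than a full implementation of the standard:

const https = require("https");

function isPathAllowed(robotsTxt, path) {
    const lines = robotsTxt.split("\n").map((l) => l.trim());
    let appliesToUs = false;
    for (const line of lines) {
        if (/^user-agent:/i.test(line)) {
            // Track whether the current rule group targets all crawlers ("*")
            appliesToUs = /^\s*\*\s*$/.test(line.split(":")[1]);
        } else if (appliesToUs && /^disallow:/i.test(line)) {
            const rule = line.split(":")[1].trim();
            if (rule && path.startsWith(rule)) return false; // path is forbidden
        }
    }
    return true; // no matching Disallow rule
}

https.get("https://example.com/robots.txt", (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => {
        console.log(isPathAllowed(body, "/stats/teams"));
    });
});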
Storm is a free and open source distributed realtime computation system. It receives streams of data and does processing on them. What if Storm goes down and part of the data never goes through it, meaning the calculations would fall out of sync?
How can Storm solve this problem? If it can't, how could one solve this problem?
A similar question would be: How can I read old data that existed before Storm was added?
How can I read old data that existed before Storm was added?
The data must be stored somewhere (say, HDFS). You write a Spout which accepts data from some transport (say, JMS). Then, you would need to write replay code to read the appropriate data from HDFS, put it on a JMS channel, and Storm would deal with it. The trick is knowing how far back you need to go in the data, which is probably the responsibility of an external system, like the replay code. This replay code may consult a database, or the results of Storm's processing, whatever they may be.
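A bare-bones sketch of what that replay code might look like (in JavaScript to match the rest of this page; a real Storm setup would be JVM-based, and readEventsSince/publish are stand-ins for your storage and transport APIs):

// Re-read archived events and push them back onto the channel the Spout
// consumes from. How far back to go comes from an external checkpoint.
async function replay(channel, checkpointTime) {
    var events = await readEventsSince(checkpointTime); // e.g. a scan of HDFS files
    for (var event of events) {
        await channel.publish(event); // the Spout picks these up like live traffic
    }
}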
Overall, the 'what if it goes down' question depends on what type of calculations you are doing, and on whether your system deals with back pressure. In short, much of the durability of your streams is dependent on the messaging/transport mechanism that delivers to Storm.
Example: If you need to simply transform (xslt) individual events, then there is no real-time failure and no state issue if Storm goes down. You simply start back up and resume processing.
The system that provides your feed may need to handle the back pressure. Messaging transports like Kafka can handle durable messaging, and allow Storm to resume where it left off.
The specific use case that results in "calculations would not be in sync" would need to be expounded upon to provide a better, more specific answer.
It is a known fact that there are three blocks of IPv4 Addresses that were chosen to be reserved for private networks:
10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
(as specified by RFC 1918). However, although I can sort of see why 10.0.0.0 would be a natural choice, I can think of no particular reason why 172.16.0.0 and 192.168.0.0 were chosen among all the possibilities. I tried googling this but got nothing, and the RFC document does not provide any explanation either. Was it really just a random decision?
As stated by ganeshh.iyer:
10.0.0.0/8 was the old ARPANET, which they picked up on 01-Jan-1983. When they shut down the ARPANET in 1990, the 10.0.0.0/8 block was freed. There was much argument about whether there should ever be private IP space, given that a goal of IPv4 was to be universal to all hosts on the net.
In the end, practicality won out, and RFC 1597 reserved the now well-known private address spaces. When ARPANET went away, the 10.0.0.0/8 allocation was marked as reserved, and since it was known that the ARPANET was truly gone (the hosts having been moved to MILNET, NSFNET or the Internet), it was decided that this was the best Class A block to allocate.
Note: Class A. This was before CIDR, so the Class A, B and C private address netblocks needed to come out of the correct IP ranges.
I know that 172.16.0.0/12 was picked because it offered the largest contiguous block of Class B (/16) addresses in the IP space that was in a reserved block. 192.0.0.0/24 was always reserved for the same reason that 0.0.0.0/8 and 128.0.0.0/16 were reserved (the first blocks of the old Class C, A and B network ranges), so assigning 192.168.0.0/16 out as private fit well -- 192.0.2.0/24 was already TEST-NET, where you could use the addresses in public documentation without fear of someone trying them (see example.com for another example).
Quoted from:
https://supportforums.cisco.com/thread/2014967
https://supportforums.cisco.com/people/ganeshh.iyer
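As a practical footnote, checking whether an address falls inside one of the three RFC 1918 ranges listed above only needs the first two octets; a small JavaScript sketch:

// True if the dotted-quad IPv4 address is in 10/8, 172.16/12 or 192.168/16
function isPrivateIPv4(ip) {
    var parts = ip.split(".").map(Number);
    var a = parts[0], b = parts[1];
    return a === 10 ||                        // 10.0.0.0 - 10.255.255.255
        (a === 172 && b >= 16 && b <= 31) ||  // 172.16.0.0 - 172.31.255.255
        (a === 192 && b === 168);             // 192.168.0.0 - 192.168.255.255
}

console.log(isPrivateIPv4("172.20.1.1")); // true
console.log(isPrivateIPv4("8.8.8.8"));    // false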
I’m building an online multiplayer billiards game and I’m struggling to decide on the best approach to multiplayer physics simulation. I have thought of three possible scenarios, each having its own advantages and disadvantages, and I would like to hear the opinions of those who have either implemented something similar already or have experience with multiplayer online games.
1st Scenario: Physics simulation on the clients. The player taking the shot sends the angle and power of the shot to the server, and the server updates all clients with these values so they can simulate the shot independently.
Advantages:
Low server overhead
Disadvantages:
Problems with synchronization. Clients must run the exact same simulation regardless of their frame rate. (Possibly solvable with a clever algorithm like the one described here.)
Cheating. Players can cheat by tweaking the physics engine. (Possible to detect by comparing ball positions across players at the end of the shot. But if only two players are at the table (i.e. no spectators), who is the cheater?)
2nd Scenario:
Physics simulation on one (i.e. “master”) client (e.g. whoever takes the shot), which then broadcasts each physics step to everyone else.
Advantages:
No problems with synchronization.
Disadvantages:
1. Server overhead. At each time step the “master” client will be sending the coordinates of all balls to the server, and the server will have to broadcast them to everyone else in the room.
2. Cheating by the “master” player is still possible.
3rd Scenario: The physics will be simulated on the server.
Advantages:
No possibility to cheat, as the simulation runs independently of the clients.
No synchronization issues; one simulation means everyone will see the same result (even if not at the same time, because of network lag).
Disadvantages:
Huge server load. Not only will the server have to calculate the physics 30/60 times per second for every table (there might be 100 tables running at the same time), but it will also have to broadcast all the coordinates to everyone in the rooms.
EDIT
Some games similar to the one I’m making, in case someone is familiar with how they have overcome these issues:
http://apps.facebook.com/flash-pool/
http://www.thesnookerclub.com/download.php
http://gamezer.com/billiards/
I think that the 3rd one is the best.
But you can make it even better if you compute all the collisions and movements on the server before sending them to the clients (every collision, movement, etc.); the clients then just have to "execute" them.
If you do that, you will send the information only once per shot, which will greatly reduce the network load.
And as JimR wrote, you should use velocity or movement equations instead of step-by-step incremental simulation (like the Runge-Kutta method).
The information that the server sends to the client would look like this:
Blackball hit->move from x1,y1 to x2,y2 until time t1
Collision between blackball and ball 6 at time t1
Ball 6 -> move from x3,y3 to x4,y4 until time t3
Blackball -> move from x5,y5 to x6,y6 until time t4
Collision between Ball 6 and Ball 4 at time t3
and so on, until nothing moves anymore.
Also, you are likely to need a bunch of classes representing the different physics equations, and a way to serialize them to send to the clients (Java or C# can serialize objects easily).
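For illustration, the serialized shot result could be as simple as a JSON event list, with clients interpolating positions instead of running any physics (the field names here are made up, not from any particular engine):

// One shot, fully resolved on the server: move segments plus collision markers
var shotEvents = [
    { type: "move",      ball: "cue", from: [0, 0],   to: [40, 25], start: 0,   end: 1.2 },
    { type: "collision", balls: ["cue", 6],           at: 1.2 },
    { type: "move",      ball: 6,     from: [40, 25], to: [60, 30], start: 1.2, end: 2.0 }
];

// Client-side "execution" is pure interpolation along each segment
function positionAt(move, t) {
    var k = Math.max(0, Math.min((t - move.start) / (move.end - move.start), 1));
    return [
        move.from[0] + (move.to[0] - move.from[0]) * k,
        move.from[1] + (move.to[1] - move.from[1]) * k
    ];
}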
Does anybody know of a free Flash obfuscator (protector)? All I can find are commercial ones with free trials. I have done numerous Google searches and have been unable to find what I am looking for.
I know that obfuscators do not make your SWF hack-proof, but they do make things harder.
Things I am looking for in an obfuscator:
Unlimited obfuscations
No time limit
No watermark (or on the left side only! Right side is no good, same with center)
Able to publish work (no special player needed other than the standard Flash Player)
I was really surprised to see how hard it is to find a good obfuscator (I tried searching for "encoder", "protection", etc. as well) and how easy it is to find a decompiler...
It is imperative that my code be protected, at least partially, to discourage the hacking of my game.
Amayeta SWF Encrypt - http://www.amayeta.com/software/swfencrypt/ <= this one has existed for a long time and is kept up to date
secureSWF - http://www.kindisoft.com/secureSWF/download.php <== this one fulfills the no-time-limit requirement, but has a watermark
And since the ActionScript format is very much like JavaScript, you can use a free online JavaScript obfuscator like this one to obfuscate sections of the important code:
http://www.javascriptobfuscator.com/Default.aspx
or you can search for more here: http://www.google.com/search?q=obfuscator+javascript
I've found:
http://makc3d.wordpress.com/2010/02/09/open-source-swf-obfuscator/
http://github.com/shapedbyregret/actionscript-3-obfuscator
SOB
All open source, all free. I haven't tried any of them yet.
A coworker did some research on the topic a few months ago and didn't find any free SWF obfuscators. We ended up picking SWF Encrypt (http://www.amayeta.com/software/swfencrypt/), which seems to do a good job.
OBFU - 1500 euros!
Amayeta SWF Encrypt Pro 5.0 - $125 USD. Gets "bypassed" too.
SecureSWF - Looks like the most promising right now.
A list of decompilers and obfuscators
Found at http://www.balsamiq.com/blog/2008/10/19/my-views-on-software-piracy/
I found SWFProtect. It looks decent, but you'll have to test it to be sure.
http://www.swfprotect.net/swf2.0/index.php
Update: Amayeta SWF Encrypt Version 4 is now being offered for free.
http://www.amayeta.com/promo/mag/