I tried my website with Tor.
The IP I got was 104.244.73.13.
whatismyipaddress.com says this IP comes from Lebanon, Kentucky, United States.
It also calls it "Hostname: luxembourgtorexit1".
www.ip2location.com says this IP comes from Bissen, Mersch, Luxembourg.
It also identifies it as "Proxy Type (TOR) Tor Exit Node".
It gets even more interesting with www.whatismyip.com, which gives two answers:
one from IP2Location.com, which says Luxembourg,
and the other from ipdata.co, which says Phoenix, Arizona!
I have honestly never seen such conflicts between IP geolocation services.
Who gives the correct answer?
It should be in Luxembourg.
If you use ping.pe to test ping latency to 104.244.73.13 from different locations, the probes in Europe show the lowest latency, which suggests the server is indeed nearby.
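For what it's worth, you can reproduce a crude, single-vantage-point version of that check yourself. The sketch below (TypeScript/Node) just times a TCP connection from wherever you run it; port 443 is an assumption, since I don't know which ports this particular relay actually has open. Services like ping.pe do the same kind of measurement, but with ICMP and from many locations at once.

// Time how long a TCP connection to the address takes from this machine.
// Compare the result from machines in different regions to guess proximity.
import * as net from "net";

function connectLatencyMs(host: string, port: number): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const socket = net.connect({ host, port, timeout: 5000 });
    socket.once("connect", () => { socket.destroy(); resolve(Date.now() - start); });
    socket.once("timeout", () => { socket.destroy(); reject(new Error("timeout")); });
    socket.once("error", reject);
  });
}

// Port 443 is only a guess for this relay.
connectLatencyMs("104.244.73.13", 443).then((ms) => console.log(`${ms} ms`));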
I am new to CANoe, and I am trying to test a real ECU by sending diagnostic requests to it and getting responses from it via CANoe. I use a VN5610A, and the CANoe software is CANoe.Ethernet. I connected the VN5610A to the PC and to the ECU, and I configured the Diagnostics/ISO TP configuration by uploading an ODX file as the database. When I start logging, I can see the Ethernet packet information in the trace window, and if I send a request via another external tool, I can also see that communication in the trace window. But how can I send a request via CANoe?
My first questions are:
I want to test a real ECU; should I use the simulation setup? I mean, do I need to model the real ECU as a simulated ECU? If not, I would not use the Diagnostics Console to send requests. I actually tried setting up a simulated ECU and sending a request via the Diagnostics Console, but the real ECU never received the request; only the simulated ECU did.
In the Vector Hardware Config we can define the VN5610A's IP address. Should this IP address be the same as my PC's IP address? If not, which IP address should be assigned to "Tester Present"?
Is the configuration below in the Vector Hardware Config correct? Should the PC and CANoe use the same virtual port?
Thanks a lot in advance.
If you want to test real ECUs, the best way is normally to generate a rest bus simulation for the whole bus using the respective signal databases, and then deactivate the ECU you need and connect the real one to the interface instead. ECUs that are deactivated in the simulation setup are not simulated, so the real device can sit on the bus in their place. When the ECU is activated, you should also see its connection in the simulation setup change to the other wire. If you do not deactivate the ECU, CANoe will simulate it for you.
Maybe you can also have a look at the Examples. If you have not installed them together with CANoe, I recommend doing so. They're actually quite good :-)
When I query the DNS server for somewhere.com, it resolves four IPs for me, in this order:
192.168.1.1
192.168.1.2
192.168.1.3
192.168.1.4
My question is: what happens if the first IP (192.168.1.1) doesn't work?
Which service or protocol switches to the next IP?
And what about other clients such as telnet, ... (telnet somewhere.com 443)?
First, when the lookup is made, the returned list is not in any guaranteed order; many DNS servers rotate the records (round-robin), which helps ensure a single system is not overloaded.
What happens if the first address in the list fails depends on how well the application is written.
What is supposed to happen is that if the first IP address fails, the application tries the next one and iterates through the list until the list is exhausted or a connection is made. You can see this behaviour in many pieces of software. However, many developers are lazy and only try the first address returned; I can think of quite a few applications where this happens.
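What that iteration looks like in practice is roughly the following; this is just a TypeScript/Node sketch with placeholder host and port, not how any particular client implements it:

// Resolve all A records, then try each address in order until one connects.
import { promises as dns } from "dns";
import * as net from "net";

async function connectAny(host: string, port: number): Promise<net.Socket> {
  const addresses = await dns.resolve4(host); // e.g. ["192.168.1.1", "192.168.1.2", ...]
  let lastError: Error = new Error("no addresses returned");
  for (const address of addresses) {
    try {
      // Attempt a TCP connection to this address; give up after 3 seconds.
      return await new Promise<net.Socket>((resolve, reject) => {
        const socket = net.connect({ host: address, port, timeout: 3000 });
        socket.once("connect", () => resolve(socket));
        socket.once("timeout", () => { socket.destroy(); reject(new Error(`timeout: ${address}`)); });
        socket.once("error", reject);
      });
    } catch (err) {
      lastError = err as Error; // this address failed, fall through to the next one
    }
  }
  throw lastError; // list exhausted without a successful connection
}

// Usage: connectAny("somewhere.com", 443).then((sock) => sock.end());

Modern stacks formalize this further (for example, Happy Eyeballs races IPv4 and IPv6 attempts against each other), but the basic fallback loop above is exactly the part many applications skip.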
It's not fun when you see applications request ANY expecting an IPv4 address and the returned list contains an IPv6 address, only for the software to barf.
Reading about WebRTC, I get the feeling that it "will drop server bandwidth usage dramatically" except for "a few corner enterprise-firewall cases" where one needs a TURN server that relays all traffic between the peers.
For example (although not WebRTC related, the idea is similar), the Wikipedia article on Chatroulette states: The website uses Adobe Flash to display video and access the user's webcam. Flash's peer-to-peer network capabilities (via RTMFP) allow almost all video and audio streams to travel directly between user computers, without using server bandwidth. However, certain combinations of routers will not allow UDP traffic to flow between them, and then it is necessary to fall back to RTMP.
Similar articles on WebRTC also take the line of "yeah, there might be problems with firewalls, so you need a TURN server, but ignore this and look at my awesome PeerConnection JavaScript code".
What I don't understand:
A connection between two peers requires a listening (server) socket that the other peer can connect to; even UDP requires the concept of a UDP server socket. But nearly all non-server, internet-connected peers sit behind some kind of router: every smartphone uses a Wi-Fi router, desktop PCs use the service provider's router, and so on.
It shouldn't be possible to connect to a server socket hosted on a smartphone (a browser's WebRTC "server socket") or a desktop because of the router/firewall.
Thus my understanding is that practically no two peers that have to send their traffic across the internet will be able to use a direct P2P connection, right?
So the only useful case for WebRTC is a LAN-like environment, right?
Furthermore, a video chat service like Chatroulette based on WebRTC would need a bunch of TURN servers to relay nearly ALL traffic, which makes WebRTC just as costly in server bandwidth as hosting my own solution.
So my question is: am I right? If not, what is the technical detail that allows a PeerConnection to be used without a TURN server between two nodes separated by the internet? How is the connection established at Layer 4, the TCP/UDP transport layer? Does it use UDP, with all Wi-Fi routers allowing hosted UDP server sockets or something like that? That wouldn't make much sense because of NAT and security.
UPDATE 1:
Digging a bit further, I found out what "symmetric NAT" means and what it has to do with enterprises: in most enterprises, the device connected to the internet seems to implement symmetric NAT. This means that the mapping table which maps internal "internal-ip:internal-port" tuples to "internet-ip:internet-port" also stores "destination-ip:destination-port". So such routers/NATs keep a row for every (TCP?) connection with six columns: "internal-ip:internal-port:internet-ip:internet-port:destination-ip:destination-port". This means no one but that exact destination is allowed to communicate with internal-ip:internal-port.
Non-enterprise routers, in contrast, seem to store only the "internal-ip:internal-port:internet-ip:internet-port" combination. That is also what is meant by "poking a hole in the firewall".
You're not right. All peers have IP addresses in order to communicate, and can be reached on those same addresses, provided a firewall allows it.
NATs tend to be optimized for client-initiated client-server traffic only. That typically means they initially allow outbound traffic only, and only allow inbound traffic on the same line after outbound traffic has happened, which is perfect for client-to-server use. See this WebRTCHacks article for an intro to the problem.
This is where ICE comes in to attempt to poke holes in the firewall from the inside (client-side), in order to establish a line of communication directly between two peers, without needing any "server" socket, whatever that means.
How ICE works is quite complicated, and is explained in detail in the RFC.
But in broad terms it works in a number of steps:
Each peer (e.g. browser) has an "ICE agent" that collects candidates. Candidates are addresses (IP:port numbers) at which this peer can be reached, including:
Host candidates: e.g. immediate LAN/wifi/VPN IPs of the machine.
Server-reflexive candidates: public (outside-NAT) addresses of the machine, obtained by bouncing requests off mirroring (STUN) servers on the internet.
Relay candidates: addresses to a shared TURN server to forward data if all else fails.
Once discovered, candidates are inserted into the local SDP and trickled over the signaling channel to the other peer, where they are added to its remote description so the other agent sees them.
Once an ICE agent has both local and remote candidates, it starts pairing them up and checks each pair for connectivity by sending STUN requests over it (effectively attempts to reach the peer).
Successful pairs are ones both ICE agents have gotten a response back on (a 4-way handshake if you will).
If there's more than one successful pair, they're sorted by some metric, and the best pair becomes selected.
The selected pair is then used to send media over. One pair is needed for each track of (video or audio) media.
If a better pair is found later, the selected pair may change, affecting what address media is sent on.
TURN should only be needed either when both clients are behind symmetric NATs or when UDP traffic is blocked entirely.
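If it helps to see the moving parts, here is a minimal caller-side sketch against the standard browser RTCPeerConnection API, in TypeScript. The sendToPeer/onPeerMessage signaling helpers and the STUN/TURN URLs are placeholders I am assuming, since WebRTC deliberately leaves signaling up to the application; the answerer side and error handling are omitted.

// Hypothetical signaling channel (e.g. a WebSocket you run yourself).
declare function sendToPeer(msg: any): void;
declare function onPeerMessage(handler: (msg: any) => void): void;

const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.example.org:3478" },                                   // server-reflexive candidates
    { urls: "turn:turn.example.org:3478", username: "u", credential: "p" },   // relay fallback
  ],
});

// Trickle each local candidate (host, srflx, relay) to the other peer as it is found.
pc.onicecandidate = (event) => {
  if (event.candidate) {
    sendToPeer({ type: "candidate", candidate: event.candidate.toJSON() });
  }
};

// The offer/answer exchange carries the SDP; remote candidates are fed in as they arrive.
async function call(stream: MediaStream) {
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ type: "offer", sdp: pc.localDescription });
}

onPeerMessage(async (msg) => {
  if (msg.type === "answer") {
    await pc.setRemoteDescription(msg.sdp);
  } else if (msg.type === "candidate") {
    await pc.addIceCandidate(msg.candidate); // ICE pairs it with local candidates and runs checks
  }
});

Note that the application never opens a listening socket of its own; the ICE agents behind the RTCPeerConnection do the hole punching using the candidates exchanged here, and TURN only carries media if none of the direct pairs succeed.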
I have recently set up this configuration: http://pastebin.com/9SWpqQnz
It is achieving the goal of having the route fail over to the backup ADSL line when the primary fibre ethernet line goes down.
It is, however, failing over for a short period of time every few hours or so.
I assume it is looking for a single missed ping?
Can anyone suggest how this might be tightened up a bit and made more reliable?
! Track 1 follows the reachability of IP SLA probe 123; the delay means the
! track waits 20 s before going down and 60 s before coming back up, so a
! single missed ping will not flip the route by itself.
track 1 rtr 123 reachability
 delay up 60 down 20
!
! Probe 123 pings 37.77.176.177 from Fa0/0 every 10 s, waiting up to 2000 ms per reply.
ip sla 123
 icmp-echo 37.77.176.177 source-interface FastEthernet0/0
 timeout 2000
 frequency 10
!
ip sla schedule 123 life forever start-time now
Also, it's probably worth pinging something a little further out on the internet, since your ISP could have issues that leave your local loop up and running while the rest of their network is unable to forward traffic correctly. On the other hand, if you continuously ping something random on the internet that you don't have authorization for, you could get blocked or rate-limited, which will artificially fail over your default route.
I'm pretty sure I remember reading (but cannot find the links anymore) about this: with some ISPs (including at least one big ISP in the U.S.) it is possible for a user's GET and POST requests to appear to come from different IPs.
(note that this is totally programming related, and I'll give an example below)
I'm not talking about having your IP address dynamically change between two requests.
I'm talking about this:
IP 1: 123.45.67.89
IP 2: 101.22.33.44
The same user makes a GET, then a POST, then a GET again, then a POST again and the servers see this:
- GET from IP 1
- POST from IP 2
- GET from IP 1
- POST from IP 2
So although it's the same user, the web server sees different IPs for the GET and the POSTs.
Surely, given that HTTP is a stateless protocol, this is perfectly legit, right?
I'd like to find that explanation again of how/why certain ISPs have their networks configured such that this may happen.
I'm asking because someone asked me to implement the following IP filter, and I'm pretty sure it is fundamentally broken code (wreaking havoc for the users of at least one major American ISP).
Here's a Java servlet filter that is supposed to protect against some attacks. The reasoning is that:
"For any session filter checks that IP address in the request is the same that was used when session was created. So in this case session ID could not be stolen for forming fake sessions."
http://www.servletsuite.com/servlets/protectsessionsflt.htm
However I'm pretty sure this is inherently broken because there are ISPs where you may see GET and POST coming from different IPs.
Some ISPs (or university networks) operate transparent proxies which relay the request from the outgoing node that is under the least network load.
It would also be possible to configure a local machine to use the NIC with the lowest load, which could, again, result in this situation.
You are correct that this is a valid state for HTTP and, although it should occur relatively infrequently, this is why validating a user based on IP address is not an appropriate determinant of identity.
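To make the failure mode concrete, here is the same kind of IP-pinning check sketched as a hypothetical Express/TypeScript middleware (not the linked Java filter; the session setup and field names are my own illustration):

// Pins each session to the IP it was created from, i.e. the idea from the
// servlet filter above, reimplemented as an Express middleware sketch.
import express from "express";
import session from "express-session";

declare module "express-session" {
  interface SessionData {
    boundIp?: string;
  }
}

const app = express();
app.use(session({ secret: "change-me", resave: false, saveUninitialized: true }));

app.use((req, res, next) => {
  if (!req.session.boundIp) {
    req.session.boundIp = req.ip;        // remember the IP the session started from
  } else if (req.session.boundIp !== req.ip) {
    // A user whose traffic egresses from different proxy nodes
    // (GET from IP 1, POST from IP 2) lands here despite holding a
    // perfectly legitimate session cookie.
    res.status(403).send("Session/IP mismatch");
    return;
  }
  next();
});

app.get("/", (_req, res) => res.send("ok"));
app.listen(3000);

Such a user would create the session on one IP and then trip the mismatch branch on the very next request, which is exactly the breakage described above; binding the session to the cookie (and, if needed, other signals) rather than the source IP avoids it.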
For a web server to be seeing this implies that the end user is behind some kind of proxy/gateway. As you say it's perfectly valid given that HTTP is stateless, but I imagine would be unusual. As far as I am aware most ISPs assign home users a real, non-translated IP (albeit usually dynamic).
Of course, for corporate/institutional networks they could be doing anything. Load balancing could mean that requests come from different IPs, and maybe sometimes request types get farmed out to different gateways (although I'd be interested to know why, given that N_GET >> N_POST).