In the veins_inet example I need to send accident packets only to some vehicles (nodes), but by default VeinsInetApplication.cc uses broadcast, so how can I change it to multicast?
The sendPacket() method is inherited from VeinsInetApplicationBase.cc; if I change anything there I get errors. The default port is 9001, so I can't change that either. Can I get any solutions for this?
VeinsInetApplication.cc
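For reference, in plain INET terms (this is not the veins_inet code itself), moving from broadcast to multicast boils down to the receivers joining a multicast group and the sender addressing packets to that group instead of the broadcast address. A minimal sketch, assuming INET 4.x; the group address 224.0.0.1 and the helper function names are just placeholders:

```cpp
// Sketch only: illustrates the INET 4.x UdpSocket calls involved in multicast,
// not the actual veins_inet application code.
#include "inet/common/packet/Packet.h"
#include "inet/networklayer/common/L3AddressResolver.h"
#include "inet/transportlayer/contract/udp/UdpSocket.h"

using namespace inet;

void setupMulticast(UdpSocket& socket, int localPort)
{
    // In a real OMNeT++ app the socket must first be given an output gate,
    // e.g. socket.setOutputGate(gate("socketOut")).
    socket.bind(localPort);  // 9001 is the default port in veins_inet

    // Receivers join a multicast group instead of listening for broadcast.
    // 224.0.0.1 ("all hosts") is just an example group address.
    L3Address group = L3AddressResolver().resolve("224.0.0.1");
    socket.joinMulticastGroup(group);
}

void sendAccident(UdpSocket& socket, Packet* packet, int destPort)
{
    // Senders address the packet to the group instead of the broadcast
    // address, so only nodes that joined the group receive it.
    L3Address group = L3AddressResolver().resolve("224.0.0.1");
    socket.sendTo(packet, group, destPort);
}
```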
A network disconnect issue happens in a system at my company.
Here is the network topology:
PC1: two NICs, both with static IP addresses. Data: 10.10.22.11, Control: 10.10.22.10
PC2: two NICs, both with static IP addresses. Data: 10.10.22.101, Control: 10.10.22.100
The default gateway 10.10.22.11 is set on both sides. However, I don't think this is necessary, as there is no router or gateway between the two PCs; they are directly linked.
A consultant pointed out that, since all the IPs are in the same segment (10.10.22.x), there could be a broadcast storm, which might be the cause of the network disconnects.
Is this true? Can a broadcast storm happen between two directly linked PCs?
No it shouldn't be.
Broadcast storms happen when there is a loop in a network.
A broadcast packet is forwarded to all ports of a switch, and if there is a loop, the packets come back to the switch and are flooded out of all ports again, amplifying the storm. If there is no loop in the network, there shouldn't be any storm.
I don't see any loop in your configuration, so there shouldn't be any broadcast storm.
Identifying a broadcast storm is not hard: just sniff on an interface on the network, and if you see the same broadcast packet millions of times, it is most likely a broadcast storm.
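If you want to check for this programmatically rather than by eyeballing a capture, a rough libpcap sketch along these lines (the interface name "eth0" is a placeholder) counts broadcast frames per second; a sustained count of the same broadcast repeated at a huge rate points at a storm:

```cpp
// Rough sketch: count Ethernet broadcast frames per second with libpcap.
// "eth0" and the one-second window are placeholders; adjust for your setup.
#include <pcap/pcap.h>
#include <cstdio>
#include <ctime>

int main()
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t* handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (!handle) { std::fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

    // Only look at broadcast frames (destination ff:ff:ff:ff:ff:ff).
    bpf_program prog;
    if (pcap_compile(handle, &prog, "ether broadcast", 1, PCAP_NETMASK_UNKNOWN) == 0)
        pcap_setfilter(handle, &prog);

    long count = 0;
    std::time_t window = std::time(nullptr);
    pcap_pkthdr* hdr;
    const u_char* data;
    int ret;
    while ((ret = pcap_next_ex(handle, &hdr, &data)) >= 0) {
        if (ret == 1)            // 0 means read timeout, no packet captured
            ++count;
        std::time_t now = std::time(nullptr);
        if (now != window) {
            std::printf("%ld broadcast frames/s\n", count);
            count = 0;
            window = now;
        }
    }
    pcap_close(handle);
    return 0;
}
```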
I'm using a PXI 8109 running Pharlap OS.
I'm trying to use the second ethernet interface of my PXI to send UDP and TCP packets.
Here is the configuration of my two Ethernet interfaces:
eth0 (primary):
IP : 10.0.0.3
subnet mask : 255.0.0.0
eth1 :
IP : 192.168.10.9
subnet mask : 255.255.255.0
For UDP, I have no problems; packets are sent out of the second interface as I want. I think it works because there is a "net address" input on the "UDP Open" VI, so the system can choose the right interface.
For TCP, I use the "TCP Open Connection" VI, but there is no such input. And it is not working: I assume the system is trying to use the primary interface, but it can't route the packets...
For information, my two networks are physically independent.
Can you help me find out what's going on? Is it possible to use TCP on the second Ethernet interface?
TCP Open Connection is meant to open a connection to another computer; if you feed it a valid address (in one of the two subnets), it should open the connection on the corresponding interface.
I assume you need to use the TCP listener function, and according to this KB article you can specify which address you want to listen on. So yes, you should be able to use a specific Ethernet interface.
Disclaimer: I am not sure whether all of this is valid on Phar Lap as well.
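Outside of LabVIEW, the equivalent idea in BSD sockets is to bind the listener to one interface's address rather than to all addresses; a minimal sketch, using the eth1 address 192.168.10.9 from above and a made-up port:

```cpp
// Sketch: force a TCP listener onto one specific interface by binding to
// that interface's IP address (192.168.10.9 / port 6340 are placeholders).
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { std::perror("socket"); return 1; }

    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_port = htons(6340);
    // Binding to the eth1 address instead of INADDR_ANY ties the listener
    // to that interface only.
    inet_pton(AF_INET, "192.168.10.9", &local.sin_addr);

    if (bind(fd, reinterpret_cast<sockaddr*>(&local), sizeof(local)) < 0 ||
        listen(fd, 5) < 0) {
        std::perror("bind/listen");
        return 1;
    }

    // accept() would follow here in a real server.
    close(fd);
    return 0;
}
```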
Basically, the decision of which NIC to use is up to the OS, and I believe it would normally choose based on the subnet of the address you're trying to connect to and the subnets of the NICs. I don't know what the destination IP address is (maybe it's in the subnet of the wrong card?), but the subnets of the NICs certainly appear to be different from each other (10.0.0.0/8 and 192.168.10.0/24).
On Windows, I believe you can set the routing tables to have some more control of this (although I don't know if you would be able to force something to go through the "wrong" NIC), but I have no idea how much control you would have over this on Phar Lap. I would suggest some searching. Here are a couple of relevant links:
http://forums.ni.com/t5/LabVIEW/RT-How-do-I-use-two-independent-Ethernet-ports/td-p/721269
http://forums.ni.com/t5/LabVIEW/Communicating-through-two-ethernet-ports-on-the-same-computer/m-p/1509450#M565374
I finally solved my problem. This was not related to the TCP connection ...
I was using a property node "Value (signaling)" to trigger the TCP connection and it seems that this is not supported on RT Targets.
This is why it was working on localhost.
Thanks for the help anyway ;)
If you have a switch with at least one subscriber to a multicast address, how much additional load would each additional subscriber add?
Example:
You have a 10G switch (with IGMP) with 10 servers and no other activity.
When Server1 subscribes to a 1G multicast feed, the switch will have 1G of load.
What would the load be after Server2 and Server3 subscribed?
Obviously traffic to the switch would not increase, but what about the switch's internal load?
How would the answer be different without IGMP?
The whole idea of multicast is that it is efficient. The presence of one subscriber downstream causes the switch to send an IGMP join of its own upstream and to pass incoming multicasts downstream, without duplication on the upstream link. The addition of further downstream subscribers has no effect at all except to increment an internal subscriber count for that group. When that count goes back to zero, the switch sends an IGMP leave of its own upstream.
I don't know what you mean by 'without IGMP': IPv4 multicast group membership is managed via IGMP, so 'UDP multicast without IGMP' is a contradiction in terms.
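For context, the IGMP membership report is triggered simply by a host joining the group at the socket level; a hedged POSIX sketch of the receiver side (the 239.1.1.1 group and port 5000 are example values):

```cpp
// Sketch: a receiver joins a multicast group. The kernel sends the IGMP
// membership report; the switch/router needs only one join per group, and
// additional subscribers just add to its membership state.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { std::perror("socket"); return 1; }

    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_port = htons(5000);               // example port
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(fd, reinterpret_cast<sockaddr*>(&local), sizeof(local)) < 0) {
        std::perror("bind");
        return 1;
    }

    ip_mreq mreq{};
    inet_pton(AF_INET, "239.1.1.1", &mreq.imr_multiaddr);  // example group
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        std::perror("IP_ADD_MEMBERSHIP");
        return 1;
    }

    char buf[2048];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);  // receive one datagram
    std::printf("received %zd bytes\n", n);
    close(fd);
    return 0;
}
```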
Firstly, some background information for you.
The traditional definition of routers and switches are along the lines of:
Router: a device capable of routing a packet from one IP subnet to a different IP subnet
Switch: a device capable of switching a packet within the same IP subnet
However, this traditional definition no longer holds these days because we have switches that can route traffic from one IP subnet to another IP subnet and even perform complex operations such as QoS at wire speed.
Therefore it is often easier to redefine Routers and Switches as follows:
Router: a device that uses the CPU to route packets, often inspecting parts of packets that are higher up the OSI stack.
Switch: a device with ASIC(s) (a.k.a. switching chips) that switches/routes traffic at full wire speed. What this means is that if the switch has 24 1Gbps ports, it will be able to switch 24Gbps of bidirectional traffic without dropping any packets.
Now, to answer your question: it is important to determine whether the ASIC in your switch is capable of handling multicast traffic or not. If it is, adding "load" really isn't an issue, as long as you ensure that each switch port is not congested (e.g. 2Gbps of traffic trying to egress out of a 1Gbps port). If the ASIC is NOT capable of handling multicast traffic, it is highly likely that the switch will simply send all multicast traffic up to the CPU, and it is then up to the software to determine where each packet goes.
CPUs on switches are not powerful, because their primary role isn't to route/switch packets but to manage the switch (e.g. configure the ASIC so that packets get switched properly). Therefore, if your switch is sending packets up to the CPU, the switch will struggle; you won't get anywhere near 1Gbps of multicast via the CPU.
Without IGMP, switches, by default, will flood out the traffic on all ports. Again, this is not a problem for the switch itself because it can handle this at wirespeed. It may cause problems for other parts of the network because traffic is needlessly being duplicated.
The reason for this long answer is that the phrase "10G switch" in your example is quite misleading: it led me to believe that you may be thinking a powerful CPU sits at the center of the switch, capable of performing 10Gbps bidirectional switching. This is simply not the case, and talking about "load" on a switch therefore often makes little sense.
I hope this helps.
I was reading a paper related to network security and they have mentioned something called local per flow state maintained by routers. I didn't get what this means. I googled for a while but couldn't get a decent answer. Any suggestions?
A flow is a sequence of packets from a source to a certain destination (it can be a unicast, multicast or broadcast destination, if the network protocol supports it) at a certain point in time. Details depend on the context, particularly on the network and transport protocol. For TCP and IP, for example, a particular packet flow is identified by the protocol (TCP), the source and destination port numbers and the source and destination IP addresses. If security is applied (e.g. IPSec), then it might make things more complicated since it may introduce e.g. tunnels, which basically create flows inside a flow.
What you mention, per flow state on a router, means that the router stores these data (usually for a certain time) to be able to identify packet flows. A router typically does this for e.g. connection tracking or to be able to make filtering decisions (e.g. rejecting incoming packets not belonging to a flow established by a computer on the internal network).
So for instance, when I open a new browser window and type www.google.com into it, this creates a new flow with the following parameters (see the sketch after this list for what the corresponding flow-table entry might look like):
transport protocol: TCP
source port: the source TCP port allocated to the web browser, e.g. 12345
destination port: 80
source IP: my computer's IP address, e.g. 1.2.3.4
destination IP: the IP address www.google.com was resolved to, e.g. 173.194.44.17
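In code, the per-flow state a router or firewall keeps can be pictured as a table keyed by that 5-tuple; a simplified sketch (the field names and the state stored are illustrative, not any particular vendor's implementation):

```cpp
// Simplified sketch of per-flow state keyed by the classic 5-tuple.
// Real routers/firewalls store timers, TCP state, counters, NAT bindings, etc.
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>

struct FlowKey {
    uint8_t  protocol;   // e.g. 6 for TCP
    uint32_t srcIp;      // e.g. 1.2.3.4
    uint32_t dstIp;      // e.g. 173.194.44.17
    uint16_t srcPort;    // e.g. 12345
    uint16_t dstPort;    // e.g. 80

    bool operator==(const FlowKey& o) const {
        return protocol == o.protocol && srcIp == o.srcIp && dstIp == o.dstIp &&
               srcPort == o.srcPort && dstPort == o.dstPort;
    }
};

struct FlowKeyHash {
    size_t operator()(const FlowKey& k) const {
        // Crude hash combining the tuple fields; good enough for a sketch.
        return std::hash<uint64_t>()((uint64_t(k.srcIp) << 32) ^ k.dstIp) ^
               std::hash<uint32_t>()((uint32_t(k.srcPort) << 16) ^ k.dstPort) ^
               k.protocol;
    }
};

struct FlowState {
    uint64_t packets = 0;     // per-flow counters
    uint64_t bytes = 0;
    uint64_t lastSeenMs = 0;  // used to expire idle flows
    std::string tcpState;     // e.g. "ESTABLISHED"
};

// The "local per flow state" the paper refers to: one entry per active flow.
std::unordered_map<FlowKey, FlowState, FlowKeyHash> flowTable;
```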
For example, a voice call consists of many consecutive packets that are all part of the same communication.
We call this sequence of packets a flow. More specifically:
Flow: a collection of datagrams belonging to the same end-to-end communication, e.g. a TCP connection.
Per-flow state is not maintained by routers/switches; they just route packets individually. They treat each packet as unique even though the packets might be going to the same destination, hence no per-flow state is maintained.
Hi, I am creating a streaming application in which I am using IP multicasting.
Tell me how to validate a client before adding it to the group.
Is there anything I have to do with IGMP?
You don't do it with your application.
IGMP is an internet layer protocol, it may not even reach your application.
Whenever a unit wants to receive multicast to a certain address, it sends an IGMP request to join a group. A router receives the request and remembers that this user wants to belong to this group.
Whenever the router receives a multicast packet destined for that address, it routes it to all the group members, possibly taking some access control restrictions into account.
All group manipulation is performed by routers. You just send your UDP packets to a multicast address (that is, 224.0.0.0/4), and the routers decide whether to route them to a subscriber.
If you want to limit the destinations your multicast packets go to, you do it on the routers.
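To make the "you just send your UDP packets to a multicast address" part concrete, here is a minimal sender sketch (the 239.1.1.1 group, port 5000 and the TTL are example values; nothing here manages membership or access control):

```cpp
// Sketch: sending a datagram to a multicast group. The sender does not
// manage group membership at all; routers/switches decide who gets a copy.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { std::perror("socket"); return 1; }

    // Limit how many router hops the multicast traffic may cross.
    unsigned char ttl = 4;                      // example value
    setsockopt(fd, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof(ttl));

    sockaddr_in group{};
    group.sin_family = AF_INET;
    group.sin_port = htons(5000);               // example port
    inet_pton(AF_INET, "239.1.1.1", &group.sin_addr);  // example group in 224.0.0.0/4

    const char payload[] = "stream chunk";
    sendto(fd, payload, sizeof(payload), 0,
           reinterpret_cast<sockaddr*>(&group), sizeof(group));

    close(fd);
    return 0;
}
```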
You should understand, though, that the word "routes" above means that the router emits the packet onto the appropriate interface, with a multicast destination address in the Ethernet header and a multicast destination address in the IP header. An Ethernet switch attached to the interface, if any, will distribute the packet over all active ports. Since it knows nothing about Internet routing, it will just see the broadcast/multicast bit set in the Ethernet header and act accordingly.
There are, though, some link-layer devices (Ethernet switches) that peek into network-layer headers and limit multicast to the subscribed units. This is called IGMP snooping. Some of them may also be able to control access.
OK, there is a legitimate need to control who can join a multicast group. The only way I can see that being done is by filtering IGMP packets inbound on the router interfaces. This would work if the list of "allowed subscribers" is sufficiently static, but if there's a lot of changes, it would rapidly become untenable.
If (and only if) there's administrative control all the way down to a "customer-placed" router, I suspect something could be done there to limit the groups that device has visibility of, but that is heavily dependent on the environment (in a "broadband and multicast video from a single provider" scenario, a contractual requirement to use a provider-managed DSL router would be possible).
In addition to Quassnoi's comments on how multicast works, I have to wonder... Why do you want to restrict multicast membership and/or validate the recipient before having it added to the group?