Computer network addressing (Subnet Masks)

I am trying to understand why 255.255.255.0 is regarded as a class B subnet mask, while many students in my class argue that the mask is of class C.

Consider reading one of the relevant RFCs. RFC 870, for instance.
Here is an excerpt from that memo:
The third type of address, class C, has a 21-bit network number
and a 8-bit local address. The three highest-order bits are set
to 1-1-0. This allows 2,097,152 class C networks.
                     1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1 1 0|                 NETWORK                 | Local Address |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

                        Class C Address
In it, class C networks are defined to have an 8-bit local address, hence the mask 255.255.255.0. That mask is the class C default, not class B.
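To make the rule concrete: the class is decided by the leading bits of the first octet, and the class decides how many octets hold the network number. A minimal Python sketch (the function name is mine, for illustration):

    def default_mask(first_octet):
        # Classful addressing: the leading bits of the first octet decide
        # the class, which decides how many bits hold the network number.
        if first_octet < 128:    # 0xxxxxxx -> class A, 8-bit network number
            return "255.0.0.0"
        if first_octet < 192:    # 10xxxxxx -> class B, 16-bit network number
            return "255.255.0.0"
        if first_octet < 224:    # 110xxxxx -> class C, 24-bit network, 8-bit local address
            return "255.255.255.0"
        raise ValueError("class D/E addresses have no default mask")

    print(default_mask(192))   # 255.255.255.0 -> class C, matching the RFC excerpt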

Related

What is the purpose of the payload in an ICMPv6 Packet Too Big message?

I have gone through RFC 4443 and RFC 8201. Perhaps I did not understand them, as I am new to some of the terminology they use, but I want to understand the implication of the payload carried in an ICMPv6 Packet Too Big message.
As per RFC 4443:
The payload will contain as much of the invoking packet as possible without the ICMPv6 message exceeding the minimum path MTU.
I don't understand the use case of such a payload; even RFC 8201 makes no mention of how the payload is used.
The only comment present was:
Added clarification in Section 4, "Protocol Requirements", that
nodes should validate the payload of ICMP PTB messages per RFC
4443, and that nodes should detect decreases in PMTU as fast as
possible.
What is the implication of validating the payload in the PTB message? How should we validate the payload, and based on what conditions?
The Packet Too Big message as per RFC 4443:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Type      |     Code      |          Checksum             |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                             MTU                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    As much of invoking packet                 |
+               as possible without the ICMPv6 packet           +
|               exceeding the minimum IPv6 MTU [IPv6]           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
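To make the diagram concrete, here is a minimal Python sketch of how the fixed fields and the payload split apart (the function name is mine; the payload is the leading portion of the packet that triggered the error, which is what a node would check against its own state when validating):

    import struct

    ICMPV6_PACKET_TOO_BIG = 2

    def parse_ptb(icmpv6_bytes):
        # Fixed part, exactly as in the diagram above:
        # Type (1 byte), Code (1 byte), Checksum (2 bytes), MTU (4 bytes).
        msg_type, code, checksum, mtu = struct.unpack("!BBHI", icmpv6_bytes[:8])
        if msg_type != ICMPV6_PACKET_TOO_BIG:
            raise ValueError("not a Packet Too Big message")
        # Everything after the fixed part is the leading portion of the
        # invoking packet, i.e. the payload the question asks about.
        invoking_packet = icmpv6_bytes[8:]
        return mtu, invoking_packet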

Are messages dropped in Raft?

I'm reading about Raft, but I got a bit confused when it comes to consensus after a network partition.
Consider a cluster of 2 nodes: 1 leader, 1 follower.
Before partitioning, X messages were written and successfully replicated. Then imagine a network problem caused a partition, so there are 2 partitions, A (ex-leader) and B (ex-follower), which are now both leaders (receiving writes):
before partition | Messages   |x| Partition   | Messages
Leader           | 0 1 2 3 4  |x| Partition A | 5 6 7 8 9
Follower         | 0 1 2 3 4  |x| Partition B | 5' 6' 7' 8' 9'
Once the partition heals and we've detected it, what happens?
a) We elect 1 new leader and take its log (dropping the messages of the new follower)?
e.g.:
0 1 2 3 4 5 6 7 8 9 (total of 10 messages, 5 dropped)
or even:
0 1 2 3 4 5' 6' 7' 8' 9' (total of 10 messages, 5 dropped)
(depending on which node became leader)
b) We elect a new leader and find a way to reach consensus on all the messages?
0 1 2 3 4 5 5' 6 6' 7 7' 8 8' 9 9' (total of 15 messages, 0 dropped)
If (b), is there any specific way of doing that, or does it depend on the client implementation (e.g., message timestamps)?
The leader's log is taken to be "the log" once the leader is elected and has successfully written its initial log entry for the term. However, in your case the starting premise is not correct: in a cluster of 2 nodes, a node needs 2 votes to become leader, not 1. So given a network partition, neither node can be leader, and neither partition accepts writes.
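The arithmetic behind that: a Raft candidate needs votes from a strict majority of the full cluster membership, not just of the nodes it can reach. A quick sketch:

    def quorum(cluster_size):
        # Strict majority of ALL members, regardless of how many are reachable.
        return cluster_size // 2 + 1

    print(quorum(2))  # 2 -> a partitioned node in a 2-node cluster can never win
    print(quorum(3))  # 2 -> a 3-node cluster keeps working with one node cut off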

What is the difference between a bank conflict and channel conflict on AMD hardware?

I am learning OpenCL programming and running some programs on an AMD GPU. I referred to the AMD OpenCL Programming Guide to read about global memory optimization for the GCN architecture, but I am not able to understand the difference between a bank conflict and a channel conflict.
Can someone explain the difference between them?
Thanks in advance.
If two memory access requests are directed to the same memory controller, the hardware serializes the access. This is called a channel conflict. In other words, each integrated memory controller circuit can serve only a single request at a time; if any two tasks' addresses map to the same channel, they are served serially.
Similarly, if two memory access requests go to the same memory bank, the hardware serializes the access. This is called a bank conflict. If there are multiple memory chips, you should avoid using a stride that matches the hardware's interleave width.
Example with 4 channels and 2 banks (not a real-world configuration, since banks are usually at least as numerous as channels):
address  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17
channel  1  2  3  4  1  2  3  4  1  2  3  4  1  2  3  4  1
bank     1  2  1  2  1  2  1  2  1  2  1  2  1  2  1  2  1
So you should not read like this:
address  1  3  5  7  9
channel  1  3  1  3  1    // 50% channel conflict
bank     1  1  1  1  1    // 100% bank conflict, serialized at the bank level
nor like this:
address  1  5  9 13
channel  1  1  1  1       // 100% channel conflict, serialized
bank     1  1  1  1       // 100% bank conflict, serialized
but this could be OK:
address  1  6 11 16
channel  1  2  3  4       // no conflict, 100% channel usage
bank     1  2  1  2       // no conflict, 100% bank usage
because the stride (5) shares no common factor with either the number of channels (4) or the number of banks (2).
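If it helps, the tables above can be reproduced with a toy model of the mapping (the modulo scheme below illustrates the 4-channel/2-bank example only; it is not the real GCN address hash):

    CHANNELS, BANKS = 4, 2  # the toy configuration from the example above

    def channel(addr):
        return (addr - 1) % CHANNELS + 1  # 1-based, like the tables

    def bank(addr):
        return (addr - 1) % BANKS + 1

    for stride in (2, 4, 5):
        addrs = list(range(1, 18, stride))
        print("stride", stride,
              "channels", [channel(a) for a in addrs],
              "banks", [bank(a) for a in addrs])
    # stride 2 -> only channels 1 and 3 are hit; every access lands in bank 1
    # stride 4 -> every access hits channel 1, bank 1 (fully serialized)
    # stride 5 -> channels 1,2,3,4 and banks 1,2 are all used (no conflict)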
Edit: If your algorithm is more local-storage-optimized, then you should pay attention to local data store (LDS) channel conflicts instead. On top of this, some cards can use constant memory as an independent channel source to speed up read rates.
Edit: You can use multiple wavefronts to hide conflict-based latencies, or you can use instruction-level parallelism.
Edit: Local data share channels are much faster and more numerous than global channels, so optimizing for LDS is very important. Uniform-gathering on global channels and then scattering on local channels shouldn't be as problematic as scattering on global channels and uniform-gathering on local channels.
http://developer.amd.com/tools-and-sdks/opencl-zone/amd-accelerated-parallel-processing-app-sdk/opencl-optimization-guide/#50401334_pgfId-472173
For an AMD APU with a decent mainboard, you should be able to select n-way channel interleaving or n-way bank interleaving as desired, if your software cannot be altered.

What is the right order of bytes in a TCP header?

I'm trying to encode a TCP header myself, but I can't work out the right order of bits/octets in it. This is what RFC 793 says:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          Source Port          |       Destination Port        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                        Sequence Number                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
...
This means that the Source Port should take the first two octets, and that the least significant byte should come first. So, to encode source port 180, I should start my TCP header with these two bytes:
B4 00 ...
However, all examples I can find tell me to do it the other way around:
00 B4 ...
Why?
This means that the Source Port should take the first two octets
Correct.
and that the least significant byte should come first.
Incorrect. It doesn't mean that; the diagram says nothing about byte order.
All multi-byte integers in all IP headers are represented in network byte order, which is big-endian. This is specified in RFC 1700.
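In Python terms, network byte order is what struct's "!" prefix gives you. A quick sketch (the destination port 80 and sequence number 0 are made-up values for the example):

    import struct

    # "!" = network byte order (big-endian): most significant byte first.
    assert struct.pack("!H", 180) == b"\x00\xb4"

    # The first fields of the RFC 793 diagram: source port, destination port,
    # sequence number.
    start = struct.pack("!HHI", 180, 80, 0)
    print(start.hex())  # 00b4005000000000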

What does the '\x1b' + 47 * '\0' message sent to an NTP server mean?

I am working on an NTP client. A few other threads indicate that a message containing '\x1b' + 47 * '\0' is sent to the NTP server, but none of them explains what this message actually means or why it is sent. I've tried looking at the NTP RFC, but I was unable to find any information about it there either.
"\x1b' + 47 * '\0" represents a data field of 48 bytes. 0x1B followed by 47 times
0. 48 bytes is the size of an NTP UDP packet. The first byte (0x1B) specifies LI, VN, and Mode.
RFC 5905 NTP Specification (7.3. Packet Header Variables) specifies the message header as follows:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|LI | VN  |Mode |    Stratum    |     Poll      |   Precision   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Setting the first byte of the data to 0x1B (binary 00 011 011) means
LI = 0 (Leap indicator)
VN = 3 (Version number)
Mode = 3 (Mode, mode 3 is client mode)
You may also use the more recent version (VN = 4). This would require the first header byte to be set to
0x23 (00 100 011).
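For illustration, the first header byte can be built from its three subfields like this (the function name is mine):

    def first_byte(li, vn, mode):
        # LI occupies bits 7-6, VN bits 5-3, Mode bits 2-0.
        return (li << 6) | (vn << 3) | mode

    assert first_byte(0, 3, 3) == 0x1B  # NTPv3 client request
    assert first_byte(0, 4, 3) == 0x23  # NTPv4 client request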
The modes are defined as:
+-------+--------------------------+
| Value | Meaning                  |
+-------+--------------------------+
|   0   | reserved                 |
|   1   | symmetric active         |
|   2   | symmetric passive        |
|   3   | client                   |
|   4   | server                   |
|   5   | broadcast                |
|   6   | NTP control message      |
|   7   | reserved for private use |
+-------+--------------------------+
Specifying Mode = 3 marks the message as a client request.
Sending such a packet to port 123 of an NTP server will prompt the server to send a reply packet.
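Putting it together, a minimal client sketch in Python (the server name and timeout are arbitrary choices for the example; the reply's Transmit Timestamp seconds field sits at byte offset 40):

    import socket
    import struct
    import time

    NTP_TO_UNIX = 2208988800  # seconds between the 1900 (NTP) and 1970 (Unix) epochs

    def ntp_time(server="pool.ntp.org", timeout=5.0):
        packet = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3, rest zeroed
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, 123))
            reply, _ = sock.recvfrom(512)
        # Transmit Timestamp: seconds are the 32-bit word at byte offset 40.
        tx_seconds = struct.unpack("!I", reply[40:44])[0]
        return tx_seconds - NTP_TO_UNIX

    print(time.ctime(ntp_time()))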
