Typically, top-level domain name servers, like the "com" name server, need to have a map that gives out the IP addresses of the name servers for different domain names like "google", "yahoo", "facebook", etc.
I imagine this would have a very large number of key-value pairs.
How is this huge map handled? Is it an unordered map, an ordered map, or some other "special" implementation?
Most of the major nameservers are open source, so you could study their sources:
bind
nsd
knot
yadifa
geodns
But it is of course far more complicated than just a "map".
Even if you start with very old documents, like RFC 1035 which defines the protocol, there are few details about implementation, as expected:
While name server implementations are free to use any internal data
structures they choose, the suggested structure consists of three major
parts:
A "catalog" data structure which lists the zones available to
this server, and a "pointer" to the zone data structure. The
main purpose of this structure is to find the nearest ancestor
zone, if any, for arriving standard queries.
Separate data structures for each of the zones held by the
name server.
A data structure for cached data. (or perhaps separate caches
for different classes)
(and read the following sentences about various optimizations)
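Purely as an illustration (this is not taken from any of the implementations above), those three parts could be sketched in C++ with ordered maps keyed by domain name:
#include <map>
#include <string>
#include <vector>

// One resource record set, e.g. all the NS records of "example.com."
struct RRSet {
    std::string owner;                // owner name, e.g. "example.com."
    std::string type;                 // "NS", "A", "AAAA", ...
    std::vector<std::string> rdata;   // one entry per record
    long ttl = 0;
};

// Authoritative data for a single zone, keyed by owner name.
struct Zone {
    std::string origin;                                  // e.g. "com."
    std::map<std::string, std::vector<RRSet>> records;
};

// The three suggested parts: catalog of zones, per-zone data, cache.
struct NameServerData {
    std::map<std::string, Zone> catalog;   // zone origin -> zone data
    std::map<std::string, RRSet> cache;    // cached data (recursive side)
};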
First, the task is different for an authoritative and a recursive nameserver.
Some authoritative ones, for example, let you "compile" a zone into some kind of format before loading it; see zonec in nsd for example.
You also need to remember that this data is dynamic: it can be remotely updated incrementally by DNS UPDATE messages, and in the presence of DNSSEC, the RRSIGs may get dynamically computed or at least need to change from time to time.
Hence, a simple key-value store is probably not enough for all those needs. But note that multiple nameservers allow different "backends", so that the data can be pulled from other sources, with or without constraints, such as an SQL database or even a program creating the DNS response when the DNS query comes in.
For example, from memory, bind internally uses a red-black binary tree. See the Wikipedia explanation at https://en.wikipedia.org/wiki/Red%E2%80%93black_tree; in short:
A red–black tree is a kind of self-balancing binary search tree in computer science. Each node of the binary tree has an extra bit, and that bit is often interpreted as the color (red or black) of the node. These color bits are used to ensure the tree remains approximately balanced during insertions and deletions.
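As a rough analogy (and not BIND's actual code), C++'s std::map is typically implemented as a red-black tree, so an always-balanced delegation lookup could be sketched like this, with made-up data:
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // std::map is typically backed by a red-black tree, so lookups, inserts
    // and deletions stay O(log n) even with millions of delegations.
    std::map<std::string, std::vector<std::string>> delegations = {
        {"facebook.com.", {"a.ns.facebook.com.", "b.ns.facebook.com."}},
        {"google.com.",   {"ns1.google.com.", "ns2.google.com."}},
        {"yahoo.com.",    {"ns1.yahoo.com.", "ns2.yahoo.com."}},
    };

    auto it = delegations.find("google.com.");
    if (it != delegations.end())
        for (const auto& ns : it->second)
            std::cout << it->first << " NS " << ns << '\n';
    return 0;
}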
Side note about "need to have a map which gives out IP address of the name server", which is not 100% exact: the registry's authoritative nameservers will mostly have NS records, associating domain names with other authoritative nameservers (a delegation), and will have some A and AAAA records, called glue records, in that case.
Some requests to them may not get you any IP addresses at all, see:
$ dig @a.gtld-servers.com NS afnic.com +noall +ans +auth
; <<>> DiG 9.12.0 <<>> @a.gtld-servers.com NS afnic.com +noall +ans +auth
; (1 server found)
;; global options: +cmd
afnic.com. 172800 IN NS ns1.nic.fr.
afnic.com. 172800 IN NS ns3.nic.fr.
afnic.com. 172800 IN NS ns2.nic.fr.
(no IP addresses whatsoever, because the nameservers are all out of zone, that is "out-of-bailiwick" to use the true technical term)
I'm a noob, sorry to say. I understand both what a hash table is and what a translation lookaside buffer (TLB) is, but it seems like they work on similar principles. What am I missing here?
A hash table uses a hash function that maps a certain space of original values to a smaller space of resulting values. The idea is that since the original space is normally not used completely, one is better off using the mapped space, e.g. for table lookup. Since in this mapping more than one original value is mapped to the same resulting value, conflicts can arise that require conflict resolution, which is not a problem.
A TLB, on the other hand, is used to map a large virtual memory space to a small physical memory space. To do so, it caches virtual addresses, i.e. it maps a virtual address to a physical address. There are many ways to do this. The simplest (for explanation) is a direct-mapped cache. It uses the upper part of the virtual address as the address of the cache memory; stored there are the lower part of the virtual address together with the corresponding physical address. One could thus consider this a hash table with a hash function that maps all virtual addresses that have the same upper bits to the same cache address. However, conflicts are here (in the direct-mapped cache) resolved by replacing an old entry with a new entry. If the cache organization is more complex, i.e. a multi-way associative cache, conflicts are handled by distributing new entries among the available associative storage locations.
So, you are right, there is no essential difference between hash tables and TLBs.
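To make the comparison concrete, here is a toy direct-mapped translation cache written as a fixed-size hash table whose hash function just takes a slice of the virtual page number's bits; this is only an illustration, not how any real MMU is specified:
#include <array>
#include <cstdint>
#include <optional>

// Toy direct-mapped translation cache: essentially a fixed-size hash table
// whose hash function is "take some bits of the virtual page number".
class TinyTlb {
public:
    static constexpr std::size_t kSlots = 64;   // must be a power of two

    std::optional<std::uint64_t> lookup(std::uint64_t vpn) const {
        const Entry& e = slots_[index(vpn)];
        if (e.valid && e.tag == tag(vpn))
            return e.ppn;                       // hit: return physical page number
        return std::nullopt;                    // miss: walk the page tables instead
    }

    void insert(std::uint64_t vpn, std::uint64_t ppn) {
        // Conflict resolution in a direct-mapped cache: the new entry
        // simply replaces whatever was in the slot before.
        slots_[index(vpn)] = Entry{true, tag(vpn), ppn};
    }

private:
    struct Entry {
        bool valid = false;
        std::uint64_t tag = 0;
        std::uint64_t ppn = 0;
    };

    static std::size_t index(std::uint64_t vpn) { return vpn & (kSlots - 1); }
    static std::uint64_t tag(std::uint64_t vpn) { return vpn / kSlots; }

    std::array<Entry, kSlots> slots_{};
};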
My question is about CDRs (Call Detail Records). I want to know more about them. I searched a lot (on Google and other sites) but unfortunately there are few references and I couldn't find the answers to my questions in any of them. (Please share any reference you know of and think will be useful.)
I want to know...
1. Where is the CDR element in network architectures? I mean, for example in LTE, which elements is it connected to (S-GW, MME, HSS, PCRF, etc.)? (From what I read, CDR processing is "mediation", but where is it in practical networks? Where should it be?)
2. In my searching, I couldn't find any vendor-specific hardware from the big companies made for CDRs. Is there any specific hardware which most mobile network operators use?
3. Is there any standard specification (not necessarily official, but used by most) about CDRs (interfaces, protocols, file formats, etc.)?
Thanks a lot
CDR is an "old" word that comes from old fixed networks where the only service was voice and Call Data Records were generated by the switch. By extension, today, CDR means any information generated by a network equipment. It can still be voice, or mobile data, or wifi, or SMS, etc Some times they are called also UDR, "U" for Usage Data Record.
The MSC generated CDR about : incoming calls, outgoing calls, transit calls, SMS traffic. Basically it says that number A has called the number B during S seconds, that the location of A is a given Cell ID and LAC, that the call has used some trunc, and so on. The is no information about the price, for example. The same for the "CDR" from SGSN or GGSN or MME where the usually provided information is location, type of (data) protocol used (TCP, UDP, ARP, HTTP, SMTP, ...), volume, etc. SMSC, USSD, and others also produce this kind of CDR. I use to call those CDRs "Traffic CDRs" as they describe the traffic information.
There are complementary to the "Charging CDRs" where the price information is produce. For example, for a voice call, the IN platform (sometimes called the OCS; Online Charging System) will generate CDRs with A number, B number, Call duration (which usually is different from the duration seen on the MSC), the accounts that had been used to pay the call, etc. Same hold for data, sms and all services charging. Those CDRs may also be used for offline billing.
I'm not aware of any standard. They are maybe specifications about what CDR produced by a given (standard) platform needs to produce but my (quite long) experience in the field says you should not rely on this but on the spec defined by the equipment vendor and your own test procedure.
This is where mediation comes into play. It's an IT system that is able to:
get (or receive) unprocessed CDR files from all the network equipment
identify and filter out some unnecessary fields
sometimes aggregate some traffic CDRs into one CDR
sometimes deduplicate some CDRs, or make sure that there is only one CDR per network event
eventually produce output files that will be used by other systems like billing or a data warehouse (one of these steps is sketched below)
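As an illustration of one of those steps, a deduplication pass could look roughly like this (the field names and the "event ID" key are my own invention, not any vendor's format):
#include <string>
#include <unordered_set>
#include <vector>

// Illustrative raw CDR: in practice the fields and encoding (ASN.1, CSV, ...)
// are defined by each equipment vendor.
struct RawCdr {
    std::string eventId;    // hypothetical unique network-event identifier
    std::string aNumber;
    std::string bNumber;
    int durationSeconds = 0;
};

// Keep only one CDR per network event, dropping duplicates.
std::vector<RawCdr> deduplicate(const std::vector<RawCdr>& input) {
    std::unordered_set<std::string> seen;
    std::vector<RawCdr> output;
    for (const auto& cdr : input) {
        if (seen.insert(cdr.eventId).second)   // true only for the first occurrence
            output.push_back(cdr);
    }
    return output;
}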
A CDR, Call or Charging Data Record, is actually just a record of the call's details - i.e. the name is literally correct.
You can think of it as a data structure or a group of data including the called number, calling number, duration, time of call, network nodes used etc.
It is used to feed billing systems, analytics, and simply to record details on calls, which can help with diagnosing problems for example.
It is not a node or a physical element itself, and once the CDRs are collected, for example on a Switch, they can be transferred and stored elsewhere.
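As a sketch, such a record could be modelled like this (the field names are illustrative, not taken from any 3GPP definition):
#include <chrono>
#include <string>
#include <vector>

// Illustrative CDR layout: just a plain record of a call's details.
struct CallDetailRecord {
    std::string callingNumber;                       // A number
    std::string calledNumber;                        // B number
    std::chrono::system_clock::time_point startTime; // time of call
    std::chrono::seconds duration{0};                // call duration
    std::vector<std::string> networkNodes;           // switches/nodes traversed
};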
All the big switching vendors, Nokia, Ericsson, Huawei, etc., will 'make' or generate the CDRs on their switches, as it is a basic requirement that operators demand.
The 3GPP organization defines the specification for CDRs - this covers areas like the structure of the CDR, the info the CDR contains and how CDRs are transferred between network elements. You can find the spec here:
https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=1912
I'm implementing a completely decentralized database. Anyone at any moment can upload any type of data to it. One good solution that fits this problem is an immutable distributed hash table. Values are keyed by their hash. Immutability ensures this map always remains valid, simplifies data integrity checking, and avoids synchronization.
To provide some data retrieval facilities, a tag-based classification will be implemented. Any key (associated with a single unique value) can be tagged with an arbitrary tag (an arbitrary sequence of bytes). To keep things simple, I want to use the same distributed hash table to store this tag-hash index.
To implement this database I need some way to maintain a decentralized consensus on what the current, valid tag-hash index is. Immutability forces me to use some kind of linked data structure. How can I find the root? How do I synchronize entry additions? How do I make sure there is a single shared root for everybody?
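Roughly, the content-addressed part I have in mind looks like this (using std::hash only as a stand-in for the cryptographic hash, e.g. SHA-256, a real system would use):
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    // Values are keyed by their own hash, so a stored entry never changes:
    // the same bytes always map to the same key.
    std::unordered_map<std::size_t, std::string> store;

    std::string value = "any blob of bytes";
    std::size_t key = std::hash<std::string>{}(value);   // stand-in for SHA-256
    store.emplace(key, value);

    // Integrity check on retrieval: re-hash and compare with the key.
    const std::string& fetched = store.at(key);
    std::cout << std::boolalpha
              << (std::hash<std::string>{}(fetched) == key) << '\n';
    return 0;
}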
In a distributed hash table you can have the nodes structured in a ring, where each node in the ring knows about at least one other node in the ring (to keep it connected). To make the ring more fault-tolerant, make sure that each node has knowledge of more than one other node in the ring, so that it is still able to connect if some node crashes. In DHT terminology, this is called a "successor list". When the nodes are structured in the ring with unique IDs and some stabilization protocol, you can do key lookups by routing through the ring to find the node responsible for a certain key.
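A rough sketch of that routing idea, in the simplest case where each node only follows its direct successor (Chord additionally keeps a finger table to make lookups O(log n)); the types here are made up for illustration:
#include <cstdint>
#include <vector>

// Hypothetical node descriptor: ID on the ring plus the successor list
// used for fault tolerance.
struct Node {
    std::uint64_t id;
    std::vector<const Node*> successors;   // first entry is the direct successor
};

// True if "key" falls in the half-open ring interval (from, to].
bool inInterval(std::uint64_t key, std::uint64_t from, std::uint64_t to) {
    if (from < to)  return key > from && key <= to;
    return key > from || key <= to;        // interval wraps around zero
}

// Walk the ring, one successor at a time, until the node responsible
// for the key is found (assumes the ring is stable and connected).
const Node* findResponsible(const Node* start, std::uint64_t key) {
    const Node* current = start;
    while (!inInterval(key, current->id, current->successors.front()->id))
        current = current->successors.front();
    return current->successors.front();
}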
How to synchronize entry additions?
If you don't want replication, a weak version of decentralized consensus is enough: each node has its unique ID and the nodes know about the ring structure. This can be achieved by a periodic stabilization protocol, like in Chord: http://nms.lcs.mit.edu/papers/chord.pdf
The stabilization protocol has each node periodically communicating with its successor to see whether it is still the true successor in the ring, or whether a new node has joined in between, or the successor has crashed and the ring must be updated. Since no replication is used, for consistent insertions it is enough that the ring is stable, so that peers can route each insertion to the correct node, which inserts it in its storage. Each item is held by only a single node in a DHT without replication.
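A sketch of that periodic check, loosely paraphrasing Chord's stabilize()/notify() pseudocode with made-up types (not a drop-in implementation):
#include <cstdint>

// Hypothetical in-process model of a ring node, just to illustrate the
// periodic stabilize()/notify() exchange described above (after Chord).
struct RingNode {
    std::uint64_t id;
    RingNode* successor = nullptr;
    RingNode* predecessor = nullptr;
};

// True if "id" lies strictly between "from" and "to" on the ring.
bool between(std::uint64_t id, std::uint64_t from, std::uint64_t to) {
    if (from < to)  return id > from && id < to;
    return id > from || id < to;            // interval wraps around zero
}

// Called periodically by every node.
void stabilize(RingNode& self) {
    // Ask our successor who it currently believes its predecessor is.
    RingNode* x = self.successor->predecessor;
    // If a node has joined between us and our successor, it becomes our successor.
    if (x != nullptr && between(x->id, self.id, self.successor->id))
        self.successor = x;
    // notify(): tell the successor that we think we are its predecessor.
    RingNode* succ = self.successor;
    if (succ->predecessor == nullptr ||
        between(self.id, succ->predecessor->id, succ->id))
        succ->predecessor = &self;
}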
This stabilization procedure gives you a very good probability that the ring will always be stable and that inconsistency is minimized, but it cannot guarantee strong consistency: there might be gaps where the ring is temporarily unstable when nodes join or leave. During those inconsistency periods, data loss, duplication, overwrites, etc. could happen.
If your application requires strong consistency, a DHT is not the best architecture; it would be very complex to implement that kind of consistency in a DHT. First of all you'll need replication, and you'll also need to add a lot of ACKs and synchrony to the stabilization protocol, for instance using a 2PC or Paxos protocol for each insertion to ensure that every replica got the new value.
How can I find the root?
How to make sure there is a single shared root for everybody?
Typically DHTs are associated with some (centralized) lookup service that contains the IPs/IDs of nodes, and new nodes register at the service. This service can then also ensure that each new node gets a unique ID. Since this service only manages IDs and simple lookups, it is not under any high load or risk of crashing, so it is "OK" to have it centralized without hurting fault tolerance. But of course you could distribute the lookup service as well, synchronizing its instances with a consensus protocol like Paxos.
I need to synchronize/forward models between different computers. The models can represent tables but also trees (entries with child items). The setup will be:
Server providing a TCP Service (or other communication)
keep a list of models that need to be synchronized (registerSourceModel(), in the style of proxy models)
provide the model list with unique IDs to clients
provide a QDataStream for packet serialization
Client(s) providing a TCP Connection to the server
create NetworkModel instances based on the model list from the server
forward / provide any data() queries to the model
forward / provide any setData() queries to the model
... rest of typical model methods
There are two options of setting up the synchronization:
Have a QAbstractItemModel (maybe a QStandardItemModel) with a complete cache (a duplicate of all the data) on the client side, just updating on remote dataChanged()/layoutChanged() signals
Directly forward any requests over the network, resulting in stub (Loading***) data entries until the real data has been fetched from the network (dataChanged() signals)
While option 1) will have quite a memory footprint, since we will have to duplicate any relevant model, it will most of the time have a very short response time, since synchronization can be done in the background when needed.
Option 2) will have little to (almost) no memory usage, since everything will be queried directly. I am still not sure how this will actually look and feel when big models have to be visualized in a view. Think of a company catalogue (or a list of Amazon articles with some details) that has to be queried item by item (data() works on top of a single QModelIndex) over the network.
As a result we probably will go with the first option.
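For reference, this is roughly how I picture the client side of option 1 for the flat-table case (the RemoteUpdate type is a placeholder for whatever the QDataStream packets will carry):
#include <QStandardItemModel>
#include <QString>
#include <QVariant>

// Placeholder for a decoded network packet describing one remote dataChanged().
struct RemoteUpdate {
    int row = 0;
    int column = 0;
    QVariant value;
};

// Option 1: the client keeps a full copy of the data in a QStandardItemModel
// and only patches it when the server reports a change.
class CachedClientModel : public QStandardItemModel {
public:
    using QStandardItemModel::QStandardItemModel;

    void applyRemoteUpdate(const RemoteUpdate& update) {
        QStandardItem* item = this->item(update.row, update.column);
        if (!item)
            return;                       // row not cached yet; a full resync is needed
        item->setData(update.value, Qt::DisplayRole);
        // QStandardItemModel emits dataChanged() itself, so attached views refresh.
        // Tree-shaped models would additionally need the parent path in the packet.
    }
};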
The problem I encounter with both options is the synchronization/forwarding of valid QModelIndexes; these are always invalid on the remote computer. I did some research on QSortFilterProxyModel, as it is kind of similar but works within the same process space. This model keeps an identifying list of all indices for mapping the "filtered" indices.
Will this require me to keep an identifying list of QPersistentModelIndex on the server and a map of IDs to my own QModelIndexes on the client (with synchronization of these unique IDs between both sides)?
Is there another option to "link" two models or even just put the whole model into some container and pipe that into my stream?
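To make the first question more concrete, this is the kind of server-side bookkeeping I imagine (completely hypothetical, just to illustrate mapping indexes to IDs that can be sent over the wire):
#include <QAbstractItemModel>
#include <QHash>
#include <QPersistentModelIndex>

// Hypothetical server-side registry: hands out stable integer IDs for model
// indexes so the client can refer to them over the wire instead of sending
// raw QModelIndex values (which are meaningless in another process).
class IndexRegistry {
public:
    quint64 idFor(const QModelIndex& index) {
        const QPersistentModelIndex persistent(index);
        auto it = m_ids.constFind(persistent);
        if (it != m_ids.constEnd())
            return it.value();
        const quint64 id = ++m_lastId;
        m_ids.insert(persistent, id);
        m_indexes.insert(id, persistent);
        return id;
    }

    QModelIndex indexFor(quint64 id) const {
        // Returns an invalid index if the ID is unknown or the row was removed.
        return m_indexes.value(id);
    }

private:
    quint64 m_lastId = 0;
    QHash<QPersistentModelIndex, quint64> m_ids;      // stale entries would need pruning
    QHash<quint64, QPersistentModelIndex> m_indexes;
};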
In order to track a TCP session as it traverses a network, I would like to know the Initial Sequence Number of the session. I have written some code to simply copy the ISN into the tcp_sock structure when the TCP session is created, then added code to copy that value into the tcp_info structure returned by getsockopt(). This seems to work, but I was wondering if there was a better way. I see that snt_isn and rcv_isn are stored in the structure tcp_request_sock. Is there a way to access the tcp_request_sock structure from the tp structure in getsockopt()?
thanks in advance
bvs
No, there is no way for user-space to get (or set) the sequence numbers via the socket API (e.g., via setsockopt or tcp_info, ...).
The only way to read them would be by capturing a trace with tcpdump.
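For completeness, what the socket API does expose is struct tcp_info, which carries a lot of TCP state but, as said, no sequence numbers; a minimal Linux-only sketch, assuming a connected TCP socket fd:
// Linux-only sketch: query struct tcp_info for a connected TCP socket "fd".
// It reports state, RTT, retransmit counters, etc., but no sequence numbers.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <cstdio>

bool printTcpInfo(int fd) {
    struct tcp_info info;
    socklen_t len = sizeof(info);
    if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) != 0)
        return false;
    std::printf("state=%u rtt=%uus retransmits=%u\n",
                (unsigned)info.tcpi_state,
                (unsigned)info.tcpi_rtt,
                (unsigned)info.tcpi_retransmits);
    return true;
}
To actually see the ISN on the wire, capture the SYN packets, for example with something like tcpdump -n 'tcp[tcpflags] & tcp-syn != 0'.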