When designing the management API of a network element, we often include support for commonly used CLIs such as the Cisco-style and Juniper-style CLIs. But to support those commands, we need to know how each command breaks down into a sequence of operations on the MIB tables and the objects therein.
For example:
A CLI command:
router bgp 4711 neighbor 3.3.3.3
And its MIB object operations (as in SNMP) would be:
bgpRmEntIndex 4711
bgpPeerLocalAddrType unica
bgpPeerLocalAddr 2.2.2.2
bgpPeerLocalPort 179
bgpPeerRemoteAddrType uni
bgpPeerRemoteAddr 3.3.3.3
bgpPeerRemotePort 179
Is there some resource which can help us understand this breakdown?
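For reference, driving such a breakdown programmatically over SNMP would look roughly like the sketch below, which uses the SNMP4J library. The agent address, community string, and numeric OIDs are all placeholders; the real OIDs, row indices, and value types must be taken from the BGP MIB itself.

import org.snmp4j.CommunityTarget;
import org.snmp4j.PDU;
import org.snmp4j.Snmp;
import org.snmp4j.event.ResponseEvent;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.*;
import org.snmp4j.transport.DefaultUdpTransportMapping;

public class BgpPeerProvision {
    public static void main(String[] args) throws Exception {
        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
        snmp.listen();

        // Placeholder target: the SNMP agent on the network element.
        CommunityTarget target = new CommunityTarget();
        target.setAddress(GenericAddress.parse("udp:192.0.2.1/161"));
        target.setCommunity(new OctetString("private"));
        target.setVersion(SnmpConstants.version2c);

        // Placeholder OIDs; look up the real ones (and the row index
        // derived from bgpRmEntIndex) in the MIB document.
        OID bgpPeerRemoteAddr = new OID("1.3.6.1.99.1");
        OID bgpPeerRemotePort = new OID("1.3.6.1.99.2");

        // One SET PDU carrying (a subset of) the varbinds listed above.
        PDU pdu = new PDU();
        pdu.setType(PDU.SET);
        pdu.add(new VariableBinding(bgpPeerRemoteAddr, new IpAddress("3.3.3.3")));
        pdu.add(new VariableBinding(bgpPeerRemotePort, new UnsignedInteger32(179)));

        ResponseEvent response = snmp.send(pdu, target);
        System.out.println("Agent replied: " + response.getResponse());
        snmp.close();
    }
}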
In general, on the types of devices you mention, you will find that there is no simple mapping between CLI operations and (SNMP) operations on MIB variables. The CLIs are optimized for "user-friendly" configuration and on-line diagnostics; SNMP is optimized for giving machine-friendly access to "instrumentation", mostly for monitoring. Within large vendors (such as Cisco or Juniper), CLI and SNMP support are typically developed by different specialized groups.
For something that is closer to CLIs but friendlier towards programmatic use (an API), have a look at the IETF NETCONF protocol, which provides XML-based RPC read and write access to device configuration (and state). Juniper pioneered this concept through their Junoscript APIs and later helped define the IETF standard, so you will find good support there. Cisco has also added NETCONF capabilities to their systems, especially the newer ones such as IOS XR.
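To give a flavor of what that looks like on the wire, the sketch below assembles and prints a NETCONF <edit-config> RPC roughly corresponding to the CLI command from the question. The <rpc>/<edit-config> framing follows RFC 6241, but the <bgp> payload is an invented example model, not a real vendor or IETF schema; a real client would send this over the NETCONF SSH subsystem instead of printing it.

public class NetconfEditConfig {
    public static void main(String[] args) {
        // RFC 6241 framing; the <bgp> payload below is illustrative only.
        String rpc = """
            <rpc message-id="101"
                 xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
              <edit-config>
                <target><candidate/></target>
                <config>
                  <bgp xmlns="http://example.com/ns/bgp">
                    <as>4711</as>
                    <neighbor><address>3.3.3.3</address></neighbor>
                  </bgp>
                </config>
              </edit-config>
            </rpc>""";
        System.out.println(rpc);
    }
}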
As for the MIB side, documents such as this one define the objects that the SNMP operations act on: http://www.icir.org/fenner/mibs/extracted/BGP4-V2-MIB-idr-00.txt
We've tried evaluating gRPC with FlatBuffers on an embedded Linux system; the resulting executable is ~6 MB for a very basic example with protobuf. We are looking to strip off as much as possible so we can move to platforms with even fewer resources. All we need is a direct channel over a "secure" serial transport: USB CDC, direct UDP/TCP, or similar.
Is there a way to achieve this with standard gRPC configuration?
Is a custom channel required for this setup? Implementing a custom channel seems rather complicated (the included channels have very high cyclomatic complexity, even the in-memory one).
Is there any other guidance, or are there examples of a simple custom channel implementation?
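For comparison, when all that is needed is a length-delimited protobuf exchange over a raw TCP (or bridged serial) byte stream, one option is to drop gRPC entirely and frame the messages yourself. Here is a minimal sketch using protobuf-java's delimited helpers; `Telemetry` stands in for a message class generated by protoc from your .proto file, and the host/port are placeholders:

import java.net.Socket;

public class RawProtobufClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("192.168.0.10", 5000)) {
            // `Telemetry` and its `sensorId` field are assumed to come
            // from your generated protobuf classes.
            Telemetry request = Telemetry.newBuilder()
                    .setSensorId(42)
                    .build();
            // writeDelimitedTo prefixes the message with a varint length,
            // so the peer can split messages out of the byte stream.
            request.writeDelimitedTo(socket.getOutputStream());

            Telemetry reply = Telemetry.parseDelimitedFrom(socket.getInputStream());
            System.out.println("reply: " + reply);
        }
    }
}

This trades away gRPC's multiplexing, deadlines, and flow control for a much smaller footprint, which may be acceptable on the kinds of platforms described above.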
What is the difference between 'network operating systems' like ONOS, ONAP, and OpenDaylight and 'configuration management' platforms like Salt, Ansible, and Puppet? More specifically, when would I choose one over the other? I have done some research on all of these, and as far as I can tell, the configuration management platforms are, as the name implies, for configuring the network, while the operating-system platforms provide an actual software-defined network that can also configure networks and networking devices, plus more.
You're really talking about 3 different things.
OpenDaylight and ONOS are network controller platforms. While ONOS is approaching feature parity with OpenDaylight, OpenDaylight is more widely deployed (production deployments serve over one billion people) and has broader support.
ONAP is a platform used to design, create, orchestrate, monitor, and perform life-cycle management of open-source and commercial VNFs and legacy networks. ONAP uses OpenDaylight's MD-SAL at its core.
I don't have much experience with Salt, but it, Ansible, and Puppet are flexible DevOps configuration utilities for managing users and services and for general automation.
I understand that PKCS#11 is a standard that defines the Cryptoki API and that KMIP is a protocol that defines a message format, but how are they connected, if they are interconnected at all?
How does each hold its individual significance in cryptography?
PKCS#11 can be considered a protocol of a kind too: it is used to communicate with hardware devices (to be precise, with the driver modules of those devices). However, it is not suitable for network communications. KMIP is the protocol for communicating with remote key storage and similar services and for using the remotely stored keys. This is similar to what PKCS#11 offers locally.
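To illustrate the local, driver-module side of this: Java exposes PKCS#11 through its SunPKCS11 provider, which loads the token's native module. A minimal sketch; the driver library path and PIN are placeholders for your token's actual values:

import java.security.KeyStore;
import java.security.Provider;
import java.security.Security;

public class Pkcs11List {
    public static void main(String[] args) throws Exception {
        // Inline SunPKCS11 configuration ("--" marks the argument as
        // literal config data); the library line points at the token's
        // PKCS#11 driver module (placeholder path).
        Provider provider = Security.getProvider("SunPKCS11")
                .configure("--name=demo\nlibrary=/usr/lib/softhsm/libsofthsm2.so");
        Security.addProvider(provider);

        // Log in to the token and list the keys/certificates it holds.
        KeyStore keyStore = KeyStore.getInstance("PKCS11", provider);
        keyStore.load(null, "1234".toCharArray()); // placeholder PIN
        keyStore.aliases().asIterator().forEachRemaining(System.out::println);
    }
}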
In theory, the protocols partially overlap and are to a certain extent interchangeable: Oracle has a PKCS#11 driver/gateway that talks to a remote KMIP server, and the opposite should also be possible. But, of course, each has its own strengths and weaknesses. Interestingly, both the KMIP and PKCS#11 standards are developed by the same people in OASIS.
There's also a paragraph in Wikipedia that answers your question.
Amazon / AWS EC2 offers SR-IOV (Single Root I/O Virtualization) instances, which it dubs "enhanced networking" -- does Google offer this on Compute Engine?
Specifically, are any GCE instance types able to bypass the hypervisor and have direct access to a multi-queue NIC?
Is SR-IOV support needed to take advantage of Scylla DB's architecture?
HN Discussion: https://news.ycombinator.com/item?id=10262719
Currently Google Compute Engine does not offer SR-IOV. That said, SR-IOV is not strictly necessary to take advantage of Scylla's architecture.
GCE offers multi-queue networking, and it is possible to assign the virtio-net queues directly to user mode using Intel's DPDK. This should allow our virtio-net NIC to work with Scylla, although at least at one point DPDK made certain QEMU-specific assumptions with respect to virtio-net (in particular, it assumed Tx/Rx queue depths of 256 descriptors; the virtio-net NIC in GCE currently advertises 16,384-entry queues, although this is likely to change in the near future).
For applications like Scylla, this should offer superior network performance and lower in-guest compute overhead compared with using the kernel TCP/IP stack.
Additionally, for all GCE instances with >= 1 core (i.e., not fractional-core instances), we offer multi-Gbps throughput subject to fabric availability. Latency is likely to be lowest in zones with Haswell processors. We do not currently guarantee specific network characteristics, but we offer up to 2 Gbps/core of network throughput, shared between the virtual NIC and any attached persistent-disk volumes (Local SSD throughput does not count against this limit). Throughput-wise, this makes 8-vCPU and larger instances comparable to EC2 Enhanced Networking.
At the moment, nothing that we offer is similar to AWS' "enhanced networking".
You are more than welcome to post this as a Feature Request on our Compute Engine Issue tracker, though, so we can look at implementing a similar feature.
I was working with the Java Agent Development Framework (JADE), which is a framework for creating mobile agents. I was wondering whether the code I write in JADE will work over HTTP or below HTTP. Since the inner workings and execution of JADE are opaque to me, I couldn't find the answer directly... Thanks in advance :-)
JADE (or, more generally, the FIPA standard) introduces the concept of a platform consisting of one or more containers on which agents live. Each container runs in a separate JVM. JADE distinguishes between two types of communication, depending on where the talking agents live:
intra-platform communication, when messages are exchanged between agents living on different containers of the same platform
inter-platform communication, when messages are exchanged between agents living on different platforms
Depending on where the talking agents live, a different protocol will be used.
For intra-platform communication one of the following transport protocols will be used:
RMI (default), going directly over TCP/IP
a proprietary protocol based on TCP sockets (used in J2ME environments in the JADE LEAP platform)
For inter-platform communication one of the following transport protocols will be used:
IIOP (Sun or ORBacus implementation)
HTTP and HTTPS
JMS
Jabber XMPP
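Whichever transport the platform selects, it stays invisible to the agent code. Here is a minimal sketch of a JADE agent sending an ACL message; the receiver's local name "pong" is assumed for illustration:

import jade.core.AID;
import jade.core.Agent;
import jade.lang.acl.ACLMessage;

public class PingAgent extends Agent {
    @Override
    protected void setup() {
        ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
        // "pong" is the assumed local name of another agent; the
        // platform picks RMI, HTTP, IIOP, etc. to deliver the message.
        msg.addReceiver(new AID("pong", AID.ISLOCALNAME));
        msg.setContent("ping");
        send(msg);
    }
}

The same code runs unchanged whether "pong" lives in the same container, in another container of the same platform, or (addressed with a full GUID AID) on a different platform.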
Since the question is specific to the JADE platform, I strongly encourage you to use the JADE mailing list: http://jade.tilab.com/newuser.php