Is Riak a viable choice for dynamic network environments?

We are considering Riak for use in an embedded device context (embedded Linux) where devices are dynamically addressed (DHCP).
Is this a viable choice?
We can assume that appropriate auto-discovery protocols are in place to enable devices to discover each other. Upon joining the network, a device would obviously need to run riak-admin cluster join <other device>. Other than this, would Riak be capable of handling devices leaving and re-joining the network fairly infrequently? Or does it play much more nicely in a statically-addressed environment?

DHCP doesn't necessarily mean the device has to re-join the cluster every time it boots. If the node names are resolvable via DNS or a hosts file, and the listeners are configured to 0.0.0.0, the Riak nodes should communicate quite happily even if their IPs change on reboot.
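For reference, a minimal sketch of the relevant riak.conf settings and cluster commands (the hostnames below are placeholders, not values from the question):

## riak.conf - identify the node by a resolvable name, not an IP, and bind to all interfaces
nodename = riak@device1.local
listener.http.internal = 0.0.0.0:8098
listener.protobuf.internal = 0.0.0.0:8087

## run once on a newly discovered device to join the existing cluster
riak-admin cluster join riak@device2.local
riak-admin cluster plan
riak-admin cluster commit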

Related

Limits when running ZeroTier

We want to use ZeroTier to connect from one cloud machine to multiple remote machines. We do not want the remote machines to access each other. What would be a good approach?
Use a single network and set rules based on tags to restrict access
Run multiple networks, each containing the cloud machine and one remote machine
Are there limits to:
Number of members in a ZeroTier network
Number of ZeroTier networks a machine can connect to at a time (tun interfaces, IP conflicts, or performance impact)
I would use a single network and use rules to prevent peering between the machines. For instance, you could apply rules to the 192.168.141.0/25 portion of the network to prevent peering, and allow only defined network paths between hosts.
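A rough sketch of the tag-based variant (the tag name, id, and values below are made up for illustration; check the ZeroTier rules manual for the exact grammar before relying on it):

# allow only IPv4, ARP and IPv6 ethernet frames
drop
  not ethertype ipv4
  and not ethertype arp
  and not ethertype ipv6
;

# tag the cloud machine as the hub; remote machines keep the default value 0
tag role
  id 10
  enum 1 hub
  default 0
;

# permit traffic only when at least one endpoint is the hub, so spokes cannot peer
accept tor role 1;
drop;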
Just a personal rant here: You don't want to do that. Really. You're going to make a headache for yourself when you have to scale horizontally (which you will if you're successful). I would STRONGLY recommend taking an mTLS approach to service authentication instead. Somewhat more work at the start, but a lot easier in the long run.
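If you do go the mTLS route, the front-door half of it is small. Here is a minimal sketch for an nginx-fronted service (certificate paths, names, and the backend address are assumptions):

server {
    listen 443 ssl;
    server_name service.internal.example;

    ssl_certificate        /etc/ssl/service.crt;
    ssl_certificate_key    /etc/ssl/service.key;

    # require a client certificate signed by the internal CA
    ssl_client_certificate /etc/ssl/internal-ca.crt;
    ssl_verify_client      on;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

The other half is issuing per-service client certificates from that internal CA, which is the up-front work referred to above.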

Migrate from legacy network in GCE

Long story short - I need to use networking between projects to have separate billing for them.
I'd like to reach all the VMs in different projects from a single point that I will use for provisioning systems (let's call it coordinator node).
It looks like VPC Network Peering is a perfect solution to this. But unfortunately, one of the existing networks is "legacy". Here's what the Google docs state about legacy networks:
About legacy networks
Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks.
OK, naturally the question arises: how do you migrate out of legacy network? Documentation does not address this topic. Is it not possible?
I have a bunch of VMs, and I'd be able to shut them down one by one:
shutdown
change something
restart
Unfortunately, it does not seem possible to change the network even when the VM is down?
EDIT:
It has been suggested to recreate the VMs keeping the same disks. I would still need a way to bridge the legacy network with the new VPC network to make the migration smooth. Any thoughts on how to do that using the GCE toolset?
One possible solution - for each VM in the legacy network:
Get VM parameters (API get method)
Delete VM without deleting PD (persistent disk)
Create VM in the new VPC network using parameters from step 1 (and existing persistent disk)
This way stop-change-start is not so different from delete-recreate-with-changes. It's possible to write a script to fully automate this (migration of a whole network). I wouldn't be surprised if someone already did that.
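A hedged gcloud sketch of those three steps (instance, zone, disk, and network names are placeholders):

# 1. capture the current configuration and make sure the boot disk survives deletion
gcloud compute instances describe my-vm --zone=us-central1-a --format=yaml > my-vm.yaml
gcloud compute instances set-disk-auto-delete my-vm --zone=us-central1-a --disk=my-vm --no-auto-delete

# 2. delete the VM; the persistent disk is kept
gcloud compute instances delete my-vm --zone=us-central1-a

# 3. recreate it in the new VPC network, reattaching the existing disk
gcloud compute instances create my-vm --zone=us-central1-a \
    --network=new-vpc --subnet=new-subnet \
    --disk=name=my-vm,boot=yes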
UPDATE
The https://github.com/googleinterns/vm-network-migration tool automates the above process, and it also supports migration of a whole Instance Group, Load Balancer, etc. Check it out.

Assigning NICs to a XenProject VM

I wish to install a VM on my Xen Project machine that will run a Zentyal firewall. My machine has three network cards: one integrated, and two discrete, similar cards (they have the same Realtek chip, but are from different manufacturers). For the firewall to work optimally, what I want to do is assign and dedicate the two discrete NICs to my firewall VM, and use the integrated card for Dom0 and the other VMs. I have been able to do similar things with other virtualisation software in the past, but have not been able to find a way to do it with Xen Project.
This page provides many useful configurations, but I don't think any of them match what I want to do. Is this at all possible, or must I give up hope of virtualising my firewall computer?
I think the best way to solve this would be to use PCI passthrough in Xen. What this means is that you can leave one of your NICs attached to the dom0 (which can then be bridged to allow the other VMs to connect through the same interface; look at one of the Xen articles on network configuration for examples of how to set this up, as it is the same as if you only had a single NIC) and give the firewall VM full control over the other two NICs.
The process for this is somewhat involved and can vary by distribution, so I would advise you to check the first article I linked, but I will describe the basic process.
Check the PCI addresses of the two network cards you want to pass through using lspci. The lines of output for your cards will look something like the following (although the details will be different, the structure will be the same):
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
00:19.1 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
Make a note of the first column (00:19.0 and 00:19.1 in this example). Add this to the config for your firewall VM in the following format:
pci=['00:19.0','00:19.1']
On its own this will cause the VM to fail to boot, as it will be unable to pass through the devices. In order for the devices to be passed through, they will need to be bound to the pciback driver on dom0 with commands like:
xl pci-assignable-add 00:19.0
xl pci-assignable-add 00:19.1
This may not be possible in all situations, but there are other methods if it is not. I strongly advise you to read the article I mentioned before to fully understand the best way to do this in your case.
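One common alternative, if the runtime xl pci-assignable-add approach doesn't work (for example because the dom0 driver won't release the NICs cleanly), is to hide the devices from dom0 at boot. Using the example addresses above, and assuming a pvops dom0 kernel:

# if xen-pciback is built into the dom0 kernel, add to its command line:
xen-pciback.hide=(0000:00:19.0)(0000:00:19.1)

# if it is built as a module, e.g. in /etc/modprobe.d/xen-pciback.conf:
options xen-pciback hide=(0000:00:19.0)(0000:00:19.1)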

JGroups UDP for membership but TCP for messaging?

We're prototyping a jgroups-based cluster node messaging system that will replace one that was JDBC-based. There are a lot of folks in my organization who are concerned about adding more multicast traffic to an already busy network, so I'm getting some pushback on a UDP/multicast solution.
I know JGroups can be configured to be TCP only, but I do not want to have to force a configuration step into the application where each node has to be identified ahead of time in a config file.
What I'd like then is to see if we can get a hybrid working here where multicast is used ONLY for group membership operations (discovery, heartbeats, failure detection), but messaging is all TCP-based.
I'm not finding examples of that in my searches, however, and am therefore questioning whether JGroups can be configured this way.
Can it be, and are there any example configs showing how?
Thanks!
As for discovery, you can do that with MPING - it uses IP multicast to discover new nodes, although these then respond via the main transport (TCP in your case).
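A rough sketch of a TCP-based stack that uses MPING for discovery (attributes trimmed to the essentials; treat it as a starting point rather than a tuned config):

<config xmlns="urn:org:jgroups">
    <TCP bind_port="7800"/>
    <!-- multicast is used only here, to find other members -->
    <MPING mcast_addr="228.8.8.8" mcast_port="45566"/>
    <MERGE3/>
    <FD_SOCK/>
    <FD_ALL timeout="12000"/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK2 use_mcast_xmit="false"/>
    <UNICAST3/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
    <MFC/>
    <FRAG2/>
</config>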
As for FD/FD_ALL, I don't think that's possible; those protocols are designed to use the main transport. You'd have to write your own FD protocol, which shouldn't be that complicated.
However, if you can use UDP, you probably should. It's up to you whether you send a message to one node, several nodes, or all of them: for one destination a unicast UDP datagram is sent, for a few destinations (if you set the anycast option) multiple unicasts are used, and only when sending to all nodes does UDP reduce the network load by multicasting. It's really up to the application; UDP just allows multicasts.

Can two or more SNMP agents be run on the same port (on the same machine)?

Just a technical question -
Can two or more SNMP agents be run on the same port (on the same machine)?
My first instinct would be no, since host:port identifies an instance of an application, but I'm not sure.
Thank you!
Technically, if the OS supports it, the SO_REUSEADDR/SO_REUSEPORT options may be set on a socket to allow other processes to bind to the same address/port, and thus allow multiple processes to receive messages on that address/port. But both processes would have to set the option, and I doubt any agent implementations do that, because it would not make sense to do so: it would just cause headaches, with both agents potentially responding to a single request, and managers won't be equipped to handle that.
However, you can instead run an SNMP proxy on the primary address/port, configured to forward requests to one of multiple agents based on query, security, or (with SNMPv3) context/engine ID parameters, and forward the responses back.
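As a concrete (assumed) example with Net-SNMP, the agent that owns udp/161 can forward part of the OID tree to a second agent listening elsewhere via snmpd.conf; the port and subtree here are placeholders:

# forward everything under this subtree to an agent listening on localhost:1161
proxy -v 2c -c public localhost:1161 .1.3.6.1.4.1.8072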
Also, using AgentX, you have an SNMP master agent running on the primary address/port, and one or more SNMP sub-agents connected to the master agent. The master agent dispatches requests to the sub-agents as appropriate, merging the results into a single response, so that to the outside world it appears as a single agent. Each sub-agent typically handles a different branch of OID space (one sub-agent implementing certain module(s), another sub-agent implementing other module(s)).
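Again using Net-SNMP as an assumed example, the master side of AgentX is a couple of snmpd.conf lines; sub-agents then connect to the AgentX socket and register the subtrees they implement:

# snmpd.conf on the master agent
master agentx
# the default is a local unix socket; a TCP socket is shown only for illustration
agentXSocket tcp:localhost:705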
But taking two agents intended to own the address/port exclusively, and forcing them to share through the REUSE options, while it may be possible, would not be wise.
You can run multiple agents on the same host and with the same port if they have different IP addresses (you can use a netsh script for that).
Personally, I use the nsoftware DLL (SecureSNMP V8 edition .NET) to do this.
You can look at this post : Multiple SNMP Agents with nsoftware dll
No, two agents cannot both run on the same port as separate applications, for the reasons you assumed (except with a brittle packet-sniffing hack, which we'll not go into).
However, 2 agents can be accessed through the same port if there is some mechanism that handles the actual port and distributes requests based on MIB. For example the Windows SNMP service does this, allowing any number of SNMP agents to be added as "extensions" through the registry (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SNMP\Parameters\ExtensionAgents) by writing them as DLLs and using the snmp.h headers in the platform SDK.
You are correct: ports can't be shared.
If both the agents were designed by you, then the answer can be different.
Consider the HTTP and FTP cases: we can use host names to distinguish multiple sites on the same port, so why can't we do it for SNMP?
We can create a dispatcher that monitors port 161 for incoming traffic, and then use multiple real agents behind it to handle that traffic. We are free to design how to distinguish them; personally, I prefer the FTP virtual-host-name manner and would use | to distinguish agents.
Maybe I can create a demo for #SNMP Suite in the future.
But if you need to work with existing agents on the same server, then such flexibility is lost.
