Moving a Cassandra cluster from an unencrypted to an encrypted state

Does anyone have experience transitioning a Cassandra cluster (Apache, not DataStax) from a non-encrypted state to an encrypted one?
By "encrypted state" I mean enabling server_encryption_options, client_encryption_options, and transparent_data_encryption_options.
I ask because it appears impossible to switch node-to-node communication over to an encrypted channel without building a whole new cluster.
When I enable server_encryption_options for racks only (internode_encryption: rack), the new encrypted nodes cannot find the old unencrypted ones, even though they are in different DCs.
This leads me to believe that Cassandra cannot mix node-to-node connection modes: they are either all encrypted or all unencrypted.
I would also be glad if you could suggest where else to ask this question, such as community chats.
Cassandra version 3.11.2
This is the nodetool status output (I was trying to get the three encrypted RAC_mycluster3 nodes back into the cluster):
Datacenter: DC_mycluster1
====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Rack
UN 10.62.167.101 60.98 MiB 256 100.0% RAC_mycluster1
Datacenter: DC_mycluster2
====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Rack
UN 10.62.16.123 66.13 MiB 256 100.0% RAC_mycluster2
Datacenter: DC_mycluster3
====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Rack
UN 10.62.2.3 66.09 MiB 256 100.0% RAC_mycluster3
UN 10.62.2.4 66.11 MiB 256 100.0% RAC_mycluster3
UN 10.62.2.5 66.18 MiB 256 100.0% RAC_mycluster3
Datacenter: DC_mycluster4
====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Rack
UN 10.62.167.2 57.45 MiB 256 100.0% RAC_mycluster4

As you have experienced, Cassandra 3.11.x does not support a live transition of server-to-server encryption (server_encryption_options); only a transition of client-to-server encryption (client_encryption_options) is supported.
A zero-downtime transition of server-to-server encryption is coming in the next major release, Cassandra 4.0:
https://issues.apache.org/jira/browse/CASSANDRA-10404
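In 4.0 the transition is driven by new flags in server_encryption_options; roughly like the following (option names as of the 4.0 work in that ticket, so check your version's cassandra.yaml):

server_encryption_options:
    internode_encryption: all
    optional: true                         # also accept plaintext from peers not yet upgraded
    enable_legacy_ssl_storage_port: false  # for upgrades from pre-4.0 clusters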
So for now, you have to create a new cluster and migrate the data.
For real-time communication, the Cassandra community has a room in the Apache Software Foundation's Slack, which you can join through https://s.apache.org/slack-invite. You can ask questions in the #cassandra room.


NIC memory management and RSS queues

I want to understand how a NIC manages memory for its ring buffers.
Say I have Q RSS queues of size N. The driver will allocate, in kernel space, Q ring buffers of N packets each.
My question is what happens on the hardware side if the OS fails to pull packets (or pulls them slowly) from a particular queue, and N packets are already waiting on the NIC side. I can imagine two scenarios:
Packets for that queue "eat" all of the NIC's memory, forcing the NIC to drop packets for other queues.
The NIC stops receiving packets for that queue once it reaches N packets, leaving the rest of the queues unaffected.
Thanks
Current network stacks (and commodity OSes in general) have developed incrementally from models based on simple NICs feeding unicore CPUs. When multicore machines became prevalent and the scalability of the software stack became a serious concern, significant efforts were made to adapt these models to take advantage of multiple cores.
As with any other rule hardcoded in NIC hardware, the main drawback of RSS is that the OS has little or no influence over how queues are allocated to flows.
RSS's drawbacks can be overcome by using more flexible NIC filters or by intelligently assigning queues to flows in software.
The following ASCII art describes how the ring might look after the hardware has received two packets and delivered an interrupt to the OS:
+--------------+ <----- OS Pointer
| Descriptor 0 |
+--------------+
| Descriptor 1 |
+--------------+ <----- Hardware Pointer
| Descriptor 2 |
+--------------+
| ... |
+--------------+
| Descriptor n |
+--------------+
When the OS receives the interrupt, it reads where the hardware pointer is and processes those packets between its pointer and the hardware's. Once it's done, it doesn't have to do anything until it prepares those descriptors with fresh buffers. Once it does, it'll update its pointer by writing to the hardware. For example, if the OS has processed those first two descriptors and then updates the hardware, the ring will look something like:
+--------------+
| Descriptor 0 |
+--------------+
| Descriptor 1 |
+--------------+ <----- Hardware Pointer, OS Pointer
| Descriptor 2 |
+--------------+
| ... |
+--------------+
| Descriptor n |
+--------------+
When you send packets, it's similar. The OS fills in descriptors and then notifies the hardware. Once the hardware has sent them out on the wire, it then injects an interrupt and indicates which descriptors it's written to the network, allowing the OS to free the associated memory.
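To make the two-pointer protocol above concrete, here is a minimal sketch in Java; the names and structure are illustrative, not taken from any real driver:

// Minimal sketch of the two-pointer descriptor ring described above.
class DescriptorRing {
    final int size;      // number of descriptors in the ring
    int hw = 0;          // next slot the hardware will fill
    int os = 0;          // first slot the OS has not yet processed

    DescriptorRing(int size) { this.size = size; }

    // Interrupt handler: consume everything the hardware has filled in
    // since the last interrupt, then hand the freed slots back.
    void onInterrupt() {
        while (os != hw) {
            processPacket(os);            // deliver the buffer to the stack
            os = (os + 1) % size;         // refill descriptor, advance
        }
        writeOsPointerToHardware(os);     // register write: slots are free again
    }

    // The ring is full when advancing hw would collide with os; at that
    // point a real NIC must stall or drop packets for this queue only.
    boolean full() { return (hw + 1) % size == os; }

    void processPacket(int slot) { /* hand to the network stack */ }
    void writeOsPointerToHardware(int slot) { /* device register write */ }
}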
I'm not an expert here; I'm using the opportunity to learn a bit about how higher-performance network cards work. This question seems to depend on the type of network adapter you're using and, to a lesser extent, on the kernel (e.g. how it sets up the hardware). The Linux docs I could find seemed to refer to the bnx2x driver, e.g. the kernel docs and also the RHEL 6 docs. That said, I couldn't find much in the way of technical docs about that NIC; I had much more luck with Intel, and I spent a while going through the X710 docs.
As far as I can tell, the queues are just ring buffers, so if the kernel doesn't get through packets fast enough, old ones will be overwritten by new ones. I couldn't find this behaviour explicitly documented with respect to RSS, but it seems to make sense.
The queues are also basically independent, so if/when this happens it shouldn't affect the other queues, and hence their flows should be unaffected.

Are SAP clients 000, 001, 066 present in all environments?

It is known that an SAP system has 000, 001, and 066 as default clients.
Suppose we have separate systems for development, quality assurance, and production: will those three default clients be present in all of those systems?
Yes, these clients will exist in all SAP NetWeaver ABAP systems.
Each client has a specific role:
Client 000 is the main administration client, where your Basis team installs and upgrades the system.
Client 001 is a copy of client 000 created during the installation of the system.
Client 066 is used by EWA (EarlyWatch Alert), if it is configured in your SLD (System Landscape Directory). This client is no longer mandatory as of 7.40, as explained below.
It is also needed for the Maintenance Planner on the SAP side.
It is possible to remove clients 001 and 066, as explained in SAP Note 1749142 ("How to remove unused clients including client 001 and 066"); the same information is publicly available in this blog post. The note and the post also say:
SAP NetWeaver 7.40 is the last release delivering the client 066 with the installation or upgrade.
Yes, they are part of the standard install for all instances.
All of these clients (000, 001, and 066) are created by default when you install any SAP system.
They will be present in dev, test, and production systems alike.

Using GOST with Corda

I know Corda is cryptographically agile. As part of this, can a Corda network use the GOST block cipher (GOST 28147-89) in order to comply with Russian standards?
GOST is an encryption scheme. The only place encryption is used in Corda is in TLS communication:
TLS 1.2 does not support GOST, although there is an IETF draft (https://www.ietf.org/archive/id/draft-chudov-cryptopro-cptls-04.html).
OpenSSL 1.1.0 and later no longer include the GOST engine (see Can't enable GOST engine support in OpenSSL)
Theoretically, Corda's crypto library (BouncyCastle) could support some of the GOST ciphers, as long as it supports all the algorithms defined in that draft (see the sketch after this list).
Even if TLS supported GOST cipher suites, a fully GOST-enabled Corda might also require GOST root, doorman, and network map keys (if GOST is needed in the certificate hierarchy as well).
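As a quick way to probe the BouncyCastle point above, here is a small Java sketch; the algorithm names are the ones BouncyCastle registers for GOST 28147-89, and whether the calls succeed depends on your BC version:

import java.security.Security;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class GostProbe {
    public static void main(String[] args) throws Exception {
        Security.addProvider(new BouncyCastleProvider());

        // "GOST28147" is BouncyCastle's JCE name for the GOST 28147-89 cipher.
        KeyGenerator kg = KeyGenerator.getInstance("GOST28147", "BC");
        SecretKey key = kg.generateKey();

        // A random IV is generated for us in ENCRYPT_MODE.
        Cipher cipher = Cipher.getInstance("GOST28147/CBC/PKCS7Padding", "BC");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ct = cipher.doFinal("Russian standard".getBytes("UTF-8"));
        System.out.println("GOST 28147-89 available: " + ct.length + " bytes of ciphertext");
    }
}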
I cannot see how mutually secure communication between the EU, the US, and the rest of the world would be achieved, unless:
A company decides to run Corda in Russia only (their own Corda network with their own root certificate authority), or
TLS is modified to run dual-algorithm encryption/hashing/signing/key exchange. I am not aware of anything of this sort, except the Google post-quantum experiment that combined the ECC and New Hope algorithms; that experiment at least shows it is feasible to combine algorithms in TLS.

TripleDESCryptoServiceProvider and PCI

Just a couple of months ago we set up a brand-new Server 2016 box and used IIS Crypto to select PCI-compliant ciphers...
Today it failed a PCI compliance scan:
The remote service supports the use of medium strength SSL ciphers.
TLSv1
DES-CBC3-SHA Kx=RSA Au=RSA Enc=3DES-CBC(168) Mac=SHA1
Disabling TLS 1.0 caused a whole raft of issues, but my main concern is that a lot of our web applications use System.Security.Cryptography.TripleDESCryptoServiceProvider to encrypt/decrypt strings held in our DB.
If we disable Triple DES, will these applications then fail to encrypt/decrypt, or do they rely on cryptography built into .NET?
Thanks
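For context on what the question hinges on: TripleDESCryptoServiceProvider is application-level crypto, invoked directly with the application's own key, as opposed to a cipher suite negotiated during the TLS handshake. A rough Java analogue of that usage pattern (illustrative only; the question's code is .NET):

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class TripleDesAppLevel {
    public static void main(String[] args) throws Exception {
        // Application-level 3DES ("DESede" in Java terms): the key and
        // cipher are owned by the application itself, not negotiated
        // during a TLS handshake.
        KeyGenerator kg = KeyGenerator.getInstance("DESede");
        SecretKey key = kg.generateKey();

        Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("db column value".getBytes("UTF-8"));
        System.out.println(ciphertext.length + " bytes of ciphertext");
    }
}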

Translate router CLI commands into sequence of MIB operations

In the design of the management API of a network element, we often include support for commonly used CLIs, such as Cisco-style and Juniper-style CLIs. But to support those commands, we need to know how each command breaks down into a sequence of operations on the MIB tables and the objects therein.
For example:
A CLI command:
router bgp 4711 neighbor 3.3.3.3
And its MIB object operations (as in SNMP) would be:
bgpRmEntIndex 4711
bgpPeerLocalAddrType unicast
bgpPeerLocalAddr 2.2.2.2
bgpPeerLocalPort 179
bgpPeerRemoteAddrType unicast
bgpPeerRemoteAddr 3.3.3.3
bgpPeerRemotePort 179
Is there some resource which can help us understand this breakdown?
In general, on the types of devices you mention, you will find that there is no simple mapping between CLI operations and (SNMP) operations on MIB variables. The CLIs are optimized for "user-friendly" configuration and online diagnostics, while SNMP is optimized for giving machine-friendly access to "instrumentation", mostly for monitoring. Within large vendors (such as Cisco or Juniper), CLI and SNMP are typically developed by different specialized groups.
For something that is closer to CLIs but more friendly towards programmatic use (as an API), have a look at the IETF NETCONF protocol, which provides XML-based RPC read and write access to device configuration (and state). Juniper pioneered this concept through their Junoscript APIs and later helped define the IETF standard, so you will find good support there. Cisco has also added NETCONF capabilities to their systems, especially newer ones such as IOS-XR.
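For a flavour of the protocol, here is a minimal NETCONF <get-config> request; the <bgp/> element inside the filter is a placeholder, since the real element names come from the device's data model:

<rpc message-id="101"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source>
      <running/>
    </source>
    <!-- subtree filter: element names depend on the device's data model -->
    <filter type="subtree">
      <bgp/>
    </filter>
  </get-config>
</rpc>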
The MIB documents themselves, such as this one, are the reference for the individual objects:
http://www.icir.org/fenner/mibs/extracted/BGP4-V2-MIB-idr-00.txt
