Full-disk encryption or application-level encryption?

As an example, I am planning to host a Redis database with persistence on a server. To protect the data on disk, I think I have two options: A) do read/write operations through an encryption layer, or B) apply Full Disk Encryption (FDE) and let Redis read/write as usual.
What are the pros and cons of both approaches? What factors should I consider?
There is a similar question, Database encryption or application level encryption?, but that one is about database versus application-level encryption; my question is about disk-level versus application-level (such as a database) encryption.
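For illustration, option A usually means a thin encryption wrapper around the client. Here is a minimal Python sketch, assuming the redis-py and cryptography packages (the key handling and key names are hypothetical):

    # Hedged sketch: application-level encryption in front of Redis.
    # Assumes redis-py and cryptography; key storage is out of scope here.
    import redis
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice, load from a secrets manager
    box = Fernet(key)
    r = redis.Redis(host="localhost", port=6379)

    # Write path: Redis, its RDB/AOF persistence, and the disk only see ciphertext.
    r.set("user:42:email", box.encrypt(b"alice@example.com"))

    # Read path: decrypt after fetching.
    email = box.decrypt(r.get("user:42:email"))

With option B this wrapper disappears: an FDE layer such as LUKS/dm-crypt encrypts transparently below the filesystem, but anyone who can talk to the running Redis instance still sees plaintext.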

Related

Migrating from an unencrypted Redshift Cluster to Encrypted

I am trying to enable SSE with a Customer-Managed CMK in my production Redshift cluster to follow certain security protocols.
For POC purposes, I spun up a 1-node dc2.large Redshift cluster and, following this doc, I was able to enable SSE.
However, my question is, does enabling SSE encrypt the existing data in the cluster? If not, what steps should be taken?
Overall what are the downsides, if any, of enabling encryption at rest in a production Redshift cluster and what are the best practices?
There is no need to change anything in your code or existing pipelines/processes. This is disk encryption; it has nothing to do with your database connections or code.
To learn more about the process, read these links:
https://aws.amazon.com/about-aws/whats-new/2018/10/encrypt-amazon-redshift-1-click/
https://docs.aws.amazon.com/redshift/latest/mgmt/changing-cluster-encryption.html
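As a sketch of the one-click process described in the first link, encryption on an existing cluster is enabled with a ModifyCluster call; for example via boto3 (the cluster identifier and key ARN below are placeholders):

    # Hedged sketch: enable encryption at rest with a customer-managed KMS key.
    # All identifiers are illustrative placeholders.
    import boto3

    redshift = boto3.client("redshift")
    redshift.modify_cluster(
        ClusterIdentifier="my-prod-cluster",
        Encrypted=True,
        KmsKeyId="arn:aws:kms:us-east-1:111122334455:key/example-key-id",
    )

Per the second link, Redshift then migrates the existing data to a new encrypted cluster in the background, which is why existing code and pipelines are unaffected.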

Does my application need a connection pool if the backend database is NoSQL (Azure Cosmos DB)?

I am very new to the NoSQL world and wondering how connections are managed by NoSQL databases like Azure Cosmos DB.
I am designing a highly scalable solution for a real-time application, and one of my concerns is how to manage the numerous connections/requests to Azure Cosmos DB from Azure Functions or my business tier.
Is Cosmos DB subject to limitations similar to SQL Server's in terms of the number of available connections?
The Azure Functions connection limitation applies to all outbound connections irrespective of the target service. Some services might optimize connection usage (pooling, multiplexing, etc.) for higher concurrency and throughput.
Specifically for Cosmos DB: the 2.0.0-preview package has connection multiplexing and pooling; please check https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/2.0.0-preview
NOTE: The Azure Functions V2 runtime is required for a custom Cosmos DB SDK version.
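The package above is the .NET SDK, but the reuse principle is the same in any language: create the client once per process, not per invocation, so its internal connection pool is shared. A hypothetical Python sketch with the azure-cosmos package (names and environment variables are illustrative):

    # Hedged sketch: module-scope client so Azure Function invocations
    # reuse the pooled connections instead of opening new ones.
    import os
    from azure.cosmos import CosmosClient

    client = CosmosClient(os.environ["COSMOS_URI"], credential=os.environ["COSMOS_KEY"])
    container = client.get_database_client("appdb").get_container_client("items")

    def main(req):
        # Each invocation reuses `client` and its connections.
        item = container.read_item(item="42", partition_key="42")
        return str(item)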

Method to replicate sqlite database across multiple servers

I'm developing a distributed application, and I have a SQLite database that must be shared between the distributed servers.
If I change a SQLite row on serverA, that change must appear on the other servers instantly; and if a server goes offline and later comes back online, it must catch up so its data matches the other servers.
I'm trying to build an HA service with small SQLite databases.
I'm considering something like MongoDB or RethinkDB, since their replication works well and the data would stay available regardless of which servers are online.
Is there a library or other SQL-based approach to share data between servers?
I used the Raft consensus protocol to replicate my SQLite database. You can find the system here:
https://github.com/rqlite/rqlite
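rqlite exposes an HTTP API, so no special driver is needed. A minimal sketch with Python's requests package (node address and schema are illustrative):

    # Hedged sketch: write and read through a local rqlite node's HTTP API.
    import requests

    # Writes go to /db/execute as a JSON array of (parameterized) statements.
    requests.post(
        "http://localhost:4001/db/execute",
        json=[["INSERT INTO foo(name) VALUES(?)", "fiona"]],
    )

    # Reads go to /db/query.
    rows = requests.get(
        "http://localhost:4001/db/query",
        params={"q": "SELECT * FROM foo"},
    ).json()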
Here are some options:
LiteReplica:
It supports master-slave replication for SQLite3 databases using a single master (writable node) and one or many replicas (read-only nodes).
If a device goes offline and later comes back online, the secondary/replica databases are updated incrementally from the primary/master.
LiteSync:
It implements multi-master replication, so we can write to the db on any node, even when the device is offline.
On both we open the database using a modified URI, like this (see the sketch after this list):
    file:/path/to/app.db?replica=master&bind=tcp://0.0.0.0:4444
AergoLite:
Blockchain based, it has the highest level of security. Stores immutable relational data, secured by a distributed consensus with low resource usage.
Disclosure: I am the author of these solutions
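For illustration only, here is how opening that URI might look from Python's sqlite3 module, assuming it is linked against a LiteSync/LiteReplica-enabled SQLite build (stock SQLite will not act on these parameters):

    # Hedged sketch: requires a LiteSync-enabled SQLite library.
    # Path, role, and port come from the URI example above.
    import sqlite3

    conn = sqlite3.connect(
        "file:/path/to/app.db?replica=master&bind=tcp://0.0.0.0:4444",
        uri=True,  # treat the path as a URI so the parameters are parsed
    )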
You can synchronize SQLite databases by embedding SymmetricDS in your application. It supports occasionally connected clients, so it will capture changes and sync them when a server comes online. It supports several different database platforms and can be used as a library or as a standalone service.
You can also use CopyCat, which supports SQLite as well as a few other database types.
Marmot looks good:
https://github.com/maxpert/marmot
From their docs:
What & Why?
Marmot is a distributed SQLite replicator with leaderless, and eventual consistency. It allows you to build a robust replication between your nodes by building on top of fault-tolerant NATS Jetstream. This means if you are running a read heavy website based on SQLite, you should be easily able to scale it out by adding more SQLite replicated nodes. SQLite is probably the most ubiquitous DB that exists almost everywhere, Marmot aims to make it even more ubiquitous for server side applications by building a replication layer on top.

Is there a secure p2p distributed database?

I'm looking for a distributed hash table to store and retrieve values securely. These are my requirements:
It must use an existing popular p2p network (I must guarantee my key/value will be stored and kept on multiple peers).
No one but me should be able to edit or delete the key/value. Ideally, an encryption key that only I have access to would be required to edit my key's value.
All peers would be able to read the key's value (read-only access; only the key holder would be able to edit the value).
Is there such p2p distributed hash table?
Would the BitTorrent distributed hash table meet my requirements?
Where could I find documentation?
You don't need encryption, you need signatures.
The mainline BitTorrent DHT does not allow arbitrary key-value storage at the moment, only key -> IP:Port storage where the IP is fixed to the originator of the storage request. The Vuze DHT, on the other hand, does support binary blob storage, on top of which you could implement a signature scheme.
Update: BEP44 added signed or hash-based key-value storage to the BitTorrent DHT, but it imposes some restrictions on what can be used as keys in order to distribute data randomly throughout the keyspace.
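To make the encryption-versus-signatures point concrete, here is a hedged Python sketch of the idea behind BEP44 mutable items, using the cryptography package (this is not a full BEP44 implementation):

    # Hedged sketch: sign-then-publish. Only the private-key holder can
    # produce valid updates; any peer can verify with the public key.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()
    public_key = signing_key.public_key()  # in BEP44, the DHT target is derived from this

    value = b"my mutable value"
    signature = signing_key.sign(value)

    public_key.verify(signature, value)    # raises InvalidSignature if tampered with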

Local SQLite vs Remote MongoDB

I'm designing a new web project and, after studying some options aimed at scalability, I came up with two database solutions:
Local SQLite files carefully designed in a scalable fashion (one new database file for every X users, as writes will depend on user content, with no cross-user data dependence);
Remote MongoDB server (like MongoLab), as my host server doesn't serve MongoDB.
I don't trust the MySQL server at my current shared host, as it comes down very frequently (and I had problems with MySQL on another host, too). For the same reason I'm not going to use Postgres.
Pros of SQLite:
It's local, so it must be faster (I'll take care of using indexes and transactions properly);
I don't need to worry about TCP sniffing, as the Mongo wire protocol is not encrypted;
I don't need to worry about server outage, as SQLite is serverless.
Pros of MongoDB:
It's more easily scalable;
I don't need to worry about splitting databases, as scalability seems natural;
I don't need to worry about schema changes, as Mongo is schemaless and SQLite doesn't fully support ALTER TABLE (especially considering changing many production files, etc.).
I want help making a decision (and maybe considering a third option). Which one is better as write and read operations grow?
I'm going to use Ruby.
One major risk of the SQLite approach is that as your requirements to scale increase, you will not be able to (easily) deploy on multiple application servers. You may be able to partition your users onto separate servers, but if one of those servers goes down, some subset of users would be unable to access their data.
Using MongoDB (or any other centralized service) alleviates this problem, as your web servers are stateless -- they can be added or removed at any time to accommodate web load without having to worry about what data lives where.
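For what it's worth, the "one database file for every X users" scheme from the question reduces to a deterministic user-to-file mapping. A hypothetical sketch in Python (the question mentions Ruby; the shard size and paths are illustrative):

    # Hedged sketch: route each user to a fixed SQLite shard file.
    import sqlite3

    USERS_PER_SHARD = 10_000

    def connect_for_user(user_id: int) -> sqlite3.Connection:
        shard = user_id // USERS_PER_SHARD
        return sqlite3.connect(f"data/users_{shard:04d}.db")

The caveat above still applies: every application server that serves a given user must be able to reach that user's shard file.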
