This page from Redislabs, titled "Redis Enterprise: A Secure Database", states the following:
Encryption / Data in transit:
- Client <> Redis: SSL/TLS
- Inter-cluster (between the cluster's nodes): IPSec
- Across-cluster: SSL/TLS
It's unclear what Redislabs means when they specify IPSec for encrypting traffic among the cluster's own sub-components.
Question
Do they do anything internally to facilitate this, or do they expect customers to set up a secure tunnel using some other product to secure this communication?
Going through this presentation from the VP of Redislabs, titled "Secure Redis deployments for Simplified Compliance - HIPAA, PCI, GDPR | Redis Labs", it would seem that Redis Enterprise does nothing itself to help secure the communication among its own nodes in a cluster.
The product fully expects that customers use IPSec or other tunneling/encryption technologies such as:
stunnel
spiped
strongswan
iptables
etc.
to encrypt and secure traffic however they deem necessary for their application's usage of Redis.
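As an illustration of that approach, here is a minimal spiped-style sketch for encrypting the link to a Redis node (the key path, ports, and host name are placeholders, not taken from Redis Labs' documentation):

    # Generate a shared secret and copy it to both hosts (placeholder path).
    dd if=/dev/urandom of=/etc/spiped/redis.key bs=32 count=1

    # On the Redis host: decrypt traffic arriving on 6380 and forward it to local Redis.
    spiped -d -s '[0.0.0.0]:6380' -t '[127.0.0.1]:6379' -k /etc/spiped/redis.key

    # On each client/node host: encrypt traffic sent to the local stand-in port 6379.
    spiped -e -s '[127.0.0.1]:6379' -t '[redis-host]:6380' -k /etc/spiped/redis.key

Clients then connect to 127.0.0.1:6379 on their own host, and the traffic crosses the network encrypted.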
Redis Enterprise comes with a deployment tool that allows securing inter-node communication using IPSec, and the secured inter-node communication has practically no effect on cluster performance.
Oren
Related
I have a situation where messages are being generated by an internal application, but the consumers for the messages are outside our enterprise network. Will either http(s) transport or REST connectivity work in this scenario, with an HTTP reverse proxy in the DMZ? If not, is it safe to have a broker in the DMZ that can act as a gateway to outside consumers?
Well, the REST/HTTP approach to connecting to ActiveMQ is very limited, as it does not support true messaging semantics.
Exposing an ActiveMQ broker is no less secure than any other communication software if precautions are taken (TLS, default passwords changed, high-entropy passwords and/or mutual authentication used, recent patches applied, web console/Jolokia not exposed externally without precautions, etc.).
In fact, you can buy ActiveMQ instances online from Amazon, which indicates that at least they think it's not such a bad idea to put them on the Internet.
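As an illustration of the TLS precaution above, a minimal broker-side sketch in activemq.xml might look like the following (the keystore path and password are placeholders):

    <!-- activemq.xml: expose only a TLS-protected OpenWire connector -->
    <sslContext>
      <sslContext keyStore="file:${activemq.conf}/broker.ks"
                  keyStorePassword="changeit"/>
    </sslContext>
    <transportConnectors>
      <!-- needClientAuth=true additionally requires client certificates (mutual auth) -->
      <transportConnector name="ssl"
          uri="ssl://0.0.0.0:61617?needClientAuth=true"/>
    </transportConnectors>

Plain tcp:// connectors, the web console, and Jolokia would then stay bound to internal interfaces only.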
We have developed a TeamViewer-like service where clients connect via SSL to our centralized servers. Other clients can connect to the server as well, and we can set up a tunnel through our service to allow peer-to-peer connectivity without NAT or firewall issues.
This works fine with Azure Cloud Services, but we would like to move away from Azure Cloud Services. Service Fabric seems to be the way to go, because it supports ARM, allows much more fine-grained services, and makes updating parts of the system much easier.
I know that microservices in Service Fabric can be stateful, but all the examples use persistent data as state. In my situation the TCP connection is also part of the state. Is it possible to use TCP with Service Fabric?
The TCP endpoint should be kept alive on the same instance (for several days), so this makes the entire service fabric model much more difficult.
Sure, you can have users connect to your services over any protocol you want. Your service sounds very stateful to me in the same way that user session state is stateful - you want users to return to the same place where their data is. In your case, that "data" is a TCP connection. But there's no guarantee a TCP endpoint will be kept alive for days in any system - machines fail, software crashes, OSes get patched, etc. You need to be prepared for the connection to break so you can quickly re-establish it. Service Fabric stateful services are great for this. Failover of a stateful service to another machine is extremely fast (milliseconds). Of course, you can't actually replicate a live connection, but you sure can replicate all the metadata you need to re-establish a connection if it breaks.
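To make the "replicate the metadata, not the socket" idea concrete, here is a minimal, hypothetical Java sketch of the pattern (an illustration only, not Service Fabric API code; the field names are invented):

    import java.io.IOException;
    import java.net.Socket;

    // The replicated "state" is only what is needed to rebuild the connection;
    // the live socket itself is never replicated.
    class TunnelSession {
        private final String host;       // replicated metadata
        private final int port;          // replicated metadata
        private final String sessionId;  // replicated metadata
        private Socket socket;           // live connection, rebuilt on demand

        TunnelSession(String host, int port, String sessionId) {
            this.host = host;
            this.port = port;
            this.sessionId = sessionId;
        }

        Socket connection() throws IOException {
            // Re-establish the connection whenever it is missing or broken,
            // e.g. after a failover to another node.
            if (socket == null || socket.isClosed()) {
                socket = new Socket(host, port);
                // ... re-authenticate the peer using sessionId here ...
            }
            return socket;
        }
    }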
I'm trying to solve an architecture design puzzle: designing an infrastructure that keeps data and servers as secured/hidden as possible. Here are the requirements:
I want to hide the internal design of my infra (several data servers with public and private hosts)
I want to access each service using the same IP address, with the query forwarded to the right server based on something (cookie, URI, port, or whatever)
access to the data services must be enforced with SSL/TLS encryption
After studying these requirements carefully, I was thinking about using a reverse proxy and granting access to all data services only through the reverse proxy server. Another pro of a reverse proxy is that access authentication and SSL/TLS encryption are enforced in one place, with no need to configure each endpoint separately.
My real issue is that I didn't find any reverse proxy that supports TCP queries; likewise, the static load-balancing algorithms seem to be supported only for HTTP requests (HAProxy, for instance).
Any idea how to solve this issue?
Thanks to all
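As a sketch of the reverse-proxy approach described in this question, assuming HAProxy as the proxy (it does provide a TCP mode; the certificate path and backend addresses below are placeholders):

    # haproxy.cfg sketch: one public entry point, TLS terminated at the proxy,
    # backends hidden behind private addresses.
    frontend data_entry
        bind *:6443 ssl crt /etc/haproxy/certs/entry.pem
        mode tcp
        default_backend data_servers

    backend data_servers
        mode tcp
        balance roundrobin
        server data1 10.0.0.11:5432 check
        server data2 10.0.0.12:5432 check

Routing by port can be done with one frontend per public port; routing by cookie or URI requires HTTP mode, since those fields only exist in HTTP traffic.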
I am using Apache Camel with ActiveMQ for routing messages to a queue. To get high availability, we can configure a cluster of MQ servers in case a system fails.
ActiveMQ also provides a failover feature. Now I want to load balance two sets of MQ servers behind a single IP:port at the TCP level. Can the failover feature load balance two MQ servers?
e.g.
One IP is load balanced.
192.168.0.1:61616 --> 192.168.1.1:61616,192.168.1.2:61616
Load balancing can be done through the "Network of Brokers" feature of ActiveMQ, see http://activemq.apache.org/networks-of-brokers.html.
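A minimal networkConnector sketch in activemq.xml might look like this (using the broker addresses from the example above; the connector name is a placeholder):

    <!-- activemq.xml on 192.168.1.1: bridge messages to the peer broker -->
    <networkConnectors>
      <networkConnector name="bridge-to-peer"
          uri="static:(tcp://192.168.1.2:61616)"
          duplex="true"/>
    </networkConnectors>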
The client failover URI is for recovery, not load balancing. Load balancing messaging clients ends up being a complicated mess (since you can't predict message size).
A good approach is to use failover:(tcp://...)?randomize=false and partition your producer traffic into groups by changing the order of the brokers in the URI, as in the groups and the connection sketch below.
Group1 producers: 192.168.1.1:61616,192.168.1.2:61616
Group2 producers: 192.168.1.2:61616,192.168.1.1:61616
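For example, a Group1 producer might build its connection like this (a minimal sketch using the addresses above and the standard ActiveMQConnectionFactory client class):

    import javax.jms.Connection;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class Group1Producer {
        public static void main(String[] args) throws Exception {
            // randomize=false: always try 192.168.1.1 first and only fail over
            // to 192.168.1.2 if the preferred broker is unavailable.
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://192.168.1.1:61616,tcp://192.168.1.2:61616)?randomize=false");
            Connection connection = factory.createConnection();
            connection.start();
            // ... create session, producer, and send messages as usual ...
            connection.close();
        }
    }

A Group2 producer would simply list the brokers in the opposite order.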
Can anyone confirm that a persistent outgoing TCP connection on port 80 will not be blocked by the vast majority of consumer firewalls?
That has been my assumption, based on the fact that HTTP runs over TCP, but of course it is theoretically possible to analyze the packets. The question is: do most CONSUMER firewalls do this or not?
The feature is called ALG, Application Layer Gateway. This is where the firewall is aware of, and perhaps even participates in, an application protocol.
There are two main reasons a firewall may do this:
Protocol support: in order to support the protocol it is necessary to snoop on or participate in it, e.g. opening up additional ports for non-passive FTP or media ports for SIP+SDP.
Additional security: an ALG may function as a transparent proxy and filter protocol commands and actions to enforce policy, e.g. preventing the HTTP CONNECT method.
ALGs have been a common feature of stateful firewalls for many years, though they are often a source of instability.
In strict, security-prescriptive environments, expect HTTP to be validated and filtered either by a firewall or by another dedicated policy-enforcement appliance.
Residential broadband routers do not tend to have advanced firewall features. I would be surprised to find any with HTTP validation / filtering on port 80.
Personal software firewalls come in two flavours, basic and advanced. Most consumers will have a basic one that probably comes with their operating system and will not do any HTTP validation / filtering.
However, there is a rising trend of antivirus products differentiating themselves with advanced internet content filtering for threat protection; there is a significant possibility these may filter HTTP activity (though it is difficult to determine with certainty from their feature lists).
It's almost impossible to answer this question with anything other than "it depends".
Most leading firewall vendor solutions will do this through their configuration.
You will find that paranoid organisations (financial, government, military, gambling, etc.) will typically have such application intelligence enabled. They will detect the traffic as not valid HTTP and so block it, for both security and performance reasons.
This type of feature is (these days) typically turned on by default and as you know, most people don't change a default configuration after the vendor or consultant has left.
However, some companies, where the techies don't understand the issue or have no power in the decision-making, will turn such application intelligence off because it interferes with business, i.e. internal or external apps (running on the LAN and connecting back), developed as bespoke solutions, work over TCP port 80 (hey, it's always open) and are non-HTTP.
You don't just have to worry about firewalls, though: most companies run internal proxy servers for outgoing traffic, and these typically now only allow valid HTTP on port 80. Their configuration isn't changed, because a proxy server is usually requested by the infrastructure and security teams, and they don't want non-HTTP traffic over port 80. Additionally, there are also load balancers, typically configured for HTTP on port 80 for a variety of reasons such as content switching, rewrites, load balancing, and security.
To summarise, in my experience that'd be a yes, but I haven't worked a lot with SMEs, primarily larger corporates.
Port 80 is blocked by many firewalls; for example, you have to add exceptions, like allowing Skype or MSN Messenger to use port 80 for outgoing traffic.