I need to secure a database that receives data from many different clients. To protect each message in transit, it has to be encrypted in some way. If the database server knows the encryption method, it can decrypt and store the data, but that is not acceptable here: if the server's scheme is ever breached, messages could be read while travelling across the network.
So I need some scheme along the lines of SSH: the client sends the data locked; the database server adds its own lock and sends the message back; the client then removes its lock; finally, the database server holds the message under its own lock alone, which it can remove to read the data.
I don't care if the database server itself is breached. All I want is to protect the message while it travels through the network.
Any suggestions or references for achieving this would be appreciated.
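What's described here is essentially Shamir's three-pass protocol, which relies on a commutative cipher. A toy sketch using commutative modular exponentiation (the prime, exponents and message are illustrative; this is not secure as written, and even the real protocol is vulnerable to man-in-the-middle attacks without authentication):

    // Toy three-pass exchange (Shamir's protocol) using commutative
    // modular exponentiation. Illustrative only: the prime is tiny and
    // there is no authentication, so do not use this as-is.
    #include <cstdint>
    #include <cstdio>

    typedef unsigned __int128 u128;          // GCC/Clang extension
    const uint64_t P = 2147483647ULL;        // toy prime, 2^31 - 1

    uint64_t modpow(uint64_t b, uint64_t e, uint64_t m) {
        uint64_t r = 1; b %= m;
        while (e) {
            if (e & 1) r = (u128)r * b % m;
            b = (u128)b * b % m;
            e >>= 1;
        }
        return r;
    }

    // Modular inverse via the extended Euclidean algorithm (a, m coprime).
    uint64_t modinv(uint64_t a, uint64_t m) {
        int64_t t = 0, nt = 1, r = (int64_t)m, nr = (int64_t)(a % m);
        while (nr) {
            int64_t q = r / nr;
            int64_t tmp = t - q * nt; t = nt; nt = tmp;
            tmp = r - q * nr; r = nr; nr = tmp;
        }
        return (uint64_t)(t < 0 ? t + (int64_t)m : t);
    }

    int main() {
        uint64_t msg = 123456789;            // the secret message, < P
        // Each party picks a locking exponent e and its inverse d mod P-1,
        // so that ((x^e)^d) % P == x. Exponents commute, giving exactly the
        // lock / lock-again / unlock / unlock flow described above.
        uint64_t ce = 65537,  cd = modinv(ce, P - 1);   // client lock/unlock
        uint64_t se = 100003, sd = modinv(se, P - 1);   // server lock/unlock

        uint64_t s1 = modpow(msg, ce, P);    // client locks, sends to server
        uint64_t s2 = modpow(s1, se, P);     // server adds its lock, returns it
        uint64_t s3 = modpow(s2, cd, P);     // client removes its own lock
        uint64_t s4 = modpow(s3, sd, P);     // server unlocks and reads
        printf("recovered: %llu\n", (unsigned long long)s4);  // == msg
        return 0;
    }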
I'm looking for a streaming server with a message-persistence guarantee, i.e. one where messages published by producers are guaranteed to be durably stored before the server acknowledges the publish to the producer.
My use case requires that we minimise the possibility of losing any produced messages. Producers are able to replay messages if required, but they need to be sure that an ACKed message has been durably persisted and will be delivered by the streaming server to the consumers.
NATS Streaming Server seems to do something along those lines, but the docs for clustering and fault tolerance don't make it very clear what persistence guarantee is provided in each case. The doc on producer integration confirms that the server actively ACKs published messages, either synchronously or via a callback, but it does not make clear whether the ACK means the message has been durably stored at that point.
The doc on store configuration, specifically the SQL options, briefly mentions that an ACK from the server means a durable-storage guarantee, but it is still not clear how exactly that applies to Clustering, Fault Tolerance, and the different persistence backends (files or SQL).
NATS Streaming will have persisted the message before sending the publisher the ACK. The store implementations (filestore/SQL) may use some caching, but regardless, the writes are synced to disk (unless syncing is disabled) before the ACK is sent back.
However, in cluster mode the filestore syncing is disabled, because we rely on the fact that the data is replicated to each node of the cluster, so you would need multiple simultaneous failures to lose the message. (Note that there is an option for the file store implementation to perform an auto-sync at a regular interval: see auto_sync here.)
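For illustration, the file-store section of a nats-streaming-server config might look roughly like this. The option names are as I recall them from the server's README; treat them as an assumption and verify against the current docs before relying on them:

    # nats-streaming-server configuration excerpt (illustrative; verify
    # the exact option names against the NATS Streaming docs)
    store: "file"
    dir: "/var/lib/nats-streaming"
    file_options: {
        # fsync the file store on flush, before the publisher ACK
        sync_on_flush: true
        # periodic fsync, intended for when sync-on-flush is disabled
        # (e.g. clustered mode); the interval here is made up
        auto_sync: "60s"
    }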
I am interested in the bit-level goings-on of computer networks, and have been using Wireshark to look at the outgoing packets my computer sends when I post messages into the message box of a text-based game hosted on an external web server.
Every time I post a message, I can see a TLSv1.2 packet is sent out, the payload of which is encrypted. My question is how would I go about encrypting outgoing packets myself (using C++)?
My only thought so far: since the encryption happens locally, probably inside my web browser's process, I could send a message, note the encrypted output, then use a time-travel debugger to watch process memory, catch the moment the encrypted bytes are written, and step backwards through the assembly to work out what it is doing. I suspect there is a better way.
Equally, when the server sends messages back, how could I decrypt those in code?
I am aware my browser has been given a cookie that looks like a private key. Would the process use this cookie for encrypting and decrypting, or is that a red herring?
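For context: there is nothing to recover by reverse engineering here; the browser runs a TLS handshake, and the session keys come from that handshake (the cookie is just application data carried inside the encrypted channel). The usual way to do the same from C++ is a TLS library such as OpenSSL. A minimal client sketch, where the host, port and request are placeholders:

    // Minimal TLS client using OpenSSL 1.1+ (link with -lssl -lcrypto).
    // Host, port and request are placeholders.
    #include <openssl/ssl.h>
    #include <openssl/bio.h>
    #include <openssl/err.h>
    #include <cstdio>

    int main() {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, nullptr);
        SSL_CTX_set_default_verify_paths(ctx);        // system trust store

        // The BIO chain performs the TCP connect and the TLS handshake.
        BIO *bio = BIO_new_ssl_connect(ctx);
        SSL *ssl = nullptr;
        BIO_get_ssl(bio, &ssl);
        SSL_set_tlsext_host_name(ssl, "example.com"); // SNI (placeholder)
        BIO_set_conn_hostname(bio, "example.com:443");

        if (BIO_do_connect(bio) <= 0) {               // connect + handshake
            ERR_print_errors_fp(stderr);
            return 1;
        }

        const char req[] =
            "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
        BIO_write(bio, req, sizeof(req) - 1);         // encrypted for you

        char buf[4096];
        int n;
        while ((n = BIO_read(bio, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, n, stdout);                // decrypted for you

        BIO_free_all(bio);
        SSL_CTX_free(ctx);
        return 0;
    }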
I am connecting clients to our servers using SignalR (similar to Socket.IO websockets) so I can send them notifications about activity in the system. It is NOT a chat application: each message, when sent, is for one particular user only.
These clients are connected across multiple web servers, and those servers are subscribed to a Redis backplane, as described in this article: http://www.asp.net/signalr/overview/performance/scaleout-in-signalr
My question for this kind of notification system: in Redis pub/sub, should I have multiple channels in the backplane, one per user, with the app server listening on each connected user's notification channel? Or should there be one channel for all notifications, with each app server parsing every message, figuring out whether it has that user ID connected, and sending the message on to that user?
Based on the little I know about the details of your application, I think you should create channels/lists in the backplane/Redis on a per-client basis. This is cheap in Redis, and it gives the server-side process handling a specific client only the notifications that client is supposed to receive.
This saves your application from iterating over or handling irrelevant data, which could have performance implications at scale; and if security is at all a concern (I don't know the domain or application), it is best never to retrieve or receive information that wasn't intended for a particular client.
I will pose a final question and some thoughts that I think support my opinion. If you don't do this on a client-by-client basis, how will you handle the case where a user is not present to receive a message? You would either have to throw that message away, or have the application server handle that un-received message for every single client, every time they poll or otherwise receive information from Redis. That could really add up. Although, without knowing the details of the application, I'm not sure whether this paragraph is relevant.
At the end of the day, though approaches and opinions may vary depending on the application, I would think about the architecture in terms of the entities you outlined. You have clients, and they send and receive messages. Those messages should be associated with each of the parties involved somehow, and they should be stored in a manner that is efficient for lookup and helps define the structure of the application.
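SignalR's Redis backplane manages its channels internally, but the raw per-user channel pattern looks roughly like this with hiredis (the user:<id> naming is my own illustrative convention, not anything SignalR mandates):

    // Per-user pub/sub channels with hiredis (usable from C++; link -lhiredis).
    #include <hiredis/hiredis.h>
    #include <cstdio>

    // Publisher side: address the notification to exactly one user's channel.
    void notify_user(redisContext *c, const char *user_id, const char *payload) {
        redisReply *r = (redisReply *)redisCommand(c, "PUBLISH user:%s %s",
                                                   user_id, payload);
        if (r) freeReplyObject(r);
    }

    // App-server side: subscribe only to channels of users connected to *this*
    // node, so irrelevant notifications never reach this process.
    void listen_for_user(redisContext *c, const char *user_id) {
        redisReply *r = (redisReply *)redisCommand(c, "SUBSCRIBE user:%s", user_id);
        if (r) freeReplyObject(r);
        while (redisGetReply(c, (void **)&r) == REDIS_OK) {
            // For a push, r is an array: ["message", channel, payload].
            if (r->type == REDIS_REPLY_ARRAY && r->elements == 3)
                printf("deliver to %s: %s\n", user_id, r->element[2]->str);
            freeReplyObject(r);
        }
    }

    int main() {
        redisContext *pub = redisConnect("127.0.0.1", 6379);
        if (!pub || pub->err) return 1;
        notify_user(pub, "42", "you-have-a-new-notification");  // illustrative
        return 0;    // listen_for_user() would run on each app server
    }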
Hope my 2c helps!
Good morning everyone.
I've been reading (most of it here on Stack Overflow) about how to implement secure password authentication (hashing n times, using a salt, etc.), but I'm unsure how to actually fit it into my TCP client-server architecture.
I have already implemented and tested the methods I need (using the jasypt digester), but my doubt is where the hashing and its verification should happen.
From what I've read, good practice is to avoid transmitting the password. In that case, the server would send the stored hash and the client would test it against the password entered by the user, then tell the server whether authentication succeeded. Obviously this won't work, because anyone who connects to the socket the server is listening on and sends an "authentication ok" message will be logged in.
The other option is to send the password's hash to the server. In this case I don't see any actual benefit from hashing, since an attacker would just have to replay the same hash to authenticate.
I'm probably missing some details, so can anyone shed some light on this?
The short answer to your question: verification definitely belongs on the side that permanently stores the hashes of the passwords.
The long answer: hashing passwords only prevents an attacker with read-only access to your password storage (e.g. the database) from escalating to higher privilege levels, and prevents you from knowing the actual secret password, which matters because lots of users reuse the same password across multiple services (good descriptions here and here). That is why you need to do the validation on the storage side: otherwise, as you've mentioned, an attacker could just send a "validation ok" message and be done.
However, if you want a truly secure connection, simply hashing the password is not enough (as you've also mentioned, an attacker could sniff the TCP traffic and replay the hash). For this you need to establish a secure channel, which is much harder than hashing a password (in the web world, a page where you enter your password should always be served over HTTPS). SSL/TLS is the standard way to do this, and since these protocols sit directly on top of TCP they can wrap your existing connection (in essence: you need a trusted certificate source, you validate the server's certificate, you negotiate a shared symmetric key, and then you encrypt all the data you send). Once a secure encrypted connection is established, sniffed data is useless and the attacker never learns the hash of the password.
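As a sketch of the storage-side verification under those assumptions (the plaintext password arrives over the already-encrypted channel, and the server stores only salt, iteration count, and derived key; all parameters are illustrative), using OpenSSL's PBKDF2:

    // Server-side check of a salted, iterated password hash using OpenSSL's
    // PBKDF2 (link with -lcrypto).
    #include <openssl/evp.h>
    #include <openssl/crypto.h>
    #include <cstring>

    bool verify_password(const char *password,
                         const unsigned char *salt, int salt_len,
                         int iterations,
                         const unsigned char *stored_key, int key_len) {
        unsigned char derived[64];
        if (key_len <= 0 || key_len > (int)sizeof(derived)) return false;
        if (PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
                              salt, salt_len, iterations,
                              EVP_sha256(), key_len, derived) != 1)
            return false;
        // Constant-time comparison avoids a timing side channel.
        return CRYPTO_memcmp(derived, stored_key, (size_t)key_len) == 0;
    }

    int main() {
        // Enrollment demo: derive and "store" a key, then verify the password.
        const unsigned char salt[16] = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16};
        unsigned char stored[32];
        PKCS5_PBKDF2_HMAC("hunter2", 7, salt, sizeof salt, 100000,
                          EVP_sha256(), sizeof stored, stored);
        return verify_password("hunter2", salt, sizeof salt, 100000,
                               stored, sizeof stored) ? 0 : 1;
    }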
What's the best practice for scalable servers that need to maintain a list of active users?
Should I open a persistent TCP connection to each client, over which the server sends update messages?
This could lead to many open connections with possibly no traffic for many seconds at a time. Is that a problem in TCP?
Or would it be better to let the client poll for updates periodically (opening a new TCP connection each time)?
How do chat servers or large online games handle this?
Personally I'd go for a single persistent TCP connection per client, to avoid a) the extra work of creating and destroying connections plus the extra latency of all the TCP packets involved, and b) creating lots of sockets stuck in TIME_WAIT on either the clients or the server. There's simply no good reason to create and destroy the connections.
Depending on your platform, there may be various tricks to deal with the platform-specific problems you can get when you have lots of connections open, and by lots I mean tens of thousands. For example, on Windows, overlapped I/O with I/O completion ports is a good design for large numbers of connections. If your connections are generally idle most of the time, the 'zero byte read' trick can let you handle more connections on lesser hardware; but that is something you can add once you know you have a problem caused by the buffer space tied up waiting for reads that complete only infrequently.
I wouldn't have the clients polling the server. It's inefficient. Have the server publish data to the clients as and when there is data available. This would allow the server to control the workload somewhat by letting it decide how often to send the data to the clients - it could either send every time new data became available for a client or send after it had batched up some data and waited a short while, etc. If the server is pushing the data then the server (the weak point, the place that might get overwhelmed by client demand) has more control over the work that it will need to do.
If you have each client polling, then a) you generate more network noise, as each client has to send a message asking the server whether there is anything for it, and b) you generate more work for the server, as it must respond to every poll. The server knows when there's data for a client; let it be responsible for telling the clients.
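A bare-bones sketch of that push model over persistent connections, using POSIX sockets and poll(). The pending_update() function is a hypothetical stand-in for your application producing data, and error/disconnect handling is trimmed for brevity:

    // Push server sketch: one persistent TCP connection per client; the
    // server writes only when there is something to deliver.
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <poll.h>
    #include <unistd.h>
    #include <string>
    #include <vector>

    // Hypothetical update source: here, a heartbeat roughly once a second
    // (the poll timeout below is 100 ms). Empty string means "nothing yet".
    std::string pending_update() {
        static int ticks = 0;
        return (++ticks % 10 == 0) ? "update\n" : "";
    }

    int main() {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(9000);              // illustrative port
        bind(lfd, (sockaddr *)&addr, sizeof addr);
        listen(lfd, SOMAXCONN);

        std::vector<pollfd> fds{{lfd, POLLIN, 0}};   // slot 0: listening socket
        for (;;) {
            poll(fds.data(), fds.size(), 100);       // idle clients cost little here
            if (fds[0].revents & POLLIN)             // new client: keep its socket
                fds.push_back({accept(lfd, nullptr, nullptr), POLLIN, 0});

            std::string msg = pending_update();      // push only when data exists
            if (!msg.empty())
                for (size_t i = 1; i < fds.size(); ++i)
                    send(fds[i].fd, msg.data(), msg.size(), 0);
        }
    }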