Extra data manipulation with end-to-end encryption

Would it be possible to store some data as tables in a system where users talk to each other through a server using end-to-end encryption? Say endpoints A and B talk to each other through a server using end-to-end encryption; this means all messages flowing from A to B, or vice versa, are decryptable only at these two endpoints. I want to create a schema, say on a webserver, with a table as follows:
From | To | Content                   | HasMessageBeenSent
A    | B  | encrypted text            | 0
B    | A  | some other encrypted text | 1
Would it be possible to create such a schema on the server? For example, A writes the encrypted text into the database along with the sender and receiver fields; because only the Content column is encrypted, all the other columns are plaintext and readable by the server.
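Yes, such a schema works: the server stores and routes on the plaintext metadata columns while treating Content as an opaque blob. A minimal sketch using Python's built-in sqlite3, where the encryption step is a toy placeholder standing in for real client-side E2E crypto (e.g. a Signal-style or NaCl box construction):

```python
import sqlite3

# Stand-in for the client-side E2E encryption step. In practice A and B would
# use a real protocol; the server only ever sees the resulting opaque bytes.
def client_side_encrypt(plaintext: str) -> bytes:
    return plaintext.encode("utf-8")[::-1]  # toy placeholder, NOT real encryption

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE messages (
        sender    TEXT NOT NULL,       -- public metadata, readable by server
        receiver  TEXT NOT NULL,       -- public metadata, readable by server
        content   BLOB NOT NULL,       -- opaque ciphertext, server cannot read
        delivered INTEGER NOT NULL DEFAULT 0
    )
""")

# A sends a message to B: only the content is encrypted, client-side.
conn.execute("INSERT INTO messages VALUES (?, ?, ?, ?)",
             ("A", "B", client_side_encrypt("hello B"), 0))

# The server can route and query on the plaintext columns without any key.
pending = conn.execute(
    "SELECT sender, content FROM messages WHERE receiver = ? AND delivered = 0",
    ("B",)).fetchall()
print(pending[0][0])  # "A" -- metadata is visible to the server
```

Note that the server still learns who talks to whom and when; end-to-end encryption hides only the content column, not the traffic pattern.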

Related

Brute-Force encryption algo/key based on mapping encrypted-value <-> unencrypted-value (hashcat)

I have a list of encrypted values, and I know the unencrypted value for each entry.
Is it possible to brute-force the encryption mechanism so that I can decrypt new, unknown encrypted values?
This is my list of "unencrypted -> encrypted" pairs:
XXXXXXXXXXXXXXXXX -> AAPBbXxBtdNhUH2nc3w3DWajRHeG5OmunJQ97n9/Ooccih07+8EsMNNW2zqbzXvQ1bl+yBwUcj1ZzcxNIem0zPr1TeiphXPh/UF9r7XzRfI4w7bMuyM=
YYYYYYYYYYYYYYYYY -> AALKq4wVvSIbsn3h5azSGT7Z5HKGH1YNGKC1+MVPLWKaEMHR+VbdcVcwnZYB32OHjYf/T7tpo1FjFV8qEPltpzdWxe4OFwLiB9nJe6HIan0zn4Jsf2Q=
ZZZZZZZZZZZZZZZZZ -> AANdNV4zeqvH7jVi0HjnMBkSvvAXcQavyNDOJVYUGKT/LKC97iPDB1t3xTnz/9T5kkeHxtH2lXjRnPChY3AwfVuPImQ4CF8/7sHvpQQCM3fSHAy+lV0=
...
The mechanism is the same for each entry (no salts).
Every single value can be encrypted or decrypted without additional inputs.
Is this possible using hashcat?
BR
John
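hashcat can only attack formats it has a mode for, so before reaching for it a useful first step is to inspect the ciphertexts themselves. A small sketch, using the base64 strings from the question, that decodes them and compares lengths and leading bytes; a constant prefix and fixed length often hint at a versioned binary container rather than a raw hash:

```python
import base64

# The encrypted values from the question, verbatim.
ciphertexts = [
    "AAPBbXxBtdNhUH2nc3w3DWajRHeG5OmunJQ97n9/Ooccih07+8EsMNNW2zqbzXvQ1bl+yBwUcj1ZzcxNIem0zPr1TeiphXPh/UF9r7XzRfI4w7bMuyM=",
    "AALKq4wVvSIbsn3h5azSGT7Z5HKGH1YNGKC1+MVPLWKaEMHR+VbdcVcwnZYB32OHjYf/T7tpo1FjFV8qEPltpzdWxe4OFwLiB9nJe6HIan0zn4Jsf2Q=",
    "AANdNV4zeqvH7jVi0HjnMBkSvvAXcQavyNDOJVYUGKT/LKC97iPDB1t3xTnz/9T5kkeHxtH2lXjRnPChY3AwfVuPImQ4CF8/7sHvpQQCM3fSHAy+lV0=",
]

decoded = [base64.b64decode(c) for c in ciphertexts]
for raw in decoded:
    # All three decode to the same length and start with a 0x00 byte,
    # which suggests a structured format (version/header + payload).
    print(len(raw), raw[:2].hex())
```

If the mechanism turns out to be a proper cipher with an unknown key (as the fixed-length, high-entropy output suggests), known-plaintext pairs alone will not let hashcat recover it; hashcat only helps when the scheme matches one of its supported hash modes.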

NODE_PROPERTIES table in database

What is the purpose of the NODE_PROPERTIES table in the database, how does it get populated with key-value pairs, and how do we query it? Also, how do we query data in the other NODE tables, such as NODE_INFOS, NODE_NAMED_IDENTITIES, and NODE_INFO_HOSTS? Is there a service-level function available in CordaRPCClient to do that? We would like to store some extra properties for each node.
The NODE_PROPERTIES table is used for internal purposes to store information that doesn't justify having its own table (currently, whether or not the node was in flow-drain mode when it was last stopped).
Feel free to store additional key-value pairs there, as long as they don't clash with keys used for internal purposes (a clash is unlikely, as we currently use long key-names to store information in this table).
You can get access to the node's database via the node's ServiceHub, which is available inside flows and services. The Flow DB sample shows an example of a service that connects to, reads from, and writes directly to the node's database: https://github.com/corda/samples.
You can also connect directly to the node via JDBC (e.g. from a client or server). The node lists its JDBC database connection string at start-up. You can also set it in the node's configuration file, as shown here: https://docs.corda.net/corda-configuration-file.html#examples.
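For illustration, the key-value query shape against NODE_PROPERTIES looks like the following. This is a sketch using Python's built-in sqlite3 as a stand-in for the node's H2 database; the column names and the property key are assumptions to be checked against your node's actual DDL, but the SQL shape is the same over JDBC:

```python
import sqlite3

# Stand-in for the node's H2 database (assumed column names).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE node_properties ("
           "property_key TEXT PRIMARY KEY, property_value TEXT)")

# Store an extra property under a long, app-specific key to avoid clashing
# with Corda's internal keys (the key name here is hypothetical).
db.execute("INSERT INTO node_properties VALUES (?, ?)",
           ("com.example.myapp.region", "EU-WEST"))

row = db.execute(
    "SELECT property_value FROM node_properties WHERE property_key = ?",
    ("com.example.myapp.region",)).fetchone()
print(row[0])  # EU-WEST
```

Using a reverse-domain prefix for your keys mirrors the answer's advice: long, distinctive key names make a clash with Corda's internal entries unlikely.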

Not able to apply max() for Encrypted Column on SQL Server

I have a table with a datetime column that was encrypted using the Always Encrypted feature of SQL Server 2016.
Now I am trying to perform this simple select:
select max(dt_order)
from orders
where customer = 123;
I am running into this error:
Msg 33299, Level 16, State 2, Line 5
Encryption scheme mismatch for columns/variables 'dt_order'. The encryption scheme for the columns/variables is (encryption_type = 'DETERMINISTIC', encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name = 'myColHML', column_encryption_key_database_name = 'TESTING') and the expression near line '1' expects it to be (encryption_type = 'PLAINTEXT') (or weaker).
In SSMS I have already set the connection option "Column Encryption Setting=Enabled" and the query option "Enable Parameterization for Always Encrypted".
Any ideas?
Since your data is encrypted with a key held on the client side, SQL Server cannot calculate max: it does not have the key. The main value proposition of Always Encrypted is that it protects the data from the admins of SQL Server. Currently, the only operation possible on encrypted columns is equality comparison (and only with deterministic encryption).
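As a practical workaround, fetch the rows client-side, where the driver that holds the column encryption key decrypts them, and compute the aggregate there. A minimal sketch with the decrypted values already in hand (the values themselves are made up for illustration):

```python
from datetime import datetime

# dt_order values as they appear client-side after the driver (which holds
# the column encryption key) has decrypted the rows for customer 123.
decrypted_orders = [
    datetime(2023, 1, 5, 10, 30),
    datetime(2023, 3, 17, 8, 0),
    datetime(2023, 2, 28, 23, 59),
]

# SQL Server cannot evaluate MAX() over the ciphertext, so aggregate here.
latest = max(decrypted_orders)
print(latest.isoformat())  # 2023-03-17T08:00:00
```

The trade-off is obvious: you transfer every matching row instead of one aggregate, so narrow the WHERE clause (here, the unencrypted customer column) as much as possible before aggregating.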
From official documentation
Deterministic encryption always generates the same encrypted value for any given plain-text value. Using deterministic encryption allows point lookups, equality joins, grouping, and indexing on encrypted columns. However, it may also allow unauthorized users to guess information about encrypted values by examining patterns in the encrypted column, especially if there is a small set of possible encrypted values, such as True/False or North/South/East/West region. Deterministic encryption must use a column collation with a binary2 sort order for character columns.
Randomized encryption uses a method that encrypts data in a less predictable manner. Randomized encryption is more secure, but prevents searching, grouping, indexing, and joining on encrypted columns.

Having trouble with FIELDPROC on a database (Column Encryption on Iseries)

I used Listing 3 from the article "Field Encryption in DB2 for i" to create a FIELDPROC program, QGPL/MOBHOMEPAS, which should encrypt a variable-length char column.
I compiled the RPGLE program and created a separate database file DBMLIB/UMAAAP00 as follows:
A R UMAAAF00 TEXT('-
A TEST ENCRYPTION')
A*
A IPIAAA 20A VARLEN(20)
A KYGAAA 11S 2 COLHDG('SALARY')
I then used STRSQL to alter the table and protect IPIAAA:
ALTER TABLE DBMLIB/UMAAAP00 alter column IPIAAA set FIELDPROC
QGPL.MOBHOMEPAS
ALTER COMPLETED FOR TABLE UMAAAP00 IN DBMLIB.
For some reason, when I add entries through UPDDTA directly to the file itself and then use WRKQRY to query the file and view them, I don't see them as encrypted.
Is this not how it's supposed to work? Is anyone able to assist me with the logic? Ultimately, I'd like to create a simple table from scratch that has a single 20-character-or-so password column stored encrypted.
If the code used for the named FieldProc program QGPL.MOBHOMEPAS was modeled after the source code found at the URL in the question, then that code implements the base level of the DB2 for IBM i 7.1 SQL FieldProc support, not the later enhanced level in which the masking feature was added. That is, every invocation other than function-code=8 will always be an Encode or a Decode operation, for which any masking of the data is unsupported, because changing the data at that level of support would corrupt the data in the TABLE.
Note, from http://www.mcpressonline.com/rpg/db2-field-procedures-finally-support-conditional-masking.html, the differences in the coding requirements for the pre-masking support (eight parameters) versus the masking support (nine parameters); the latter is the prerequisite for having the Run Query (RUNQRY) and Update Data (UPDDTA) features mask the data presented to the user:
The new FieldProc masking support revolves around two main components. The first component is a new parameter added to the parameter lists that the DB2 engine passes to the FieldProc program on each decode call. This new parameter controls whether or not the FieldProc program can return a masked value. Some DB2 operations, such as the RGZPFM (Reorganize Physical File Member) command and trigger processing, always require the clear-text version of the data to be returned. The second component is a new special SQLState value ('09501') that is to be returned by the FieldProc program whenever it is passed a masked value on the encode call. This prevents the masked value from being encoded, which would result in the original data value being lost. When this special SQLState value is returned, DB2 will ignore the encoded value passed back by the FieldProc program and instead use the value currently stored in the record image for that column.
For some reason when I go in to add entries through upddta directly to the file itself and then do a wrkqry to query and file and view them I don't see them as encrypted. Is this not how it's supposed to work?
No, that's not how it's supposed to work. The data will be encoded on disk only.
When you view the data it will be decoded automatically by the FIELDPROC program no matter what you're using to view it (WRKQRY [yuck], DFU, STRSQL, whatever). This is how it works regardless of field masking (which is different/additional functionality).
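The behavior described above, encoded on disk but transparently decoded on every read path, can be mimicked in a short sketch. This is a toy stand-in (a trivial XOR cipher), not actual FieldProc RPGLE; it only models the encode-on-write / decode-on-read flow:

```python
# Toy model of a FieldProc-protected column: the stored (on-disk) form is
# encoded, but every read path calls decode, so any viewer (WRKQRY, DFU,
# STRSQL, ...) always sees clear text.
def field_encode(value: str) -> bytes:
    return bytes(b ^ 0x5A for b in value.encode("utf-8"))  # toy cipher

def field_decode(stored: bytes) -> str:
    return bytes(b ^ 0x5A for b in stored).decode("utf-8")

on_disk = field_encode("secret-pay")   # what is physically stored
displayed = field_decode(on_disk)      # what any query or viewer shows

print(on_disk != b"secret-pay")  # True: the data is encoded on disk only
print(displayed)                 # secret-pay
```

So seeing clear text in WRKQRY is the expected behavior; only dumping the raw record image (or reading the file without the FieldProc in play) would reveal the encoded bytes.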

What's the proper way to implement a unique secondary index on a DynamoDB table that has no range key?

I'm a bit confused by how to properly set up secondary indexes in DynamoDB.
The documentation states that secondary indexes are for tables which have a hash and a range key, but in my case I have no need of a range key.
The scenario is basically this: I have a list of mobile clients which will call into my API. Those clients are identified by a six-character unique client ID. Each client also has a unique device ID, which is basically a long GUID, quite long and inconvenient to use as the primary key.
The question arises when a client registers itself: it sends its device ID (the long GUID) in a registration request, and the server generates the unique client ID (the six-character unique ID), which it returns to the client for future communication. One of the checks the server side must do is make sure the request is not a duplicate registration, i.e. that the device ID is not already present in the table under another client ID.
In a SQL table, I would make the client ID the primary key and just define a unique index on the deviceID field, but it seems I can't do that in DynamoDB, since I only have a hash key on the table, not a hash and a range key. I could do a query to find out if there's a duplicate deviceID somewhere, but that would seem to require a table scan, which I'd like to avoid.
What's the proper way to set up something like this in DynamoDB? Do I just use a dummy range key like "foo" on all my rows and use a local secondary index? Seems inefficient somehow.
I personally don't like to use indexes.
What I recommend is to keep two tables.
DEVICES
Hash: device_id
attribute: client_id
CLIENT_DEVICES
Hash: client_id
Range: device_id
This allows you to reason about whether a client has devices and which devices, as well as ask, for a given device, whether it is attached to a client.
This IMO is more readable than global/local secondary indexes.
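The two-table pattern above, including the duplicate-registration check, can be sketched with plain dicts standing in for the DynamoDB tables; the comments note where DynamoDB's conditional writes would enforce uniqueness for real:

```python
import secrets

# Stand-ins for the two DynamoDB tables.
devices = {}         # DEVICES:        hash = device_id -> {"client_id": ...}
client_devices = {}  # CLIENT_DEVICES: hash = client_id, range = device_id

def register(device_id: str) -> str:
    # In DynamoDB this lookup-then-insert would be a single PutItem with
    # ConditionExpression "attribute_not_exists(device_id)", so two
    # concurrent registrations of the same device cannot both succeed.
    if device_id in devices:
        return devices[device_id]["client_id"]  # duplicate registration
    client_id = secrets.token_hex(3)  # six-character unique client ID
    devices[device_id] = {"client_id": client_id}
    client_devices.setdefault(client_id, set()).add(device_id)
    return client_id

first = register("device-guid-1")
dupe = register("device-guid-1")  # the same device registers again
print(first == dupe)  # True: no second client ID is issued
```

Because DEVICES is keyed directly on device_id, the duplicate check is a single GetItem (or the conditional put itself), with no table scan and no dummy range key.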
