I am trying to create a Solidity contract that decrypts a message using an AES key. The data to be decrypted is saved as a variable in the contract (it is already encrypted). The user should be able to pass an AES key into a decrypt function, which should decrypt and return the message.
I do not mind the key being exposed on the network. Is there any way to achieve this?
Solidity currently (v0.8) doesn't support any of the AES algorithms.
If your goal is to perform an action (e.g. transfer funds) to a user providing the correct key, you could have them calculate a keccak256 (one-way) hash of some secret (e.g. the key) off-chain, and then submit the original key for validation (against the hash stored in the contract).
pragma solidity ^0.8;

contract MyContract {
    // keccak256 hash of the string "foo"
    bytes32 hash = 0x41b1a0649752af1b28b3dc29a1556eee781e4a4c3a1f7f53f90fa834de098c4d;

    function guessThePassword(string memory _password) external view returns (bool) {
        return keccak256(abi.encodePacked(_password)) == hash;
    }
}
Mind that this approach (as well as your original approach from the question) is vulnerable to frontrunning. One way to prevent frontrunning is to use double hashing. You can see a code example in this contract, which was used for a competition of the type "first to submit the correct password can withdraw funds".
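The commit-reveal idea behind double hashing can be sketched off-chain. Below is a minimal Python illustration (sha256 stands in for keccak256, which needs a third-party library; the address value and two-step scheme shown are one common variant, not the exact contract linked above):

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in for keccak256 (requires a third-party lib in Python).
    return hashlib.sha256(data).digest()

# The contract stores the hash-of-hash of the secret at deployment time.
secret = b"foo"
stored = h(h(secret))

# Step 1 (commit): the solver submits h(secret) bound to their own address.
# A frontrunner copying the pending transaction learns only the inner hash,
# which is useless without the preimage.
solver_address = b"0xAlice"  # hypothetical address
commitment = h(h(secret) + solver_address)

# Step 2 (reveal, in a later transaction): the solver submits the secret;
# the contract checks the stored hash and that the earlier commitment
# matches h(h(secret) + msg.sender).
assert h(h(secret)) == stored
assert commitment == h(h(secret) + solver_address)
print("password accepted")
```

Because the commitment binds the solver's address, simply replaying the reveal transaction from another address fails the second check.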
Question:
Does the TokenIdentifier of a TokenType used to build a NonFungibleToken have to be a unique string?
Long-Winded Background:
The documentation clearly outlines that “It is, in fact, the responsibility of the developer to ensure that no two instances of an NonFungibleToken refer to the same off-chain or on-chain object…”
I was under the impression that the UniqueIdentifier was how I ensured that uniqueness:
TokenType tokenType = new TokenType("Toyota Corolla", 0);
IssuedTokenType issuedTokenType = new IssuedTokenType(partyA, tokenType);
String VIN = "1G2JB12F047226515";
UUID uuid = UUID.randomUUID();
UniqueIdentifier uniqueIdentifier = new UniqueIdentifier(VIN, uuid);
NonFungibleToken nonFungibleToken = new NonFungibleToken(issuedTokenType, partyA, uniqueIdentifier);
This allows me to define limitless Toyota Corollas, each with its own unique UUID and Vehicle Identification Number (VIN).
The MoveNonFungibleTokens() flow allows me to specify a QueryCriteria to isolate the specific NonFungibleToken I wish to move (I use a LinearStateQueryCriteria specifying the UUID):
subFlow(new MoveNonFungibleTokens(partyAndToken, observers, queryCriteria));
However when I want to redeem a NonFungibleToken the RedeemNonFungibleTokens() flow only allows me to specify a TokenType:
subFlow(new RedeemNonFungibleTokens(tokenType, issuer, observers));
This means I can’t identify a NonFungibleToken by its UUID. If you do what I am doing above, you will receive the following error when you try to redeem:
Exactly one held token of a particular type TokenType(tokenIdentifier=' Toyota Corolla ', fractionDigits=0) should be in the vault at any one time.
If this is the case, then TokenType’s tokenIdentifier ("Toyota Corolla") must be the source of uniqueness. I would have to do something like this:
TokenType tokenType = new TokenType("Toyota Corolla-" + UUID.randomUUID(), 0);
Is this correct or have I missed something?
I was just really surprised when I started writing the Redeem parts of my token tests and thought “Well then what is the purpose of the UniqueIdentifier in a NonFungibleToken?”
In the case of non-fungible tokens, the relationship between the token type and the token is one to one; that's why the redeem flow takes only the TokenType parameter.
Looking at the MoveNonFungibleTokens flow input parameters here, they sort of contradict the first statement; even the comment on the flow states that it should be used for one TokenType at a time, which means the queryCriteria parameter is not needed, since you're already specifying the token (token type) you want to move inside the PartyAndToken parameter. I will forward this discussion to the R3 engineers to get clarification.
As for your question, the reason you need the unique identifier in the non-fungible token is that it extends LinearState (which is identified by that UUID).
Remember that states are immutable in Corda, so how do you mimic an update? You do it by using a LinearState. For instance, if your non-fungible token is a car and you want to change the owner of the car (i.e. update the holder of the token), you create a transaction where the input is the current token and the output is the updated token, which should have the same linearId as the input. This way the output and the input are tied together, and you can track the history of updates of a certain state by querying that shared linearId.
Side note: your LinearState should have 2 constructors. One assigns a random linearId; use it when creating a new state. The other takes a linearId as an input parameter; use it when you create the output of an update transaction (so you can create the output with the linearId of the input). You must also mark that second constructor with the @ConstructorForDeserialization annotation so that Corda uses it when it check-points a flow (i.e. serializes then de-serializes your state); otherwise Corda will use the other constructor and assign a new random value to your linearId when it de-serializes your state (when the flow resumes), and you essentially end up with a different state!
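The update-by-shared-linearId mechanic can be illustrated outside Corda. Here is a minimal Python sketch (the CarToken class, issue/move helpers, and party names are all hypothetical stand-ins, not the Corda API):

```python
import uuid
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen mimics Corda's immutable states
class CarToken:
    linear_id: uuid.UUID
    holder: str

def issue(holder: str) -> CarToken:
    # New state: random linearId, like the no-arg constructor in Corda.
    return CarToken(uuid.uuid4(), holder)

def move(current: CarToken, new_holder: str) -> CarToken:
    # "Update": a new output state reusing the input's linearId, so the
    # token's history can be tracked by querying that shared id.
    return CarToken(current.linear_id, new_holder)

token_v1 = issue("PartyA")
token_v2 = move(token_v1, "PartyB")
assert token_v1.linear_id == token_v2.linear_id  # same lineage
assert token_v2.holder == "PartyB"
```

The key point is that the "updated" state is a brand-new immutable object; only the shared linearId links it to its predecessor.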
I recommend that you use EvolvableTokenType for your car example instead of TokenType; this would allow you to add your custom attributes (VIN, price, mileage, etc.), and you can control which attributes can be updated (price, mileage) and which cannot (VIN). See more about that in the official free Corda course from R3 here.
I am learning the Accounts concept released in Corda 4.3. It allows a node to sign a transaction with an account key rather than the node key. I have looked into a few aspects and still have these questions:
In which cases should we sign a transaction with an account key rather than the node key?
What would be the crucial benefit of account key signing over node key signing?
The framework allows a transaction between accounts on the same node to be signed with account keys. Why should we do that?
Thank you in advance.
It's not about what the crucial benefit of signing with an account key instead of a node key is; it's about what your state contract dictates.
For instance if you look at the gold-trading example:
The state has an attribute owningAccount:
data class LoanBook(val dealId: UUID, val valueInUSD: Long,
val owningAccount: PublicKey? = null) : ContractState
The contract dictates that the owningAccount is a required signer for the TRANSFER command:
inputGold.owningAccount?.let {
    require(inputGold.owningAccount in command.signers) {
        "The account sending the loan must be a required signer"
    }
}
Notice how the flow signs the transaction with the node's key (because the initiator of a flow is a required signer), and the owningAccount's key (because the contract dictates that):
val locallySignedTx = serviceHub.signInitialTransaction(transactionBuilder,
listOfNotNull(loanBook.state.data.owningAccount,
ourIdentity.owningKey))
My goal is to ensure that all data in the remote CouchDB is encrypted. When I follow this example from the transform-pouch docs, my data is not encrypted on the remote endpoint after sync:
pouch.transform({
  incoming: function (doc) {
    return encrypt(doc);
  },
  outgoing: function (doc) {
    return decrypt(doc);
  }
});
When I encrypt in outgoing it does, but then my data is encrypted locally as well. What am I doing wrong here? Isn't the point of encryption to have the data encrypted in the remote DB? Is the only way to achieve this to create set/get wrappers and encrypt in them? Can I somehow detect the document's destination in the outgoing call?
isn't the point of encryption to have data encrypted in remote db?
No. As explained in the package description:
Apply a transform function to documents before and after they are stored in the database.
In other words, it only modifies the data at rest.
This plugin has exactly no effect on the data being sent to/from CouchDB; it only changes the way the data is stored within PouchDB itself.
If you want to encrypt documents in CouchDB, too, you need to do this at the application layer. That is, encrypt the data yourself, and store it in the document or as an attachment in encrypted form.
I ran into this problem too. I did not want to transform the data when replicating, so I hacked in a replicating flag that is passed through the wrapped functions during replication: https://github.com/pouchdb/pouchdb/pull/7774
And modified the transform-pouch package to pass the option to the outgoing callback:
{
// decrypt it here.
outgoing: async (doc, args: IPouchDBWrapperArgs, type: TransformPouchType) => {
const {options} = args;
if (!options.replicating) {
doc = await this.decryptDoc(doc, options);
}
return doc;
},
};
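The idea of the patch is simply to thread a flag through so the transform is skipped during replication. A language-neutral sketch in Python (the `outgoing`/`options` names mirror the snippet above, but the flag and the string-reversal "decryption" are hypothetical stand-ins):

```python
def decrypt_doc(doc):
    # Stand-in for real decryption: just reverse the payload string.
    d = dict(doc)
    d["payload"] = d["payload"][::-1]
    return d

def outgoing(doc, options):
    # During replication, ciphertext should leave the node untouched;
    # only local reads decrypt.
    if options.get("replicating"):
        return doc
    return decrypt_doc(doc)

cipher_doc = {"_id": "car-1", "payload": "txetrehpic"}
assert outgoing(cipher_doc, {"replicating": True}) == cipher_doc
assert outgoing(cipher_doc, {})["payload"] == "ciphertext"
```

Note this requires the database layer to actually set the flag on replication-initiated reads, which is what the linked PR adds.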
Currently I'm using Bouncy Castle's libraries for the actual work and have found an example at sloanseaman.com that (after a little tweaking) works with v1.52.
I've also got a working example from developer.com of how to use the JCE interface, and I can even drop bcprov in and use some of its algorithms.
public class CryptoUtil {
    private static final String ALGORITHM = "IDEA/PGP/NoPadding";

    public static void encryptFile(File keyFile, File plainTextFile, File encryptedFile)
            throws GeneralSecurityException, IOException {
        Cipher cipher = Cipher.getInstance(ALGORITHM);
        cipher.init(Cipher.ENCRYPT_MODE, readKeyFromFile(keyFile));
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream(encryptedFile));
             InputStream in = new BufferedInputStream(new FileInputStream(plainTextFile))) {
            // in.available() is unreliable as an end-of-stream check;
            // read in fixed-size chunks until read() reports EOF instead.
            byte[] cleartextBytes = new byte[8192];
            int read;
            while ((read = in.read(cleartextBytes)) != -1) {
                // Encrypt this chunk and write it to the encrypted file.
                byte[] encryptedBytes = cipher.update(cleartextBytes, 0, read);
                if (encryptedBytes != null) {
                    out.write(encryptedBytes);
                }
            }
            // Take care of any pending padding operations.
            out.write(cipher.doFinal());
        }
        System.out.println("Encrypted to " + encryptedFile);
    }
}
But no matter what algorithm string I use, I can't get my JCE utility to encrypt the way the Bouncy Castle utility does.
The furthest I've gotten is "IDEA/PGP/NoPadding", which lets me encrypt and decrypt within my own utility, but the BC utility won't decrypt the output, saying there's an unknown object in the stream.
Here is my source code.
Do you know what combination of algorithm, mode, and padding I would need to use for this? Are there other options I need to apply somehow? I'm guessing I need to use BC's version of AlgorithmParametersSpi, but I haven't figured out how to create that yet.
You can't. While OpenPGP uses "normal" public/private and symmetric encryption algorithms, trouble starts with the modes. OpenPGP uses its own mode (a modified CFB mode), and also the whole OpenPGP packet syntax is not supported by Java's default libraries.
You'd at least need to reimplement the OpenPGP CFB mode in Java, or somehow rely on Bouncy Castle's implementation.
The OpenPGP CFB mode already includes a replacement for the initialization vector; no additional padding is used/required.
I have a program that exchanges a session key via the Diffie-Hellman algorithm (or almost does). All the action is in 2 classes: one receives data and calculates the private key, then sets it on the second class, where the symmetric key is calculated after receiving the public part of the DH exchange.
Program is using Qt and QCA.
Private key is stored as widget class member:
QCA::DHPrivateKey m_localKey;
After receiving the public part of the other side's key (as a QByteArray), it calculates the symmetric key:
QCA::Initializer init;
QCA::DLGroup group(prime, p);
QCA::SecureArray remoteKey(m_remoteKey);
QCA::DHPublicKey pk(group, remoteKey);
m_sessionKey = m_localKey.deriveKey(pk);
but the session key is always empty (m_sessionKey.isEmpty() and m_sessionKey.isNull() are both true).
The values are set and exchanged correctly (the remote side's public key is received as-is),
and m_localKey.isNull() and pk.isNull() return the correct values (false).
The strange part is that when I run my test, it works. The test uses the same order of operations; the private keys are just created in one class, but the logic to get the symmetric key is the same, and the class used for it is the same.
My question is why it could behave differently in the test and in the separate program. And is it possible to get any error/debug information from QCA::DHPrivateKey about what went wrong in deriveKey()?
Sadly that code was lost, so I can't check for sure, but the problem was probably in 2 places: transferring/receiving the data, and too many QCA::Initializer calls.
After creating a single QCA::Initializer in main and rewriting the data-exchange code, it works.
It's still a pity that I don't know how to check for errors if they occur, so if anyone knows, please share.
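For reference, the math that deriveKey() performs under the hood is plain modular exponentiation. A toy Python sketch with a tiny, deliberately insecure group (real QCA/OpenSSL use large primes and typically feed the result through a KDF):

```python
# Toy Diffie-Hellman with a tiny, INSECURE group (illustration only).
p = 23  # prime modulus (far too small for real use)
g = 5   # generator

a_priv, b_priv = 6, 15      # each side's private exponent
a_pub = pow(g, a_priv, p)   # sent to the other side
b_pub = pow(g, b_priv, p)

# deriveKey(): combine your private key with the remote public part.
a_shared = pow(b_pub, a_priv, p)
b_shared = pow(a_pub, b_priv, p)
assert a_shared == b_shared  # both sides agree on the session key
```

An empty derived key therefore usually means the inputs disagreed, e.g. the two sides ended up with different group parameters or a corrupted public value in transit, which matches the data-exchange bug suspected above.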