Consider this scenario:
Key1 = random key
Key2 = random key
CombinedKey = Key1.encrypt(Key2)
Input = "test"
Step1 = CombinedKey.encrypt(Input)
Step2 = Key2.decrypt(Step1)
Result = Key1.decrypt(Step2)
Is Result == "test" if the encryption type is AES? Or for any other encryption algorithm?
No. AES is not a group. For simplicity's sake, let's put it this way: AES encryption is not commutative. Said another way, because AES is not a group, there is no single key X that can undo, in one step, encryption with key Y followed by key Z. There are no shortcuts.
If you encrypt Input with CombinedKey, then only CombinedKey will decrypt it. Using Key2 to decrypt Step1 will produce only junk, not an intermediate result.
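To make this concrete, here is a toy illustration in Python. A deterministic byte-substitution table stands in for AES (it is not AES, just a stand-in with the same encrypt/decrypt shape), and the key values and helper names are invented for the sketch:

```python
import random

def make_perm(key: bytes):
    # derive a byte-substitution table deterministically from the key
    rnd = random.Random(key)
    perm = list(range(256))
    rnd.shuffle(perm)
    return perm

def encrypt(key: bytes, data: bytes) -> bytes:
    perm = make_perm(key)
    return bytes(perm[b] for b in data)

def decrypt(key: bytes, data: bytes) -> bytes:
    perm = make_perm(key)
    inv = [0] * 256
    for i, p in enumerate(perm):
        inv[p] = i
    return bytes(inv[b] for b in data)

key1 = b"key one"
key2 = b"key two"
combined_key = encrypt(key1, key2)      # CombinedKey = Key1.encrypt(Key2)

step1 = encrypt(combined_key, b"test")  # Step1 = CombinedKey.encrypt(Input)
step2 = decrypt(key2, step1)            # Step2 = Key2.decrypt(Step1)
result = decrypt(key1, step2)           # Result = Key1.decrypt(Step2)

print(result == b"test")                # False: Result is junk
print(decrypt(combined_key, step1))     # b'test': only CombinedKey works
```

The substitution tables derived from key1, key2, and combined_key do not compose into one another, so peeling off key2 and then key1 never recovers the input.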
Is there a sensible way of storing a mapping of key/value pairs where the key is of length > 1?
What I know so far
Where keys are of length 1, we can use a named list, e.g.
mylist <- list(a=c("apple", "alphabet", "allegro"),
b=c("baseball", "brilliant"))
and access the values by using the keys, like so
mylist$a
# [1] "apple" "alphabet" "allegro"
But if the keys have length greater than 1, e.g. c('a', 'foo', 'bar') and c('b', 'some', 'thing') instead of a and b, is there a data structure in R that caters to this many-to-many mapping, so that any one of the elements of a key will map to the relevant values?
What you want, from what I understand, is alternate keys for the same element. This is more a problem of designing the best structure than something intrinsic to R.
One solution would be to assign the value to each corresponding key, but that would create redundancy, since the value would be repeated.
A better solution is to use a lookup list that translates every possible jargon term to a single canonical keyword, which is then used as the actual key.
So you can have a list of synonyms like:
synonyms <- list(jargon1 = "keyword1", jargon2 = "keyword1", jargon3 = "keyword3")
So both jargon1 and jargon2 would point to the same keyword which could then be used to fetch the correct value from your main list.
What I would do is create a new master_list with the name of all the keys that it can take.
master_list <- list(a = c('a', 'foo', 'bar'), b = c('b', 'some', 'thing'))
Now the values present in master_list can be referred to with one common key in mylist.
mylist <- list(a=c("apple", "alphabet", "allegro"), b=c("baseball", "brilliant"))
This will give minimum redundancy overall.
Using the hash package in R I created a hash table with keys and values. I want to add new keys and values to the existing hash table. Is there any way?
Suppose
ht <- hash(keys = letters, values = 1:26)
And I need to add new keys and values to ht.
Is there any way other than, e.g.:
ht$zzz <- 45
The documentation for the hash package provides a number of syntax varieties for adding new elements to a hash:
h <- hash()
.set( h, keys=letters, values=1:26 )
.set( h, a="foo", b="bar", c="baz" )
.set( h, c( aa="foo", ab="bar", ac="baz" ) )
The first .set option would seem to be the best for bulk inserts of key-value pairs. You would only need a pair of vectors, ordered so that the key-value pairing is set up the way you want.
How can I use hash in R so that the value stored under a key is another hash?
In python I would have something like this:
hash = {}
hash["other_hash"] = {}
hash["other_hash"]["value"] = 5
In R I'm trying to use the hash library and the env structure to create hashes, but I can't create one hash inside the value of another hash.
You can use a list():
hash <- list(other_hash = list(value = 5))
hash$other_hash$value #5
I am trying to combine keys with the same value in a dictionary, for example
di = {'dog':'A', 'cat':'A'}
to become
{'A':['dog', 'cat']}
I can reverse the dictionary no problem, but cannot seem to code a way to combine keys. Does anyone have any suggestions? Thanks!
This is what I have as code; it has reversed the keys and values, but has dropped any keys with the same values. Is there a simple way to keep all of the keys and values?
for k, v in sorted(dict1.items(), key=lambda kv: kv[0]):
    dict2[v] = k  # keys that share a value overwrite each other here
return dict2
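One way to keep every key is to accumulate a list per value instead of assigning a single key. A minimal sketch, assuming Python 3 and using collections.defaultdict:

```python
from collections import defaultdict

di = {'dog': 'A', 'cat': 'A'}

inverted = defaultdict(list)
for k, v in di.items():
    inverted[v].append(k)   # collect every key that shares this value

print(dict(inverted))       # {'A': ['dog', 'cat']}
```

Because appending never overwrites, keys with duplicate values are all preserved rather than dropped.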
I need to generate a random number corresponding to each value in an index, and it needs to be reproducible for each index value regardless of how many indices are given.
As an example, I might provide the indices 1 to 10, and at a different time call the indices 5 to 10, and the values for 5 to 10 need to be the same in both calls. Setting a global seed will not do this; it only keeps the nth item of the random draw the same by position in the vector.
What I have so far is this, which works as desired:
f = function(ix, min=0, max=1, seed=1){
  sapply(ix, function(x){
    set.seed(seed + x)
    runif(1, min, max)
  })
}
identical(f(1:10)[5:10],f(5:10)) #TRUE
identical(f(1:5),f(5:1)) #FALSE
identical(f(1:5),rev(f(5:1))) #TRUE
I was wondering if there is a more efficient way of achieving the above, without setting the seed explicitly for each index, as an offset to global seed.
You can use the digest package for tasks like this:
library(digest)
f = function(ix, seed=1){
  sapply(ix, digest, algo = "sha256", seed = seed)
}
identical(f(1:10)[5:10], f(5:10))
#> [1] TRUE
identical(f(1:5), f(5:1))
#> [1] FALSE
identical(f(1:5), rev(f(5:1)))
#> [1] TRUE
Use encryption. With a given key, unique inputs always produce unique outputs: as long as the numbers you put in are distinct, the outputs will always be different, and the same number will always encrypt to the same output ciphertext. Use DES for 64-bit numbers or AES for 128-bit numbers. For other sizes, either roll your own Feistel cipher (insecure, but random) or use the Hasty Pudding cipher.
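The Feistel approach can be sketched in a few lines of Python. This is an insecure toy, not a vetted cipher; the round count and the hash-based round function are arbitrary choices for the sketch, but the Feistel structure guarantees the map over 64-bit numbers is a bijection, so distinct inputs always give distinct outputs:

```python
import hashlib

def feistel_round(half: int, key: str, i: int) -> int:
    # toy round function: 32 bits of SHA-256 over (half, key, round index)
    data = f"{half}:{key}:{i}".encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def feistel_encrypt(x: int, key: str, rounds: int = 4) -> int:
    # split the 64-bit input into two 32-bit halves
    left, right = x >> 32, x & 0xFFFFFFFF
    for i in range(rounds):
        left, right = right, left ^ feistel_round(right, key, i)
    return (left << 32) | right

def feistel_decrypt(y: int, key: str, rounds: int = 4) -> int:
    # run the rounds in reverse to invert the permutation
    left, right = y >> 32, y & 0xFFFFFFFF
    for i in reversed(range(rounds)):
        left, right = right ^ feistel_round(left, key, i), left
    return (left << 32) | right

# distinct inputs map to distinct, reproducible outputs under one key
outputs = [feistel_encrypt(x, "secret") for x in range(1000)]
print(len(set(outputs)))   # 1000: no collisions, by construction
```

Because the cipher is a permutation, decryption recovers the original index exactly, and re-running with the same key reproduces the same "random" values for any subset of indices.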