Make 1 key from 2 Vigenère keys - encryption

I have this school assignment about the Vigenère cipher.
I've got 2 keys: AB and XYZ. A text is encrypted twice with these keys.
The questions are:
How to make 1 key out of those 2?
How to make 1 key when there are 3 keys?

Choose the length of the combined key as the least common multiple of the key lengths.
Repeat each key until it fills the combined key.
Add all the repeated keys together, letter by letter, mod 26.
For example with AB and XYZ assuming A=0:
The lengths are 2 and 3, so the least common multiple is 6.
AB AB AB and XYZ XYZ
A+X, B+Y, A+Z, B+X, A+Y, B+Z = XZZYYA (the additions wrap around mod 26, which is why B+Z gives A)
This algorithm works with any number of keys.
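For concreteness, here is a small Python sketch of that combination, assuming A=0..Z=25 and addition mod 26 (the function name is just illustrative):

from math import lcm  # Python 3.9+

def combine_keys(*keys):
    """Combine any number of Vigenere keys into one equivalent key."""
    length = lcm(*(len(k) for k in keys))
    combined = []
    for i in range(length):
        # add the i-th letter of each repeated key, modulo 26
        total = sum(ord(k[i % len(k)]) - ord('A') for k in keys) % 26
        combined.append(chr(total + ord('A')))
    return "".join(combined)

print(combine_keys("AB", "XYZ"))  # XZZYYA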

@CodesInChaos's answer is great, but let's add some math:
|C| = lcm(|A|, |B|)
# lcm: least common multiple.
Also notice that you can compute the value of the combined key:
Let's define the keys as:
A=(a_0,a_1,…,a_i)
B=(b_0,b_1,…,b_j)
Then the value of the combined key is:
C = { c_i = a_(i % |A|) + b_(i % |B|) | 0 ≤ i < lcm(|A|, |B|) }
And this can be generalized to any number of keys:
C = { c_i = a_(i % |A|) + b_(i % |B|) + ... + z_(i % |Z|) | 0 ≤ i < lcm(|A|, |B|, ..., |Z|) }


Efficient string similarity grouping

Setting:
I have data on people, and their parent's names, and I want to find siblings (people with identical parent names).
pdata <- data.frame(parents_name = c("peter pan + marta steward",
                                     "pieter pan + marta steward",
                                     "armin dolgner + jane johanna dough",
                                     "jack jackson + sombody else"))
The expected output here would be a column indicating that the first two observations belong to family X, while the third and fourth observations are each in a separate family. E.g.:
person_id  parents_name                          family_id
1          "peter pan + marta steward"           1
2          "pieter pan + marta steward"          1
3          "armin dolgner + jane johanna dough"  2
4          "jack jackson + sombody else"         3
Current approach:
I am flexible regarding the distance metric. Currently, I use Levenshtein edit distance to match observations, allowing for two-character differences. But other variants such as "longest common substring" would be fine if they run faster.
For smaller subsamples I use stringdist::stringdist in a loop or stringdist::stringdistmatrix, but this is getting increasingly inefficient as sample size increases.
The matrix version explodes once a certain sample size is used. My terribly inefficient attempt at looping is here:
# create data of the same complexity using random last names
# (4 mio obs and ~1-3 kids per parents)
pdata <- data.frame(parents_name = paste0(rep(c("peter pan + marta ",
                                                "pieter pan + marta ",
                                                "armin dolgner + jane johanna ",
                                                "jack jackson + sombody "), 1e6),
                                          stringi::stri_rand_strings(4e6, 5)))

for (i in 1:nrow(pdata)) {
  similar_fatersname0 <- stringdist::stringdist(pdata$parents_name[i],
                                                pdata$parents_name[i:nrow(pdata)],
                                                nthread = 4) < 2
  # [create grouping indicator]
}
My question: There should be substantial efficiency gains, e.g. because I could stop comparing strings once I found them to be sufficiently different in something that is easier to assess, e.g. string length or the first word. The string-length variant already works and reduces complexity by a factor of ~3. But that's by far too little. Any suggestions to reduce computation time are appreciated.
Remarks:
The strings are actually in Unicode and not in the Latin alphabet (Devanagari)
Pre-processing to drop unused characters etc. is done
There are two challenges:
A. The parallel execution of Levenshtein distance - instead of a sequential loop
B. The number of comparisons: if our source list has 4 million entries, theoretically we should run 16 trillion Levenshtein distance measures, which is unrealistic, even if we resolve the first challenge.
To make my use of language clear, here are our definitions:
we want to measure the Levenshtein distance between expressions.
every expression has two sections, the parent A full name and the parent B full name, which are separated by a plus sign
the order of the sections matters (i.e. two expressions (1, 2) are identical if Parent A of expression 1 = Parent A of expression 2 and Parent B of expression 1 = Parent B of expression 2. Expressions will not be considered identical if Parent A of expression 1 = Parent B of expression 2 and Parent B of expression 1 = Parent A of expression 2)
a section (or a full name) is a series of words, which are separated by spaces or dashes and correspond to the first name and last name of a person
we assume the maximum number of words in a section is 6 (your example has sections of 2 or 3 words, I assume we can have up to 6)
the sequence of words in a section matters (the section is always a first name followed by a last name and never the last name first, e.g. Jack John and John Jack are two different persons).
there are 4 million expressions
expressions are assumed to contain only English characters. Numbers, spaces, punctuation, dashes, and any non-English character can be ignored
we assume the easy matches are already done (like the exact expression matches) and we do not have to search for exact matches
Technically the goal is to find series of matching expressions in the 4-million expression list. Two expressions are considered matching if their Levenshtein distance is less than 2.
Practically we create two lists, which are exact copies of the initial 4-million expression list. We call them the Left list and the Right list. Each expression is assigned an expression id before duplicating the list.
Our goal is to find entries in the Right list which have a Levenshtein distance of less than 2 to entries of the Left list, excluding the same entry (same expression id).
I suggest a two-step approach to resolve the two challenges separately. The first step will reduce the list of possible matching expressions, the second will simplify the Levenshtein distance measurement since we only look at very close expressions. The technology used is any traditional database server, because we need to index the data sets for performance.
CHALLENGE A
Challenge A consists of reducing the number of distance measurements. We start from a maximum of approx. 16 trillion (4 million squared) and we should not exceed a few tens or hundreds of millions.
The technique to use here consists of searching for at least one similar word in the complete expression. Depending on how the data is distributed, this will dramatically reduce the number of possible matching pairs. Alternatively, depending on the required accuracy of the result, we can also search for pairs with at least two similar words, or with at least half of similar words.
Technically I suggest putting the expression list in a table. Add an identity column to create a unique id per expression, and create 12 character columns. Then parse the expressions and put each word of each section in a separate column. This will look like the following (I have not represented all 12 columns, but the idea is below):
| id | expression                | sect_a_w_1 | sect_a_w_2 | sect_b_w_1 | sect_b_w_2 |
| 1  | peter pan + marta steward | peter      | pan        | marta      | steward    |
There are empty columns (since there are very few expressions with 12 words) but it does not matter.
Then we replicate the table and create an index on every sect... column.
We run 12 joins which try to find similar words, something like
SELECT L.id, R.id
FROM left_table L JOIN right_table R
ON L.sect_a_w_1 = R.sect_a_w_1
AND L.id <> R.id
We collect the output in 12 temp tables and run a union query of the 12 tables to get a short list of all expressions which have a potential matching expression with at least one identical word. This is the solution to our challenge A. We now have a short list of the most likely matching pairs. This list will contain millions of records (pairs of Left and Right entries), but not billions.
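Outside SQL, the same blocking idea can be sketched in a few lines of Python with an inverted index from words to expression ids. Note this is looser than the 12-column join above, since it ignores word positions; the data is just the question's toy sample:

from collections import defaultdict
from itertools import combinations

expressions = {1: "peter pan + marta steward",
               2: "pieter pan + marta steward",
               3: "armin dolgner + jane johanna dough"}

index = defaultdict(set)
for eid, expr in expressions.items():
    for word in expr.replace("+", " ").split():
        index[word].add(eid)

candidate_pairs = set()
for ids in index.values():
    candidate_pairs.update(combinations(sorted(ids), 2))
print(candidate_pairs)   # {(1, 2)} -- only the pair sharing 'pan', 'marta', 'steward'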
CHALLENGE B
The goal of challenge B is to process a simplified Levenshtein distance in batch (instead of running it in a loop).
First we should agree on what a simplified Levenshtein distance is.
We agree that the Levenshtein distance of two expressions is the sum of the Levenshtein distances of all the words of the two expressions which have the same index. I mean the Levenshtein distance of two expressions is the distance of their first words, plus the distance of their second words, etc.
Second, we need to invent a simplified Levenshtein distance. I suggest using the n-gram approach with only grams of 2 characters which have an index absolute difference of less than 2.
e.g. the distance between peter and pieter is calculated as below
Peter
1 = pe
2 = et
3 = te
4 = er
5 = r_
Pieter
1 = pi
2 = ie
3 = et
4 = te
5 = er
6 = r_
Peter and Pieter have 4 common 2-grams with an index absolute difference of less than 2: 'et', 'te', 'er', 'r_'. There are 6 possible 2-grams in the larger of the two words, so the distance is 6 - 4 = 2. (The true Levenshtein distance between peter and pieter is 1, a single insertion of 'i', so this approximation can overestimate slightly.)
This is an approximation which will not work in all cases, but I think in our situation it will work very well. If we're not satisfied with the quality of the results, we can try with 3-grams or 4-grams, or allow a gram index difference larger than 2. But the idea is to execute far fewer calculations per pair than in the traditional Levenshtein algorithm.
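A quick Python sketch of this approximate distance (0-based gram indices and '_' padding as in the example above; it is just the idea, not a reproduction of the SQL version that follows):

def bigrams(word):
    w = word.lower() + "_"                      # pad, as in the worked example
    return [w[i:i+2] for i in range(len(w) - 1)]

def approx_distance(a, b):
    ga, gb = bigrams(a), bigrams(b)
    # count grams of a that also occur in b at an index differing by less than 2
    common = sum(any(g == h and abs(i - j) < 2 for j, h in enumerate(gb))
                 for i, g in enumerate(ga))
    return max(len(ga), len(gb)) - common

print(approx_distance("peter", "pieter"))       # 2, matching the worked example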
Then we need to convert this into a technical solution. What I have done before is the following:
First isolate the words: since we need only to measure the distance between words, and then sum these distances per expression, we can further reduce the number of calculations by running a distinct select on the list of words (we have already prepared the list of words in the previous section).
This approach requires a mapping table which keeps track of the expression id, the section id, the word id and the word sequence number for each word, so that the original expression distance can be calculated at the end of the process.
We then have a new list which is much shorter, and contains a cross join of all words for which the 2-gram distance measure is relevant.
Then we want to batch process this 2-gram distance measurement, and I suggest doing it in a SQL join. This requires a pre-processing step which consists of creating a new temporary table which stores every 2-gram in a separate row, and keeps track of the word id, the word sequence and the section type.
Technically this is done by slicing the list of words using a series (or a loop) of substring selects, like this (assuming the word list tables - there are two copies, one Left and one Right - contain the 2 columns word_id and word):
INSERT INTO left_gram_table (word_id, gram_seq, gram)
SELECT word_id, 1 AS gram_seq, SUBSTRING(word,1,2) AS gram
FROM left_word_table
And then
INSERT INTO left_gram_table (word_id, gram_seq, gram)
SELECT word_id, 2 AS gram_seq, SUBSTRING(word,2,2) AS gram
FROM left_word_table
Etc.
Something which will make “steward” look like this (assuming the word id is 152):
| pk | word_id | gram_seq | gram |
| 1 | 152 | 1 | st |
| 2 | 152 | 2 | te |
| 3 | 152 | 3 | ew |
| 4 | 152 | 4 | wa |
| 5 | 152 | 5 | ar |
| 6 | 152 | 6 | rd |
| 7 | 152 | 7 | d_ |
Don't forget to create an index on the word_id, gram and gram_seq columns; the distance can then be calculated with a join of the left and right gram lists, where the ON clause looks like
ON L.gram = R.gram
AND ABS(L.gram_seq - R.gram_seq) < 2
AND L.word_id <> R.word_id
The distance is the length of the longer of the two words minus the number of matching grams. SQL is extremely fast at this kind of query, and I think a simple computer with 8 GB of RAM would easily handle several hundred million rows in a reasonable time frame.
And then it's only a matter of joining the mapping table to calculate the sum of word to word distance in every expression, to get the total expression to expression distance.
You are using the stringdist package anyway; does stringdist::phonetic() suit your needs? It computes the Soundex code for each string, e.g.:
phonetic(pdata$parents_name)
[1] "P361" "P361" "A655" "J225"
Soundex is a tried-and-true method (almost 100 years old) for hashing names, and it means you don't need to compare every single pair of observations.
You might want to go further and do Soundex on the first name and last name separately for father and mother.
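If you are curious what phonetic() is doing, here is a rough Python sketch of classic Soundex; the stringdist implementation may differ in details such as the handling of 'h' and 'w':

def soundex(name):
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4", **dict.fromkeys("mn", "5"), "r": "6"}
    letters = [c for c in name.lower() if c.isalpha()]
    if not letters:
        return ""
    out, prev = letters[0].upper(), codes.get(letters[0], "")
    for c in letters[1:]:
        code = codes.get(c, "")
        if code and code != prev:
            out += code
        if c not in "hw":          # vowels reset the previous code, h and w do not
            prev = code
    return (out + "000")[:4]

print(soundex("peter pan + marta steward"))   # P361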
My suggestion is to use a data science approach to identify only similar (same cluster) names to compare using stringdist.
I have modified the code generating "parents_name" a little bit, adding more variability in first and second names, for a scenario closer to reality.
num<-4e6
#Random length
random_l<-round(runif(num,min = 5, max=15),0)
#Random strings in the first and second name
parent_rand_first<-stringi::stri_rand_strings(num, random_l)
order<-sample(1:num, num, replace=F)
parent_rand_second<-parent_rand_first[order]
#Paste first and second name
parents_name<-paste(parent_rand_first," + ",parent_rand_second)
parents_name[1:10]
Here the real analysis starts: first extract features from the names, such as global length, length of the first name, length of the second name, number of vowels and consonants in both the first and second name (and any others of interest).
After that, bind all these features and cluster the data.frame into a high number of clusters (e.g. 1000):
features<-cbind(nchars,nchars_first,nchars_second,nvowels_first,nvowels_second,nconsonants_first,nconsonants_second)
n_clusters<-1000
clusters<-kmeans(features,centers = n_clusters)
Apply stringdistmatrix only inside each cluster (containing similar couples of names):
dist_matrix <- list()
for (i in 1:n_clusters)
{
  cluster_i <- clusters$cluster == i
  # subset the names for this cluster without overwriting parents_name
  cluster_names <- as.character(parents_name[cluster_i])
  dist_matrix[[i]] <- stringdistmatrix(cluster_names, cluster_names, "lv")
}
In dist_matrix you have the distance between each pair of elements in the cluster, and you are able to assign the family_id using this distance.
To compute the distances in each cluster (in this example) the code takes approximately 1 sec (depending on the size of the cluster); in about 15 minutes all the distances are computed.
WARNING: dist_matrix grows very fast; in your code it is better to analyze it inside the for loop, extract family_id, and then discard it.
You may improve this by not comparing all the pairs of lines.
Instead, create a new variable that will be helpful for deciding whether a comparison is worth doing.
For example, create a new variable "score" containing the ordered list of letters used in parents_name (for example, if "peter pan + marta steward" then the score will be "ademnprstw"), and calculate the distance only between lines whose scores match.
Of course, you can find a score that fits your needs better, and improve it a little to enable comparison when not all the letters used are common, as sketched below.
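As a small illustration of the idea in Python (the helper name is made up):

def score(parents_name):
    # ordered set of letters used in the expression
    return "".join(sorted(set(c for c in parents_name.lower() if c.isalpha())))

print(score("peter pan + marta steward"))   # ademnprstw
print(score("pieter pan + marta steward"))  # adeimnprstw -- one extra letter, so an exact score
                                            # match would miss this pair; loosen the rule as needed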
I faced the same performance issue a couple of years ago. I had to match people's duplicates based on their typed names. My dataset had 200k names and the matrix approach exploded. After searching for about a day for a better method, the method I'm proposing here did the job for me in a few minutes:
library(stringdist)
parents_name <- c("peter pan + marta steward",
"pieter pan + marta steward",
"armin dolgner + jane johanna dough",
"jack jackson + sombody else")
person_id <- 1:length(parents_name)
family_id <- vector("integer", length(parents_name))
# Looping through unassigned family ids
while (sum(family_id == 0) > 0) {
  ids <- person_id[family_id == 0]
  dists <- stringdist(parents_name[family_id == 0][1],
                      parents_name[family_id == 0],
                      method = "lv")
  matches <- ids[dists <= 3]
  family_id[matches] <- max(family_id) + 1
}
result <- data.frame(person_id, parents_name, family_id)
That way the while loop will compare fewer candidates on every iteration. From there, you might implement different performance boosters, like filtering the names by first letter before comparing, etc.
Making equivalence groups on a non-transitive relation does not make sense. If A is like B and B is like C, but A is not like C, how would you make families from that? Using something like Soundex (that was Neal Fultz's idea, not mine) seems the only meaningful option, and it solves your performance problem too.
What I have used to reduce the permutations involved in this sort of name matching is to create a function that counts the syllables in the name (surname) involved, then store this in the database as a pre-processed value. This becomes a Syllable Hash function.
Then you can choose to group words together that have the same number of syllables as each other. (Although I use algorithms that allow a 1 or 2 syllable difference, which may be presented as legitimate spelling / typo errors... but my research has found that 95% of misspellings share the same number of syllables.)
In this case Peter and Pieter would have the same syllable count (2), but Jones and Smith do not (they have 1). (For example)
If your function does not get 1 syllable for Jones, then you may need to increase your tolerance to allow for at least 1 syllable difference in the Syllable Hash function grouping that you use. (To account for incorrect syllable function results, and to catch the matching surname correctly in the grouping)
My syllable counting function may not apply completely, as you might need to cope with non-English letter sets... (so I have not pasted the code; it's in C anyway). Mind you, the syllable count function does not have to be accurate in terms of TRUE syllable count; it simply needs to act as a reliable hashing function, which it does. Far superior to Soundex, which relies on the first letter being accurate.
Give it a go, you might be surprised how much improvement you get by implementing a Syllable Hash function. You may have to ask SO for help getting the function into your language.
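The answer's C code isn't shown, but a very rough stand-in for a syllable hash is to count vowel groups; here is a Python sketch (an assumption, not the author's function):

import re

def syllable_hash(word):
    # rough syllable count: number of vowel groups
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

print(syllable_hash("peter"), syllable_hash("pieter"))  # 2 2
print(syllable_hash("jones"), syllable_hash("smith"))   # 2 1 -- the silent 'e' in Jones is counted,
                                                        # illustrating the tolerance caveat above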
If I understand it right, you want to compare every parent pair (every row of pdata) with all other pairs (rows), and keep rows that have a Levenshtein distance smaller than or equal to 2.
I have written the following code to get started:
pdata<-data.frame(parents_name=c("peter pan + marta steward",
"pieter pan + marta steward",
"armin dolgner + jane johanna dough",
"jack jackson + sombody else"))
fuzzy_match <- list()
system.time(for (i in 1:nrow(pdata)) {
  # stringdist() comes from the stringdist package (library(stringdist))
  fuzzy_match[[i]] <- cbind(pdata, parents_name_2 = pdata[i, "parents_name"],
                            dist = as.integer(stringdist(pdata[i, "parents_name"], pdata$parents_name)))
  fuzzy_match[[i]] <- fuzzy_match[[i]][fuzzy_match[[i]]$dist <= 2, ]
})
fuzzy_final <- do.call(rbind, fuzzy_match)
Does it return what you wanted?
It reproduces your output. I guess you will have to decide on partial matching criteria; I kept the default agrep ones:
pdata$parents_name<-as.character(pdata$parents_name)
x00<-unique(lapply(pdata$parents_name,function(x) agrep(x,pdata$parents_name)))
x=c()
for (i in 1:length(x00)){
x=c(x,rep(i,length(x00[[i]])))
}
pdata$person_id=seq(1:nrow(pdata))
pdata$family_id=x

converting rows to columns of a data frame in R

I have a data set like this
movieID  title  year  country  genre     directorName  Rating  actorName1  actorName.2
1        hello  1995  USA      action    john smith    6       tom hanks   charlie sheen
2        MI2    1997  USA      action    mad max       8       tom cruize  some_body
3        MI2    1997  USA      thriller  mad max       8       tom cruize  some_body
Basically there are numerous rows that differ only in the user-given genre, which I would like to turn into columns genre1, genre2, ...
I tried reshape() but it would only convert based on some ID variable. If anyone has any ideas, let me know.
You can use reshape() to do this, if you understand the lens through which reshape() views data.
Background
First, consider the concept of a record in the context of the relational model of data management. Generally, in a table of data, each record should correspond to a well-defined unit of data, concisely termed the record unit, with one or more columns acting as identification or key variables that serve to differentiate between unique instances of the record unit.
Usually, units are described by a set of scalar variables. In other words, each record has associated with it one or more scalar values, each of which provides a single piece of information about the unit. In a nice simple world, all properties of units would be scalar, and thus you could represent each variable as a single column vector, with each element/cell corresponding to one record unit, and thereby providing the value of that particular property for that particular unit.
Further to the concept of properties, it is possible and very common to identify typing or grouping classifications of units. These are often represented as additional scalar properties of units.
When people talk about the long format vs. the wide format of tabular data, they are generally referring to how these kinds of type classifications are laid out in a table. This choice of data layout is directly related to the choice of unit that is represented by a single record in the table. These are actually one and the same choice.
For example, in an experiment with multiple measurements per individual, it would be possible to store one measurement per record, with individuals represented over multiple records, and a type column to distinguish between measurement type. Alternatively, it would be possible to store one individual per record, with each measurement represented by a single column. With respect to each other, the former format is long, and the latter format is wide. But now consider that, if each individual belonged to a single experimental group within the experiment, it would be possible to store one group per record, with each individual represented by a set of columns, and each measurement represented by one column within the set. This is yet a "wider" format, if you will. It's all relative.
Unfortunately, unit characteristics are sometimes more complex than simple scalar values. The most common case of this is a multivalue property, sometimes described as a many-to-one relationship (especially in the context of DBMSs). In other words, multiple values for the property can be associated with a single record unit. In cases like this, it is not possible to represent the multivalue property as a simple column vector within the data set. There are hacks that programmers often settle into when trying to deal with this complexity, such as:
Concatenating the multiple values into a single scalar value (such as a single comma-separated string, or a bit vector). Let's call this the "concatenation hack".
Duplicating the unit record for each value of the property. (This generally can only be plausible if only one of the properties in the data set is multivalue.) Let's call this the "duplication hack".
Separating the property into multiple "instances" of itself, each stored in its own column. Let's call this the "separation hack".
Simply trying to ignore all but one of the multiple values. Let's call this the "ignorance hack".
In some contexts, special data types can be used to more appropriately represent the data as a pseudo-column-vector. PostgreSQL, for example, provides an array column type, and even R data.frames can have list columns whose individual elements can hold any data type supported by R, including multielement vectors. These representations are usually preferable to the aforementioned hacks.
Probably the most widely used solution that I wouldn't qualify as a hack is to completely separate the multivalue property from the primary table of data, and instead store it as a separate table which is linked to the primary table on a key. Each record in the secondary table has a key to a record in the primary table, and stores alongside the key a single value of the multivalue property. This is the design advocated by the relational model.
These approaches all have their own tradeoffs, of course, and the analysis of which is optimal for a given situation can be very complex, nebulous, and even somewhat subjective. I won't go into more detail on this here.
Before I begin to talk about reshape(), it is important to emphasize that unit typing is a very different thing from multivalue properties. Reshaping data is generally supposed to be about managing typing and record unit selection. It is not supposed to be about managing multivalue property layout, but it can be used in this way, as we will see.
reshape()
At its most abstract, reshape() can be used to transform a set of typed scalar data columns from one row per type with a discriminator column to one column per type with a discriminator suffix in the column name, for every unique (possibly multicolumn) key, and vice-versa.
The key will generally correspond with a single record unit, to use the terminology introduced earlier. Each key uniquely identifies one record unit.
The data columns are the actual variables/properties which describe the record units, with the discriminator acting to distinguish between the different types of the data variables.
In the terminology of the reshape() documentation and interface, the key columns are "id" columns, the discriminator is the "time" column, and the data columns are "varying" columns.
It is important to understand that the key you specify as the idvar argument is always the unique key of the wide format, whether you are transforming to wide from long, or to long from wide. In the long format, the unique key is the idvar columns plus the discriminator column (timevar).
Here's a simple demo:
## define example long table
long <- data.frame(id1=rep(letters[1:2],each=4L),id2=rep(1:2,each=2L),type=1:2,x=1:8,y=9:16);
long;
## id1 id2 type x y
## 1 a 1 1 1 9
## 2 a 1 2 2 10
## 3 a 2 1 3 11
## 4 a 2 2 4 12
## 5 b 1 1 5 13
## 6 b 1 2 6 14
## 7 b 2 1 7 15
## 8 b 2 2 8 16
## convert to wide
idvar <- c('id1','id2');
timevar <- 'type';
wide <- reshape(long,dir='w',idvar=idvar,timevar=timevar);
attr(wide,'reshapeWide') <- NULL; ## remove "helper" attribute, which cannot always be relied upon
wide;
## id1 id2 x.1 y.1 x.2 y.2
## 1 a 1 1 9 2 10
## 3 a 2 3 11 4 12
## 5 b 1 5 13 6 14
## 7 b 2 7 15 8 16
## convert back to long
long2 <- reshape(wide,dir='l',idvar=idvar,timevar=timevar,varying=names(wide)[!names(wide)%in%c(idvar,timevar)]);
attr(long2,'reshapeLong') <- NULL; ## remove "helper" attribute, which cannot always be relied upon
long2 <- long2[do.call(order,long2[c(idvar,timevar)]),]; ## better order, corresponding with original long
rownames(long2) <- NULL; ## remove useless row names
long2$type <- as.integer(long2$type); ## annoyingly, longifying interprets discriminator suffixes as doubles
identical(long,long2);
## [1] TRUE
The above code also demonstrates some of the quirks committed by reshape(), such as attribute assignments (that I've never seen anyone rely upon), unexpected row order, undesirable row names, and non-ideal vector type derivation. All of these quirks can be papered over with simple modifications, as I show above.
Also notice that the varying argument can be omitted when transforming from long to wide, in which case it is derived by reshape() by the process of elimination, but it cannot be omitted when transforming from wide to long.
Input
The situation you've gotten yourself into appears to be that you have a data.frame that is supposed to contain one row per movie, but each movie record has been duplicated for each genre that is associated with the movie. In other words, the movie is the record unit, and the genre is a multivalue property associated with the movie, which is currently being represented by the duplication hack.
Your objective seems to be to transform the data from the duplication hack into the separation hack.
I don't mean to sound too critical here; these hacks are widely used and are, in many cases, fairly effective at handling this kind of complexity in a relatively simple way. It's very likely this is a good solution for your application. But I'm going to call a spade a spade; these are hacks, and are far from the most appropriate or robust solutions for data processing. And I agree that the separation hack is better than the duplication hack.
Another confusing detail is that there is a movieID column which appears to be unique per row, and not unique per movie. IDs 2 and 3 both seem to be associated with movie MI2.
My interpretation is that, in the input, because the duplication hack has been used to deal with multiple genres, each row can be thought of as being unique per genre instance. In other words, each row represents a single instance of a genre as used in a single movie. Hence the movieID column is better thought of as a genre instance identifier, and has just been misnamed. (An alternative interpretation is that it was generated incorrectly, and should be unique per movie, in which case it should be fixed and treated identically to the key columns described later.)
Solution
We can solve this problem by calling reshape() to transform from long format to wide format.
Recall that reshaping is supposed to be used for type layout, for navigating between record unit representations. Here we're instead going to use it for transforming how the multivalue property currently stored in the genre column is laid out.
Now, the most important question is, which columns are keys (idvar), which is the discriminator (timevar), and which are data (varying)?
The easiest one is the genre column. It's a data column. It's not part of the key that will help uniquely identify each movie record in the wide format, and it's certainly not a discriminator of other data columns, so it must be a data column itself. We can also arrive at this answer by considering what must happen to it during the transformation; for each unique key, the genre values must be separated from one row per value to one column per value, which is what happens to all data columns when transforming from long to wide.
Now it's useful to consider the discriminator column. Which one is it? In actuality, it doesn't exist in the input. There's no column that says "this is genre type X, this is genre type Y". So what do we do? According to your required output, you want to associate with each genre a sequential index number, presumably in row order. This means we need to synthesize a new column with such a sequence when passing the data.frame to reshape(). However, we must be careful to ensure that the sequence starts anew for each movie, otherwise every record in the input table would see its genre occupy its own column in the output, due to its unique discriminator suffix. We can do this with ave() (grouping by the key columns) and transform(). We'll name the synthesized column time, which is the default assumption by reshape() if you don't specify the timevar argument. This will allow us to omit specification of that argument. (Note: I've always wished that reshape() would default to such a row-order sequence instead of looking for an input column named time, but it doesn't do that. Oh well.)
Now let's deal with the movieID column. Being a unique identifier in the input table, the only way to include it in the output table would be to also treat it as a data column, so that it would be split by the discriminator into separate columns. I decided to make the assumption that you don't want to do this, so I just removed it from the input table before reshaping, by exploiting the same transform() call. If you want, you can excise the removal piece to see the effect of including movieID across the transformation.
That leaves the remaining columns of title, year, country, directorName, Rating, actorName1, and actorName.2. How should we treat these?
Technically speaking, conceptually, most of them should be data columns. They can't be discriminators (we already covered that), and there's no way most of them (Rating, for example) could be considered key columns. Again, conceptually.
But it would be incorrect to specify any of them as data columns. The reason is that we're not using reshape() in the normal way. We know the movie records have been duplicated for the genre duplication hack used by the input data.frame, and so all the columns I just listed are actually just duplicates within the movie record group. We need these columns to effectively collapse to a single record in the output, and that's exactly what happens with key columns that pass through a reshape() call. Hence, we must identify them all as key columns by passing them to the idvar argument.
Another way of thinking about this is that the key columns are left untouched by reshape(), other than deduplication (if going from long to wide) or duplication (if going from wide to long). It is only the discriminator column that is transferred from column to suffix (if going from long to wide) or vice-versa (if going from wide to long), and data columns that are transferred from single column to multiple columns (if going from long to wide) or vice-versa (if going from wide to long). We need these columns to remain untouched, other than deduplication. Hence we require all columns, other than the target multivalue property column genre and the synthesized time column (and, in this case, the extraneous movieID column) to be specified as key columns.
Note that this is true even if one or more of the key columns could serve as a true key for the movie records. For example, if title was known to be unique within the table by movie, it would still be incorrect to just specify title as the key, and all the other column names I listed as data columns, because they would then be widened in the output according to the synthesized discriminator, even though we know all values within each movie record group are identical.
So, here's the end result:
df <- data.frame(movieID=c(1L,2L,3L),title=c('hello','MI2','MI2'),year=c(1995L,1997L,1997L),country=c('USA','USA','USA'),genre=c('action','action','thriller'),directorName=c('john smith','mad max','mad max'),Rating=c(6L,8L,8L),actorName1=c('tom hanks','tom cruize','tom cruize'),actorName.2=c('charlie sheen','some_body','some_body'),stringsAsFactors=F);
idcns <- names(df)[!names(df)%in%c('movieID','genre')];
reshape(transform(df,movieID=NULL,time=ave(df$movieID,df[idcns],FUN=seq_along)),dir='w',idvar=idcns,sep='');
## title year country directorName Rating actorName1 actorName.2 genre1 genre2
## 1 hello 1995 USA john smith 6 tom hanks charlie sheen action <NA>
## 2 MI2 1997 USA mad max 8 tom cruize some_body action thriller
Note that it is irrelevant exactly which vector is passed as the first argument to ave(), since seq_along() ignores its argument, except for its length. But we do require an integer vector, since ave() tries to coerce its result to the same type as the argument. It is acceptable to use df$movieID because it is an integer vector; alternatively we could use df$year, df$Rating, or synthesize an integer vector with seq_len(nrow(df)) or integer(nrow(df)).
Try this with dplyr and tidyr:
library(tidyr)
library(dplyr)
df %>% mutate(yesno=1) %>% spread(genre, yesno, fill=0)
This creates a column yesno that just gives a value to fill in for each genre. We can then use spread from tidyr. fill=0 means to fill in those not in the genre with 0 instead of NA.
Before:
genre title yesno
1 action lethal weapon 1
2 thriller shining 1
3 action taken 1
4 scifi alien 1
After:
title action scifi thriller
1 alien 0 1 0
2 lethal weapon 1 0 0
3 shining 0 0 1
4 taken 1 0 0

Tcl dict create makes weird dictionary

set table [dict create cells state bits state]
set data [dict create 1 "s" 1 "f"]
puts $table
puts $data
prints:
cells state bits state
1 f
This is quite weird! Why doesn't dict create make a full dictionary of 1 "s" 1 "f"?
Thanks!
PS: the documentation says:
dict create ?key value ...?
Return a new dictionary that contains each
of the key/value mappings listed as arguments (keys and values
alternating, with each key being followed by its associated value.)
https://www.tcl.tk/man/tcl/TclCmd/dict.htm#M6
PPS:
I found that using any two identical keys does the same thing, for example:
puts [dict create 2 "s" 2 "f"]
2 f
If you enter different values for the same key (key = 1 in the first case, key = 2 in the second), an associative container like the Tcl dict will keep only one of those values.
You can still have a data structure that has multiple equal keys, just don't use dict create, which enforces the unique-key trait (that's AFAIK the only reason to initialize dictionaries with dict create):
% set data {1 s 1 f}
1 s 1 f
% dict get $data 1
f
% set data [dict create 1 s 1 f]
1 f
% dict get $data 1
f
Dictionary-handling commands will deal with this structure as if it had only one each of the keys. Dictionary-mutating commands will not preserve the multiple equal keys of the original structure.

How to encrypt 32bit integer? [duplicate]

I need an algorithm that can do a one-to-one mapping (ie. no collision) of a 32-bit signed integer onto another 32-bit signed integer.
My real concern is enough entropy so that the output of the function appears to be random. Basically I am looking for a cipher similar to XOR Cipher but that can generate more arbitrary-looking outputs. Security is not my real concern, although obscurity is.
Edit for clarification purpose:
The algorithm must be symmetric, so that I can reverse the operation without a keypair.
The algorithm must be bijective; every 32-bit input number must generate a unique 32-bit output number.
The output of the function must be obscure enough; adding only one to the input should have a big effect on the output.
Example expected result:
F(100) = 98456
F(101) = -758
F(102) = 10875498
F(103) = 986541
F(104) = 945451245
F(105) = -488554
Just like MD5, changing one thing may change lots of things.
I am looking for a mathematical function, so manually mapping integers is not a solution for me. For those who are asking, algorithm speed is not very important.
Use any 32-bit block cipher! By definition, a block cipher maps every possible input value in its range to a unique output value, in a reversible fashion, and by design, it's difficult to determine what any given value will map to without the key. Simply pick a key, keep it secret if security or obscurity is important, and use the cipher as your transformation.
For an extension of this idea to non-power-of-2 ranges, see my post on Secure Permutations with Block Ciphers.
Addressing your specific concerns:
The algorithm is indeed symmetric. I'm not sure what you mean by "reverse the operation without a keypair". If you don't want to use a key, hardcode a randomly generated one and consider it part of the algorithm.
Yup - by definition, a block cipher is bijective.
Yup. It wouldn't be a good cipher if that were not the case.
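If you don't want to pull in a crypto library, a toy 32-bit block cipher is easy to build as a Feistel network over two 16-bit halves. Here is a Python sketch; the round function and keys are arbitrary placeholders, not a vetted cipher:

MASK16 = 0xFFFF

def round_fn(half, k):
    # any deterministic function of (half, key) works in a Feistel network
    return ((half * 0x9E37 + k) & MASK16) ^ (half >> 7)

def feistel32(x, keys, decrypt=False):
    left, right = (x >> 16) & MASK16, x & MASK16
    if decrypt:
        for k in reversed(keys):
            left, right = right ^ round_fn(left, k), left
    else:
        for k in keys:
            left, right = right, left ^ round_fn(right, k)
    return (left << 16) | right

keys = [0x3F2A, 0x9D11, 0x5C07, 0xE8B3]           # hardcoded "secret" round keys
y = feistel32(12345 & 0xFFFFFFFF, keys)           # treat the signed int as unsigned 32-bit
assert feistel32(y, keys, decrypt=True) == 12345  # the mapping is bijective and reversible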
I will try to explain my solution to this on a much simpler example, which then can be easily extended for your large one.
Say I have a 4-bit number. There are 16 distinct values. Look at it as if it were a four-dimensional cube:
[figure: a four-dimensional cube (source: ams.org)]
Every vertex represents one of those numbers, and every bit represents one dimension. So it's basically XYZW, where each of the dimensions can only have the value 0 or 1. Now imagine you use a different order of dimensions, for example XZYW. Each of the vertices has now changed its number!
You can do this for any number of dimensions, just permute those dimensions. If security is not your concern this could be a nice fast solution for you. On the other hand, I don't know if the output will be "obscure" enough for your needs, and certainly after a large amount of mapping is done, the mapping can be reversed (which may be an advantage or disadvantage, depending on your needs).
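A quick Python sketch of this dimension (bit position) shuffle and its inverse, with a randomly chosen but fixed permutation standing in for the "different order of dimensions":

import random

PERM = list(range(32))
random.Random(42).shuffle(PERM)            # fixed, "secret" permutation of bit positions
INV = [0] * 32
for i, p in enumerate(PERM):
    INV[p] = i

def permute_bits(x, perm):
    # bit i of x moves to position perm[i]
    return sum(((x >> i) & 1) << p for i, p in enumerate(perm))

y = permute_bits(0xDEADBEEF, PERM)
assert permute_bits(y, INV) == 0xDEADBEEF  # applying the inverse permutation restores the input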
The following paper gives you 4 or 5 mapping examples, giving you functions rather than building mapped sets: www.cs.auckland.ac.nz/~john-rugis/pdf/BijectiveMapping.pdf
If your goal is simply to get a seemingly random permutation of numbers of a roughly defined size, then there is another possible way: reduce the set of numbers to a prime number.
Then you can use a mapping of the form
f(i) = (i * a + b) % p
and if p is indeed a prime, this will be a bijection for all a != 0 and all b. It will look fairly random for larger a and b.
For example, in my case for which I stumbled on this question, I used 1073741789 as a prime for the range of numbers smaller than 1 << 30. That makes me lose only 35 numbers, which is fine in my case.
My encoding is then
((n + 173741789) * 507371178) % 1073741789
and the decoding is
(n * 233233408 + 1073741789 - 173741789) % 1073741789
Note that 507371178 * 233233408 % 1073741789 == 1, so those two numbers are inverses in the field of numbers modulo 1073741789 (you can find inverses in such fields with the extended Euclidean algorithm).
I chose a and b fairly arbitrarily, I merely made sure they are roughly half the size of p.
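For reference, the whole scheme fits in a few lines of Python; the sketch below uses the answer's p, a and b, but recomputes the modular inverse with pow() rather than hardcoding it:

p = 1_073_741_789                 # the prime from the answer, just below 1 << 30
a, b = 507_371_178, 173_741_789

def encode(n):
    return ((n + b) * a) % p

a_inv = pow(a, -1, p)             # modular inverse (Python 3.8+); extended Euclid works too

def decode(c):
    return (c * a_inv - b) % p

assert all(decode(encode(n)) == n for n in range(10_000))   # bijective on 0..p-1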
Apart from generating random lookup-tables, you can use a combination of functions:
XOR
symmetric bit permutation (for example shift 16 bits, or flip 0-31 to 31-0, or flip 0-3 to 3-0, 4-7 to 7-4, ...)
more?
Can you use a randomly generated lookup table? As long as the random numbers in the table are unique, you get a bijective mapping. It's not symmetric, though.
One 16 GB lookup table for all 32-bit values is probably not practical, but you could use two separate 16-bit lookup tables for the high word and the low word.
PS: I think you can generate a symmetric bijective lookup table, if that's important. The algorithm would start with an empty LUT:
+----+ +----+
| 1 | -> | |
+----+ +----+
| 2 | -> | |
+----+ +----+
| 3 | -> | |
+----+ +----+
| 4 | -> | |
+----+ +----+
Pick the first element, assign it a random mapping. To make the mapping symmetric, assign the inverse, too:
+----+ +----+
| 1 | -> | 3 |
+----+ +----+
| 2 | -> | |
+----+ +----+
| 3 | -> | 1 |
+----+ +----+
| 4 | -> | |
+----+ +----+
Pick the next number, again assign a random mapping, but pick a number that's not been assigned yet. (i.e. in this case, don't pick 1 or 3). Repeat until the LUT is complete. This should generate a random bijective symmetric mapping.
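Here is a Python sketch of that construction on a small range (the real thing would be built per 16-bit half, as suggested above):

import random

def random_involution(n, seed=0):
    # random symmetric bijection on range(n): pick an unassigned value, pair it with
    # another unassigned value, and record the mapping in both directions
    rng = random.Random(seed)
    unassigned = list(range(n))
    mapping = [None] * n
    while unassigned:
        a = unassigned.pop(0)
        b = unassigned.pop(rng.randrange(len(unassigned))) if unassigned else a
        mapping[a], mapping[b] = b, a
    return mapping

lut = random_involution(16)
assert sorted(lut) == list(range(16))            # bijective
assert all(lut[lut[x]] == x for x in range(16))  # symmetric: applying it twice is the identity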
Take a number, multiply by 9, reverse the digits, divide by 9.
123 <> 1107 <> 7011 <> 779
256 <> 2304 <> 4032 <> 448
1028 <> 9252 <> 2529 <> 281
Should be obscure enough!
Edit: it is not a bijection for integers ending in 0:
900 <> 8100 <> 18 <> 2
2 <> 18 <> 81 <> 9
You can always add a specific rule like:
Take a number, divide by 10 x times (until it no longer ends in 0), multiply by 9, reverse the digits, divide by 9, multiply by 10^x.
And so
900 <> 9 <> 81 <> 18 <> 2 <> 200
200 <> 2 <> 18 <> 81 <> 9 <> 900
W00t, it works!
Edit 2: For more obscureness, you can add an arbitrary number and subtract it at the end.
900 < +256 > 1156 < *9 > 10404 < invert > 40401 < /9 > 4489 < -256 > 4233
123 < +256 > 379 < *9 > 3411 < invert > 1143 < /9 > 127 < -256 > -129
Here is my simple idea:
You can move the bits of the number around, as PeterK proposed, but you can have a different permutation of bits for each number and still be able to decipher it.
The cipher goes like this:
Treat the input number as an array of bits I[0..31], and the output as O[0..31].
Prepare an array K[0..63] of 64 randomly generated numbers. This will be your key.
Take the bit of the input number from the position determined by the first random number (I[K[0] mod 32]) and place it at the beginning of your result (O[0]). Now to decide which bit to place at O[1], use the previously taken bit. If it is 0, use K[1] to generate the position in I from which to take; if it is 1, use K[2] (which simply means skipping one random number).
Now this will not work well as is, since you may take the same bit twice. In order to avoid that, renumber the bits after each iteration, omitting the used bits. To generate the position from which to take O[1], use I[K[p] mod 31], where p is 1 or 2 depending on the bit O[0], as there are 31 bits left, numbered from 0 to 30.
To illustrate this, I'll give an example:
We have a 4-bit number, and 8 random numbers: 25, 5, 28, 19, 14, 20, 0, 18.
I: 0111    O: ____
25 mod 4 = 1, so we take the bit at position 1 (counting from 0):
I: 0_11    O: 1___
We've just taken a bit of value 1, so we skip one random number and use 28. There are 3 bits left, so the position is 28 mod 3 = 1; we take bit 1 (counting from 0) of the remaining bits:
I: 0__1    O: 11__
Again we skip one number and take 14. 14 mod 2 = 0, so we take the 0th remaining bit:
I: ___1    O: 110_
Now it doesn't matter, but the previous bit was 0, so we take 20. 20 mod 1 = 0:
I: ____    O: 1101
Deciphering such a number is easy; one just has to do the same things. The position at which to place the first bit of the code is known from the key, and the next positions are determined by the previously inserted bits.
This obviously has all the disadvantages of anything which just moves the bits around (for example 0 becomes 0, and MAXINT becomes MAXINT), but it seems harder to figure out how someone has encrypted the number without knowing the key, which has to be secret.
If you don't want to use proper cryptographic algorithms (perhaps for performance and complexity reasons) you can instead use a simpler cipher like the Vigenère cipher. This cipher was actually described as le chiffre indéchiffrable (French for 'the unbreakable cipher').
Here is a simple C# implementation that shifts values based on a corresponding key value:
void Main()
{
    var clearText = Enumerable.Range(0, 10);
    var key = new[] { 10, 20, Int32.MaxValue };
    var cipherText = Encode(clearText, key);
    var clearText2 = Decode(cipherText, key);
}

IEnumerable<Int32> Encode(IEnumerable<Int32> clearText, IList<Int32> key) {
    return clearText.Select((i, n) => unchecked(i + key[n % key.Count]));
}

IEnumerable<Int32> Decode(IEnumerable<Int32> cipherText, IList<Int32> key) {
    return cipherText.Select((i, n) => unchecked(i - key[n % key.Count]));
}
This algorithm does not create a big shift in the output when the input is changed slightly. However, you can use another bijective operation instead of addition to achieve that.
Draw a large circle on a large sheet of paper. Write all the integers from 0 to MAXINT clockwise from the top of the circle, equally spaced. Write all the integers from 0 to MININT anti-clockwise, equally spaced again. Observe that MININT is next to MAXINT at the bottom of the circle. Now make a duplicate of this figure on both sides of a piece of stiff card. Pin the stiff card to the circle through the centres of both. Pick an angle of rotation, any angle you like. Now you have a 1-1 mapping which meets some of your requirements, but is probably not obscure enough. Unpin the card, flip it around a diameter, any diameter. Repeat these steps (in any order) until you have a bijection you are happy with.
If you have been following closely it shouldn't be difficult to program this in your preferred language.
For Clarification following the comment: If you only rotate the card against the paper then the method is as simple as you complain. However, when you flip the card over the mapping is not equivalent to (x+m) mod MAXINT for any m. For example, if you leave the card unrotated and flip it around the diameter through 0 (which is at the top of the clock face) then 1 is mapped to -1, 2 to -2, and so forth. (x+m) mod MAXINT corresponds to rotations of the card only.
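One way to program it, as the answer suggests: a rotation of the card is addition mod 2^32 and a flip about a diameter is a reflection, and you compose a secret sequence of them. A Python sketch with made-up step values:

M = 1 << 32   # work on unsigned 32-bit values; convert signed <-> unsigned at the edges

def rotate(x, m):
    return (x + m) % M

def flip(x, d):
    # reflection about the diameter through d: the flip about 0 sends 1 -> -1, 2 -> -2, ...
    return (2 * d - x) % M

STEPS = [("r", 123_456_789), ("f", 98_765), ("r", 31_337)]   # made-up secret sequence

def encode(x):
    for op, k in STEPS:
        x = rotate(x, k) if op == "r" else flip(x, k)
    return x

def decode(x):
    for op, k in reversed(STEPS):
        x = rotate(x, -k) if op == "r" else flip(x, k)  # a flip is its own inverse
    return x

assert decode(encode(0xCAFEBABE)) == 0xCAFEBABE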
Split the number in two (16 most significant bits and 16 least significant bits) and consider the bits in the two 16-bit results as cards in two decks. Mix the decks forcing one into the other.
So if your initial number is b31,b30,...,b1,b0 you end up with b15,b31,b14,b30,...,b1,b17,b0,b16. It's fast and quick to implement, as is the inverse.
If you look at the decimal representation of the results, the series looks pretty obscure.
You can manually map 0 -> maxvalue and maxvalue -> 0 to avoid them mapping onto themselves.
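A Python sketch of this riffle and its inverse (bit numbering as in the answer, b0 being the least significant bit):

def riffle(x):
    # output bit 2i+1 <- input bit i (low half), output bit 2i <- input bit 16+i (high half)
    out = 0
    for i in range(16):
        out |= ((x >> i) & 1) << (2 * i + 1)
        out |= ((x >> (16 + i)) & 1) << (2 * i)
    return out

def unriffle(y):
    x = 0
    for i in range(16):
        x |= ((y >> (2 * i + 1)) & 1) << i
        x |= ((y >> (2 * i)) & 1) << (16 + i)
    return x

assert unriffle(riffle(0x12345678)) == 0x12345678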

Arranging Vigenere Cipher into columns

As I understand it, if you arrange a Vigenère ciphertext into columns you can use the Index of Coincidence to find out the key length.
I'm struggling to write an algorithm that would take a piece of text and arrange it into columns.
For example -
1 2 3 4 5 6 7 8 9 10
Would return this if the period is 2 -
1,3,5,7,9
2,4,6,8,10
and perform an IOC test on each of these strings
If the period is 3 -
1,4,7,10
2,5,8
3,6,9
and perform an IOC test on each of these strings
Etc etc.
I've constructed an IOC test; however, I'm struggling to think of an algorithm to split the text up into columns. Any tips on how to think more like a computer scientist and construct algorithms like this?
If you already know the key length, it's pretty trivial. If you don't know the key length, you have to guess it, e.g. by entropy. Here is an example in Python:
if you_dont_know_key_length:
    key_length = find_key_length_by_entropy(ciphertext)  # placeholder for your own IoC/entropy test
columns = [ciphertext[i::key_length] for i in range(key_length)]
Any language should basically have the same construct (pick every n-th element of the ciphertext, starting at offset i).
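A quick check on the question's own 1..10 example, with letters standing in for the positions:

ciphertext = "ABCDEFGHIJ"   # stands in for positions 1..10 from the question
for key_length in (2, 3):
    print(key_length, [ciphertext[i::key_length] for i in range(key_length)])
# 2 ['ACEGI', 'BDFHJ']
# 3 ['ADGJ', 'BEH', 'CFI']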
