I have been stumped on this for a long time and I was wondering if anyone can help me out. How could I write a program that calculates the shift value of an encrypted Caesar cipher file in C++? My assignment is to take each lowercase letter and count how many times each is used, then find the most frequent character. I know I can do that with a map of char and int. But then I would have to go back to that letter to change it to the letter 'e'. With maps there's no way to search back through values. I think the only way is a vector of vectors, but I wouldn't know how to find the letter again with those either. Does anyone know a better way, or how could I use vectors to accomplish this?
You can go like this.
First, read the whole file into a buffer.
Create a map with char keys and int values, containing every letter of the alphabet with a value of 0.
Loop over the whole buffer to the end, incrementing the map value by 1 for each character. Also maintain a max variable storing the key of the character with the highest count.
At the end of the loop, the max variable will hold the ciphertext letter corresponding to 'e'.
Subtracting 4 (the index of 'e') from max's index gives you the shift value for this cipher. If the result is negative, add 26, since the calculation is mod 26.
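A minimal sketch of these steps, assuming the whole file has already been read into a std::string (the buffer and function names are illustrative):

    #include <map>
    #include <string>

    // Sketch: recover the Caesar shift by frequency analysis, assuming the
    // plaintext's most common letter is 'e'.
    int findShift(const std::string& buffer) {
        std::map<char, int> counts;
        for (char c = 'a'; c <= 'z'; ++c) counts[c] = 0;  // all letters start at 0
        char maxChar = 'a';
        for (char c : buffer) {
            if (c >= 'a' && c <= 'z') {
                ++counts[c];
                if (counts[c] > counts[maxChar]) maxChar = c;  // track most frequent
            }
        }
        int shift = (maxChar - 'a') - 4;  // 4 is the index of 'e'
        if (shift < 0) shift += 26;       // the calculation is mod 26
        return shift;
    }

Note that since the map's keys are the chars themselves, you never need to "search back through values": the max variable already remembers which letter had the highest count.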
All you need is a vector of size 26 (one for each character) where A has index 0 and Z has index 25.
Go through the ciphertext and, for each character, increase the value at that character's index in the vector.
When you've gone through all the ciphertext, go through the vector and find the highest value. That character probably corresponds to E.
Now take that index and subtract 4 (the index of E).
This yields the shift value.
Let's say index 20 has the highest count; then your shift value is 20 - 4 = 16.
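The same idea as a hedged sketch with a plain vector of 26 counters (using lowercase letters, as in the question):

    #include <string>
    #include <vector>

    // Sketch: count letter frequencies in a 26-element vector, index 0 = 'a'.
    int findShift(const std::string& ciphertext) {
        std::vector<int> counts(26, 0);
        for (char c : ciphertext)
            if (c >= 'a' && c <= 'z') ++counts[c - 'a'];  // bump this letter's counter
        int maxIndex = 0;
        for (int i = 1; i < 26; ++i)
            if (counts[i] > counts[maxIndex]) maxIndex = i;  // most frequent letter
        return (maxIndex - 4 + 26) % 26;  // subtract the index of 'e', wrap mod 26
    }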
I have a load cell connected to an HX711, which all works fine. I am attempting to create a calibration table with 10 points holding the raw sensor output and the calibrated value, using a set of weights. The load cell is bi-directional, so it works in either direction; the output is positive and negative counts, but zero load does not necessarily give 0 output.
This all works fine when the numbers are all positive or all negative in each lookup table, but it fails when there are negative and positive numbers among the captured points. E.g., the output from the HX711 is positive 28,000 with no load. Add a load of 1kg and get a reading of -56,000. Adding another 1kg gives a reading of, say, -83,000. These are stored as {28000, -56000, -83000} in an array, with the calibrated {0, 1, 2} in another array.
Normally I interpolate the result based on finding which 2 numbers the raw count falls between. Everything works when the numbers are less than -56,000 and I get readings of 1 to 2kg. When the reading is greater than -56,000, it fails to calculate the reading and I end up with NAN.
It can also be the other way around, negative first and then positive: {-56,000, 28,000, 55,000} for example.
How to handle this situation?
I worked this out not long after I posted the question and thought that the answer would help anyone else with this same issue. By comparing the two values for negative or positive as I stepped through the table and then swapping them around in the calculation, it works. The difference between 28,000 and -56,000 comes out as 84,000, and using this to do the maths works. I confirmed operation by applying 1kg and 2kg test loads. It reads in both directions, positive or negative.
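A hedged sketch of a sign-agnostic linear interpolation, assuming two parallel arrays for raw counts and calibrated weights (the names and table values are illustrative, not from the original code):

    #include <math.h>  // NAN, fminf, fmaxf

    // Sketch: interpolate over a calibration table whose raw counts may be
    // positive, negative, or mixed. Using min/max for the segment test means
    // the sign and direction of the counts don't matter.
    const int N = 3;
    float rawCounts[N] = {28000.0f, -56000.0f, -83000.0f};  // illustrative table
    float calKg[N]     = {0.0f, 1.0f, 2.0f};

    float countsToKg(float count) {
        for (int i = 0; i < N - 1; ++i) {
            float lo = fminf(rawCounts[i], rawCounts[i + 1]);
            float hi = fmaxf(rawCounts[i], rawCounts[i + 1]);
            if (count >= lo && count <= hi) {
                // fraction of the way along this segment; works when counts descend too
                float t = (count - rawCounts[i]) / (rawCounts[i + 1] - rawCounts[i]);
                return calKg[i] + t * (calKg[i + 1] - calKg[i]);
            }
        }
        return NAN;  // outside the table range
    }

For example, a count of -14,000 sits halfway between 28,000 and -56,000 (a span of 84,000, as noted above), so it reads as 0.5kg.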
My problem is that I am trying to make a controller using an Arduino, an ESP8266, an SD card module and some sensors. When I try to store some data on the SD card, everything works fine the first time, but the second or third time I need to rewrite the same line with different values, and there is an issue because the new line length is not equal to the previous one.
If the new line is longer then nothing goes wrong, but if it is shorter, it leaves some leftover characters from the old line.
The most difficult part is where I need to store the value of the LED light and the time:
255 10 0, where 255 represents the LED value, 10 the hour, and 0 the minutes.
The value can be 1 to 3 characters long, the hour 1 or 2, the minutes 1 or 2...
So is there any solution to this problem?
Right now I am trying to change int to uint8_t to make all possible values the same size.
Is this approach right? Maybe someone has made something like that?
Any suggestions will be appreciated.
You can normalize the data as you suggest so that the line length is always the same.
One approach is to store all values as uint8_t, which would require three uint8_t values per record.
Another is to leave it as a string where each field has a fixed width with padded values, e.g. "0050901" for a value of 5 at the 9th hour, 1st minute. Or pad with spaces, so "  5 9 1" for the same representation of this data (two spaces before the 5 and one space before the 9 and the 1).
Either approach is fine and just depends on what you prefer or what is easier when consuming and/or writing the data.
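A minimal sketch of the zero-padded variant using snprintf (the field widths and function name are illustrative):

    #include <stdio.h>

    // Sketch: format the LED value, hour and minute as a fixed-width record.
    // "%03d %02d %02d" always yields exactly 9 characters, so rewriting a line
    // in place never leaves stray characters from the old value.
    void formatRecord(char *buf, size_t bufSize, int ledValue, int hour, int minute) {
        snprintf(buf, bufSize, "%03d %02d %02d", ledValue, hour, minute);
        // e.g. ledValue=255, hour=10, minute=0  ->  "255 10 00"
        //      ledValue=5,   hour=9,  minute=1  ->  "005 09 01"
    }

Parsing the record back is just as easy, since each field starts at a fixed offset.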
If the input is a number, how can I write a procedure that checks every digit and produces an output equal to the number of odd digits in this number?
I'm thinking about turning the number into a list first, but I'm trying to think of an easier solution.
Also, we're not allowed to use "odd?". So instead of using "odd?" to check whether or not a digit is odd, we can use "quotient"
Rather than converting to a string as in marekful's comment, try recursively peeling off the least significant digit with the mod operation. Then you can use the quotient function to test that digit for odd or even.
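The question presumably concerns Scheme (given odd? and quotient), but the recursion is the same in any language; here it is sketched in C++, where integer division plays the role of quotient (assuming a non-negative input):

    // Sketch: count odd digits by peeling off the least significant digit with %
    // and recursing on the quotient. A digit d is odd exactly when (d / 2) * 2 != d,
    // which is the quotient-based parity test the answer describes.
    int countOddDigits(int n) {
        if (n == 0) return 0;
        int digit = n % 10;                     // least significant digit
        bool isOdd = (digit / 2) * 2 != digit;  // parity via quotient, no odd? needed
        return (isOdd ? 1 : 0) + countOddDigits(n / 10);
    }

For example, countOddDigits(1234) returns 2, since only the digits 1 and 3 are odd.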
Imagine you have a list of N numbers. You also have the "target" number. You want to find the combination of Z numbers that summed together are close to the target.
Example:
Target = 3.085
List = [0.87, 1.24, 2.17, 1.89]
Output:
[0.87, 2.17] = 3.04 (0.045 offset)
In the example above you would get the group [0.87, 2.17] because it has the smallest offset from the target of 0.045. It's a list of 2 numbers but it could be more or less.
My question is: what is the best (fastest) way or algorithm to solve this problem? I'm thinking of a recursive approach but I'm not yet exactly sure how. What is your opinion on this problem?
This is a knapsack-style (subset-sum) problem. To solve it you would do the following:
def knap(numbers, target):
    values = {0}
    for n in numbers:
        # collect new sums separately so we don't mutate the set while iterating
        new_values = set()
        for v in values:
            if v + n < (2 * target):  # this pruning is optional..
                new_values.add(v + n)
        values |= new_values
    # find the sum closest to your target
    return min(values, key=lambda v: abs(v - target))
Essentially, you are building up all of the possible sums of the numbers. If you have integral values, you can make this even faster by using an array instead of a set.
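As a hedged illustration of that array idea, here is a C++ sketch assuming the inputs have been scaled to non-negative integers (e.g. 0.87 becomes 870 when working in thousandths):

    #include <cstdlib>
    #include <vector>

    // Sketch: subset sums over integers using a boolean array instead of a set.
    int closestSubsetSum(const std::vector<int>& numbers, int target) {
        int limit = 2 * target;  // same optional pruning as above
        std::vector<bool> reachable(limit + 1, false);
        reachable[0] = true;
        for (int n : numbers)
            for (int v = limit - n; v >= 0; --v)  // go downward so each number is used once
                if (reachable[v]) reachable[v + n] = true;
        int best = 0;
        for (int v = 0; v <= limit; ++v)
            if (reachable[v] && std::abs(v - target) < std::abs(best - target))
                best = v;
        return best;
    }

With the example above scaled to thousandths, closestSubsetSum({870, 1240, 2170, 1890}, 3085) returns 3040, i.e. 0.87 + 2.17.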
Intuitively, I would start by sorting the list. (Use your favorite algorithm.) Then find the largest element that is smaller than the target and combine it with the smallest element. That would probably be your baseline offset. If it is a negative offset, you can keep looking for combinations using bigger numbers; if it is a positive offset, you can keep looking for combinations using smaller numbers. At that point recursion might be appropriate.
This doesn't yet address the need for 'Z' numbers, of course, but it's a step in the right direction and can be generalized.
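For the pair case (Z = 2), one concrete form of this sort-then-scan idea is the classic two-pointer pass; this sketch is an illustration under that assumption, not the answer's own code:

    #include <algorithm>
    #include <cmath>
    #include <utility>
    #include <vector>

    // Sketch: after sorting, move one index up from the left and one down from
    // the right, keeping the pair whose sum is closest to the target.
    // Assumes the list has at least two elements.
    std::pair<double, double> closestPair(std::vector<double> xs, double target) {
        std::sort(xs.begin(), xs.end());
        int lo = 0, hi = static_cast<int>(xs.size()) - 1;
        std::pair<double, double> best(xs[lo], xs[hi]);
        while (lo < hi) {
            double sum = xs[lo] + xs[hi];
            if (std::abs(sum - target) < std::abs(best.first + best.second - target))
                best = {xs[lo], xs[hi]};
            if (sum < target) ++lo;  // sum too small: try a bigger left element
            else --hi;               // sum too big: try a smaller right element
        }
        return best;
    }

On the example list, closestPair({0.87, 1.24, 2.17, 1.89}, 3.085) returns (0.87, 2.17).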
Of course, depending on the size of the problem the "fastest" way might well be to divide up the possible combinations, assign them to a group of machines, and have each one do a brute-force calculation on its subset. Depends on how the question is phrased. :)
I need a function that maps any m integers between a and b (where b - a > m) to the integers 0 to m-1. The m integers between a and b may not be in any order. The mapping can be in any order as long as it is one-to-one.
For example, I have a set of integers between 10 and 50, pick any 10 of them at random, and map them to 0-9. The function could take one, two or three inputs that may differ for each set of those 10 integers. And one more thing: it has to be reversible, i.e. using those inputs I can get back the original number.
Does such a function exist, and is it possible?
It's fairly easy. Map the smallest number to 0, the second smallest to 1, etc. The map is invertible if and only if you know the set of numbers you began with.
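A minimal sketch of this rank mapping, assuming the chosen integers are distinct:

    #include <algorithm>
    #include <map>
    #include <vector>

    // Sketch: map each chosen integer to its rank (smallest -> 0, next -> 1, ...).
    std::map<int, int> rankMap(std::vector<int> chosen) {
        std::sort(chosen.begin(), chosen.end());
        std::map<int, int> toRank;
        for (int i = 0; i < static_cast<int>(chosen.size()); ++i)
            toRank[chosen[i]] = i;
        return toRank;
    }

For the inverse, sort the same set again and index into it with the rank, which is why the map is only invertible if you still know the original set.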
It sounds like you're asking for a minimal perfect hash. Such functions do exist, there are algorithms for finding them, and even preexisting libraries to do the work.