At the moment I'm playing with the LSM303DLHC accelerometer/magnetometer/thermometer.
This is its datasheet: http://www.st.com/st-web-ui/static/active/en/resource/technical/document/datasheet/DM00027543.pdf
Everything is working quite well, but I don't know how to interpret the output values. The datasheet (page 9) says something like "1 mg/LSB" (no, it's not milligram :D) about the linear acceleration sensitivity in my configuration. What the hell is that supposed to mean? The same goes for the temperature sensor output change (8 LSB/°C) and the magnetic gain setting (1100 LSB/gauss), only the other way around.
For example, what do I do with this accelerometer output: 16384? That is what I measure for gravitational acceleration.
Now I've got the trick. There are several things about this MEMS you have to know which are not mentioned in the datasheet (a short conversion sketch in code follows the list):
The accelerometer's output register is effectively just 12 bits, not 16 bits, so you need to right-shift the value by 4 and multiply it by 0.001 g. Furthermore, it's little-endian.
The magnetometer's output register is 16 bits, but big-endian. Furthermore, the vector order is (X|Z|Y), not (X|Y|Z). To calculate the correct value you need to divide X and Y by 980 LSB/gauss, while it's 1100 LSB/gauss for Z.
The temperature sensor works, but it's not calibrated, so you can use it to measure temperature changes but not absolute temperatures. It's also just 12 bits, but big-endian, and you have to divide the output by 8 LSB/°C.
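A sketch of that conversion arithmetic (JavaScript purely for illustration; the raw parameters are assumed to be signed values already assembled from the register pairs in the byte order described above, and the gain values are the ones stated above, which depend on the selected full-scale range):

// Sketch only: raw values are assumed to be signed integers already assembled
// from the register pairs in the byte order described above.
function accelToG(raw16) {
    return (raw16 >> 4) * 0.001;            // 12 significant bits, 1 mg/LSB
}
function magToGauss(rawX, rawZ, rawY) {     // note the register order: X, Z, Y
    return { x: rawX / 980, y: rawY / 980, z: rawZ / 1100 };  // LSB/gauss gains as above
}
function tempToDeltaC(raw12) {
    return raw12 / 8;                       // 8 LSB/°C, relative temperature only
}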
With that information it's possible to use the LSM303DLHC. But who the hell came up with this? "Let's build a new accelerometer, magnetometer and thermometer in one package and confuse the user by mixing up word lengths and endianness without mentioning it in the datasheet."
LSB/unit or unit/LSB is the factor (called sensitivity) that relates the raw sensor data to the physical quantity.
Say sensor A has X, Y and Z registers: the value coming out of each register needs to be divided or multiplied by the LSB/unit or unit/LSB factor.
This is because the datasheet says: at this particular full scale you will have this much sensitivity (LSB/unit or unit/LSB).
For LSB/unit:
x LSB correspond to 1 unit,
so 1 LSB corresponds to 1/x unit,
and a register value of v corresponds to v * (1/x) units (the unitary method).
Similarly, for unit/LSB you multiply by the sensitivity directly.
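A concrete sketch of both directions (JavaScript only for illustration; the sensitivity value comes from the datasheet row for whatever full-scale range is configured):

// Sketch: converting a raw register value using the datasheet sensitivity.
function rawToUnits_LSBperUnit(raw, lsbPerUnit) {
    return raw / lsbPerUnit;     // e.g. magnetometer: raw / 1100 LSB/gauss -> gauss
}
function rawToUnits_UnitPerLSB(raw, unitPerLsb) {
    return raw * unitPerLsb;     // e.g. accelerometer: raw * 0.001 g/LSB -> g
}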
You can build an accelerometer, magnetometer, temperature sensor and maybe even a gyroscope into one module, but what if a customer only wants one of the sensors?
The datasheet is definitely unclear regarding the interpretation of the acceleration registers.
Genesis Rock's solution assumes it is 12 bits, which works. (Another option is to assume the gain is 16 mg/LSB instead of 1 mg/LSB, but since the last 4 bits of the acceleration values always seem to be zero, the former interpretation makes more sense.)
However, for both the temperature and the acceleration, even if you take only the 12 most significant bits into account, the last two of those bits are also always zero, so the effective resolution would be 10 bits, which is confusing.
I also can't make sense of the temperature reading unless there is an unknown offset not specified in the datasheet.
I hope others can confirm they are getting the same results.
Regarding the 12 bit output of the accelerometer: there is a high-resolution flag on control register 4. It's off by default and there's no information on what high resolution means. I'm guessing that it might enable 16 bit output. Also on control register 4 is a flag to set the endianness of the accelerometer output. It's little endian by default. The data sheet is pretty weak overall.
The simple and embarrassing fact is that none of the responses have hit the target of the question.
The result is buried in another parameter that is supplied in the data sheet: the sensitivity.
For example, on the FXAS21002C the sensitivity for the 2000 dps range is 62.5 mdps/LSB (= 0.0625 dps/LSB).
The zero-rate offset is 25 LSB, so the value in dps is 0.0625 * 25 = 1.5625 dps.
The same IMU has a different sensitivity for the 250 dps range, 7.8125 mdps/LSB (= 0.0078125 dps/LSB), and since the offset is also 25 LSB the calculation gives the real value of 0.0078125 * 25 = 0.1953 dps.
The example can be found here: https://learn.adafruit.com/comparing-gyroscope-datasheets/overview
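The same arithmetic in code (a sketch; the numbers are the ones quoted above):

// Sketch: zero-rate offset spec converted from LSB to dps at two full-scale ranges.
var offsetLSB = 25;
var dpsAt2000 = offsetLSB * 0.0625;      // 62.5   mdps/LSB -> 1.5625 dps
var dpsAt250  = offsetLSB * 0.0078125;   // 7.8125 mdps/LSB -> 0.1953125 dps
// In general: physical_value = raw_LSB * sensitivity_in_unit_per_LSB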
We analyzed a program which was supposedly used for cracking some cryptographic algorithms.
During the investigation we determined that the program's input size can be varied over a wide range, and for an N-bit input the result is also always N bits long. Additionally, we found that the program's running time depends significantly on the input length N, especially when N is greater than 10-15. Our tests also reveal that the running time depends only on the input length, not on the input itself.
During our tests we recorded the following running times (with an accuracy of one hundredth of a second):
N = 2 - 16.38 seconds
N = 5 - 16.38 seconds
N = 10 - 16.44 seconds
N = 15 - 18.39 seconds
N = 20 - 1 minute 4.22 seconds
We also planned to test the program for N = 25 and N = 30, but in both cases the program didn't finish within half an hour and we were forced to terminate it. Finally, we decided not to terminate the N = 30 run, but to wait a bit longer. The result was 18 hours 16 minutes 14.62 seconds. We repeated the test for N = 30 and it gave exactly the same result, more than 18 hours.
Tasks: a) Find the program's running times for the following three cases: N = 25, N = 40 and N = 50. b) Explain your result and your solution process.
At first I thought of finding a linear relation between N and the time taken, t. Obviously that failed.
Then I realized that t for N = 2 and N = 5 are nearly identical (here they are identical only because they have been rounded to two decimal places), which emphasizes that the change in t only becomes apparent when N >= 10.
So, I tried to write t as a function of N, since t only depends on the size of the input.
Seeing the exponential growth, my first idea was to write it as t(N) = C*e^N + k, where C and k are constants and e is Euler's number.
That approach does not hold up. Afterwards I thought of trying powers of 2 because it's a computer question but I'm kind of lost.
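One way to test the powers-of-two idea (a sketch, not a worked solution): fit t(N) = a + b*2^N to two of the measurements and see how well it predicts the others before extrapolating.

// Sketch: fit t(N) = a + b * 2^N to two measurements, compare with the rest,
// then extrapolate to N = 25, 40, 50.
var samples = [[2, 16.38], [5, 16.38], [10, 16.44], [15, 18.39],
               [20, 64.22], [30, 18 * 3600 + 16 * 60 + 14.62]];
var [n1, t1] = samples[3];                       // N = 15
var [n2, t2] = samples[5];                       // N = 30
var b = (t2 - t1) / (Math.pow(2, n2) - Math.pow(2, n1));
var a = t1 - b * Math.pow(2, n1);
for (var [n, t] of samples) {
    console.log(n, 'measured', t, 'predicted', (a + b * Math.pow(2, n)).toFixed(2));
}
[25, 40, 50].forEach(n => console.log(n, '->', (a + b * Math.pow(2, n)).toFixed(2), 'seconds'));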
If I started with a sha256 hash such as
3f46fdad8e5d6e04e0612d262b3c03649f4224e04d209295ef7de7dc3ffd78a7
and rehashed it continuously (without salting):
i) What is the shortest time it would take before it started cycling in a loop or back onto the same value if at all?
ii) If it did cycle back on itself, could we assume that it had been cracked?
iii) How long would this take using modern GPU cracking techniques?
iv) If all the intermediary hashes were recorded in some kind of rainbow tables - presumably all the hashes within that cycle would be compromised?
v) What is to stop someone computing these cycles and offering cracks to sha256 hashes - likewise for other hashing protocols...
For Extra marks - What is the probability this question would be asked in this forum 60 billion years ago?
If values generated by sha256 can be assumed to be distributed uniformly and randomly, then with probability 1 - 1/e (about 63%) there exists a 256-bit value whose sha256 hash is equal to itself. If so, the minimum cycle length is one.
On the other hand, based on the pigeonhole principle, we know that the sequence must repeat after no more than 2^256 iterations. This doesn't say anything about the brokenness of sha256.
The maximum cycle length is 2^256 ≈ 1.16×10^77 iterations. If you can evaluate 10^12 hashes per second, then working your way through all possible hashes would take you about 10^65 seconds (about one quindecillion times the age of the earth). Even if you're fortunate enough to find a loop in a tiny fraction of that time, you're still liable to be waiting for trillions of years.
Good luck with that. If every atom in our galaxy were used to store a separate hash value, you would run out of space after storing less than one billionth of the total number of hashes. (Source: number of atoms in the Milky Way galaxy ≈ 10^68.)
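To make the cycle idea concrete, here is a sketch (Node.js, using the built-in crypto module) of Floyd's tortoise-and-hare cycle detection. The digest is truncated to a few bytes so that a cycle is actually reachable in a demo; with the full 256-bit output you would never see it terminate.

// Sketch: Floyd cycle detection on a deliberately truncated sha256,
// so the birthday-bound cycle shows up in a reasonable time.
const crypto = require('crypto');
const BYTES = 3;                          // truncate to 24 bits (demo assumption)

function h(buf) {
    return crypto.createHash('sha256').update(buf).digest().slice(0, BYTES);
}

let tortoise = h(Buffer.from('seed'));
let hare = h(tortoise);
let steps = 1;
while (!tortoise.equals(hare)) {
    tortoise = h(tortoise);               // one step
    hare = h(h(hare));                    // two steps
    steps++;
}
console.log('tortoise met hare after', steps, 'steps');   // on the order of 2^12 for 24-bit digests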
See 3 and 4
A similar question was asked about 9 years ago.
Sometimes in code, I see the developer chooses a number like 32 for a package of data. Or in a game, the loaded terrain of a map has the size of 128*128 points.
I know it has something to do with the sizes of datatypes, like a char having 8 bits, etc.
But why don't they just use numbers like 100*100 for a map, a list, or a Minecraft chunk?
If I have 8 bits to store a (positive) number, I can represent 2^8 = 256 different values.
When I choose the size of a map chunk, I could choose a width of 250 instead of 256. But it seems that is not a good idea. Why?
Sometimes developers do use numbers like 250 or 100. It's not at all uncommon. (1920 appears in a lot of screen resolutions for example.)
But numbers like 8, 32, and 256 are special because they're powers of 2. For datatypes, like 8-bit integers, the number of possible elements of this type is a power of 2, namely, 2^8 = 256. The sizes of various memory boundaries, disk pages, etc. work nicely with these numbers because they're also powers of two. For example, a 16,384-byte page can hold 2048 8-byte numbers, or 256 64-byte structures, etc. It's easy for a developer to count how many items of a certain size fit in a container of another size if both sizes are powers of two, because they've got many of the numbers memorized.
The previous answer emphasizes that data with these sizes fits well into memory blocks, which is of course true. However it does not really explain why the memory blocks themselves have these sizes:
Memory has to be addressed. This means that the location of a given datum has to be calculated and stored somewhere in memory, often in a CPU register. To save space and calculation cost, these addresses should be as small as possible while still allowing as much memory as possible to be addressed. On a binary computer this leads to powers of 2 as optimal memory or memory block sizes.
There is another related reason: Calculations like multiplication and division by powers of 2 can be implemented by shifting and masking bits. This is much more performant than doing general multiplications or divisions.
An example: Say you have a 16 x 16 array of bytes stored in a contiguous block of memory starting at address 0. To calculate the row and column indices from the address, generally you need to calculate row=address / num_columns and column=address % num_columns (% stands for remainder of integer division).
In this special case it is much easier for a binary computer, e.g.:
address: 01011101
mask last 4 bits: 00001101 => column index
shift right by 4: 00000101 => row index
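The same calculation in code (a sketch; JavaScript's bitwise operators work on 32-bit integers):

// For a 16-column array, a power-of-two width turns division/modulo into shift/mask.
var address = 0b01011101;        // 93
var row = address >> 4;          // 5   (same as Math.floor(address / 16))
var col = address & 0x0F;        // 13  (same as address % 16)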
Let's say I have a slider that can go between 0 and 1. The SoundTransform.volume also ranges between 0 (silent) and 1 (full volume), but if I use a linear mapping, say SoundTransform.volume = slider.volume, the result is not pleasing: the perception is that the volume changes dramatically in the lower half of the slider and does almost nothing in the upper half.
I really haven't studied the human ear, but I overheard once that human perception is logarithmic, or something similar. What algorithms should I use for setting the SoundTransform.volume?
Human perception in general is logarithmic, also when it comes to things such as luminosity, etc. This enables us to register small changes in small "input signals" from our environment, or to put it another way: to always perceive a change in a physical quantity in relation to its current value.
Thus, you should make the volume grow exponentially, like this:
y = (Math.exp(x)-1)/(Math.E-1)
you can try other bases as well:
y = (Math.pow(base,x)-1)/(base-1)
The bigger the value of base, the stronger the effect: the volume starts growing more slowly at the beginning and grows faster towards the end.
A slightly simpler approach, giving you similar results (you are only in the interval between 0 and 1, so approximations are quite simple, actually), is to raise the original value to a power:
y = Math.pow(x, exp);
For exp bigger than 1, the effect is that the output (i.e. the volume in your case) first goes up slowly and then faster towards the end, which is very similar to the exponential functions above; the bigger exp, the stronger the effect.
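For example, a small helper wrapping that second form (a sketch; the exponent is a hypothetical default you would tune by ear):

// Sketch: map a linear slider position (0..1) to a perceptually friendlier volume.
function sliderToVolume(x, exp) {
    exp = exp || 3;                                     // tune by ear; larger = stronger effect
    return Math.pow(Math.min(Math.max(x, 0), 1), exp);  // clamp to 0..1, then curve
}
// e.g. sliderToVolume(0.5) -> 0.125, sliderToVolume(0.9) -> 0.729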
Human hearing is logarithmic, so you want an exponential function (the inverse) to apply to the linear output of your slider. I don't know if human hearing is closer to ln or log:
For Ln:
e^x
For Log:
10^x
You could experiment with other bases too. You will then need to scale your output so that it covers the available range of values.
Update
After a bit of research it seems that base 2 would be appropriate since the power is related to the square of the pressure. If anyone knows better, please correct me.
I think what you want is:
v' = 2^v * a^v - 1
a = ( 2^(log2(m+1)/n) )/2
v is your linear input value ranging from 0..n
v' is your logarithmic value ranging from 0..m
The -1 in the first equation is to give you an output range from 0 instead of 1 (since k^0=1).
The m+1 is to compensate for this so you get 0..m not 0..m+1
You can of course tweak this to suit your requirements.
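In code the two equations collapse into one expression, since 2^v * a^v = (2a)^v and, with the a given above, (2a)^v = (m+1)^(v/n). A sketch:

// Sketch: linear input v in 0..n  ->  output v' in 0..m, growing exponentially.
// v' = 2^v * a^v - 1  with  a = (2^(log2(m+1)/n)) / 2  simplifies to  (m+1)^(v/n) - 1.
function curve(v, n, m) {
    return Math.pow(m + 1, v / n) - 1;
}
// curve(0, 10, 1) -> 0,  curve(10, 10, 1) -> 1,  curve(5, 10, 1) -> about 0.414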
Hearing is complicated: the perceived loudness varies with frequency, with the duration of the sample, and from person to person. So this cannot be solved purely mathematically; instead, try a variety of functions for the control and pick the one that 'feels' best.
Do you find at the moment that varying the control at the low end of the range has little effect on the apparent volume, but that the volume increases rapidly at the upper end of the range? Or do you hear the reverse, the volume varies too quickly at the low end and not enough at the high end? Or would you like finer control over the volume at medium levels?
Increased low-volume sensitivity:
SoundTransform.volume = Math.sin(x * Math.PI / 2);
Increased high-volume sensitivity:
SoundTransform.volume = (Math.pow(base,x) - 1)/(base-1);
or
SoundTransform.volume = Math.pow(x, base);
Where base > 1, try different values and see how it feels. Or more drastically, a 90 degree circular arc:
SoundTransform.volume = 1 - Math.sqrt(1-(x * x));
Where x is slider.volume and is between 0 and 1.
Please do let us know how you get on!
Yes, human perception is logarithmic. Considering this, you should adjust the volume exponentially, so that the perceived increase becomes linear. See decibel on Wikipedia.
Android's audio framework already does this: it uses decibels to adjust the volume. The user sets the volume in linear steps (such as 1 to 7 for the ringtone, or 1 to 15 for music), but the framework maps those steps to an exponentially increasing amplitude.
A 3 dB increase means you are doubling the signal power, but the human ear requires a considerably larger increase (roughly 6-10 dB) to perceive a doubling in volume.
However, a strictly logarithmic curve, while accurately modeling the human perception of volume, has a usability problem.
When people want a loud volume, the knob becomes too sensitive at the upper end, making it difficult to find the "right" volume.
You've probably had this problem before... 7 is too soft, 8 is too loud, meanwhile 1-3 are inaudible over background noise.
So, I recommend a logarithmic scale, but with a floor at the low end and a soft knee at the top to allow a more linear response, especially in the "loud" part of the knob.
Oh, and make sure the knob goes up to 11. ;)
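Something along these lines, for instance (a sketch of the 'floor at the low end' part only; minDb is a made-up default to tune, and the soft knee at the top is left out for brevity):

// Sketch: knob position 0..1 -> gain 0..1 via a dB scale with a floor.
function knobToGain(x, minDb) {
    minDb = minDb || -60;                  // floor; tune to taste
    if (x <= 0) return 0;                  // hard mute at the very bottom
    var db = minDb * (1 - x);              // x = 0 -> minDb, x = 1 -> 0 dB
    return Math.pow(10, db / 20);          // dB -> linear amplitude
}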
The human ear indeed perceives sounds on a logarithmic scale of increasing intensity, and because of that, the unit generally used to measure acoustic intensity is the decibel (which is actually used for all sorts of intensities and powers, not just those of sound, and also happens to be a dimensionless unit). The reference level, 0 dB, is usually set to the lower bound of human hearing, and every ten-decibel increase above that is equivalent to an increase in power by a factor of 10.
Note, however, that you should first check with other people and see what they think, just in case; what sounds odd to you may not sound odd to others. If they agree with you, then go right ahead and do it exponentially, but if you're in the minority, then it might just be your own ears that are the problem.
EDIT: Ignore my previous third paragraph. Refer to back2dos's answer if you decide to do it exponentially.
This is a JavaScript function I have for a logarithmic scale in dB.
The input is a percentage (0.00 to 1.00) and the maximum value (my implementation uses 12 dB).
The mid point is set to 0.5 and that will be 0db.
When the percentage is zero, the output is negative Infinity.
function percentageToDb(p, max) {
return max * (1 - (Math.log(p) / Math.log(0.5)));
};
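A few sample values, to show the shape of the scale (computed from the function above with max = 12):

// percentageToDb(1.0,  12) ->  12         (full scale)
// percentageToDb(0.5,  12) ->   0         (mid point)
// percentageToDb(0.25, 12) -> -12
// percentageToDb(0.0,  12) -> -Infinity   (silence)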