Rewrite a line in SdFat for Arduino

My problem is that I am building a controller using an Arduino, an ESP8266, an SD card module, and some sensors. Storing data on the SD card works fine the first time, but the second or third time I need to overwrite the same line with different values, and the new line is not the same length as the previous one.
If it is longer, nothing goes wrong, but if it is shorter, some characters from the old line are left behind.
The most difficult part is where I need to store the LED value and the time:
255 10 0, where 255 is the LED value, 10 the hour, and 0 the minutes.
The value can be 1 to 3 characters long, the hour 1 or 2, the minutes 1 or 2...
Is there any solution to this problem?
Right now I am trying to change the ints to uint8_t so that all possible values take the same space.
Is this approach right? Maybe someone has built something like this before?
Any suggestions will be appreciated.

You can normalize the data as you suggest so that the line length is always the same.
One approach is to store all of the values as uint8_t, which would take exactly three uint8_t values per record.
Another is to leave it as a string, but with each field at a fixed width and padded values, e.g. 0050901 for a value of 5 at the 9th hour, 1st minute. Or pad with spaces, giving "  5 9 1" for the same data (two spaces before the 5 and one space before the 9 and the 1).
Either approach is fine; it just depends on what you prefer or what is easier when consuming and/or writing the data.
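To make this concrete, here is a minimal sketch of the space-padded version using SdFat. The file name, chip-select pin, and the writeRecord helper are made up for illustration; the key points are that snprintf with padded field widths always produces a line of the same length, and seekSet lets you overwrite that line in place.

```cpp
#include <SPI.h>
#include <SdFat.h>

SdFat sd;
SdFile file;

const uint8_t CHIP_SELECT = 4;        // assumption: adjust for your wiring
const uint8_t RECORD_LEN  = 11;       // "255 23 59\r\n" is always 11 bytes

// Overwrite record number `index` with a fixed-width line.
void writeRecord(uint16_t index, uint8_t value, uint8_t hour, uint8_t minute) {
  char buf[RECORD_LEN + 1];
  // %3u / %2u pad with spaces so every line has the same length
  snprintf(buf, sizeof(buf), "%3u %2u %2u\r\n", value, hour, minute);
  file.seekSet((uint32_t)index * RECORD_LEN);   // jump to the old line
  file.write(buf, RECORD_LEN);                  // overwrite it in place
  file.sync();                                  // flush to the card
}

void setup() {
  sd.begin(CHIP_SELECT);
  file.open("leds.txt", O_RDWR | O_CREAT);
  writeRecord(0, 255, 10, 0);   // "255 10  0"
  writeRecord(0, 5, 9, 1);      // same length, so no leftover characters
}

void loop() {}
```

Because every record has the same length, the byte offset of any line is just its index times the record length, which is what makes the in-place overwrite safe.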

Related

Xiaomi mi scale v1 Weight Data

I am trying to write an application that can take weight measurements from a Xiaomi Mi scale version 1. I get a hex value like this, 0624b2070101002e3800004c04 (5.50 kg), from the Body Composition Measurement service.
According to my research, the first byte gives:
02: measurement unit
The last two bytes are the weight value, but when I convert this value to decimal and divide by 200, I don't get the correct value.
Can someone help me get the correct data?
The last two bytes are 4c04. Bluetooth sends data in little-endian format, so as an integer that is 1100.
The "GATT Specification Supplement 4" document at https://www.bluetooth.com/specifications/specs/ says:
3.27.2.7 Weight field
This field is in kilograms with resolution 0.005 if the bit 0 of the Flag field is 0 or in pounds with a resolution of 0.01 if the bit 0 of the Flag field is 1.
1100 * 0.005 = 5.5 kg
The value of hex 4c04 is equal to 19460, which does not give you the desired result.
But changing the endianness (byte order) to 044c gives a value of 1100, which, divided by 200, gives you the desired result of 5.5 kg.
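For completeness, here is a small standalone sketch of that decoding, assuming the raw bytes are already available in an array (getting them from the BLE stack is not shown). The payload and variable names are only for illustration, and treating the first byte as the Flags field is an assumption based on the question; the unit handling follows the quoted spec text.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

int main() {
    // Example payload from the question:
    // 06 24 b2 07 01 01 00 2e 38 00 00 4c 04  (5.50 kg)
    const uint8_t payload[] = {0x06, 0x24, 0xb2, 0x07, 0x01, 0x01, 0x00,
                               0x2e, 0x38, 0x00, 0x00, 0x4c, 0x04};
    const size_t len = sizeof(payload);

    // Assumption: the first byte carries the flags; bit 0 selects the unit.
    uint8_t flags = payload[0];

    // Weight is the last two bytes, little endian: low byte first.
    uint16_t raw = payload[len - 2] | (uint16_t)payload[len - 1] << 8;  // 0x044c = 1100

    double weight;
    if (flags & 0x01)
        weight = raw * 0.01;    // pounds, resolution 0.01
    else
        weight = raw * 0.005;   // kilograms, resolution 0.005

    printf("weight = %.2f %s\n", weight, (flags & 0x01) ? "lb" : "kg");  // 5.50 kg
    return 0;
}
```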
I had a similar problem; here is how I do it personally. Read the whole thing (I had everything documented) and don't be intimidated by the looks of it; it's actually pretty simple.
For the conversion I use this website, because, well, I don't know how to convert by hand :)
enter link description here
Here is an image showing exactly how it works: you need to look at the INT 16 big-endian value and paste the whole hex code, not just the last two bytes, like this:

Lookup table for load cell calibration doesn't like negative numbers

I have a load cell connected to an HX711, which all works fine. I am creating a calibration table with 10 points holding the raw sensor output and the calibrated value, using a set of known weights. The load cell is bi-directional, so it works in either direction; the output can be positive or negative counts, and zero load is not necessarily an output of 0.
This all works fine when the numbers in a lookup table are all positive or all negative, but it fails when the captured points contain both negative and positive numbers. E.g. the output from the HX711 is +28,000 with no load. Add a load of 1 kg and the reading is -56,000. The next reading, for 2 kg, is, say, -83,000. These are stored as {28000, -56000, -83000} in one array, with the calibrated values {0, 1, 2} in another.
Normally I interpolate the result by finding which two table entries the raw count falls between. Everything works when the reading is below -56,000 and I get results of 1 to 2 kg. When the reading is above -56,000, the calculation fails and I end up with NaN.
It can also be the other way around, negative and then positive, e.g. {-56000, 28000, 55000}.
How do I handle this situation?
I worked this out not long after I posted the question, and I am posting the answer in case it helps anyone else with the same issue. By checking the signs of the two table values as I step through the table, and swapping them around in the calculation when needed, it works. The difference between 28,000 and -56,000 then comes out as 84,000, and using that in the maths works. I confirmed operation by applying 1 kg and 2 kg test loads; it reads correctly in both directions, positive or negative.
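For anyone hitting the same thing, here is a small standalone sketch of one way to do the interpolation with signed arithmetic so that mixed positive and negative counts (and descending tables) just work. The table values are taken from the question; the function name and the extrapolation choice at the ends are my own, not the poster's code.

```cpp
#include <cstdio>
#include <cstdlib>

// Calibration table from the question: raw HX711 counts and the
// corresponding known loads in kg. The counts run from positive to
// negative because of the sensor's offset and direction.
const long  rawTab[] = {28000, -56000, -83000};
const float calTab[] = {0.0f, 1.0f, 2.0f};
const int   N = sizeof(rawTab) / sizeof(rawTab[0]);

// Signed linear interpolation: the usual formula already copes with
// mixed-sign counts as long as the two table entries differ.
float rawToKg(long raw) {
    // Walk the segments; the table may be ascending or descending.
    for (int i = 0; i < N - 1; ++i) {
        long a = rawTab[i], b = rawTab[i + 1];
        bool inside = (a <= b) ? (raw >= a && raw <= b)
                               : (raw <= a && raw >= b);
        if (inside) {
            // Differences are signed longs, so 28000 - (-56000) = 84000.
            return calTab[i] + (float)(raw - a) * (calTab[i + 1] - calTab[i]) / (float)(b - a);
        }
    }
    // Outside the table: extrapolate from the nearest end segment (one choice of many).
    int i = (labs(raw - rawTab[0]) < labs(raw - rawTab[N - 1])) ? 0 : N - 2;
    long a = rawTab[i], b = rawTab[i + 1];
    return calTab[i] + (float)(raw - a) * (calTab[i + 1] - calTab[i]) / (float)(b - a);
}

int main() {
    printf("%.3f kg\n", rawToKg(-14000));  // halfway between 28000 and -56000 -> 0.500 kg
    printf("%.3f kg\n", rawToKg(-83000));  // 2.000 kg
    return 0;
}
```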

Arduino float - 6 decimal numbers

I wonder if anyone has a good solution for decimal numbers.
I use an Arduino Mega and try to convert a float with 6 digits after the decimal point. When I try, I get 5 digits correct, but not the 6th: the 6th digit is either dropped or shown as 0. I have tried a lot of different things, but it always ends up showing 5 digits correctly, but not 6.
Does anyone have a solution for this?
I appreciate all help.
In general, you can use scaled-integer forms of floating-point numbers to preserve accuracy.
Specifically, if these are lat/lon values from a GPS device, you might be interested in my NeoGPS library. Internally, it uses 32-bit integers to maintain 10 significant digits. As you have discovered, most libraries only provide 6 or 7 digits because they use float.
The example NMEAloc.ino shows how to print the 32-bit integers as if they were floating-point values: it just prints the decimal point at the right place.
The NeoGPS distance and bearing calculations are also careful to perform the math in a way that maintains that accuracy. The results are very good at small distances/bearings, unlike other libraries that use the float type in naive calculations.
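As an illustration of that idea (not NeoGPS's actual code), here is a small sketch that prints a coordinate stored as a scaled 32-bit integer, assuming the common degrees x 10^7 scaling, by placing the decimal point itself so no digits are lost to float:

```cpp
#include <cstdint>
#include <cstdio>

// Print a value stored as a scaled 32-bit integer (here: degrees * 10^7)
// by splitting it into whole and fractional parts instead of converting
// to float/double, so all 10 digits survive.
void printScaledDegrees(int32_t scaled) {
    const char *sign = (scaled < 0) ? "-" : "";
    uint32_t mag   = (scaled < 0) ? (uint32_t)(-(int64_t)scaled) : (uint32_t)scaled;
    uint32_t whole = mag / 10000000UL;
    uint32_t frac  = mag % 10000000UL;
    printf("%s%lu.%07lu\n", sign, (unsigned long)whole, (unsigned long)frac);
}

int main() {
    printScaledDegrees(-337750210L);  // prints -33.7750210
    printScaledDegrees(1511234567L);  // prints 151.1234567
    return 0;
}
```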
4 byte floats can hold 6 significant digits.
8 byte doubles can hold 15.
You need to use doubles to get the precision you want.
info on 4 byte floats
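A quick way to see the difference, with made-up values just for illustration, is shown below. Note that on 8-bit AVR boards such as the Mega, double is the same 4-byte type as float, so this demo behaves as described on a PC or a 32-bit board; on the Mega, the scaled-integer approach above is the practical way to keep the extra digits.

```cpp
#include <cstdio>

int main() {
    float  f = 12.3456789f;   // only ~7 significant digits survive
    double d = 12.3456789;    // ~15-16 significant digits survive

    printf("float : %.7f\n", f);   // prints something like 12.3456793
    printf("double: %.7f\n", d);   // prints 12.3456789
    return 0;
}
```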

Calculation of data delta

I'm writing a server that sends a "coordinates buffer" of game objects to clients every 300ms. But I don't want to send the full data each time. For example, suppose I have an array with elements that change over time:
0 0 100 50 -100 -50 at time t
0 10 100 51 -101 -50 at time t + 300ms
You can see that only the 2nd, 4th, and 5th elements have changed.
What is the right way to send not all the elements, but only the delta? Ideally I'd like a function that returns the complete data the first time and empty data when there are no changes.
Thanks.
Are you looking to optimize for efficiency, or is this a learning exercise? Some thoughts:
Unless there's a lot of data, it's probably easiest, and not terribly inefficient, to send all the data each time.
If you send deltas for all of the data points each time, you won't save much by sending zeroes for unchanged points instead of re-sending the previous values.
If you send data for only those points that change, you'll need to provide an index for each value. For example, if point 3 increases by 5 and point 8 decreases by 2, then you might send 3 5 8 -2. But now, since you're sending two values for each point that changes, you'll only win if fewer than half the points change.
If the values change relatively slowly, as compared to the rate at which you transmit updates, you might increase efficiency by transmitting the delta for each data point, but using only a few bits. For example, with 4 bits you can transmit values from -8 to +7. That would work as long as the deltas are never larger than that, or if it's ok to transmit several deltas before they "catch up" to the actual values.
It may not be worthwhile to have 2 different mechanisms: one to send the initial values, and another to send deltas. If you can tolerate the lag, it may make more sense to assume some constant initial value for every point, and then transmit only deltas.
There are lots of options. If most data isn't changing, just send (index,value) pairs of the changed elements. If most values change but the changes are small, compute deltas and gzip (or run length encode, or lots of other possibilities) the result.
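Here is a minimal sketch of the (index, value) approach described above, with made-up type and function names; serializing the pairs for the wire and applying them on the client are left out.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// One (index, value) pair per changed element; an empty vector means "no changes".
struct Change { uint16_t index; int16_t value; };

// Compare the previous and current snapshots and collect only the elements
// that differ. The first call (with an empty `prev`) returns everything.
std::vector<Change> computeDelta(const std::vector<int16_t>& prev,
                                 const std::vector<int16_t>& curr) {
    std::vector<Change> delta;
    for (uint16_t i = 0; i < curr.size(); ++i) {
        if (i >= prev.size() || prev[i] != curr[i])
            delta.push_back({i, curr[i]});
    }
    return delta;
}

int main() {
    std::vector<int16_t> prev;                        // nothing sent yet
    std::vector<int16_t> t0 = {0, 0, 100, 50, -100, -50};
    std::vector<int16_t> t1 = {0, 10, 100, 51, -101, -50};

    auto full  = computeDelta(prev, t0);   // 6 pairs: the complete state
    auto delta = computeDelta(t0, t1);     // 3 pairs: indices 1, 3 and 4

    for (auto& c : delta)
        printf("index %d -> %d\n", (int)c.index, (int)c.value);
    return 0;
}
```

The first call returns every element and an empty result means there is nothing to send, which matches the behaviour the question asked for.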

Decrypting a Caesar's Cypher Trouble

I have been stumped on this for a long time and I was wondering if anyone could help me out. How could I write a program in C++ that calculates the shift value of a file encrypted with a Caesar cipher? My assignment is to take each lowercase letter, count how many times each is used, and then find the most frequent character. I know I can do that with a map of char and int. But then I would have to map that letter back to the letter 'e', and with a map there is no straightforward way to search by value. I think the only way is a vector of vectors, but I wouldn't know how to find the letter again with that either. Does anyone know a better way, or how I could use vectors to accomplish this?
You can go about it like this:
First, read the whole file into a buffer.
Create a map with char keys and int values, with all alphabet letters as keys and values of 0.
Loop over the whole buffer to the end, incrementing the value in the map by 1 for each character. Also maintain a max variable storing the key of the character with the maximum value.
At the end of the loop, the max variable will hold the letter that 'e' was encrypted to.
Subtracting 4 (the index of 'e') from the alphabet index of max gives you the shift value for this cipher. If it comes out negative, add 26 (the calculation is mod 26).
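A compact sketch of that recipe (the file name is a placeholder): tracking the maximum while counting avoids having to search the map by value afterwards.

```cpp
#include <cctype>
#include <fstream>
#include <iostream>
#include <iterator>
#include <map>
#include <string>

int main() {
    std::ifstream in("cipher.txt");               // placeholder input file
    std::string buffer((std::istreambuf_iterator<char>(in)),
                        std::istreambuf_iterator<char>());

    // Count every lowercase letter and remember which one occurs most often.
    std::map<char, int> counts;
    char maxChar = 'a';
    for (char c : buffer) {
        if (std::islower(static_cast<unsigned char>(c))) {
            ++counts[c];
            if (counts[c] > counts[maxChar]) maxChar = c;
        }
    }

    // The most frequent letter is assumed to be the encryption of 'e'.
    int shift = (maxChar - 'e' + 26) % 26;
    std::cout << "most frequent: " << maxChar << ", shift: " << shift << '\n';
    return 0;
}
```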
All you need is a vector of size 26 (one entry per letter), where 'a' has index 0 and 'z' has index 25.
Go through the ciphertext and, for each character, increase the value at that character's index.
When you've gone through all the ciphertext, go through the vector and find the highest value. That index is probably the encryption of the character 'e'.
Now take that index and subtract 4 (the index of 'e').
This yields the shift value.
Say index 20 has the highest count; then your shift value is 16.
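And here is a sketch of the vector-of-26 version, including the final step of undoing the shift, which is presumably the end goal (the file name is again a placeholder).

```cpp
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

int main() {
    std::ifstream in("cipher.txt");               // placeholder input file
    std::string text((std::istreambuf_iterator<char>(in)),
                      std::istreambuf_iterator<char>());

    // Index 0 = 'a' ... index 25 = 'z'.
    std::vector<int> counts(26, 0);
    for (char c : text)
        if (c >= 'a' && c <= 'z') ++counts[c - 'a'];

    // The highest count is assumed to be the letter that 'e' (index 4) maps to.
    int maxIdx = 0;
    for (int i = 1; i < 26; ++i)
        if (counts[i] > counts[maxIdx]) maxIdx = i;

    int shift = (maxIdx - 4 + 26) % 26;           // e.g. index 20 ('u') -> shift 16

    // Undo the shift to recover the plaintext.
    for (char& c : text)
        if (c >= 'a' && c <= 'z') c = 'a' + (c - 'a' - shift + 26) % 26;

    std::cout << "shift: " << shift << "\n" << text;
    return 0;
}
```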
