Arduino - how to change TFmini's measure unit to mm - arduino

I'm making a 3D scanner using a TFMini-S Lidar along with an Arduino Uno. The Lidar defaults to cm, but I want to use mm instead. However, everywhere I look, I can't find any clear way to do it. After reading the documentation, I discovered the TFMini is factory-programmed to use cm. But it also mentions that there are commands that can change the unit to mm.
The value of the distance output Dist may vary with the output unit, which is cm by default. If the unit of distance is changed to mm via an instruction, the PC software will be unable to identify this, and the unit of "④TIME LINE CHART" will still be cm. For example, if the actual TFmini measurement is 1 m, the distance value output by the TFmini is 1000 in mm; the value read by the PC software is also 1000, but the unit will not change and will still display cm.
And this is all I can find about how to do that from some other website:
The TFmini LiDAR sensor supports switching the output unit between mm and cm. These are the configuration commands that set the output unit as needed:
"Change to mm: send 42 57 02 00 00 00 01 02 to enter configuration mode, then send 42 57 02 00 00 00 00 1A;
Change to cm: send 42 57 02 00 00 00 01 02 to enter configuration mode, then send 42 57 02 00 00 00 01 1A."
I'm a beginner with Arduino and I'm not sure how to send those configuration commands. If there are similar resources, please let me know. Any help would be greatly appreciated.

Solution found in the TFMini-Plus library, but it works with the TFMini-S/TFMini as well.
The STANDARD_FORMAT_MM command will set the measurement unit to mm, but make sure to save using the SAVE_SETTINGS command. Refer to the GitHub example code for your own project.
Big thanks to the creator of that GitHub repo.

Related

How to prove CRC can detect even number of isolated bit errors

A 1024-bit message is sent that contains 992 data bits and 32 CRC bits. The CRC is computed using the IEEE 802 standardized, 32-degree CRC polynomial. For each of the following, explain whether the errors during message transmission will be detected by the receiver:
(a) There was a single-bit error.
(b) There were two isolated bit errors.
(c) There were 18 isolated bit errors.
(d) There were 47 isolated bit errors.
(e) There was a 24-bit long burst error.
(f) There was a 35-bit long burst error.
In the above question, can anyone explain option (c)?
This shows that 18 isolated bit errors are not guaranteed to be detected: the following 41-bit codeword with weight 18 (expressed as six bytes in hexadecimal) can be exclusive-ored with any message, starting at any bit position, and leave the CRC-32 of that message unchanged:
2f 18 3b a0 70 01
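You can check this claim with Python's `binascii.crc32`, which implements the same IEEE 802.3 CRC-32 (in its reflected, byte-stream form). The sketch below XORs the six bytes into an arbitrary message at a byte offset and compares the CRCs before and after:

```python
import binascii

# The 41-bit, weight-18 codeword quoted above, as six bytes.
codeword = bytes.fromhex("2f183ba07001")
assert sum(bin(b).count("1") for b in codeword) == 18  # weight check

def flip(data, offset, pattern):
    """XOR `pattern` into `data` starting at byte `offset`."""
    out = bytearray(data)
    for i, b in enumerate(pattern):
        out[offset + i] ^= b
    return bytes(out)

# Any message works, and the XOR can start at any byte offset.
msg = bytes(range(64))
corrupted = flip(msg, 17, codeword)
print(binascii.crc32(msg) == binascii.crc32(corrupted))
```

If the codeword is genuine (a multiple of the CRC-32 polynomial), the two CRCs match and the 18 flipped bits go undetected.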

Intel Hex - Converting 03 record type to 05 and vice versa

I am trying to write an Intel HEX parser. It seems that there are two styles for extended addresses (allowing you to address memory larger than 64 kB), and that these two styles shouldn't be mixed in the same file:
Segment Addressing
Linear Addressing
I want to write a parser that will take Intel HEX in any of the above two styles, and output in any of the above styles.
I've got a handle on all the record types except for 03 (Start Segment Address) and 05 (Start Linear Address). My assumption, based on the Wikipedia article, is the following for this record of type 03 (ignore the dashes; they're just to aid annotation):
:04-0000-03-00003800-C1
|_1|___2|_3|_______4|_5|
where:
1 - 04 -> byte count
2 - 0000 -> address
3 - 03 -> record type (Start Segment Address)
4 - 00003800 -> data
5 - C1 -> checksum
If I were to convert this into a type-05 record, it would be:
:0400000500003800BF
where I simply take the IP value (last 2 data bytes) of the 03 record and set that as the data field for the 05 type, padded with 0000, completely ignoring the first 2 bytes of the data section of the 03 record (which in this example are 0000).
Is this correct? Am I making any faulty assumptions / missing the point entirely?
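A sketch of the conversion, with one caveat: the general formula for the segment record's start address is CS * 16 + IP, which equals the bare IP field only when CS is zero, as it happens to be in this example. Ignoring the first two data bytes is therefore not safe in general.

```python
def checksum(body):
    """Intel HEX checksum: two's complement of the byte sum (low 8 bits)."""
    return (-sum(body)) & 0xFF

def convert_03_to_05(record):
    """Convert a Start Segment Address (03) record into the equivalent
    Start Linear Address (05) record.

    The 03 record's data field holds CS:IP (two bytes each); the physical
    start address it encodes is CS * 16 + IP, which equals the bare IP
    field only when CS is zero.
    """
    raw = bytes.fromhex(record.lstrip(":"))
    count, rectype = raw[0], raw[3]
    assert count == 4 and rectype == 0x03
    cs = int.from_bytes(raw[4:6], "big")
    ip = int.from_bytes(raw[6:8], "big")
    linear = cs * 16 + ip
    body = bytes([0x04, 0x00, 0x00, 0x05]) + linear.to_bytes(4, "big")
    return ":" + (body + bytes([checksum(body)])).hex().upper()

print(convert_03_to_05(":0400000300003800C1"))  # :0400000500003800BF
```

Note the checksum has to be recomputed for the new record type; for this example it changes from C1 to BF.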

Determining subnet mask based on number of hosts

During preparation for an exam I came over two questions that didn't make sense to me.
"You are planning to subnet IPv4 addresses for use on a global network. The design must support creating two separate networks, each allowing up to 1000 hosts, while maximizing the number of networks that are available.
You need to identify the subnets that meet these requirements."
Network 1:
1. 172.16.0.0/5
2. 172.16.0.0/6
3. 172.16.0.0/8
Network 2:
1. 10.0.0.0/14
2. 10.0.0.0/16
3. 10.0.0.0/20
The correct answers are 2) for Network 1 and 1) for Network 2, but the calculations are not presented as a part of the solutions. I've been trying for a few days to work it out, but something in my brain seems to have crashed.
I'd be grateful if anyone could show me how to work from network ID, netmask and number hosts to determine which netmask is the best and provides the most subnets.
10101100 00010000 00000000 00000000
is the binary representation of the IP address 172.16.0.0.
The subnet must be chosen so that at least 1000 hosts can be allocated, which requires 10 host bits (2^10 - 2 = 1022 usable hosts).
We cannot touch the 172 and the 16: 172.16.0.0 is a class B address, so its first 16 bits are fixed network bits.
The answer options count borrowed subnet bits rather than full prefix length. Borrowing 6 of the remaining 16 bits gives the mask
11111111 11111111 11111100 00000000
i.e. a /22 prefix (16 + 6), so each subnet spans four /24 blocks, for example 172.16.252.0 to 172.16.255.255, giving 256 * 4 = 1024 addresses, roughly the required 1000 hosts.
Hence we subnet with 6 borrowed bits and not the other options, as they would leave either a shortage or an excess of available hosts.
Hope this is helpful to others, but consider the class: the second question uses 10.0.0.0, a class A address with only 8 fixed network bits, so 14 borrowed bits are needed to reach the same /22 boundary (8 + 14 = 22).
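Assuming the answer options count borrowed subnet bits (so "/6" on a class B address means a 16 + 6 = /22 prefix, an interpretation consistent with the stated correct answers), the arithmetic can be sketched as:

```python
def subnet_stats(default_prefix_bits, borrowed_bits):
    """Return (usable hosts per subnet, number of subnets) for a classful
    network with `default_prefix_bits` network bits plus `borrowed_bits`
    borrowed for subnetting."""
    prefix = default_prefix_bits + borrowed_bits
    host_bits = 32 - prefix
    return 2 ** host_bits - 2, 2 ** borrowed_bits

# Network 1: 172.16.0.0 is class B (16 network bits); borrow 6 bits -> /22
print(subnet_stats(16, 6))   # (1022, 64): 1022 usable hosts, 64 subnets

# Network 2: 10.0.0.0 is class A (8 network bits); borrow 14 bits -> /22
print(subnet_stats(8, 14))   # (1022, 16384): 1022 usable hosts, 16384 subnets
```

Both choices land on a /22, the smallest subnet that still holds 1000 hosts, which is exactly what maximizes the number of subnets.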

How to calculate the difference between two hex offsets?

I have been searching for how to do this but I haven't found a way. Is there another way to calculate this difference instead of counting one by one?
For example:
0x7fffffffe070 - 0x7fffffffe066 = 0x04
0x7fffffffe066 - 0x7fffffffe070 = -0x04
0x7fffffffdbe0 - 0x7fffffffda98 = ????
To understand these results, let's suppose we open a file with a hex editor and we have the following hex numbers: 8A B7 00 00 FF, with their corresponding hex offsets: 0x7fffffffe066 0x7fffffffe067 0x7fffffffe068 0x7fffffffe069 0x7fffffffe070. The difference of the hex offsets of the numbers 8A and FF is 0x04 because they differ by 4 positions.
The difference between 0x7fffffffe070 and 0x7fffffffe066 cannot be 4.
I think 0x7fffffffe070 should be 0x7fffffffe06a in your example.
Other than that I don't understand the question.
Normally you would calculate the difference with a calculator set to programmer/hexadecimal mode.
In a previous answer I explained how to calculate the number by hand, but that answer got deleted.
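If you'd rather script it than use a calculator, Python's `int(..., 16)` and `%x` formatting do the same job. Note that the first result below is 0xa, consistent with the point above that 0x7fffffffe070 was probably meant to be 0x7fffffffe06a:

```python
def hex_diff(a, b):
    """Difference between two hex offsets, formatted back as hex."""
    d = int(a, 16) - int(b, 16)
    return ("-0x%x" % -d) if d < 0 else ("0x%x" % d)

print(hex_diff("0x7fffffffe070", "0x7fffffffe066"))  # 0xa
print(hex_diff("0x7fffffffe066", "0x7fffffffe070"))  # -0xa
print(hex_diff("0x7fffffffdbe0", "0x7fffffffda98"))  # 0x148 (328 decimal)
```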

What type of hash (or encryption) is this?

I am reverse engineering an unfortunate legacy application. I know the following:
username => hashFunction() => 8BYUW6iFeL9mmSBW7xjzMw~~
password => hashFunction() => VszQfe5n0+CooePc7CS9kw~~
The hashes always seem to be 22 characters in length. The system is a legacy Microsoft .NET application. I have reason to believe that they are reversible as well (but this may not be true).
The two trailing tildes make me feel like I should be able to identify this. How do I begin to figure out what type of hashing is used?
If the 'hashFunction' is part of the legacy application, you could use a reflection tool like .NET Reflector to decompile the code and see what the function is doing.
Just a wild guess
64^21 < 2^128 <= 64^22
You need 22 characters for a base-64 encoding of a 128-bit hash.
The above argument will also work if you replace 64 with any integer from 57 up to 68. Base-67 encodings are not common I assume but it doesn't harm to have that in mind.
Your samples seem to use an alphabet of at least 63 characters (26 uppercase, 26 lowercase, 10 digits, plus the plus sign), which is consistent with base64.
Assuming the trailing '~' characters are padding (which is unusual; the standard base64 padding character is '='), the username comes out to:
f0 16 14 5b a8 85 78 bf 66 99 20 56 ef 18 f3 33
and the password comes out to:
56 cc d0 7d ee 67 d3 e0 a8 a1 e3 dc ec 24 bd 93
in hex. This agrees with the 128 bits from the other posts. It looks like the output of an AES-128 encryption or an MD5 hash. With a sample this small and no idea what the source was, this is about where we have to leave it. Since you said you thought they might be reversible, that points toward AES-128. Without a bigger sample and knowledge of the input data, that is all that can be done with it.
I tried doing an MD5 hash of the strings "username" and "password" and they come out to different values. If it was encrypted with AES, we are out of luck without some more hints.
Good luck,
/Bob Bryan
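The decoding above can be reproduced with Python's standard `base64` module by treating the trailing '~' characters as stand-ins for '=' padding:

```python
import base64

def decode_token(token):
    """Decode a 22-character token whose '~' characters stand in for
    standard base64 '=' padding."""
    return base64.b64decode(token.replace("~", "="))

username_hash = decode_token("8BYUW6iFeL9mmSBW7xjzMw~~")
password_hash = decode_token("VszQfe5n0+CooePc7CS9kw~~")

print(username_hash.hex(" "))  # f0 16 14 5b a8 85 78 bf 66 99 20 56 ef 18 f3 33
print(len(username_hash))      # 16 bytes, i.e. 128 bits: an MD5 or AES block size
```

Decoding alone doesn't tell you which algorithm produced the 16 bytes, only that the length is consistent with the MD5/AES-128 guesses above.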
Here is an interesting value:
Fj83STvXE+6q57GVjIi9aQ==
I just happen to know it is PI to 37 places (times 1E37)
3141592653589793238462643383279502884
Not all base64 values are hashed or encrypted. Without some knowledge of the process that resulted in the value, it is impossible to do much with a random 128 bit string.
Regards,
/Bob Bryan