I'm trying to build a smart home trainer.
At the moment it connects to Zwift via the Fitness Machine Service (FTMS).
I can send Power and Cadence to Zwift and I can ride.
Now I'm trying to add the Control Point (one of the characteristics included in FTMS),
but I cannot complete the transaction described in the specification.
It's not proving very easy.
The XML file which describes the Control Point is empty!
There is no complete sequence diagram or flow chart.
At the moment I can receive a write event from Zwift on the Control Point.
First Zwift sends 0x07 and then 0x00.
After that, it again writes 0x07 to the Control Point and then 0x00.
I have tried answering (via indication) with 0x80, 0x801 and 0x0180 for the two bytes required (cf. the specification).
I think I don't really understand the specification.
Do you have any information to help me?
Any flow chart or sequence diagram for the resistance level update?
Can you confirm that I just need to indicate two bytes in answer to a write from Zwift to the Control Point?
#Yonkee
I agree that it's a bit weird that the Control Point XML is empty, since the FTMS specification PDF doesn't give a complete list of, for example, what to expect as parameters to the various commands.
However, I've managed to create an implementation that supports some of the use cases you might be interested in. It currently supports setting target resistance and power, but not the simulation parameters you will need for the regular simulation mode in Zwift. You can find this implementation on github. It's not complete, and I just handle a few commands there, but you can grasp the concept from the code.
I wrote this using the information I could find online, basically the Fitness Machine Specification and the different GATT Characteristics Specifications.
When an app like Zwift writes to the CP, it sends an OP Code optionally followed by parameters. You should reply with OP Code 0x80 (Response Code), followed by the OP Code that the reply is for, optionally followed by parameters.
In the case of OP Code 0x00 (Request Control) you should therefore reply with: 0x80, 0x00, 0x01. The last byte, 0x01, is the result code for "Success".
In the case of OP Code 0x07 (Start/Resume), you should reply with: 0x80, 0x07, 0x01, assuming you consider the request successful; the other possible result codes are detailed in Table 4.24 of the FTMS specification PDF.
One command you should look into is OP Code 0x11. When running Zwift in "non-workout mode", so just the normal mode, Zwift will use OP Code 0x11 followed by a set of simulation parameters: Wind Speed, Grade, Rolling Resistance Coefficient and Wind Resistance Coefficient. See section "4.16.2.18 Set Indoor Bike Simulation Parameters Procedure" of the PDF for details on the format of those parameters.
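To make the byte layout concrete, here is a minimal Python sketch of a Control Point write handler. The apply_simulation function is a hypothetical placeholder for your trainer logic, and the parameter scaling follows section 4.16.2.18 of the spec:

import struct

RESPONSE_CODE = 0x80
SUCCESS = 0x01          # result codes, see Table 4.24
NOT_SUPPORTED = 0x02

def apply_simulation(wind_ms, grade_pct, crr, cw):
    pass                # placeholder: update your trainer's resistance model here

def handle_cp_write(value: bytes) -> bytes:
    # Returns the bytes to send back as an indication on the Control Point.
    op = value[0]
    if op == 0x00:      # Request Control
        return bytes([RESPONSE_CODE, 0x00, SUCCESS])
    if op == 0x07:      # Start/Resume
        return bytes([RESPONSE_CODE, 0x07, SUCCESS])
    if op == 0x11:      # Set Indoor Bike Simulation Parameters
        # Little-endian: wind speed (sint16, 0.001 m/s), grade (sint16, 0.01 %),
        # Crr (uint8, 0.0001), Cw (uint8, 0.01 kg/m)
        wind, grade, crr, cw = struct.unpack('<hhBB', value[1:7])
        apply_simulation(wind * 0.001, grade * 0.01, crr * 0.0001, cw * 0.01)
        return bytes([RESPONSE_CODE, 0x11, SUCCESS])
    return bytes([RESPONSE_CODE, op, NOT_SUPPORTED])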
I hope this enables you to proceed. Happy hacking!
I am using an STM32 microcontroller (as part of the LoRa node MB1296D). I want to connect a pressure sensor (MS5803) to the LoRa node and program the sensor via the SPI bus. Basically, this is all very new to me, which is why I looked up some example code.
I am trying to understand this code and a couple of questions have come up:
The macros that are defined at the very beginning: what exactly is their purpose, and are the hexadecimal numbers inherent to the microcontroller used? If I were to write the code from scratch, I figured I would start by defining macros for the GPIO pins corresponding to SPI_SCK, SPI_MISO and SPI_MOSI.
The function unsigned long cmd_adc(char cmd) contains a switch statement, and I have absolutely no clue what it does. It looks to me as if it is setting the resolution of the ADC, but how do I know the corresponding delay, and why does the switch statement contain 0x0f?
So, if you could find some time to give me a useful answer, that'd be great! Also, if you know any good reading with a special focus on this topic, please tell me! I am trying to tackle this problem with little time available.
Your questions are basic C programming questions and are not really specific to this pressure sensor or example.
The macros are defined with hexadecimal numbers to make it clear that the values represent bit fields. It's very easy (and second nature for embedded software developers) to convert hexadecimal to binary. Read the register descriptions in the sensor's datasheet. The bits set in the hexadecimal values will correspond to meaningful bits in the sensor's register description.
switch (cmd & 0x0f) performs a bitwise AND of the cmd with 0x0f. The hexadecimal value 0x0f has the four least significant bits set. So the code is ignoring (i.e., masking off or zeroing out) the four most significant bits of cmd and considering only the four least significant bits of the cmd value.
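As a concrete illustration (the delays are the maximum conversion times from the MS5803 datasheet; the command value is just an example):

# MS5803 conversion commands encode the oversampling ratio (OSR) in the
# low nibble, e.g. 0x58 = convert D2 (temperature) at OSR 4096.
cmd = 0x58
osr_bits = cmd & 0x0f                 # keep only the four least significant bits
# Max ADC conversion time in microseconds for each OSR setting:
delay_us = {0x00: 600, 0x02: 1170, 0x04: 2280, 0x06: 4540, 0x08: 9040}
print(hex(osr_bits), delay_us[osr_bits])   # 0x8 9040 -> wait ~9 ms before reading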
I have a task where I need to read two parameters from a BLE beacon. The documentation was seriously lacking, and after a fair amount of effort I managed to get some basic information about reading data from the beacon.
The parameters to read are:
1) Battery voltage of the sensor
2) Temperature (the beacon has a built-in temperature sensor)
I think I have tried almost every popular Python BLE library out there, but I just can't seem to get the temperature reading out of the beacon. "I think" I am able to read the voltage. The reason I say "I think" is that the value matches what was provided in the minimal documentation, and when I put the beacon into the charger I can see the value go up, an indication that it is the voltage reading. I could not read the temperature: the value at the UUIDs mentioned in the document doesn't seem to change. I have tried enabling the sensor in every possible way and method described, e.g. by writing 01:00. I spent a fair amount of time reverse engineering the thing. I ran a packet sniffer and managed to capture the data being transferred between the beacon and the mobile app (they have a mobile app), but again I am not able to figure out how the temperature readings are communicated between the beacon and the app. Let me break the whole thing into smaller blocks.
Hardware: a BLE beacon from which voltage and temperature can be read. The temperature sensor is built into the beacon. The beacon itself is from Texas Instruments, but the temperature/voltage sensing part is done by a third party. They provided us with some minimal information, and it was difficult to make sense of some of the sentences as they have trouble communicating in English.
The sequence to get the data goes like this (a sketch of the first steps follows the list):
Scan for beacons
When the beacon is found then connect to it
Enable notification
Set notification interval
Get the voltage and temperature reading.
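A minimal sketch of steps 1-4 with PYGATT (the MAC address, UUID and handle below are placeholders, not the beacon's real values):

import pygatt

adapter = pygatt.GATTToolBackend()
adapter.start()
adapter.scan(timeout=5)                                # step 1: scan for beacons
device = adapter.connect('AA:BB:CC:DD:EE:FF')          # step 2: connect

def on_notify(handle, value):
    print(handle, value.hex())                         # step 5: readings arrive here

device.subscribe('0000d001-0000-1000-8000-00805f9b34fb',
                 callback=on_notify)                   # step 3: enable notification
device.char_write_handle(0x2a, bytearray([0x0a]))      # step 4: notification interval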
I have been able to do the first four real fast, and "half" of no. 5, i.e. the voltage part. When I say real fast, I mean I got that stuff working with nearly no documentation available at the time.
As per the info that I have, the data resides in the characteristics/UUIDs below. Please note that the UUIDs are non-standard 128-bit ones, which caused me issues when using certain libraries, but after some tries I managed to read/write them using handles etc. The handles and other values I printed are the ones I read using PYGATT (a Python wrapper for gatttool).
The UUIDs are marked as the 1st, 2nd, 3rd and 4th parameters, and the document has the following to say about them:
- A: 1 byte (2nd Param)
- B: Maj + Min values, 4 bytes (4th Param)
- C: 4 bytes (3rd Param)
- D: Enable/disable notification ( I have been able to turn this on )
- E: Set notification interval ( I have been able to set this and can notice the change in notification interval )
This is minimal so as to not have a large file. All it does is this: the mobile app connects to the beacon, the notifications start, and the temperature readings are retrieved by the mobile app. As I mentioned, I don't seem to have a problem reading the voltage; it's only the temperature that I am stuck at. I have been at it for a week now and I think I have tried nearly everything I could think of. I even enumerated all the writable characteristics and tried writing numbers like 1 (enables the sensor?). I could have offered a bounty for this straight away if it were possible. I rarely get stuck for so long with a problem; this is driving me a little crazy and I am getting close to my wits' end. I guess it's time for a superhero. Anyone out there? :) I can provide every bit of information needed if someone can indicate what is wrong. I even wrote a Cordova app and tried a bunch of stuff from my Android phone. I can connect, write to characteristics, read stuff etc., but the temperature reading just won't budge. All I get is the same set of values (I used JSON.stringify to display A, B and C). I can worry about the byte order later; I guess that is a smaller problem.
The communication between the beacon and a third party mobile app is fine, it is able to read the temperature info just fine.
I have been looking at the Wireshark data and I am fairly sure that the temperature data is being communicated at this stage. But when I decode the "value", it looks like it's the voltage. It mentions L2CAP, but I am not sure how that is being used here to send the temperature readings (if it is being used in the first place).
Update: I wrote to every writable characteristic, with values like 1, 0100, 2, 7. At the same time I was reading every readable characteristic (in a loop) and doing a comparison (just true/false) with the previous set of values. This seemed like a quick and easy way to know if something changed; I didn't want to take chances with converting the hex to a float, and I can figure out the byte order later.
From the sniffed data (Wireshark) I can only see three writes happening on the beacon.
I am not fully sure, even after a long discussion, but it seems that the four bytes of the notification are used for the voltage as well as the temperature, since the temperature can most probably be derived from the voltage.
From the values, it seems that those four bytes represent the voltage as a float (if you ignore the absurd factor of 10^-38 that comes in because only 4 bytes instead of 8 bytes are used).
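For example, decoding four notification bytes as a little-endian 32-bit float (the payload here is made up):

import struct

payload = bytes.fromhex('a4709d3f')        # made-up 4-byte notification value
voltage = struct.unpack('<f', payload)[0]  # little-endian IEEE-754 float32
print(voltage)                             # ~1.23, a plausible battery voltage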
Since the temperature T is typically derived from a resistance measurement, where the resistance R is proportional to the voltage U (if the current is constant), you can in principle calculate the temperature T from the voltage U.
The problem is that T(R) is relatively linear, but not perfectly so (in contrast to U(R), which is assumed to be U = RI). So you may need to plot the values of T(U) to find out the curve that they are using.
To add to the confusion, I got the best results when using only the first five bits of the third byte and the eight bits of the fourth byte. I am not aware of why this is the case, and it might point to some remaining trouble.
The best option is to ask them for the function T(U) that they are using, if they can and will provide it for you...
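If they won't, a rough reverse-engineering sketch could combine the bit selection above with a least-squares fit. Every sample value below is a hypothetical placeholder that you would replace with pairs collected by comparing sniffed notifications against the app's display:

import numpy as np

def raw_value(payload: bytes) -> int:
    # Bit selection described above: low five bits of the third byte,
    # then all eight bits of the fourth byte.
    return (payload[2] & 0x1f) << 8 | payload[3]

# Hypothetical (extracted value, app temperature) pairs:
samples_x = np.array([2310, 2395, 2480, 2565, 2650])
samples_t = np.array([18.2, 21.0, 23.9, 26.7, 29.6])

# The relation is nearly linear, so a low-order polynomial fit is a fair start.
t_of_x = np.poly1d(np.polyfit(samples_x, samples_t, 2))
print(t_of_x(raw_value(bytes.fromhex('00000998'))))   # estimate for one payload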
I'm trying to read temperature data from a BMP180 using my BLE112 via I2C. The problem is that what I get at the very end are some irrelevant numbers. I think I am missing something extremely important. I follow the BMP180 datasheet point by point. The program I have is written in BGScript from Bluegiga.
There are a few things that seem strange to me:
Measuring the raw temperature (even though it is not correct) sometimes gives 0... so how slow is this programmable I2C?
http://www.sureshjoshi.com/embedded/ble112-how-to-use-i2c/ - Suresh Joshi writes here that the register I should write to and read from is the one from the datasheet, left-shifted once. Is that necessary in my case as well?
Can someone verify these steps of the algorithm (a generic sketch of the same sequence follows the list)?
a) Reading the calibration data: call hardware_i2c_read(238,0,"\xaa")(result,data_len,sensor(0:22)) - should I write something before this?
b) Writing 0x2E into register 0xF4: should it be call hardware_i2c_write(238,1,2,"\xf4\x2e")(written)?
c) Reading registers 0xF6 (MSB) and 0xF7 (LSB): should it be call hardware_i2c_read(239,0,"\xf6")(result,data_len,MSB) and call hardware_i2c_read(239,0,"\xf7")(result,data_len,LSB)?
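For comparison, here is the same datasheet sequence expressed with Python's smbus on a generic Linux host (not BGScript; 0x77 is the BMP180's 7-bit address, which is your 238/0xEE and 239/0xEF shifted right by one):

import time
from smbus import SMBus

bus = SMBus(1)
ADDR = 0x77                            # 7-bit address; 0xEE/0xEF are (ADDR<<1)|R/W
bus.write_byte_data(ADDR, 0xF4, 0x2E)  # b) start a temperature conversion
time.sleep(0.005)                      # wait at least 4.5 ms per the datasheet
msb = bus.read_byte_data(ADDR, 0xF6)   # c) read MSB
lsb = bus.read_byte_data(ADDR, 0xF7)   #    read LSB
ut = (msb << 8) | lsb                  # uncompensated temperature value

Note the mandatory wait between b) and c); reading 0xF6/0xF7 before the conversion finishes is one common way to get zeros.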
I am struggling so hard, so if anyone could tell me what is wrong, or whether I simply can't use this sensor with this BLE module, please tell me! (:
My .bgs file: http://pastebin.com/3zHVdNrT
BR Bartek
I'm designing a Z80-compatible project. I'm up to designing the flags register.
I originally thought that the flags were generated straight from the ALU depending on the inputs and type of ALU operation.
But after looking at the instructions and the flags result it doesn't seem that the flags are always consistent with this logic.
As a result, I'm now assuming I also have to feed the op-code to the ALU to generate the correct flags each time. But this would seem to make the design over-complicated, and before making this huge design step I wanted to check with the Internet.
Am I correct? Or am I just really confused, and is it as simple as I originally thought?
Of course, the type of the operation is important. Consider overflow when doing addition and subtraction. Say, you're adding or subtracting 8-bit bytes:
1+5=6 - no overflow
255+7=6 - overflow
1-5=252 - overflow
200-100=100 - no overflow
200+100=44 - overflow
100-56=44 - no overflow
Clearly, the carry flag's state here depends not only on the input bytes or the resultant byte value, but also on the operation. And it indicates unsigned overflow.
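A quick way to see this is to model 8-bit add/sub with the carry flag in Python, mirroring the examples above:

# Result and carry flag for 8-bit unsigned add/sub:
def add8(a, b):
    r = a + b
    return r & 0xff, r > 0xff      # carry set on unsigned overflow

def sub8(a, b):
    r = a - b
    return r & 0xff, a < b         # carry set on borrow

print(add8(255, 7))    # (6, True)    -> 255+7=6, overflow
print(sub8(200, 100))  # (100, False) -> 200-100=100, no overflow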
The logic is very consistent. If it doesn't seem to be, it's time to read the documentation to learn the official logic.
You might be interested in this question.
Your code is written for the CP/M operating system. I/O is done through the BDOS (Basic Disk Operating System) interface: you load a function code into the C register and any additional parameters into other registers, then call location 0x5. Function code C=2 writes the character in the E register to the console (= screen). You can see this in action at line 1200:
ld e,a      ; character to print goes in E
ld c,2      ; BDOS function 2: console output
call bdos   ; go through the register-saving wrapper below
pop hl      ; restore registers saved earlier in this routine (not shown)
pop de
pop bc
pop af
ret

bdos push af    ; save all register pairs, since BDOS may clobber them
push bc
push de
push hl
call 5      ; the BDOS entry point at address 0x0005
pop hl      ; restore everything and return to the caller
pop de
pop bc
pop af
ret
For a reference to BDOS calls, try here.
To emulate this you need to trap calls to address 5 and implement them using whatever facilities you have available to you.
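A sketch of such a trap in Python, for a hypothetical emulator; the cpu object and its fields (c, e, pc, pop_word) are assumptions, not a real emulator API:

def bdos_trap(cpu):
    # Called whenever the emulated PC reaches 0x0005 (the BDOS entry point).
    if cpu.c == 2:                   # function 2: console output
        print(chr(cpu.e), end='')
    cpu.pc = cpu.pop_word()          # pop the return address, i.e. perform RET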
I'm increasingly looking at using QR codes to transmit binary information, such as images, since whenever I demo my app, it seems to happen in situations where the WiFi or 3G/4G just doesn't work.
I'm wondering if it's possible to split a binary file up into multiple parts to be encoded by a series of QR codes?
Would this be as simple as splitting up a text file, or would some sort of complex data coherency check be required?
Yes, you could convert any arbitrary file into a series of QR codes, something like Books2Barcodes.
The standard way of encoding data too big to fit in one QR code is with the "Structured Append Feature" of the QR code standard.
Alas, I hear that most QR encoders or decoders -- such as zxing -- currently do not (yet) support generating or reading such a series of barcodes that use the structured append feature.
QR codes already have a pretty strong internal error correction.
If you are lucky, splitting up your file with the "split" utility into pieces small enough to fit into an easily-readable QR code (e.g. split -b 2000 myfile part_), then later scanning them in (hopefully) the right order and using "cat" to re-assemble them (cat part_* > myfile), might be adequate for your application.
You surely can store a lot of data in a QR code; it can store 2953 bytes of data, which is nearly twice the size of a standard TCP/IP packet originated on an Ethernet network, so it's pretty powerful.
You will need to define some header for each QR code that describes its position in the stream required to rebuild the data. It'll be something like filename chunk 12 of 96, though encoded in something better than plain text. (Eight bytes for filename, one byte each for chunk number and total number of chunks -- a maximum of 256 QR codes, one simple ten-byte answer, still leaving 2943 bytes per code.)
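A sketch of that header and chunking in Python (the ten-byte layout matches the description above; the field sizes are otherwise arbitrary):

import struct

CHUNK_DATA = 2943                    # 2953-byte QR capacity minus 10-byte header
HEADER = struct.Struct('<8sBB')      # 8-byte name, chunk number, total chunks

def make_chunks(name: str, payload: bytes):
    chunks = [payload[i:i + CHUNK_DATA] for i in range(0, len(payload), CHUNK_DATA)]
    assert len(chunks) <= 256        # chunk number must fit in one byte
    return [HEADER.pack(name.encode()[:8].ljust(8), i, len(chunks)) + c
            for i, c in enumerate(chunks)]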
You will probably also want to use some form of forward error correction, such as erasure codes, to encode enough redundant data that both mis-reads of individual QR codes and entirely missing QR codes can be handled transparently. While you may be able to use an existing library, such as one for Reed-Solomon codes, to fix mis-reads within a QR code, handling entirely missing QR codes may take significantly more effort on your part.
Using erasure codes will of course reduce the amount of data you can transmit -- instead of all 753,408 bytes (256 * 2943), you will only have 512k or 384k or even less available to your final images -- depending upon what code rate you choose.
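For the within-chunk part, an off-the-shelf Reed-Solomon library is enough. This sketch assumes the Python reedsolo package is available; the interleaving needed to survive entirely missing codes is not shown:

from reedsolo import RSCodec

rsc = RSCodec(32)                        # 32 parity bytes per chunk:
                                         # corrects up to 16 corrupted bytes
protected = rsc.encode(b'chunk payload ...')
recovered = rsc.decode(protected)[0]     # first element is the decoded message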
I think it is theoretically possible, and as simple as splitting up a text file. However, you will probably need to design some kind of header so you know that the data is multi-part, and to make sure the different parts can be merged together correctly regardless of the order of scanning.
I am assuming that the QR reader library returns raw binary data, and that you will have the job of converting it to whatever form you want.
If you want automated creation and transmission, see:
gre/qrloop: Encode a big binary blob to a loop of QR codes
maxg0/displaysocket.js: DisplaySocket.js - a JavaScript library for sending data from one device to another via QR codes, using only a display and a camera
Note - I haven't used either.
See also: How can I publish data from a private network without adding a bidirectional link to another network - Security StackExchange