Physical vs internal value in AUTOSAR - data-conversion

I have been struggling with the conversion of values in AUTOSAR from internal to physical values and vice versa. My understanding is this:
The physical value is the value we send to the application, and the internal value is the one we get after quantizing the physical value, which is then sent on the bus. Is my understanding wrong?

The software works exclusively with the internal values. In rare cases, internal values are identical to their physical counterparts.
The definition of a formula for converting values between the internal and the physical domain has mainly two use cases:
Provide human-readable values in measurement and calibration tools and diagnostic testers.
The definition of a CompuMethod on both ends of a communication can be used to compute a conversion formula for the transmitted data (internal values) from the sender to the receiver in case sender and receiver use different resolutions for the data. This aspect is explained in more detail in the AUTOSAR document "SWS RTE".
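For the common linear case, such a conversion boils down to physical = internal * factor + offset. Below is a minimal Python sketch of that idea (the names and the 0.5 unit/bit, -40 offset scaling are purely illustrative, not an actual AUTOSAR API):

FACTOR = 0.5    # physical units per internal LSB (illustrative)
OFFSET = -40.0  # physical value corresponding to internal value 0

def internal_to_physical(internal: int) -> float:
    # What a measurement/calibration tool or diagnostic tester displays.
    return internal * FACTOR + OFFSET

def physical_to_internal(physical: float) -> int:
    # Quantization step: this is the value the software and the bus carry.
    return round((physical - OFFSET) / FACTOR)

raw = physical_to_internal(23.4)    # -> 127
print(internal_to_physical(raw))    # -> 23.5 (quantization error of 0.1)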


Just as in Segmentation, can different addresses in Paging also point to the same physical memory location?

E.g., for segmentation, 0000:FFFF is equivalent to 0001:FFEF (just a hypothetical case; I don't know if we really use these in programming or if these are reserved spaces).
(I am new to assembly programming, specifically x86.)
Yes, this is allowed. In fact, not only is this legal, it's also frequently used for a feature known as shared memory.
0000:FFFF is equivalent to 0001:FFEF only in real mode, VM86 mode, or SMM mode. In these modes, by definition, paging is not enabled. In protected mode [1] without paging, they are necessarily translated to different physical addresses because the segment offsets are different (FFFF vs. FFEF) but the segment base address is the same [2]. With paging, when the segment offsets get added to the segment base address (which could be zero), they may point either to the same virtual page or to different virtual pages, but either way the 12 least significant bits of the page offsets would be different (because the 12 least significant bits of the segment offsets are different), so they cannot be equivalent irrespective of how the page tables are set up.
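Both halves of that argument are easy to check numerically; a small Python sketch (the 20-bit wraparound/A20 detail of real mode is ignored, and a segment base of zero is assumed for the paging half):

def real_mode_linear(segment: int, offset: int) -> int:
    # Real mode: the linear (= physical) address is segment * 16 + offset.
    return (segment << 4) + offset

assert real_mode_linear(0x0000, 0xFFFF) == real_mode_linear(0x0001, 0xFFEF) == 0xFFFF

# With 4 KiB paging, translation only remaps page frames; the low 12 bits
# of a virtual address pass through unchanged, so these can never collide:
va1, va2 = 0x0 + 0xFFFF, 0x0 + 0xFFEF   # segment base 0 plus the two offsets
assert (va1 & 0xFFF) != (va2 & 0xFFF)   # 0xFFF vs. 0xFEF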
In general, different addresses may translate to the same physical address. When the page offsets are different but the 12 least significant bits are nonetheless the same, it is possible for the logical addresses to get translated to the same physical address when they point to pages of different sizes. Otherwise, if at least one of the 12 least significant bits differs between the virtual addresses, they cannot be equal in the physical address space.
[1] In protected mode, the segment selector 0000'0000'0000'00XXb is used as the null segment selector and cannot be accessed. But let's assume for the sake of argument that it is accessible (or consider 0000'0000'0000'0100b vs. 0000'0000'0000'0101b instead).
[2] They refer to the same segment because the segment selector indices (the most significant 13 bits of each selector) and table indicators (the third least significant bit) are equal.

OpenvSwitch port missing under heavy load, long poll interval observed

Issue description
I have an OpenStack system with an HA management network (VIP) on an OVS (Open vSwitch) port. In this system, under high load (concurrent volume-from-Glance-image creation), the VIP port (an OVS port) goes missing.
Analysis
For now, with the default log level, the only thing observed in the log file is the "Unreasonably long 62741ms poll interval" warning below:
2017-12-29T16:40:38.611Z|00001|timeval(revalidator70)|WARN|Unreasonably long 62741ms poll interval (0ms user, 0ms system)
Idea for now
I will turn on debug logging to file and try to reproduce the issue:
sudo ovs-appctl vlog/set file:dbg
Question
What else should I do during/after reproducing the issue?
Is this issue typical? If so, what causes it?
I googled "OpenvSwitch troubleshooting" and other related keywords, but the information was all at the data flow/table level instead of this ovs-vswitchd level (am I right?).
Many thanks!
BR//Wey
I did not manage to reproduce the issue and thus forgot about it, until recently, two years later, when I ran into it again in a different environment; this time I have more ideas on its root cause.
It could be caused by the hash shifting that happens in bonding: for some reason, the traffic pattern fits the situation of triggering shifts again and again (the condition is quite strict, I would say, but there is a chance of hitting it anyway, right?).
The condition for a shift is quoted below; please refer to the full doc here: https://docs.openvswitch.org/en/latest/topics/bonding/
Bond Packet Output
When a packet is sent out a bond port, the bond member actually used is selected based on the packet’s source MAC and VLAN tag (see bond_choose_output_member()). In particular, the source MAC and VLAN tag are hashed into one of 256 values, and that value is looked up in a hash table (the “bond hash”) kept in the bond_hash member of struct port. The hash table entry identifies a bond member. If no bond member has yet been chosen for that hash table entry, vswitchd chooses one arbitrarily.
Every 10 seconds, vswitchd rebalances the bond members (see bond_rebalance()). To rebalance, vswitchd examines the statistics for the number of bytes transmitted by each member over approximately the past minute, with data sent more recently weighted more heavily than data sent less recently. It considers each of the members in order from most-loaded to least-loaded. If highly loaded member H is significantly more heavily loaded than the least-loaded member L, and member H carries at least two hashes, then vswitchd shifts one of H’s hashes to L. However, vswitchd will only shift a hash from H to L if it will decrease the ratio of the load between H and L by at least 0.1.
Currently, “significantly more loaded” means that H must carry at least 1 Mbps more traffic, and that traffic must be at least 3% greater than L’s.
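To make the quoted mechanism concrete, here is a toy Python model of the bucket bookkeeping (my simplified reading of the doc, not the actual OVS code; the hash function and the treatment of the byte counters as bit rates are stand-ins):

import zlib

N_BUCKETS = 256

class ToyBond:
    def __init__(self, members):
        self.members = members
        self.owner = {}                        # bucket index -> member
        self.tx_bytes = dict.fromkeys(members, 0)

    def choose_output_member(self, src_mac: bytes, vlan: int, nbytes: int):
        # Source MAC and VLAN tag hashed into one of 256 buckets.
        bucket = zlib.crc32(src_mac + vlan.to_bytes(2, "big")) % N_BUCKETS
        member = self.owner.setdefault(bucket, self.members[0])  # arbitrary first pick
        self.tx_bytes[member] += nbytes
        return member

    def needs_shift(self) -> bool:
        # "Significantly more loaded": H carries at least 1 Mbps more
        # traffic than L, and at least 3% more.
        h, l = max(self.tx_bytes.values()), min(self.tx_bytes.values())
        return (h - l) * 8 >= 1_000_000 and h > 1.03 * l

If this is indeed the trigger, watching ovs-appctl bond/show while reproducing should show hashes migrating between members around the time the VIP port stops passing traffic.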

Need details about EDIV and Rand

Two questions about EDIV and Rand:
I need to be sure I understand exactly how EDIV and Rand are used in the BLE Legacy Pairing. What I understood from the Bluetooth specs is that these are generated during the pairing phase by the slave device and exchanged with the master along with the LTK. The part that I am not sure I understood well is how it is used by the slave device during encryption setup. It seems to me that the specs give freedom to the actual BLE implementation about this: either you use EDIV/Rand as a kind of index to retrieve the associated LTK after receiving the encryption request or you re-generate the LTK each time using EDIV/Rand and a device-specific, never shared, secret value. Is that correct?
Why have they been removed from Secure Connections pairing? How is the association made between the LTK and the peer device in that case? With the Identity Address?
Thanks in advance.
You seem to be correct about all your thoughts.
For LE Legacy Pairing, according to the Bluetooth Core specification, Vol 3 Part H section 2.4.1:
Encrypted Diversifier (EDIV) is a 16-bit stored value used to identify the LTK distributed during LE legacy pairing. A new EDIV is generated each time a unique LTK is distributed.
Random Number (Rand) is a 64-bit stored value used to identify the LTK distributed during LE legacy pairing. A new Rand is generated each time a unique LTK is distributed.
So in Legacy Pairing the idea is that the EDIV and Random Number pair identifies the LTK that is to be used.
And in section 2.4.2:
2.4.2 Generation of Distributed Keys
Any method of generation of keys that are being distributed that results in the keys having 128 bits of entropy can be used, as the generation method is not visible outside the slave device (see Section Appendix B). The keys shall not be generated only from information that is distributed to the master device or only from information that is visible outside of the slave device.
So yes, you can use any method to generate the EDIV/Rand/LTK values as long as the method provides good security. The specification itself suggests two different methods in Vol 3 Part H Appendix B "Key Management":
The first one is that EDIV is an index into a database of LTKs. I suppose here they mean that a new EDIV and LTK pair is generated for each pairing. To me it seems a bit stupid not to use both EDIV and Rand as the lookup key. A variation mentioned is to use (AES(DHK, Rand) mod 2^16) XOR EDIV (where DHK is a per-device-unique secret key) as the index, whose point I don't get either.
The second method is to have per-device-unique secret keys DHK and ER. Then for each new pairing you randomly generate a 16-bit DIV and a 64-bit Rand. EDIV is derived as (AES(DHK, Rand) mod 2^16) XOR DIV. The LTK is derived as AES(ER, DIV), which in my opinion is very stupid compared to simply AES(ER, Rand || EDIV) (letting EDIV be randomly generated instead of DIV), since with their method there can only be 2^16 different keys, which, applying the Birthday Paradox, means that after around 256 generated keys there will probably be duplicates. (If anyone in the Bluetooth SIG is reading this - please tell me the reason for this weird method.) Anyway, by deriving the LTK from EDIV and Rand you don't need to store the LTK or the EDIV/Rand values. A thing they seem to have forgotten is that since a different key size (7-16 bytes) may be negotiated during pairing, you must still store, for each bonded device, the key size the resulting key should be truncated to for later encryptions. Or you can work around this by, for example, hardcoding into the Rand value some 4 bits that indicate which key size to use.
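For what it's worth, here is the second method sketched in Python with AES-128-ECB via the pyca/cryptography package (the zero-padding in dm()/d1() follows my reading of the spec's security functions; treat it as illustrative):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def e(key: bytes, plaintext: bytes) -> bytes:
    # Security function e: one AES-128 block encryption.
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(plaintext) + enc.finalize()

def dm(k: bytes, r: bytes) -> int:
    # Mask generation: e(k, pad || r) mod 2^16, r is 64 bits.
    return int.from_bytes(e(k, bytes(8) + r), "big") & 0xFFFF

def d1(k: bytes, d: int, r: int) -> bytes:
    # Diversifying function: e(k, pad || r || d), d and r are 16 bits each.
    return e(k, bytes(12) + r.to_bytes(2, "big") + d.to_bytes(2, "big"))

DHK, ER = os.urandom(16), os.urandom(16)  # per-device secrets, never shared

def new_pairing():
    div = int.from_bytes(os.urandom(2), "big")  # never leaves the slave
    rand = os.urandom(8)                        # distributed to the master
    ediv = dm(DHK, rand) ^ div                  # distributed to the master
    return rand, ediv, d1(ER, div, 0)           # LTK = d1(ER, DIV, 0)

def recover_ltk(rand: bytes, ediv: int) -> bytes:
    # On an encryption request: undo the mask, re-derive the LTK, store nothing.
    return d1(ER, dm(DHK, rand) ^ ediv, 0)

rand, ediv, ltk = new_pairing()
assert recover_ltk(rand, ediv) == ltk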
There is one issue with skipping a security database entirely and just relying on the LTK always being recoverable from the master's EDIV and Rand: you will never be able to "remove" the bond or revoke the key. Also, if you strictly follow GAP, you should know whether you have a usable key to start encryption for the current connection. For example, when responding with an error when reading a GATT characteristic because the characteristic requires an encrypted link, there are different error codes depending on whether an LTK is available: "Insufficient Encryption" if an LTK is available and "Insufficient Authentication" if not.
In LE Secure Connections, the LTK is not distributed but contributed, which means it is derived from values from both peers (using a Diffie-Hellman function at its core). Therefore one party cannot select the LTK freely. The input values to the LTK generation function are the Diffie-Hellman shared secret, random nonces from both peers, and the Bluetooth Device Addresses of both peers (the address used in the connection, not the Identity Address). Since the input values take up more space than the LTK, it's more feasible to just store the LTK in a database.
Since there must be exactly one LTK per bonded device, there is no longer any point in having EDIV and Rand, so they shall be set to 0 in encryption requests. That means we must now map device to LTK rather than EDIV/Rand to LTK. To do that, the Identity Address is used when looking up the LTK. If a random resolvable address is used for a connection, we must test all stored IRKs and get the corresponding Identity Address. If a public or static random address is used for the connection, that address is the Identity Address.
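Resolving a random resolvable address against the stored IRKs then looks roughly like this, reusing e() from the previous sketch (simplified; the on-air byte order of the address fields is glossed over):

def ah(irk: bytes, prand: bytes) -> bytes:
    # Random address hash function: e(k, pad || prand) mod 2^24.
    return e(irk, bytes(13) + prand)[-3:]

def resolve_rpa(rpa: bytes, bonds: dict):
    # rpa:   6 bytes, prand (top 2 bits 0b01) followed by the 24-bit hash.
    # bonds: {identity_address: irk} from the security database.
    prand, rhash = rpa[:3], rpa[3:]
    for identity_address, irk in bonds.items():
        if ah(irk, prand) == rhash:
            return identity_address  # key for looking up the LTK
    return None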

Developing Communication Protocol for XBee

I am using XBee Digimesh Modules in API-Mode to send data between different industrial machines allowing them to share data, information and commands.
The API-Mode offers some basic commands, mainly to perform addressing and talk with the XBee Module itself in order to do configuration, etc.
Sending user data is done via a corresponding XBee API command which allows sending user-defined data with a maximum payload of 72 bytes.
Since I want to expand this communication to allow the integration of more machines, etc., I am thinking about how to implement a basic communication system that's tailored to the very small payload of just 72 bytes.
Coming from the web, I normally would use some sort of JSON here but that would fill up the payload very quickly.
Also, it's not possible to send a frame with lots of information, since this also fills up the payload very quickly.
So I came up with a different way of communicating. Instead of transmitting frames packed with information, what about sending some sort of Messages like this:
Machine-A Broadcasts: Who's there?
Machine-B Answers: It's me I am a xxx-Machine
Machine-C Answers: It's me I am a xxx-Machine
Machine-A now evaluates the replies and decides to work with Machine-B (because Machine-C does not match A's interface):
Machine-A to B: Hello B, Give me some Value, please!
Machine-B to A: There you go: 2.349590
This can be extended to different short messages. After each message, the sender keeps the message type in its state, and the reply is evaluated in relation to that state/context.
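For illustration, such messages could be packed into the 72-byte payload as a single type byte plus a type-specific body; all type codes and field layouts below are made up, not an existing protocol:

import struct

WHO_IS_THERE = 0x01   # broadcast, empty body
I_AM         = 0x02   # body: machine type id (uint16)
GET_VALUE    = 0x03   # body: value id (uint16)
VALUE_REPLY  = 0x04   # body: value id (uint16) + IEEE-754 float

def pack_value_reply(value_id: int, value: float) -> bytes:
    msg = struct.pack(">BHf", VALUE_REPLY, value_id, value)
    assert len(msg) <= 72   # must fit one XBee API payload
    return msg              # 7 bytes, vs. dozens for a JSON equivalent

def handle(msg: bytes, awaiting: int):
    # The sender's stored state ('awaiting') gives the reply its context.
    msg_type = msg[0]
    if msg_type == VALUE_REPLY and awaiting == GET_VALUE:
        _, value_id, value = struct.unpack(">BHf", msg)
        return value_id, value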
What I was trying to avoid was defining a bit-based protocol (like MIDI) which defines all events as bit-based flags. Since we do not know what type of hardware will be added in the future, I want a communication protocol that's very flexible and does not need a coordinator or message broker, etc.
But since this is the first time I am thinking about communication protocols I am curious to know if there might be some existing frameworks that can handle complex communication on a light payload.
You might want to read through the ZigBee Cluster Library specification with a focus on the general commands. It describes a system of attribute discovery and retrieval. Each attribute has a 16-bit ID and a datatype (integers of various sizes, enumerated types, bitmaps) that determines its size.
It's a protocol designed for the small payloads of an 802.15.4 network, and you could potentially base your protocol on a subset of it. Other ZigBee specifications are simply lists of defined attributes (and commands) for a given 16-bit cluster ID.
Your master device can go through a discovery process to get a list of attribute IDs, and then send a request to get the values of multiple IDs in one shot. The response will be tightly packed, with a 16-bit ID, an 8-bit attribute type, and then variable-length data. Even if your master device doesn't know what an ID corresponds to, it can pass the data along to other systems (like a web server) that do know.
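That packed response could look roughly like the following (the type codes are meant to echo ZCL's, but treat the exact values as illustrative):

import struct

UINT16, FLOAT32 = 0x21, 0x39          # ZCL-style datatype codes
TYPE_FMT = {UINT16: ">H", FLOAT32: ">f"}

def pack_attributes(attrs):
    # attrs: list of (attr_id, type_code, value) -> tightly packed payload.
    out = b""
    for attr_id, tcode, value in attrs:
        out += struct.pack(">HB", attr_id, tcode) + struct.pack(TYPE_FMT[tcode], value)
    return out

def unpack_attributes(payload: bytes):
    pos, result = 0, []
    while pos < len(payload):
        attr_id, tcode = struct.unpack_from(">HB", payload, pos)
        pos += 3
        fmt = TYPE_FMT[tcode]         # data length is implied by the type
        (value,) = struct.unpack_from(fmt, payload, pos)
        pos += struct.calcsize(fmt)
        result.append((attr_id, value))
    return result

# Two attributes in 12 bytes, well within a 72-byte payload:
payload = pack_attributes([(0x0000, UINT16, 42), (0x0001, FLOAT32, 2.349590)])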

64/66b encoding

There are a few things I don't understand about 64b/66b encoding, and I failed to find the answers on the web. Any help/links would be greatly appreciated:
i) How is the start of a frame recognised? I don't think it can be by the initial 10/01 bits (called the preamble on Wikipedia), because you cannot tell them apart (if an idle link is 0, then 0000 10 and 000 01 0 look rather similar). I expect the end of a frame is indicated by a control word, with the rest of the bits perhaps used for the CRC?
ii) How do the scramblers synchronise, and how do they avoid scrambling the same packet the same way? Or, to put this another way, why is it not possible for a malicious user to induce substantial packet loss by carefully choosing a bad message?
iii) This might have been answered in ii), but if a packet is sent to a switch, and then on to another host, is it scrambled the same way both times?
Once again, many thanks in advance
Layers
First of all, the OSI model needs to be clear.
The Ethernet frame belongs to the data link layer, while the 64b/66b encoding is part of the physical layer (more precisely, the PCS sublayer of the physical layer).
The physical layer doesn't know anything about the start of a frame. It sees only data. (The start of an Ethernet frame consists of data bytes, which contain the preamble.)
64b/66b encoding
Now let's assume that the link is up and running.
In this case the idle link is not full of '0's (in that case the link wouldn't be self-synchronous). Idle messages (idle characters and/or synchronization blocks, i.e. control information) are sent over the idle link. Control information is encoded with the 0b10 sync header (the "preamble" of the question). This is why the emitted spectrum and the power dissipation don't depend on whether the link is idle or not.
So the start of a new frame goes as follows:
The link carries idle information (with the 0b10 sync header).
The upper layer (data link layer) sends the frame (in 64-bit chunks of data) to the physical layer.
The physical layer sends the data (with the 0b01 sync header) over the link.
(Note that the physical layer frequently inserts control (sync) symbols into the raw stream even during a data burst.)
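A sketch of that block framing in Python (per my reading of IEEE 802.3 Clause 49; scrambling is omitted here and shown further below):

SYNC_DATA, SYNC_CTRL = 0b01, 0b10   # the 2-bit sync header

def frame_data_block(payload: bytes) -> int:
    # 0b01 header: the other 64 bits are pure data.
    assert len(payload) == 8
    return (SYNC_DATA << 64) | int.from_bytes(payload, "big")

def frame_ctrl_block(block_type: int, body: bytes) -> int:
    # 0b10 header: a type byte, then 56 bits carrying control characters
    # (idle, start-of-frame, terminate, ...) possibly mixed with data.
    assert len(body) == 7
    return (SYNC_CTRL << 64) | (block_type << 56) | int.from_bytes(body, "big")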
Synchronization
Before data transmission, a 64b/66b-encoded lane must be initialized. This initialization includes lane initialization, which includes block synchronization. Xilinx's Aurora specification (p. 34) gives an example of link initialization.
Briefly, the receiver tries to match the sync headers at different bit positions, and when they match multiple times in a row it reports link-up.
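A toy model of that hunt (the thresholds are simplified from the Clause 49 lock state machine):

def block_lock(bits):
    # Try every bit offset; a candidate is good if 64 consecutive blocks
    # start with a valid sync header (0b01 or 0b10, i.e. the two bits differ).
    for offset in range(66):
        ok = all(bits[offset + 66 * n] != bits[offset + 66 * n + 1]
                 for n in range(64))
        if ok:
            return offset   # bit position where the 66-bit blocks start
    return None             # keep hunting (slip one bit and retry)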
Note that the 64b/66b encoding uses a self-synchronous scrambler. This is why the scrambler itself doesn't need to know anything about where we are in the data stream: if you run a self-synchronous descrambler long enough, it converges and produces the decoded bit stream.
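A bit-level sketch of that scrambler (polynomial x^58 + x^39 + 1 per Clause 49; bit-by-bit for clarity, not speed; only the 64 payload bits are scrambled, never the sync header):

import random

MASK = (1 << 58) - 1

def scramble(bits, state=0):
    out = []
    for b in bits:
        s = b ^ ((state >> 38) & 1) ^ ((state >> 57) & 1)  # taps at delays 39 and 58
        state = ((state << 1) | s) & MASK                  # feed back the output bit
        out.append(s)
    return out

def descramble(bits, state=0):
    out = []
    for s in bits:
        out.append(s ^ ((state >> 38) & 1) ^ ((state >> 57) & 1))
        state = ((state << 1) | s) & MASK                  # feed in the received bit
    return out

# Self-synchronization: even with a wrong initial state, the descrambler
# emits correct data once the first 58 received bits have flushed through.
data = [random.randint(0, 1) for _ in range(200)]
rx = descramble(scramble(data), state=0x123456789)
assert rx[58:] == data[58:]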
Maliciousness
Note that 64b/66b encoding is not encryption. This scrambling won't protect you from eavesdropping/tampering. (Encryption should be placed at a higher level of the OSI model.)
Same packet multiple times
Because the scrambler is in a different state/seed when you send the same packet a second time, the two encoded packets will differ. (Theoretically one could craft packets that set the scrambler's shift register back to an earlier state, but the inserted control symbols would have to be taken into account, so in practice this is impossible.)
