Z80 Status Flag registers

This will probably look like a very simple question to some, but looking at the following register schema, and considering that the 'general' flag register F is an 8-bit representation of the flag bits, I'm wondering:
Does the 'alternate' F' register always hold the exact same flag bits, or can it contain a different byte? (I know the alternate registers cannot be accessed directly, only swapped in.)
In other words, is the byte content of the 'general' register F always equal to that of the 'alternate' register F' (representing the same unique flag bits, assuming the swap instruction itself does not alter them)?

They are completely different. Alt-F is simply the flag register for the Alt-set.
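To make that concrete, here is a tiny C model (the variable names are just illustrative) of what EX AF,AF' does: it swaps the two pairs, and the two F bytes are otherwise completely independent:

#include <stdio.h>
#include <stdint.h>

/* Toy model: F and F' are independent bytes; EX AF,AF' merely swaps them. */
uint8_t a = 0, f = 0;         /* main A and F */
uint8_t a_alt = 0, f_alt = 0; /* alternate A' and F' */

void ex_af_af(void)           /* models the EX AF,AF' instruction */
{
    uint8_t t;
    t = a; a = a_alt; a_alt = t;
    t = f; f = f_alt; f_alt = t;
}

int main(void)
{
    f = 0x01;                 /* carry flag set in the main F */
    ex_af_af();               /* now F == 0x00 and F' == 0x01 */
    printf("F=%02X F'=%02X\n", f, f_alt);
    return 0;
}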

How to convert byte*payload to an int?

I am programming an ESP8266 Thing dev board using Arduino.
I have a value stored in byte *payload. I want to convert that value and store it in an int variable. I tried different methods but none of them works. Can anyone suggest a good method? Thank you!
How you do this depends entirely upon how you represented the value when you transmitted it via MQTT.
If you transmitted it in binary - for instance, you published the integer as a series of bytes - then you also need to know the byte order and the number of bytes. Most likely it's least-significant-byte first (so if the integer in hex were 0x1234 it would be transmitted as two bytes - 0x34 followed by 0x12) and 32 bits.
If you're transmitting binary between two identical computers running similar software then you'll probably be fine (as long as that never changes), but if the computers differ or the software differs, you're dealing with representations of your integer that will be dependent on the platform you're using. Even using different languages on the two ends might matter - Python might represent an integer one way and C another, even if they're running on identical processors.
So if you transmit in binary you should really choose a machine-independent representation.
If you did transmit in binary and made no attempt at a machine-independent representation, the code would be something like:
byte *payload;        /* raw bytes received via MQTT */
int payload_length;   /* number of bytes in payload */
int result;

if (payload_length < (int)sizeof(int)) {
    /* handle this error: too few bytes to hold an int */
} else {
    result = *(int *)payload;   /* assumes payload is suitably aligned */
}
That checks to make sure there are enough bytes to represent a binary integer, and then uses a cast to retrieve the integer from the payload.
If you transmitted in binary in a machine-independent format then you'd need to do whatever transformation is necessary for the receiving architecture.
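For instance, a minimal sketch, assuming the sender packed a 32-bit integer most-significant-byte first (network byte order):

/* Decode a 32-bit big-endian (network byte order) integer.
 * Assumes the sender transmitted exactly 4 bytes, MSB first. */
int32_t result = 0;
if (payload_length >= 4) {
    result = ((int32_t)payload[0] << 24) |
             ((int32_t)payload[1] << 16) |
             ((int32_t)payload[2] <<  8) |
              (int32_t)payload[3];
}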
I don't really recommend transmitting in binary unless you know what you're doing and have good reasons for it. Most applications today will be fine transmitting as text - which you could say is the machine-independent representation.
The most likely alternative to transmitting in binary is text, which can be a machine-independent format. If you're transmitting an integer as text, your code would look something like this:
#include <string.h>   /* memcpy */
#include <stdlib.h>   /* atoi */

byte *payload;
int payload_length;
char payload_string[payload_length + 1];   /* room for the payload plus '\0' */
int result;

memcpy(payload_string, payload, payload_length);
payload_string[payload_length] = '\0';     /* NUL-terminate to make it a C string */
result = atoi(payload_string);
This code uses a temporary buffer to copy the payload into. We need to treat the payload like a C string, and C strings have an extra byte on the end - '\0' - which indicates end-of-string. There's no space for this in the payload and an end-of-string indicator may or may not have been sent as part of the payload, so we'll guarantee there's one by copying the payload and then adding one.
After that it's simple to call atoi() to convert the string to an integer.
Don't know if you found an answer yet, but I had the exact same issue and eventually came up with this:
payload[length] = '\0'; // Add a NUL to the end of the char* to make it a string
                        // (assumes the buffer has room for one extra byte).
int aNumber = atoi((char *)payload);
Pretty simple in the end!

Modbus TCP registers

I'm trying to read a register using pymodbus. The Modbus input register I'm trying to read is 310301. Since registers need to be 65535 or below, how can I read this register?
310301 looks to be an address specified in the "Modicon" notation, where the first digit indicates the Modbus table type (Holding Register, Input Register, Coil, Discrete Input).
3xxxxx addresses are Input Registers, so try reading Input Register 10301.
Generally, in this scheme:
Coils span from 000001 to 065536
Discrete Inputs span from 100001 to 165536
Input Registers span from 300001 to 365536
Holding Registers span from 400001 to 465536
Sometimes you'll find manufacturers only use 5 digits to specify the address instead of 6. I find this practice deplorable because it leads to ambiguity, but what can you do...
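To make the arithmetic concrete, here is a small C sketch of the conversion (the helper name is made up, and note that most client libraries, pymodbus included, typically expect the zero-based protocol address, i.e. 10300 rather than 10301):

#include <stdio.h>

/* Split a 6-digit "Modicon" reference into its table number and the
 * zero-based protocol address, assuming 1-based references per table. */
static void split_modicon(long reference, int *table, int *address)
{
    *table   = (int)(reference / 100000);     /* leading digit: 0, 1, 3 or 4 */
    *address = (int)(reference % 100000) - 1; /* 1-based -> 0-based */
}

int main(void)
{
    int table, address;
    split_modicon(310301L, &table, &address);
    printf("table %d, protocol address %d\n", table, address); /* 3, 10300 */
    return 0;
}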

add new headers parsing in tcpdump

I need to add support for a proprietary header that the FPGA in our design inserts into incoming Ethernet frames, between the MAC header and the payload. Obviously I will have to dig into the tcpdump and libpcap sources, but could anybody give me some hints on where exactly to start, so that I can save time?
The first thing you need to do is to get a DLT_/LINKTYPE_ value for your proprietary headers. See the link-layer header types page on the tcpdump.org Web site for the existing DLT_/LINKTYPE_ link-layer header type values and information on how to either use one of the DLT_USERn values internally or get a new value assigned if you plan to have people outside your organization use this.
Once you have the value assigned, you'll have to do some work on libpcap:
If you've been assigned a DLT_ value, you'll have to modify the pcap/pcap.h file to add that link-layer type (and change the DLT_MATCHING_MAX value in that header file, and LINKTYPE_MATCHING_MAX in pcap-common.c, so that they are >= your DLT_ value), or wait for whoever at tcpdump.org (which will probably be me) assigns your DLT_ value and updates the libpcap Git repository (at which point you could use top-of-trunk libpcap).
If you plan to do live capturing, you may have to add a module to libpcap to support live capturing on your hardware, or, if your device looks like a regular networking device to your OS, so that you can use its native capture mechanism, modify the module for that OS to map whatever link-layer header type value the OS uses (e.g., a DLT_ value on *BSD/OS X or an ARPHRD_ value on Linux) to whatever DLT_ you're using for your link-layer header type.
You'd have to modify gencode.c to be able to compile capture filters for your DLT_ value.
Once that's done, libpcap should now work.
Now, for tcpdump:
Add an if_print routine that processes the proprietary headers (whether it just skips them or prints things for them), calls ether_print(), and then returns the sum of the length of your proprietary headers and the Ethernet header (ETHER_HDRLEN as defined in ether.h). See ether_if_print() in print-ether.c for an example.
Add a declaration of that routine to interface.h and netdissect.h, and add an entry for it, with the routine name and DLT_, to ndo_printers[] if you copied ether_if_print() (which you should) or to printers[] if you didn't (if you didn't, you'll have to pass &gndo as the first argument to ether_print()). Those arrays are in tcpdump.c.
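For orientation, here is a rough sketch of such a routine, modeled on ether_if_print(); MYPROTO_HDRLEN and the myproto_* names are hypothetical, and the exact ether_print() signature varies between tcpdump versions, so check print-ether.c in your tree before copying this:

#define MYPROTO_HDRLEN 8    /* hypothetical size of the FPGA-inserted header */

u_int
myproto_if_print(netdissect_options *ndo,
                 const struct pcap_pkthdr *h, const u_char *p)
{
    if (h->caplen < MYPROTO_HDRLEN) {
        ND_PRINT((ndo, "[|myproto]"));   /* truncated capture */
        return (h->caplen);
    }

    /* Print (or just skip) the proprietary fields here, then hand the
     * rest of the frame to the standard Ethernet printer. */
    ether_print(ndo, p + MYPROTO_HDRLEN, h->len - MYPROTO_HDRLEN,
                h->caplen - MYPROTO_HDRLEN, NULL, NULL);

    return (MYPROTO_HDRLEN + ETHER_HDRLEN);
}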

Decrypting DUKPT Encrypted Track Data

As the title says, I am trying to decrypt DUKPT encrypted track data coming from a DUKPT enabled scanner.
I have the ANSI Standard (X9.24) for DUKPT and have successfully implemented the ability to generate the IPEK from the KSN and BDK. Furthermore, I have successfully implemented the ability to generate the Left and Right MAC Request and Response Keys by XORing the PIN Encryption Keys. Lastly, I am able to generate the EPB.
From here, I don't understand how to generate the MAC Request and Response from the L/R Keys that I have generated.
Lastly, once I get to that step, what comes next? When do I actually have the key that decrypts the track data sent by a DUKPT enabled device?
I am aware of the Thales Simulator and jPOS. My code is currently referencing the Thales Simulator to do all of its work. But, the file decryption process just isn't returning the expected data.
If anybody can offer some insight into decrypting track data, it would be much appreciated.
http://thalessim.codeplex.com/
http://jpos.org/
I spent too much time studying the horrible X9.24 spec and finally got both the encryption and decryption working with my vendor’s examples and marketing promptly decided to switch vendors. Since it is a standard, you would think that anybody’s implementation would be the same. I wish. Anyway, there are variations on how things are implemented. You have to study the fine print to make sure you are working things the same as your other side.
But that is not your question.
First if you need to decrypt a data track from a credit card, you are probably interested in producing a key that will decrypt the data based upon the original super secret Base Derivation Key. That has nothing to do with the MAC generation and is only mentioned in passing in that dreadful spec. You need to generate the IPEK for that key serial number and device ID and repeatedly apply the “Non-reversible Key Generation Process” from the spec if bits are set in the counter part of the full key serial number from the HSM.
That part of my code looks like this: (Sorry for the long listing in a posting.)
/*
 * Bit "zero" set (this is a 21-bit register; ANSI counts from the left).
 * This will be used to test each bit of the encryption counter
 * to decide when to find another key.
 */
testBit = 0x00100000;

/*
 * We have to "encrypt" the IPEK repeatedly to find the current key
 * (see Section A.3). Each time we encrypt (generate a new key),
 * we need to use all prior bits to the left of the current bit.
 * The spec says we will have a maximum of ten bits set at any time,
 * so we should not have to generate more than ten keys to find the
 * current encryption key.
 */
cumBits = 0;

/*
 * For each of the 21 possible key bits,
 * if it is set, we need to OR that bit into the cumulative bit
 * variable, set that as the KSN count, and "encrypt" again.
 * The encryption we are using is the goofy ANSI Key Generation
 * subroutine from page 50.
 */
for (int ii = 0; ii < 21; ii++)
{
    if ((keyNumber & testBit) != 0)
    {
        char ksr[10];
        char eightByte[8] = {0};

        cumBits |= testBit;
        ksn.count = cumBits;        /* all bits processed to date */
        memcpy(ksr, &ksn, 10);      /* copy bit structure to char array */
        memcpy(crypt, &ksr[2], 8);  /* copy bytes 2 through 9 */

        /*
         * Generate the new key, overwriting the old.
         * This will apply the "Non-reversible Key Generation Process"
         * to the lower 64 bits of the KSN.
         */
        keyGen(&key, &crypt, &key);
    }
    testBit >>= 1;
}
Where:
keyNumber is the current counter from the KSN.
ksn is an 80-bit structure that contains the 80-bit Key Serial Number from the HSM.
crypt is a 64-bit block of data; I have it of type DES_cblock since I am using OpenSSL.
key is a 128-bit double DES_cblock structure.
The keyGen routine is almost verbatim from the “Non-reversible Key Generation Process” local subroutine on page 50 of the spec.
At the end of this, the key variable will contain the key that can be used for the decryption, almost. The dudes that wrote the spec added some “variant” behavior to the key to keep us on our toes. If the key is to be used for decrypting a data stream such as a credit card track, you will need to XOR bytes 5 and 13 with 0xFF and Triple DES encrypt the key with itself (ECB mode). My code looks like:
DOUBLE_KEY keyCopy;
char *p;

p = (char *)&key;
p[ 5] ^= 0xff;     /* apply the data-encryption variant */
p[13] ^= 0xff;
keyCopy = key;

/* 3DES-encrypt each half of the key with itself (ECB mode) */
des3(&keyCopy, (DES_cblock *)&key.left,  &key.left);
des3(&keyCopy, (DES_cblock *)&key.right, &key.right);
If you are using this to decrypt a PIN block, you will need to XOR bytes 7 and 15 with 0xFF. (I am not 100% sure this should not be applied for the stream mode as well but my vendor is leaving it out.)
If it is a PIN block, it will be encrypted with 3-DES in ECB mode. If it is a data stream, it will be encrypted in CBC mode with a zero initialization vector.
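As an illustration of that final step, here is a minimal sketch using OpenSSL's legacy DES API (DOUBLE_KEY and the variant-adjusted key come from the listings above; two-key 3DES schedules the left half, the right half, then the left half again):

#include <string.h>
#include <openssl/des.h>

/* Decrypt DUKPT-encrypted track data with the derived, variant-adjusted
 * key: 3DES in CBC mode with a zero initialization vector. */
void decrypt_track(const DOUBLE_KEY *key,
                   const unsigned char *cipher, unsigned char *plain,
                   long length)
{
    DES_key_schedule ks1, ks2;
    DES_cblock iv;

    memset(iv, 0, sizeof(iv));   /* zero IV, per the spec for data streams */
    DES_set_key_unchecked((const_DES_cblock *)&key->left,  &ks1);
    DES_set_key_unchecked((const_DES_cblock *)&key->right, &ks2);

    /* Two-key 3DES: K1, K2, K1 */
    DES_ede3_cbc_encrypt(cipher, plain, length,
                         &ks1, &ks2, &ks1, &iv, DES_DECRYPT);
}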
(Did I mention I don’t much care for the spec?) It is interesting to note that the encryption side could be used in a non-hardware, tamper resistant security module if the server side (above) remembers and rejects keys that have been used previously. The technology is pretty neat. The ANSI spec leaves something to be desired but the technology is all right.
Good luck.
/Bob Bryan
For data encryption, the variant is 0000000000FF0000.0000000000FF0000, so you need to XOR bytes 5 and 13 instead of 7 and 15. In addition, you need an additional 3DES self-encryption step for each key part (left and right).
Here is the relevant code in jPOS
https://github.com/jpos/jPOS/blob/master/jpos/src/main/java/org/jpos/security/jceadapter/JCESecurityModule.java#L1843-1856

Find out the character of the pressed key

If I add a listener to KeyboardEvent.KEY_DOWN, I can find out the keyCode and the charCode.
The keyCode maps to a different character depending on the keyboard.
The charCode is just as useless, according to the help:
The character code values are English keyboard values. For example, if you press Shift+3, charCode is # on a Japanese keyboard, just as it is on an English keyboard.
So, how can I find out which character the user pressed?
You left out a pretty important part of the quote or it was missing where you found it:
For example, if you press Shift+3, the getASCIICode() method returns # on a Japanese keyboard, just as it does on an English keyboard.
http://livedocs.adobe.com/flex/201/langref/flash/events/KeyboardEvent.html
This is probably more helpful:
The charCode property is the numeric value of that key in the current character set (the default character set is UTF-8, which supports ASCII).
http://livedocs.adobe.com/flex/2/docs/wwhelp/wwhimpl/common/html/wwhelp.htm?context=LiveDocs_Parts&file=00000480.html
Your application determines which character set is used, meaning that even if you have to press different keys on different keyboard locales to produce the same character, it will have the same charCode.
NOTE: (This is about keyboard messages in general and does not apply to ActionScript alone. I misread the question and provided a deeper answer than was helpful.)
Really, the path from keyboard to Windows character is a VERY complex one; it goes something like this:
Keyboard send scancode to Keyboard device driver (KDD).
KDD sends a message to the system message queue.
The system then sends the message to the foreground thread that created the window with the current keyboard focus.
The thread's message loop picks up the message and figures out the correct character translation.
The 'real' char that was typed is not calculated until that whole process finishes, as each window and thread can be on a different locale, and you can't really 'translate' the key without knowing the locale and the key buffer history.
The WM_KEYDOWN and WM_KEYUP messages cannot simply be converted with MapVirtualKey() or the like, because you don't know how many key presses make up a single character. The simple method is to just handle the WM_CHAR message and use that. Consider the following:
en-US locale: you press the keys a + ' + a and get the output "a'a"
pt-BR locale: you press the keys a + ' + a and get the output "aá"
So in both examples you get 3 KEYDOWN and KEYUP messages, but in the first you get 3 WM_CHAR messages and in the second you get only 2.
The following article is really good for the basic concepts:
http://msdn.microsoft.com/en-us/library/ms646267(VS.85).aspx
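A bare-bones C illustration of that approach (window creation is omitted; the point is that TranslateMessage() is what turns raw key messages into WM_CHAR):

#include <windows.h>

/* Window procedure: WM_CHAR delivers the fully translated character,
 * with locale and dead-key sequences already resolved. */
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_CHAR:
        /* wParam holds the typed character (a UTF-16 code unit). */
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

/* The message loop: TranslateMessage() generates WM_CHAR messages from
 * WM_KEYDOWN sequences before DispatchMessage() delivers them. */
int run_message_loop(void)
{
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return (int)msg.wParam;
}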
You cannot effectively use charCode or keyCode to determine the character that was entered. You must compare strings only. The KeyboardEvent does not give you the entered text, which is also silly.
In my case I implemented a KeyboardEvent.KEY_DOWN handler in addition to a TextEvent.TEXT_INPUT handler. In the former I implemented all functionality where the charCode was needed and didn't vary per keyboard locale (e.g. space bar or enter). In the latter I checked the text property of the event to compare what I needed locale-independently.
Forgot to mention that this post pointed me to that solution: How to find out the character pressed key in languages?
Typing Japanese hiragana and similar characters often requires several keystrokes, and sometimes even selecting the appropriate character from a drop-down menu. You probably want to listen for a different event, something like a text field's change event.
