Status of the program counter during HLT - Intel 8085

In the Intel 8085 microprocessor, at precisely what point (which T-state) does the program counter get updated? Is it just after T1 (i.e., just after the current address in the PC is placed on the address bus), or at T3, when the instruction fetch completes?
Also, when a HLT instruction is encountered, what happens to the program counter? Is it incremented, or does it still contain the address of the HLT instruction itself?

A similar question has been asked here.
Usually, in the first clock cycle the current value of the PC is loaded into the address buffer, and the next two clock cycles fetch the instruction opcode at that address.
During this time the 16-bit PC is updated by the incrementer while the opcode is being fetched into the IR, so the PC has already been incremented before the 8085 can even decode the instruction and realize that it is a HLT.

HLT works as a "wait for the next interrupt" instruction. If you want to halt forever, you have to put HLT in a loop or disable interrupts. So the saved interrupt-return address that is software-visible inside the interrupt handler is the byte after the HLT.
This is what happens on 8086 / x86, and the 8086 is assembly-source compatible with the 8080. Also, having the interrupt return address point at the HLT itself would mean it would run again after an interrupt handler returned normally. Using HLT as anything other than an "only run interrupt handlers from now on" instruction would then require every interrupt handler to check what the saved PC was pointing at and, if it was a HLT, increment it before returning. Instead, to get that behaviour you put HLT in a loop at the one place where you want it.
As codeR explains, the PC has already been incremented before HLT sleeps, not at the last moment when waking up to handle an interrupt.

BLE Missing Packets (Protocol / Spec question)

I have been learning the nuts and bolts of BLE lately, because I intend to do some development work using a BLE stack. I have learned a lot from the online documentation and the spec, but there is one aspect I cannot seem to find covered.
BLE uses frequency hopping for communication. Once two devices are connected (one master and one slave), all communication appears to be initiated by the master, with the slave responding to each packet. My question concerns loss of packets in the air. There are two major cases I am concerned with:
Case 1: The master sends a packet that is received by the slave, and the slave sends a packet back. The master doesn't receive the reply, or receives it corrupted.
Case 2: The master sends a packet that is not received by the slave.
Case 1 to me is a "don't care" (I think). Basically the master doesn't get a reply, but at the very least the slave got the packet and can "sync" to it. The master does whatever it needs to and retries the packet at the next connection event.
Case 2 is the harder case. The slave doesn't receive the packet and therefore cannot "sync" its communication to the current frequency channel.
How exactly do devices keep the channel-hopping sequence synchronized when packets are lost in the air (specifically case 2)? Yes, there is a channel map, so the slave technically knows what frequency to jump to for the next connection event. However, the only way I can see all of this working is via a "self-timed" mechanism based on the connection parameters. Is this good enough? Given clock drift, there will be slight differences in when the master and slave are transmitting and receiving on the same channel, and eventually they will be off by one channel, then two channels, and so on. Is this not really an issue because, for that to happen, "a lot" of time needs to pass given the 500 ppm clock spec? I understand there is a supervision timer that declares the connection dead after no valid data has been transferred for some time, but I still wonder about the "hopping drift", which brings me to the next point.
How much "self timing" is employed or mandated by the protocol? Do slave devices use a valid start of packet from the master every connection interval to re-synchronize the channel hopping? For example: if (connection interval + some window) elapses, hop to the next channel; or, if a packet is received, re-sync and restart the timeout timer. This would be a hop timer separate from the supervision timer.
I can't really find this information in the Core 5.2 spec; it's pretty dense at over 3000 pages. If somebody could point me to the relevant sections in the spec (or somewhere else), or even answer the questions, that would be great.
The slave knows the channel map. If one packet is not received from the master, it will listen again after one connection interval, on the next channel in the hop sequence. If that one is also not received, it adds another connection interval and moves on to the next channel again.
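For reference, here is a minimal sketch in C of the legacy Channel Selection Algorithm #1 (hop and remap) that both sides apply at every connection event, whether or not a packet was heard. The 37-channel count, the hop increment range and the modulo remapping follow the core spec; the data structure and function names are just illustrative.

    /* Sketch of BLE Channel Selection Algorithm #1 (legacy hopping).
     * Structure and names are illustrative. */
    #include <stdint.h>

    #define NUM_DATA_CHANNELS 37

    typedef struct {
        uint8_t channel_map[NUM_DATA_CHANNELS];   /* 1 = used, 0 = unused */
        uint8_t used_channels[NUM_DATA_CHANNELS]; /* used channels, ascending */
        uint8_t num_used;
        uint8_t hop_increment;         /* 5..16, fixed at connection setup */
        uint8_t last_unmapped_channel;
    } conn_state_t;

    /* Advanced once per connection event, received packet or not. */
    uint8_t next_data_channel(conn_state_t *c)
    {
        uint8_t unmapped = (uint8_t)((c->last_unmapped_channel +
                                      c->hop_increment) % NUM_DATA_CHANNELS);
        c->last_unmapped_channel = unmapped;

        if (c->channel_map[unmapped])
            return unmapped;                    /* channel is in the map */

        /* Not in the map: remap onto the table of used channels. */
        return c->used_channels[unmapped % c->num_used];
    }

Because both devices step this state machine every connection interval regardless of whether anything was received, a lost packet does not desynchronize the hop sequence; only clock drift matters, which the window widening described next takes care of.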
The slave also stores a timestamp (or event counter) for the last packet detected from the master, regardless of whether its CRC was correct. This is called the anchor point. It is not the same time point used for the supervision timeout.
The time between the anchor point and the next expected packet is multiplied by the combined master + slave clock accuracy (for example 500 ppm) to get a window widening, and 16 microseconds are added. The slave then listens for this amount of time before and after the expected packet arrival time.
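As a rough sketch of that calculation in C (the 500 ppm figure and the 16 µs margin are simply the example values from the answer above; names are illustrative):

    /* Receive-window widening: the listen window grows with the time
     * elapsed since the last observed anchor point. */
    #include <stdint.h>

    #define SCA_PPM_TOTAL   500u  /* combined master + slave clock accuracy */
    #define EXTRA_MARGIN_US  16u  /* fixed margin quoted above */

    /* time_since_anchor_us: microseconds since the last anchor point */
    static uint32_t window_widening_us(uint32_t time_since_anchor_us)
    {
        /* 64-bit intermediate avoids overflow for long anchor gaps */
        return (uint32_t)(((uint64_t)time_since_anchor_us * SCA_PPM_TOTAL)
                          / 1000000u) + EXTRA_MARGIN_US;
    }

For example, after three missed connection events at a 30 ms interval, window_widening_us(90000) is about 61 µs, so the slave opens its receiver roughly 61 µs before the expected packet and keeps listening 61 µs past it. The supervision timeout is the separate, much longer backstop that eventually declares the link dead.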

Is DMA synchronous in network card drivers?

My understanding is that when a NIC adapter receives new packets, the top-half handler uses DMA to copy data from the RX buffer to main memory. I think this handler should not exit or release the INT line before the transfer is complete, otherwise new packets would corrupt the old ones.
However, DMA is generally considered asynchronous and itself requires the interrupt mechanism to notify the CPU that the transfer is done. Hence my question: is DMA actually synchronous here, or can an interrupt in fact occur within another interrupt handler?
In general, this synchronisation happens via ring descriptors shared between the NIC (device driver) and the host CPU. You will find the packet-path details here. I have explained the ring descriptors below.
Edit:
Let me explain with Intel's Ethernet controller. If you look at section 3.2.3, where the RX descriptor format is given, it has a status field which solves the packet-ownership problem. There are two major points that avoid contention and packet corruption over who owns the packet (NIC driver or CPU).
DMA (from I/O device to host memory): the RX/TX ring consists of "hardware descriptors" and "buffers" (carved out of host memory). When we say the controller DMAs data, the transfer happens from the hardware FIFOs into this host memory.
Let us assume my ring buffers (512 bytes each) are not big enough to hold a complete incoming packet (1500 bytes, or a jumbo frame). In that case the packet may span multiple ring buffers, and the EOP (End Of Packet) status field indicates that the complete packet has now been received (assuming all the sanity checks/checksums have already passed).
Second: who owns the packet now (the NIC/driver side, or the CPU for further consumption)? Until the DD (Descriptor Done) status flag is set, the descriptor still belongs to the NIC side; once it is set, the CPU can grab it for picking and poking.
This is specific to the RX path; the TX path is slightly different.
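As a rough illustration of that ownership handshake from the driver side, here is a sketch loosely modeled on a legacy Intel-style RX descriptor ring; the field layout, bit values and helper names are illustrative assumptions rather than copies from the datasheet, and a real driver would also need DMA mapping and memory barriers that are omitted here.

    /* Sketch of the RX-side ownership handshake described above. */
    #include <stdint.h>

    #define RXD_STAT_DD   0x01  /* Descriptor Done: NIC finished writing data */
    #define RXD_STAT_EOP  0x02  /* End Of Packet: last buffer of this frame */

    struct rx_desc {
        uint64_t buffer_addr;   /* DMA address of the host buffer */
        uint16_t length;        /* bytes the NIC wrote into that buffer */
        uint16_t checksum;
        uint8_t  status;        /* DD, EOP, ... set by the NIC */
        uint8_t  errors;
        uint16_t special;
    };

    /* Hypothetical driver-side view of the ring buffers and upper layer. */
    extern void *rx_buf_virt[];
    extern void consume_fragment(void *buf, uint16_t len, int end_of_packet);

    /* Poll the ring: only touch descriptors the NIC has handed back. */
    void rx_poll(volatile struct rx_desc *ring, unsigned *tail, unsigned n)
    {
        while (ring[*tail].status & RXD_STAT_DD) {
            volatile struct rx_desc *d = &ring[*tail];

            /* A frame may span several buffers until EOP is set. */
            consume_fragment(rx_buf_virt[*tail], d->length,
                             d->status & RXD_STAT_EOP);

            d->status = 0;               /* hand the descriptor back */
            *tail = (*tail + 1) % n;
        }
    }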
Consider it this way: there are multiple interrupts (I/O, keyboard, mouse, etc.) happening all the time in the system, but the time between two interrupts is so long that the CPU can do a lot of other useful work in between, and to offload the CPU further, DMA handles the actual data transfer. If an interrupt is raised and its service routine is called, subsequent interrupts can be masked while you are inside that routine, but these routines are very short and hardly consume any time before the next packet arrives. Problems only arise if packets arrive faster than you can process them.
Another example: on routers/switches the task is routing and switching 99% of the time, so the interrupt and service-routine priorities are set up quite differently; they are bombarded with packets all the time, so in practice the service routine barely returns before another packet is already waiting. At least that has been my experience working on such networking gear.

MSDOS API serial port only reading last sent character

I've been playing with the MSDOS API in assembler for some time, and I'm trying to build an application that reads from and writes to the serial port. I'm currently using VMware Workstation 11 + VSPE (http://www.eterlogic.com/Products.VSPE.html) to emulate the serial port communication.
One thing I noticed is that if I send, say, "asdfgh" to the serial port and then read it in MSDOS (using interrupt 21h function 03h, but I also tried interrupt 14h function 02h), it only returns the last character sent: "h".
According to some documentation I read, if an application sends data faster than I can process it, characters will be lost. That means either there is another way to make MSDOS buffer incoming bytes (controlling the flow), or I have to write a driver that does this (or maybe a TSR program that manages it, I don't know).
So the question is, do I have to write a driver or is there another way to do this?
I managed to write a program that turns on the 16550 FIFO buffer, and now it works using the DOS API.
So it was just the FIFO buffer.
No need to slow down the baud rate or anything of the sort. Nonetheless, if you happen to have an 8250, it's probably a choice between that and implementing a communications driver on top of it.
Also, this helped a lot:
https://en.wikibooks.org/wiki/Serial_Programming/8250_UART_Programming#UART_Registers
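For reference, here is a minimal sketch of what "turning on the FIFO" amounts to, assuming COM1 at its conventional base address 0x3F8 and a Borland-style DOS compiler that provides outportb()/inportb(); the register offsets and bit values follow the 16550 description on the wikibooks page above.

    /* Enable the 16550 FIFOs on COM1 before using the DOS/BIOS read calls. */
    #include <dos.h>

    #define COM1_BASE 0x3F8
    #define UART_FCR  (COM1_BASE + 2)  /* FIFO Control Register (write-only) */
    #define UART_IIR  (COM1_BASE + 2)  /* Interrupt Identification (read)    */

    void enable_fifo(void)
    {
        /* 0xC7 = enable FIFO, clear RX/TX FIFOs, 14-byte receive trigger */
        outportb(UART_FCR, 0xC7);
    }

    int fifo_present(void)
    {
        /* On a 16550A the two top IIR bits read back as 11 once the FIFO is
         * enabled; on an 8250/16450 they stay 0, so this trick won't help. */
        return (inportb(UART_IIR) & 0xC0) == 0xC0;
    }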

Serial driver hw fifo overrun at 460800 baud rate

I am using a 2.6.32 OMAP-based Linux kernel. I have observed that at a high data rate (serial port set to 460800 baud) the serial port hardware FIFO overflows.
The serial port is configured to generate an interrupt every 8 bytes in both the RX and TX directions (i.e., when the serial port hardware FIFO holds 8 bytes, a serial interrupt is generated and the data is read from the port at once).
I am transmitting 114-byte packets continuously (the serial driver has no notion of packet mode; it receives the data raw). Based on calculations,
460800 bits/sec => 460800/10 = 46080 bytes/sec (with 1 start bit and 1 stop bit), so in 1 second I can transmit at most 46080/114 => 404.21 packets without any issue.
But I expect the serial port to handle at least 1000 packets per second, which is why I have configured the serial driver to generate an interrupt every 8 bytes.
I tried the same thing on Windows XP and I am able to read up to 600 packets per second.
Do you think this is feasible on Linux under the above circumstances, or am I missing something? Let me know your comments.
Could someone also send some important configuration settings that need to be set in the .config file? I am unable to attach the .config file here; otherwise, I could share it.
There are two kinds of overflow that can occur for a serial port. The first one is the one you are talking about: the driver not responding to the interrupt fast enough to empty the FIFO. FIFOs are typically around 16 bytes deep, so getting a FIFO overflow requires the interrupt handler to be unresponsive for 1 / (46080 / 16) = 347 microseconds. That's a really, really long time. You have to have a pretty drastically screwed-up driver with a higher-priority interrupt to trip that.
The second kind is the one you didn't consider and offers more hope for a logical explanation. The driver copies the bytes from the FIFO into a receive buffer, where they sit until the user-mode program calls read() to retrieve them. An overflow on that buffer happens when you don't configure any kind of handshaking with the device and the user-mode program does not call read() often enough. It looks exactly like a FIFO overflow: bytes just disappear. There are status bits to warn about these problems, but not checking them is a frequent oversight; you didn't mention doing that either.
So start by improving the diagnostics: check the overflow status bits to know what's actually going on. Then consider enabling handshaking if you find out that it is a buffer-overflow issue. Increasing the buffer size is possible, but not a solution if this is a fire-hose problem. Getting the user-mode program to call read() more often is a fix, but not an easy one. Just lowering the baud rate, yeah, that always works.
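On Linux, a minimal sketch of those diagnostics might look like this (assuming the serial driver implements the TIOCGICOUNT ioctl; error handling trimmed for brevity):

    /* Read the driver's error counters to tell a hardware FIFO overrun apart
     * from a tty receive-buffer overrun, and optionally enable RTS/CTS. */
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <termios.h>
    #include <linux/serial.h>   /* struct serial_icounter_struct */

    void report_overruns(int fd)
    {
        struct serial_icounter_struct ic;

        if (ioctl(fd, TIOCGICOUNT, &ic) == 0)
            printf("hw fifo overruns: %d, tty buffer overruns: %d\n",
                   ic.overrun, ic.buf_overrun);
    }

    void enable_rtscts(int fd)
    {
        struct termios tio;

        tcgetattr(fd, &tio);
        tio.c_cflag |= CRTSCTS;       /* hardware handshaking */
        tcsetattr(fd, TCSANOW, &tio);
    }

If the overrun counter climbs, the interrupt handler really is too slow to drain the FIFO; if buf_overrun climbs, the user-mode reader is the bottleneck, which is the second case described above.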

Regarding CAN bus

I am using a 16-bit MCU, a PIC24HJ64GP504, to write a CAN-based application. Basically it is communication between my board and one other node, which continuously sends data to my board over CAN at 1 Mbit/s. I am configuring the ECAN module in my PIC24 to run at 1 Mbit/s. I have written the code so that for the first 10 ms the ECAN module accepts all incoming messages, and after that I reconfigure the ECAN module to accept only messages with ID 0x13.
Now here comes the issue... The other node and my board are powered up at the same instant. The other node starts transmitting messages 40 ms or so after power-up, but I am not able to receive any message from it on my board. If I power up my board first, give it some time to reconfigure the ECAN module with the new filters and settle down, and then power up the other node, everything works perfectly.
Now the strangest part... If I have a CAN bus analyzer connected between my board and the other node, then even if I power up both nodes at the same time, everything works fine; no need to power up my board first. I have tried this with three different bus analyzers from different manufacturers and got the same results.
To me it appears that the ECAN module takes some time to settle down during re-configuration, and that with the bus analyzer on the bus this settling time somehow no longer matters, so everything works perfectly. But I am not sure what exactly the problem is.
The problem might be a missing ACK: the CAN analyzer might be acknowledging the frames, so the transmitting device does not switch to error passive.
I would hold off sending until the whole bus is initialized.
Also sounds like missing ACKs to me.
Are you seeing any error frames? (Get the scope to trigger on 6 consecutive dominant bits.) The TX node might be going off the bus, or even into some application-error mode, if it doesn't get acknowledged enough.
You might be able to coax it back onto the bus by transmitting a dummy message.
I've found a Saleae Logic very useful in these circumstances (as well as a scope): hang it off the RX pin of your physical layer (or even wire up a standalone PHY that you can use to monitor the bus). The Saleae software will decode the CAN traffic and show you what's happening. Sometimes it's useful to use the scope's trigger output to trigger the Logic.
CAN communication requires at least two active devices on the bus for successful communication, because a CAN frame is not complete unless some node acknowledges it.
When you power up your board and the other node together, it seems your board is not getting ready within 40 ms. If it is not ready, that leaves the other node as the only member on the bus, which violates the rule stated above. The other node will get TX errors, and after 128 errors it will go into an error mode and stop sending messages -- hence you are not receiving anything.
When you power up your board first and give it time, your board is ready and will ACK every message sent by the other node -- hence communication is good!
When you add the CAN analyzer, even if your board is not powered up or not ready yet, there are two active nodes on the bus -- hence communication is good!
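To make the "128 errors" figure concrete, here is a rough sketch of the transmit-error-counter bookkeeping from the CAN fault-confinement rules (+8 per unacknowledged transmission, error passive above 127); the loop and print-out are just illustrative.

    /* How quickly a lone transmitter with no ACKs reaches error passive. */
    #include <stdio.h>

    int main(void)
    {
        int tec = 0;       /* transmit error counter */
        int frames = 0;

        while (tec <= 127) {   /* error active: node keeps retrying */
            tec += 8;          /* ACK error on every transmission attempt */
            frames++;
        }

        /* 16 unacknowledged frames (a few milliseconds at 1 Mbit/s, well
         * within the 40 ms window) push the node error passive. It still
         * cannot complete a single frame until a second node ACKs it. */
        printf("error passive after %d failed frames (TEC = %d)\n",
               frames, tec);
        return 0;
    }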
