VMX virtual APIC interrupt - Intel

How do I send external interrupts to the guest when the following VMX features are enabled?
Use TPR shadow
Virtualize APIC accesses
APIC-register virtualization
Virtual-interrupt delivery
Acknowledge interrupt on exit
External-interrupt exiting
Process posted interrupts
I've tried writing the VMCS guest interrupt status field (encoding 0x810), but I couldn't make it work correctly.
My goal is to redirect external interrupts to the guest.

To inject an interrupt into the guest when virtual interrupt delivery is enabled, follow steps similar to those that the CPU performs when it processes a posted interrupt, as described in SDM volume 3, section 30.6, "Posted Interrupt Processing", steps 4, 5, and 6.
The steps are:
If the interrupt being injected into the guest is due to receipt of a physical interrupt, send EOI to the physical APIC.
Set the bit in VIRR corresponding to the interrupt vector being injected.
Set RVI to the vector number, if it is greater than the prior value of RVI.
Evaluation of pending virtual interrupts will be performed by the CPU upon VM entry.
See also SDM volume 3, section 30.2, "Evaluation and Delivery of Virtual Interrupts".
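A minimal sketch of steps 5 and 6 in C, assuming a mapped virtual-APIC page and hypervisor helpers vmcs_read16()/vmcs_write16() (both placeholders, not from any real hypervisor API):

    #include <stdint.h>

    #define VMCS_GUEST_INTR_STATUS 0x810  /* RVI in low byte, SVI in high byte */

    extern uint16_t vmcs_read16(uint32_t field);            /* placeholder */
    extern void vmcs_write16(uint32_t field, uint16_t val); /* placeholder */

    void inject_virtual_interrupt(volatile uint8_t *virtual_apic_page,
                                  uint8_t vector)
    {
        /* Step 5: set the VIRR bit for the vector. The VIRR mirrors the
         * APIC IRR layout: eight 32-bit registers at offsets 0x200..0x270,
         * one 16-byte stride per 32 vectors. */
        volatile uint32_t *virr = (volatile uint32_t *)
            (virtual_apic_page + 0x200 + (vector / 32) * 0x10);
        *virr |= 1u << (vector % 32);

        /* Step 6: raise RVI if the new vector is higher than the current
         * RVI (the low byte of the guest interrupt status field). */
        uint16_t status = vmcs_read16(VMCS_GUEST_INTR_STATUS);
        if (vector > (status & 0xFF))
            vmcs_write16(VMCS_GUEST_INTR_STATUS,
                         (uint16_t)((status & 0xFF00) | vector));

        /* The CPU evaluates pending virtual interrupts on the next VM entry. */
    }

Step 4 (sending an EOI to the physical APIC when the injected interrupt originated as a physical one) still has to be done separately by the hypervisor.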

Related

Read remote XBee I/O pin states

I thought this would be simple, and since I'm new to XBee it may well be, but I want to request that a remote XBee transmit its current I/O pin states to me, whether the pins are inputs or outputs.
Use case: the controller sends state to the XBee, the XBee updates to match, and then the controller goes down. While the controller is down, a user at the remote XBee toggles a switch that changes an input pin's state. When the controller comes back up, it needs to learn of this change.
How can I request the I/O state from remote routers?
NOTE: Running in API mode
Digi has a useful article in their Knowledge Base about configuring a remote device to send I/O samples on a periodic basis, or when inputs change. You can use ATIR to set a sample rate, or ATIC to configure inputs to monitor for changes. All samples are sent to the address specified in the ATDH and ATDL registers.
For a one-off sample, you can simply send a remote ATIS command.
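In API mode that means building a Remote AT Command Request (frame type 0x17). A minimal sketch in C of framing a remote ATIS query; the addresses are placeholders, and if the 16-bit address is unknown you can pass 0xFFFE:

    #include <stddef.h>
    #include <stdint.h>

    /* Build a Remote AT Command Request carrying "IS" (force I/O sample).
     * Returns the number of bytes to write to the serial port. */
    size_t build_remote_atis(uint8_t *buf, uint64_t addr64, uint16_t addr16)
    {
        size_t i = 0;
        buf[i++] = 0x7E;                    /* start delimiter */
        buf[i++] = 0x00;                    /* length MSB */
        buf[i++] = 0x0F;                    /* length LSB: 15 frame-data bytes */
        size_t data = i;
        buf[i++] = 0x17;                    /* frame type: Remote AT Command */
        buf[i++] = 0x01;                    /* frame ID (non-zero => response) */
        for (int b = 7; b >= 0; b--)        /* 64-bit destination, MSB first */
            buf[i++] = (uint8_t)(addr64 >> (8 * b));
        buf[i++] = (uint8_t)(addr16 >> 8);  /* 16-bit destination */
        buf[i++] = (uint8_t)(addr16 & 0xFF);
        buf[i++] = 0x02;                    /* options: apply changes */
        buf[i++] = 'I';                     /* AT command "IS" */
        buf[i++] = 'S';
        uint8_t sum = 0;
        for (size_t j = data; j < i; j++)
            sum += buf[j];
        buf[i++] = (uint8_t)(0xFF - sum);   /* checksum */
        return i;
    }

The response comes back as a Remote Command Response frame (type 0x97) with the I/O sample in its command data.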

SMP affinity vs XPS on paired queues and TX queue selection control

I have a Solarflare NIC with paired RX and TX queues (8 sets, on an 8-core machine with real cores, no hyperthreading, running Ubuntu), and each set shares an IRQ number. I have used smp_affinity to set which IRQs are processed by which core. Does this ensure that the transmit (TX) interrupts are also handled by the same core? And how does this interact with XPS?
For instance, let's say the IRQ number is 115, affined to core 2 (via smp_affinity). Say the NIC chooses tx-2 for outgoing TCP packets, and tx-2 also happens to use IRQ 115. If I have an XPS setting saying tx-2 should be used by CPU 4, then which one takes precedence: XPS or smp_affinity?
Also, is there a way to see or set which TX queue is used for a particular app or TCP connection? I have an app that receives UDP data, processes it, and sends TCP packets, in a very latency-sensitive environment. I want to handle the TX interrupts for the outgoing traffic on the same CPU (or one on the same NUMA node) as the app creating that traffic; however, I have no idea how to find out which TX queue the app's connection is using. While the receive side has indirection tables for setting up rules, I do not know whether there is a way to control TX queue selection and thereby pin it to a set of dedicated CPUs.
You can tell the application the preferred CPU by setting its CPU affinity (taskset) or NUMA node affinity, and you can also set the IRQ affinities (via /proc/irq/270/node, or with the old Intel script floating around, set_irq_affinity.sh, which is on GitHub). This won't completely guarantee which IRQ/CPU is used, but it will give you a good head start. If all that fails, to improve latency you might want to enable packet steering on the receive queues so packets reach the correct CPU sooner (/sys/class/net/<iface>/queues/rx-#/rps_cpus and tx-#/xps_cpus). There is also the irqbalance program, and more; it is a broad subject and I am just learning much of it myself.
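A small sketch in C of the two knobs mentioned above, assuming root and placeholder interface, queue, and IRQ values (eth0, tx-2, and IRQ 115 from the question):

    #include <stdio.h>

    /* Write a hex CPU bitmask (e.g. 0x10 = CPU 4) to a procfs/sysfs file. */
    static int write_mask(const char *path, unsigned int mask)
    {
        FILE *f = fopen(path, "w");
        if (!f)
            return -1;
        fprintf(f, "%x\n", mask);
        return fclose(f);
    }

    int main(void)
    {
        unsigned int cpu4 = 1u << 4;
        /* IRQ affinity: interrupts for the rx-2/tx-2 pair serviced on CPU 4 */
        write_mask("/proc/irq/115/smp_affinity", cpu4);
        /* XPS: packets queued by CPU 4 go out via tx-2 */
        write_mask("/sys/class/net/eth0/queues/tx-2/xps_cpus", cpu4);
        return 0;
    }

Note that the two settings answer different questions, so neither "takes precedence": xps_cpus picks which TX queue a transmitting CPU uses, while smp_affinity picks which CPU services that queue's completion interrupt. Aligning them, as above, keeps the whole TX path on one CPU.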

How can the master figure out if an I2C slave is busy without interrupting time-sensitive code on the slave?

So I have a device set as the I2C master, and the rest of the devices on the bus are set as slaves. The master sends a command to each slave, and the slave executes the task (running motors, etc.: important, time-sensitive code). I would like to know when a slave has finished executing its task. The only way I can see to do this is to have the master constantly poll the slave, but this creates an issue: every time the master polls the slave, it triggers an I2C interrupt on the slave, which stops the motor code from running for a short amount of time.
Is there any way to solve this? I was thinking of setting all devices as masters, so that when each device finishes its task it can announce that it is done, without the need for polling. The issue with this is that I'm worried about data collisions on the bus, with devices possibly trying to talk at the same time.
What is the correct way to solve this issue?
Let the slave disable its I2C interface while it's running a time-critical task, and re-enable it afterwards. Then the master can poll as often as it wants to: it will get no ACK from the busy slave, and the slave won't get any interrupts either.
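A minimal sketch, assuming an AVR-style TWI peripheral (register names differ on other MCUs); run_motor_task() is a placeholder for the time-critical work:

    #include <avr/io.h>

    void run_motor_task(void);      /* placeholder: the time-critical code */

    void do_task_without_i2c(void)
    {
        TWCR &= ~(1 << TWEN);       /* disable TWI: stop ACKing our address */
        run_motor_task();           /* no I2C interrupts can fire in here */
        TWCR |= (1 << TWEN);        /* re-enable TWI: polls succeed again */
    }

The master should treat a NACK on the slave's address as "still busy" and simply retry later.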

MSP430 Low power mode LPM3.5

MSP430F6436
I just wanted to know: will the microcontroller go into the low-power state LPM3.5 if we cut its main power supply?
A backup battery is already connected to the internal RTC, and the RTC is active.
According to the User Guide,
LPMx.5 entry and exit is handled differently than the other low power modes. LPMx.5, when used properly, gives the lowest power consumption available on a device. To achieve this, entry to LPMx.5 disables the LDO of the PMM module, removing the supply voltage from the core of the device.
Further, the way to enter LPM3.5 mode is also described very well in the User's Guide:
The program flow for entering LPMx.5 is:
1. Configure I/O appropriately. See the Digital I/O chapter for complete details on configuring I/O for LPMx.5.
• Set all ports to general purpose I/O. Configure each port to ensure no floating inputs based on the application requirements.
• If wakeup from I/O is desired, configure input ports with interrupt capability appropriately.
2. If LPM3.5 is available, and desired, enable RTC operation. In addition, configure any RTC interrupts, if desired for the LPM3.5 wakeup event. See the RTC Overview chapter for complete details.
3. Ensure clock system settings allow LPMx.5 entry according to Table 5-1 in the UCS chapter.
4. Enter LPMx.5 by setting PMMREGOFF=1 and the LPM4 status register bits. The following code example shows how to enter LPMx.5 mode. See the PMM chapter for further details.
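The code example referenced in step 4 is not reproduced in the quote; a minimal sketch of the usual entry sequence, assuming msp430.h and the TI compiler intrinsics:

    #include <msp430.h>

    void enter_lpm35(void)
    {
        PMMCTL0_H = PMMPW_H;        /* unlock the PMM registers */
        PMMCTL0_L |= PMMREGOFF;     /* PMMREGOFF=1: core LDO off in LPMx.5 */
        PMMCTL0_H = 0x00;           /* lock the PMM registers again */
        /* Setting the LPM4 bits with PMMREGOFF=1 enters LPM3.5 when the
         * RTC is kept running (LPM4.5 if it is not). */
        __bis_SR_register(LPM4_bits | GIE);
    }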
Broadly speaking, what these two passages mean is that some amount of software intervention is required to enter LPM3.5 mode.
"Cutting" the main power supply to the microcontroller will not perform these operations for you, unless it is still possible (i.e. the ramp-down is long enough) to capture and handle those events via the Low-Side Supervisor (SVSL) and Low-Side Monitor (SVML) interrupts.

Who captures packets first - kernel or driver?

I am trying to send packets from one machine to another using tcpreplay and tcpdump.
If I write a driver for capturing packets directly from the NIC, which path will be followed?
1) N/w packet ----> NIC card ----> app (no role of kernel)
2) N/w packet -----> Kernel -----> NIC card ---> app
It's usually in this order:
1. The NIC hardware receives the electrical signal and updates some of its registers and buffers, which are usually mapped into the computer's physical memory.
2. The hardware activates the IRQ line.
3. The kernel traps into its interrupt-handling routine and invokes the driver's IRQ handler.
4. The driver figures out whether the interrupt is for RX or TX.
5. For RX, the driver sets up DMA from the NIC hardware buffers into kernel memory reserved for network buffers.
6. The driver notifies the upper-layer kernel network stack that input is available.
7. The network stack's input routine determines the protocol, optionally applies filtering, and checks whether an application is interested in this input; if so, it buffers the packet for application processing, and if a process is blocked waiting on the input, the kernel marks it runnable.
8. At some point the kernel scheduler puts that process on a CPU and resumes it, and the application consumes the network input.
Then there are deviations from this model, but those are special cases for particular hardware or OSes. One vendor that does user-land-direct-to-hardware stuff is Solarflare; there are others.
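To make steps 3 through 6 concrete, here is a heavily simplified sketch of a driver RX interrupt handler, assuming Linux kernel APIs; nic_rx_len() and nic_rx_copy() are hypothetical stand-ins for device-specific register and DMA access:

    #include <linux/etherdevice.h>
    #include <linux/interrupt.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Hypothetical device-specific helpers (register reads, DMA setup). */
    int nic_rx_len(struct net_device *dev);
    void nic_rx_copy(struct net_device *dev, void *dst, int len);

    static irqreturn_t nic_irq(int irq, void *dev_id)
    {
        struct net_device *dev = dev_id;
        int len = nic_rx_len(dev);          /* step 4: is there RX work? */
        struct sk_buff *skb;

        skb = netdev_alloc_skb(dev, len);   /* kernel network buffer */
        if (!skb)
            return IRQ_HANDLED;             /* drop on allocation failure */

        nic_rx_copy(dev, skb_put(skb, len), len); /* step 5: into kernel memory */
        skb->protocol = eth_type_trans(skb, dev); /* classify the frame */
        netif_rx(skb);                            /* step 6: hand to the stack */
        return IRQ_HANDLED;
    }

Real drivers use NAPI polling and pre-posted DMA descriptors rather than copying in the hard-IRQ handler, but the hand-off point to the stack (netif_rx() or napi_gro_receive()) is the same.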
A driver is the piece of code that interacts directly with the hardware. So it is the first piece of code that will see the packet.
However, a driver runs in kernel space; it is itself part of the kernel. And it will certainly rely on kernel facilities (e.g. memory management) to do its job. So "no role of kernel" is not going to be true.
