UEFI runtime service next to OS

I had the idea of running a small service next to the OS but I'm not sure if it is possible. I tried to figure it out by reading some docs but didn't get far, so here comes my question.
I read about the UEFI runtime services.
Would it be possible to have a small module in the firmware that runs alongside whatever operating system is installed and sends information about the device's location to an address on the internet?
As far as my knowledge goes, I would say that it should not be possible to run something in the background once UEFI has handed control over to the OS kernel.
To clarify my intentions: I would like to have something like that on my laptop. There is the Prey project, but it is installed inside the OS. I'm using a Linux distribution without autologin, so if somebody stole the laptop they would probably just install Windows.

What you want to do is prevented by design, because it would be a gateway for viruses, keyloggers and other malware.
That said, if you want to get some code running alongside the OS, you should look at System Management Mode (SMM).
SMM is an execution mode of x86 processors orthogonal to the standard protected mode. It allows the BIOS to completely suspend the OS on all CPUs at once and enter SMM to execute some BIOS services. Switches into SMM are happening on your x86 machine right now, as you read this Stack Overflow answer. They are triggered either by:
hardware: a dedicated System Management Interrupt line (SMI#), very similar to how IRQs work,
software: via an I/O access to a location considered special by the motherboard logic (port 0xb2 is common).
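To make the software trigger concrete, here is a minimal, hypothetical sketch of poking the SMI command port from Linux user space. It assumes the classic port 0xb2; the value written and the firmware's reaction are entirely platform-specific, so treat it purely as an illustration of the mechanism, not as something useful on its own.

    /* smi_poke.c - illustrative only: writes a byte to the SMI command port.
     * Requires root; whether the firmware does anything in response is
     * completely platform-dependent. Build: gcc -O2 -o smi_poke smi_poke.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>

    #define SMI_CMD_PORT 0xb2   /* common choice, but chipset-specific */

    int main(void)
    {
        /* Ask the kernel for access to this single I/O port. */
        if (ioperm(SMI_CMD_PORT, 1, 1) != 0) {
            perror("ioperm");
            return EXIT_FAILURE;
        }
        /* Writing here raises a software SMI; 0x00 is a placeholder,
         * real firmware interprets the value as a command number. */
        outb(0x00, SMI_CMD_PORT);
        return EXIT_SUCCESS;
    }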
SMM services are called SMM handlers; sensor values, for instance, are very often retrieved by means of a call to an SMM handler.
SMM handlers are set up during the DXE phase of UEFI firmware initialization and loaded into SMRAM, a memory area dedicated to them:
SMM Drivers are dispatched by the SMM Core during the DXE Phase. So
additional SMI handlers can be registered in the DXE Phase. Late in
the DXE Phase, when no more SMM drivers can be dispatched, SMRAM will
be locked down (as recommended practice). Once SMRAM is locked down,
no additional SMM Drivers may be dispatched, so no additional SMI
handlers can be registered. For example, an SMM Driver that registers
an SMI handler cannot be loaded from the EFI Shell or be added as a
DriverOption in the UEFI Boot Manager.
source: tianocore
This means that the code of your SMM handler must be present in the BIOS image, which implies rebuilding the BIOS with your handler added. It's tricky but tools exist out there to both provide a DXE environment and build your SMM handler code into a PE executable, as well as other tools to add a DXE driver to an existing BIOS image. Not all BIOS manufacturers are supported though. It's risky unless your Flash chip is in a socket and you can reprogram it externally.
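To give a feel for what such a handler looks like, here is a rough, hypothetical skeleton in EDK II style. The gSmst table and SmiHandlerRegister() are the PI SMM interfaces as I remember them; check the exact type and field names against your EDK II tree before relying on this.

    /* Illustrative SMM driver skeleton (EDK II / PI SMM style).
     * Registers a root SMI handler that runs each time an SMI is serviced. */
    #include <PiSmm.h>
    #include <Library/SmmServicesTableLib.h>   /* provides gSmst */

    EFI_STATUS
    EFIAPI
    MySmiHandler (
      IN     EFI_HANDLE  DispatchHandle,
      IN     CONST VOID  *Context        OPTIONAL,
      IN OUT VOID        *CommBuffer     OPTIONAL,
      IN OUT UINTN       *CommBufferSize OPTIONAL
      )
    {
      /* Whatever work should happen while the OS is suspended goes here. */
      return EFI_SUCCESS;
    }

    EFI_STATUS
    EFIAPI
    MySmmDriverEntry (
      IN EFI_HANDLE        ImageHandle,
      IN EFI_SYSTEM_TABLE  *SystemTable
      )
    {
      EFI_HANDLE  Handle = NULL;

      /* A NULL handler type registers a root SMI handler, called on every SMI. */
      return gSmst->SmiHandlerRegister (MySmiHandler, NULL, &Handle);
    }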
But the first thing to do is to check whether SMRAM is locked on your system. If you're lucky, you can add your very own SMM handler directly into SMRAM. It's fiddly but doable.
Note: SMM handlers inside the BIOS are independent of the OS, so your handler would keep running even if a thief installed a new operating system, which is what you want. However, living outside the OS has huge disadvantages: you would need to embed in your SMM handler a driver for the network interface (a polling-only, interrupt-less driver!), plus 802.11 WLAN, DHCP and IP support to join the Wi-Fi network and get your data routed to an external host on the Internet. How would you determine the Wi-Fi SSID and password? You could wait for the OS to initialize the network adapter for you, but then you would need to save and restore the full state of the network controller between calls. Not a small or easy project.

As far as my knowledge goes, I would say that it should not be possible to run something in the background once UEFI has handed control over to the OS kernel.
I agree. Certainly, the boot environment (prior to ExitBootServices()) uses a single-threaded model.
There is no concept of threads in the UEFI spec as far as I can see. Furthermore, each runtime service is something the OS deliberately invokes, much as the OS provides system calls for applications. Once you enter a runtime services function, note the following restriction from section 7.1 of the spec:
Callers are prohibited from using certain other services from another processor or on the same
processor following an interrupt as specified in Table 30.
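To make the "deliberately invokes" point concrete, this is roughly what a runtime service call looks like from the caller's side, sketched with EDK II-style declarations (gRT being the usual pointer to the EFI_RUNTIME_SERVICES table): the call runs synchronously to completion and control comes straight back, leaving nothing of the firmware running afterwards.

    /* Illustrative only: invoking a UEFI runtime service through the
     * runtime services table, EDK II style. */
    #include <Uefi.h>
    #include <Library/UefiRuntimeServicesTableLib.h>   /* provides gRT */

    EFI_STATUS
    QueryFirmwareClock (
      OUT EFI_TIME  *Now
      )
    {
      /* A synchronous call: the caller's own thread executes the firmware
       * code and returns with the result; no background activity remains. */
      return gRT->GetTime (Now, NULL);
    }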
Which parts of the UEFI firmware your runtime service needed access to would determine which firmware functions were non-reentrant while your call was busy.
Which is to say that, even if you were prepared to sacrifice a thread to sit eternally inside an EFI runtime service, you could well block the entire rest of the kernel from using other runtime services.
I do not think it is going to be possible unfortunately, but interesting question all the same!

Related

How to add fault tolerance support to an existing MPI-based system such that the system continues even after a machine goes down?

I am trying to modify an MPI-based system to add fault tolerance (processing should continue even if machines go down).
I was thinking of using Apache Zookeeper to handle the machine failure case. Is it the best way to proceed further? Also, what happens to the MPI calls (like send, receive, broadcast) when using Zookeeper? Send/Recv calls in MPI are typically bound to machine id (source/destination); now in an environment where machines fail and may never come back, how would it work?
What would the performance cost be of porting the existing application from MPI to a ZooKeeper-based solution?

ECC error injection on Intel Xeon C5500 platform and issue with unlocking Integrated memory controller registers

I am working on an error-detection module and was attempting to test it using the error-injection procedure described in the Intel® Xeon® Processor C5500/C3500 Series Datasheet, Volume 2, section 4.12.40. It says to program the MC_CHANNEL_X_ADDR_MATCH, MC_CHANNEL_X_ECC_ERROR_MASK and MC_CHANNEL_X_ECC_ERROR_MASK registers, but writing to them has no effect. I realized there is a lock for this space, indicated by the MEMLOCK_STATUS register (device 0, function 0, offset 88h), which on my machine reads 0x40401. This means MEM_CFG_LOCKED is set, and I am not able to unlock it via the MC_CFG_CONTROL register (device 0, function 0, offset 90h): writing 0x2 to that register does not unlock the ECC injection registers for writing. How can I achieve this? I am running FreeBSD on bare metal, not in a virtual machine.
To the best of my knowledge, the whole TXT thing that is necessary for this is not supported on FreeBSD.
But this is quite an arcane area. You might have more luck asking this on the freebsd-hackers mailing list.

Implementing asynchronous read/write support in linux device driver

I need to implement asynchronous read/write support in my linux device driver.
The user-space program should get an asynchronous signal from the device driver, indicating that the driver has data that the user-space program can read.
Below are the options I found by googling and from the LDD book.
[1] Implement poll-read. The driver returns status of read/write queue. The user space program can then decide whether to perform read/write on the device.
[2] Implement async notifications. The device driver is able to send a signal to user space when data is ready on driver side. The user space program can then read the data.
However, I have seen developers using a select_read call with the tty driver. I am not sure what support should be added to my existing device driver so that select_read can be used from user space.
I need your advice on the most efficient of the methods above.
Asynchronous notifications (signals) are harder to use correctly, so it is usually recommended to use poll() instead.
You do not need to implement select() separately; both poll() and select() are user-space interfaces that map to your driver's .poll callback in the kernel.
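For completeness, here is a minimal sketch of what that .poll callback tends to look like; names such as my_dev and my_fops are hypothetical, the pattern is the standard poll_wait() idiom from LDD.

    /* Illustrative .poll implementation: reports readability to
     * poll()/select()/epoll() without blocking. */
    #include <linux/fs.h>
    #include <linux/module.h>
    #include <linux/poll.h>
    #include <linux/types.h>
    #include <linux/wait.h>

    struct my_dev {
        wait_queue_head_t read_queue;   /* woken when new data arrives */
        bool data_ready;                /* set by the IRQ/producer path */
    };

    static unsigned int my_poll(struct file *file, poll_table *wait)
    {
        struct my_dev *dev = file->private_data;
        unsigned int mask = 0;

        /* Register the caller on our wait queue; this does not block. */
        poll_wait(file, &dev->read_queue, wait);

        if (dev->data_ready)
            mask |= POLLIN | POLLRDNORM;   /* data is readable */

        return mask;
    }

    static const struct file_operations my_fops = {
        .owner = THIS_MODULE,
        .poll  = my_poll,
        /* .open, .read, .release, ... omitted */
    };

The producer side (typically your interrupt handler or work item) sets data_ready and calls wake_up_interruptible(&dev->read_queue); the very same .poll callback then also serves select() and epoll users.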

Concurrency in the Linux network drivers: probe() VS ndo_open(), ndo_start_xmit() VS NAPI poll()

Could anyone explain whether additional synchronization, e.g. locking, is needed in the following two situations in a Linux network driver? I am interested in kernel 2.6.32 and newer.
1. .probe VS .ndo_open
In a driver for a PCI network card, the net_device instance is usually registered in .probe() callback. Suppose a driver specifies .ndo_open callback in the net_device_ops, performs other necessary operations and then calls register_netdev().
Is it possible for that .ndo_open callback to be called by the kernel after register_netdev() but before the end of the .probe callback? I suppose it is, but maybe there is a stronger guarantee, something that ensures the device cannot be opened before .probe ends?
In other words, if .probe callback accesses, say, the private part of the net_device struct after register_netdev() and ndo_open callback accesses that part too, do I need to use locks or other means to synchronize these accesses?
2. .ndo_start_xmit VS NAPI poll
Is there any guarantee that, for a given network device, .ndo_start_xmit callback and NAPI poll callback provided by a driver never execute concurrently?
I know that .ndo_start_xmit is executed with BH disabled at least and poll runs in the softirq, and hence, BH context. But this serializes execution of these callbacks on the local CPU only. Is it possible for .ndo_start_xmit and poll for the same network device to execute simultaneously on different CPUs?
As above, if these callbacks access the same data, is it needed to protect the data with a lock or something?
References to the kernel code and/or the docs are appreciated.
EDIT:
To check the first situation, I conducted an experiment and added a 1-minute delay right before the end of the call to register_netdev() in e1000 driver (kernel: 3.11-rc1). I also added debug prints there in .probe and .ndo_open callbacks. Then I loaded e1000.ko, and tried to access the network device it services before the delay ended (in fact, NetworkManager did that before me), then checked the system log.
Result: yes, it is possible for .ndo_open to be called even before the end of .probe although the "race window" is usually rather small.
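For what it's worth, the usual way drivers avoid depending on such a guarantee (a hypothetical sketch, not something from this thread) is to finish initializing everything .ndo_open relies on before calling register_netdev(), so that registration is effectively the last step of .probe:

    /* Illustrative probe ordering for a PCI NIC: initialize private data
     * completely, then register the netdev last. struct my_priv and
     * my_netdev_ops are made-up names; hardware bring-up is omitted. */
    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>
    #include <linux/pci.h>
    #include <linux/spinlock.h>

    struct my_priv {
        spinlock_t lock;
        struct pci_dev *pdev;
    };

    static int my_open(struct net_device *netdev) { return 0; }

    static const struct net_device_ops my_netdev_ops = {
        .ndo_open = my_open,
        /* .ndo_stop, .ndo_start_xmit, ... omitted */
    };

    static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        struct net_device *netdev;
        struct my_priv *priv;
        int err;

        netdev = alloc_etherdev(sizeof(*priv));
        if (!netdev)
            return -ENOMEM;

        SET_NETDEV_DEV(netdev, &pdev->dev);
        priv = netdev_priv(netdev);

        /* Finish ALL private-data setup before registration ... */
        spin_lock_init(&priv->lock);
        priv->pdev = pdev;
        netdev->netdev_ops = &my_netdev_ops;

        /* ... because .ndo_open may run as soon as this call returns. */
        err = register_netdev(netdev);
        if (err) {
            free_netdev(netdev);
            return err;
        }
        return 0;
    }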
The second situation (.ndo_start_xmit VS NAPI poll) is still unclear to me and any help is appreciated.
Regarding the ".ndo_start_xmit VS NAPI poll" question, here is how I am thinking about it:
The start-xmit method of a network driver is invoked in the NET_TX_SOFTIRQ context, i.e. it runs in softirq context itself. So does the NAPI receive poll method, though in the NET_RX_SOFTIRQ context.
Now the two softirqs will lock each other out, not race, on any one core. But by design softirqs can certainly run in parallel on SMP; so who is to say that these two methods, running in two separate softirq contexts, will never race?
IOW, I guess it could happen. Be safe, use spinlocks to protect global data.
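A hypothetical sketch of that advice, assuming a driver-private structure touched by both the xmit path and the NAPI poll path: since both callbacks already run in BH/softirq context, a plain spin_lock is enough to serialize them across CPUs (code that also touches the data from process context would need spin_lock_bh there).

    /* Illustrative locking of state shared between .ndo_start_xmit and the
     * NAPI poll callback, which may run concurrently on different CPUs. */
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include <linux/spinlock.h>

    struct my_priv {
        spinlock_t lock;
        unsigned long tx_bytes;      /* example of shared state */
        struct napi_struct napi;
    };

    static netdev_tx_t my_start_xmit(struct sk_buff *skb, struct net_device *dev)
    {
        struct my_priv *priv = netdev_priv(dev);

        spin_lock(&priv->lock);      /* BH is already disabled here */
        priv->tx_bytes += skb->len;
        /* ... queue the skb to the hardware TX ring ... */
        spin_unlock(&priv->lock);

        dev_kfree_skb_any(skb);      /* placeholder: pretend it was sent */
        return NETDEV_TX_OK;
    }

    static int my_napi_poll(struct napi_struct *napi, int budget)
    {
        struct my_priv *priv = container_of(napi, struct my_priv, napi);
        int done = 0;

        spin_lock(&priv->lock);      /* may contend with xmit on another CPU */
        /* ... reclaim completed TX descriptors, update priv->tx_bytes ... */
        spin_unlock(&priv->lock);

        if (done < budget)
            napi_complete(napi);
        return done;
    }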
Also, with modern TCP offload techniques becoming more prevalent, GSO could also be invoked at any point.
HTH!

Networking from a Kernel Mode Driver

The question is pretty self-explanatory: I need the ability to open and control a socket from a kernel-mode driver on Windows XP. I know that Vista and later provide a kernel-mode Winsock equivalent, but there is no such thing on XP.
Cheers
Edit
I've had one recommendation to have a user-mode service do the socket work, and another to use TDI. Which is best?
TDI is not an easy interface to use. It is designed to abstract network transport drivers (TCP, NetBEUI, AppleTalk, etc) from applications. You will have to fully understand the API to be able to use it for socket work - this is certainly a lot more work than writing a user-mode service and communicating with that. You can issue a reverse IRP from the service to the driver, so the driver can trigger comms when it needs to.
Also, the more complexity you remove from your driver (here, into user-mode), the better.
However, using a user-mode service will require a context switch per data transfer to the driver, which for you might be on a per-packet basis. This is an overhead best avoided.
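For illustration, here is a rough sketch of the user-mode half of that inverted-call pattern; the device name and IOCTL code are made up and error handling is trimmed. The service parks a DeviceIoControl request in the driver, the driver completes it when it has something to send, and the service pushes the data out over an ordinary Winsock socket.

    /* Illustrative user-mode service loop: receive "send this" requests from
     * a kernel driver via an inverted DeviceIoControl call, then do the
     * socket I/O in user mode. Device name, IOCTL and peer address are
     * placeholders. */
    #include <winsock2.h>
    #include <windows.h>
    #include <winioctl.h>

    #define IOCTL_MYDRV_GET_PACKET \
        CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

    int main(void)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        /* Connect the outbound socket once (address is a placeholder). */
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(5000);
        dst.sin_addr.s_addr = inet_addr("192.0.2.1");
        connect(s, (struct sockaddr *)&dst, sizeof(dst));

        /* Open the driver's control device (hypothetical name). */
        HANDLE dev = CreateFileA("\\\\.\\MyDriver", GENERIC_READ | GENERIC_WRITE,
                                 0, NULL, OPEN_EXISTING, 0, NULL);

        char buf[1500];
        DWORD got;
        for (;;) {
            /* Blocks inside the driver until it has data to send; the driver
             * completes the IRP to "call back" into user mode. */
            if (!DeviceIoControl(dev, IOCTL_MYDRV_GET_PACKET, NULL, 0,
                                 buf, sizeof(buf), &got, NULL))
                break;
            send(s, buf, (int)got, 0);
        }

        CloseHandle(dev);
        closesocket(s);
        WSACleanup();
        return 0;
    }

Linking against ws2_32.lib is needed; whether this per-packet round trip through user mode is acceptable is exactly the context-switch overhead mentioned above.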
I'm curious as to why a driver needs to perform network I/O. This superficially at least seems to indicate a design issue.
Use the TDI interface; it's available on XP and Vista.
http://msdn.microsoft.com/en-us/library/ff565112.aspx
