Intel SGX: Creating an attestation report within the enclave

I'm using SCONE Docker image to run my code inside Intel SGX enclave.
SCONE handles the creation of enclaves and their eventual destruction, which makes my life as a programmer easier. But if I want to perform a Local or Remote Attestation of my code inside the enclave, I need to rely on SCONE's proprietary service.
I wonder if it's possible to create a report for attestation from within the current enclave. In other words, the code running inside the enclave must be able to create an attestation report and send it out - all without leaving the enclave.
I've checked the Intel SGX documentation and sample code. All of the calls around the EREPORT instruction rely on knowing the enclave ID, which I don't know (SCONE creates enclaves on my behalf). Hence my question is mainly about how to find out the enclave's information from within it.
Is it actually possible?
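For context, a minimal sketch, assuming you had the plain Intel SGX SDK available inside the enclave: the trusted-side wrapper sgx_create_report() is built on EREPORT and needs no enclave ID at all, because EREPORT always describes the enclave that is currently executing it.

/* Trusted (in-enclave) code; sgx_create_report() wraps EREPORT and
 * requires no enclave ID -- it always reports on the calling enclave. */
#include <string.h>
#include <sgx_utils.h>  /* part of the SDK's trusted runtime */

sgx_status_t make_report(const sgx_target_info_t *qe_target_info,
                         sgx_report_t *report)
{
    sgx_report_data_t report_data;
    memset(&report_data, 0, sizeof(report_data));  /* optionally bind 64 bytes of user data */

    /* Targeting the Quoting Enclave yields a report the QE can verify and quote. */
    return sgx_create_report(qe_target_info, &report_data, report);
}

Whether SCONE exposes an equivalent hook inside its runtime is a separate question; the sketch only shows that the hardware itself does not need the enclave ID.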

Does a call to BPXBATCH from JCL use the priority of the batch job or is priority in OMVS independent?

I am calling a shell script that does some processing from JCL using BPXBATCH like this:
//STEP2 EXEC PGM=BPXBATCH,
// PARM='SH PATHTOSCRIPT.SH MYARGUMENT'
The JCL has the service class with the highest priority. However, the shell script enters a queue waiting for resources. Sometimes it runs quickly, and other times it waits a long time for resources. The priority of the JCL seems to be independent of that of the shell script. I have read that using the "nice" command in Unix might increase the priority of the shell script.
First, I want to be sure that the priority of a JCL job on z/OS doesn't affect the priority of a Unix process that was called from that JCL through BPXBATCH. I cannot find any documentation about this.
Short Answer
To answer your question first: BPXBATCH runs in one address space, and the shell runs in a second address space. Commands issued by the shell may run in the same address space as the shell, or in one or more additional address spaces.
The BPXBATCH address space has a service class, and the shell address space(s) have a service class, probably a different one. Each service class has its own performance goal, which tells the system how to manage that work.
Detailed Answer
The z/OS workload manager (WLM) is responsible for assigning work to a service class when new work is presented to it. Service classes specify performance goals and importance levels, not priorities. WLM manages all work in the system towards its performance goal, based on the importance of that goal.
There are a couple of (workload management) subsystems that may start new work. Examples of such subsystems are:
JES, which manages batch work, i.e. batch jobs.
TSO, which manages interactive TSO user work (TSO login).
OMVS, which manages forked, and non-locally spawned z/OS UNIX work.
STC, which manages started task workload.
This list is not complete; I listed only the subsystems that I need to answer the question.
When JES2/3 receives a job that shall run on the system, it presents some job attributes to WLM, and WLM assigns the job to a service class. It does so using WLM classification rules for subsystem type JES, and the attributes given.
Everything that runs in this job, i.e. in the job's address space, will be managed towards the performance goal of the service class assigned. This includes z/OS UNIX work that is run in this very address space, i.e. work that is not started via UNIX fork() or non-local spawn().
When a z/OS UNIX process starts a new process via fork() or via non-local spawn(), this new work is handled by the WLM subsystem OMVS. The OMVS subsystem presents some attributes of the new process to WLM, and WLM assigns the process to a service class. It does so using WLM classification rules for subsystem type OMVS and the attributes given. This kind of work always runs in a separate, new address space.
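A minimal C sketch of the distinction (the script path and argument are made up for illustration): the fork()ed child gets its own address space and is therefore classified under subsystem OMVS, independent of the parent job's service class.

/* Sketch: work started via fork() runs in a new address space, which WLM
 * classifies using the OMVS rules, not the JES rules of the parent job. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                    /* child: new address space */
    if (pid == 0) {
        execl("/bin/sh", "sh", "-c",
              "/u/me/pathtoscript.sh MYARGUMENT", (char *)0);
        _exit(127);                        /* exec failed */
    }
    waitpid(pid, NULL, 0);                 /* parent keeps its own service class */
    return 0;
}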
BPXBATCH starts the (first) UNIX command it is given via PARM= or //STDPARM as a new process, using either fork() or spawn(). The spawn() may be a local spawn() or a non-local spawn(); which one is used depends on many factors, too complex to explain here.
The important point here is: when running BPXBATCH with PARM='SH ...', the shell process will always run in a separate, new address space and will be classified via WLM subsystem OMVS.
The result is that BPXBATCH runs in one address space with its service class, and the shell runs in a second address space with its service class. The service classes may be the same, but usually they are different WLM definitions with different performance goals.
As a starter, have a look at z/OS MVS Planning: Workload Management
nice() on z/OS UNIX
nice() has no effect on z/OS UNIX unless the system has been set up to support it. There is a parameter PRIORITYGOAL(...) in the BPXPRMxx parmlib member to set up a list of up to 40 WLM service classes that will be used in conjunction with nice(). I have never heard of anyone having set this parameter.
See z/OS MVS Initialization & Tuning Reference for details about the BPXPRMxx member.
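A sketch of what that means in code: the call below compiles and succeeds on z/OS UNIX, but without PRIORITYGOAL(...) in BPXPRMxx it does not change how WLM manages the process.

/* nice() is accepted, but on z/OS UNIX it only has an effect if
 * BPXPRMxx specifies PRIORITYGOAL(...) -- which is rarely configured. */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    errno = 0;
    int nv = nice(5);                      /* ask for a lower priority */
    if (nv == -1 && errno != 0)
        perror("nice");
    else
        printf("nice value now %d (no WLM effect without PRIORITYGOAL)\n", nv);
    return 0;
}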

How to get the Quote from an Intel SGX Enclave

Recently I have been developing a trusted computing project with the help of an Intel SGX enclave.
To verify an enclave I need the Quote generated by the Quoting Enclave.
I know how it works theoretically and how to start an enclave.
But I am not able to find any code examples or a detailed explanation of how to receive the quote for an enclave and send it to the calling program.
Can someone please explain that to me through an example?
Thanks!
Well, what you are trying to do is called Attestation.
Attestation is a process to verify:
whether an enclave is running the expected binaries (signed library), and,
whether it is running in a real SGX enabled processor.
Attestation usually is required prior to providing secrets to an enclave. This process is called Provisioning.
There are two kinds of Attestation:
Local Attestation: two enclaves running on the same platform (PC) want to "verify" each other.
Remote Attestation: a Service Provider needs to verify an enclave remotely.
You mention the Quoting Enclave (QE), so I suppose you are using Remote Attestation.
If you are looking for examples, please refer to the example projects coming with the Intel SGX SDK, or the ones available at the Intel SGX site.
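To make the flow concrete, here is a hedged sketch of the classic EPID quote path on the untrusted side, using sgx_uae_service.h from the SDK. enclave_create_report() stands for an ECALL of your own that calls sgx_create_report() inside the enclave; its name and signature are assumptions, not SDK API.

/* Untrusted side: obtain a quote for a report created inside the enclave. */
#include <stdlib.h>
#include <sgx_urts.h>
#include <sgx_uae_service.h>  /* sgx_init_quote, sgx_calc_quote_size, sgx_get_quote */

/* Hypothetical ECALL proxy (generated from your own EDL). */
extern sgx_status_t enclave_create_report(sgx_enclave_id_t eid,
                                          const sgx_target_info_t *ti,
                                          sgx_report_t *report);

sgx_status_t get_quote(sgx_enclave_id_t eid, const sgx_spid_t *spid,
                       sgx_quote_t **quote_out, uint32_t *quote_size_out)
{
    sgx_target_info_t qe_target_info;
    sgx_epid_group_id_t gid;
    sgx_status_t st = sgx_init_quote(&qe_target_info, &gid);   /* 1. get QE target info */
    if (st != SGX_SUCCESS) return st;

    sgx_report_t report;
    st = enclave_create_report(eid, &qe_target_info, &report); /* 2. EREPORT via ECALL */
    if (st != SGX_SUCCESS) return st;

    uint32_t quote_size = 0;
    st = sgx_calc_quote_size(NULL, 0, &quote_size);            /* 3. size, no sigRL */
    if (st != SGX_SUCCESS) return st;

    sgx_quote_t *quote = malloc(quote_size);
    if (!quote) return SGX_ERROR_OUT_OF_MEMORY;
    st = sgx_get_quote(&report, SGX_UNLINKABLE_SIGNATURE, spid,
                       NULL, NULL, 0, NULL, quote, quote_size); /* 4. QE signs report */
    if (st != SGX_SUCCESS) { free(quote); return st; }

    *quote_out = quote;
    *quote_size_out = quote_size;
    return SGX_SUCCESS;
}

The resulting quote is what you send to the remote verifier (via the Intel Attestation Service in the EPID scheme); the SPID comes from registering with Intel.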

R web server that handles sessions

I am not sure if this is the right place to ask this question. If it isn't, please point me to the right place.
I must build a multi-user, stateful (sessions; object persistence) web application that uses .NET on the backend and must connect to R in order to perform calculations on data that lies in a SQL Server 2016 DB. Basically, I need to connect a MS-based backend with R.
Everything is clear, except for the problem that I need to find an R server that handles sessions. I know shiny but I can't use it (long story).
rApache and openCPU do not handle sessions.
Rserve for Windows is very limited (no parallel connections are supported, subsequent connections share the same namespace, and sessions are not supported - this is a consequence of the fact that parallel connections are not supported).
Finally, I have seen Rook (i.e. Run R/Rook as a web server on startup), but I cannot find anywhere, not even in the docs, whether it is able to deal with sessions. My question is: is there a non-stateless R web server, or does anyone know whether Rook is stateless?
EDIT:
Apparently, this question has been around for longer: http://jeffreyhorner.tumblr.com/about#comment-789093732

Intel SGX Threading vs TCS

I'm trying to understand the difference between SGX threads enabled by the TCS and the untrusted threading provided by the SDK.
If I understand correctly, the TCS enables multiple logical processors to enter the same enclave. Each logical processor will have its own TCS and hence its own entry point (the OENTRY field in the TCS). Each thread runs until an AEX happens or it reaches the end of the thread. However, these threads enabled by the TCS have no way to synchronize with each other yet. At least, there is no SGX instruction for synchronization.
Then, on the other hand, the SGX SDK offers a set of thread synchronization primitives, mainly mutexes and condition variables. These primitives are not trusted, since they are eventually served by the OS.
My question is: are these thread synchronization primitives meant to be used by TCS threads? If so, wouldn't this degrade security? The OS is able to play with scheduling as it wishes.
First, let us deal with your somewhat unclear terminology of
SGX threads enabled by TCS and untrusted threading provided by SDK.
Inside an enclave, only "trusted" threads can execute. There is no "untrusted" threading inside an enclave. Possibly, the following sentence in the SDK Guide [1] misled you:
Creating threads inside the enclave is not supported. Threads that run inside the enclave are created within the (untrusted) application.
The untrusted application has to set up the TCS pages (for more background on TCS see [2]). So how can the TCS set up by the untrusted application be the foundation for trusted threads inside the enclave? [2] gives the answer:
EENTER is only guaranteed to perform controlled jumps inside an enclave’s code if the contents of all the TCS pages are measured.
By measuring the TCS pages, the integrity of the threads (the TCS defines the allowed entry points) can be verified through enclave attestation. So only known-good execution paths can be executed within the enclave.
Second, let us look at the synchronization primitives.
The SDK does offer synchronization primitives, which you say are not to be trusted because they are eventually served by the OS. Let's look at the description of these primitives in [1] (a short sketch follows this list):
sgx_spin_lock() and unlock operate solely within the enclave (using atomic operations), with no need for OS interaction (no OCALL). Using a spinlock, you could yourself implement higher-level primitives.
sgx_thread_mutex_init() also does not make an OCALL. The mutex data structure is initialized within the enclave.
sgx_thread_mutex_lock() and unlock potentially perform OCALLs. However, since the mutex data is within the enclave, they can always enforce correctness of locking within the secure enclave.
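A minimal in-enclave sketch of both variants, assuming the SDK's sgx_spinlock.h and sgx_thread.h; the counter and the ECALL names are made up:

/* Trusted (in-enclave) code: one shared counter protected two ways. */
#include <sgx_spinlock.h>
#include <sgx_thread.h>

static sgx_spinlock_t g_spin = SGX_SPINLOCK_INITIALIZER;
static sgx_thread_mutex_t g_mutex = SGX_THREAD_MUTEX_INITIALIZER;
static long g_counter = 0;

void ecall_increment_spin(void)   /* busy-waits entirely inside the enclave */
{
    sgx_spin_lock(&g_spin);
    g_counter++;
    sgx_spin_unlock(&g_spin);
}

void ecall_increment_mutex(void)  /* may OCALL to sleep; mutex state stays trusted */
{
    sgx_thread_mutex_lock(&g_mutex);
    g_counter++;
    sgx_thread_mutex_unlock(&g_mutex);
}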
Looking at the descriptions of the mutex functions, my guess is that the OCALLs serve to implement non-busy waiting outside the enclave. This is indeed handled by the OS, and susceptible to attacks. The OS may choose not to wake a thread waiting outside the enclave. But it can also choose to interrupt a thread running inside an enclave. SGX does not protect against DoS attacks (Denial of Service) from the (potentially compromised) OS.
To summarize, spin-locks (and by extension any higher-level synchronization) can be implemented securely inside an enclave. However, SGX does not protect against DoS attacks, and therefore you cannot assume that a thread will run. This also applies to locking primitives: a thread waiting on a mutex might not be awakened when the mutex is freed. Realizing this inherent limitation, the SDK designers chose to use (untrusted) OCALLs to efficiently implement some synchronization primitives (i.e. non-busy waiting).
[1] SGX SDK Guide
[2] SGX Explained
qweruiop, regarding your question in the comment (my answer is too long for a comment):
I would still count that as a DoS attack: the OS, which manages the resources of enclaves, denies T access to the resource CPU processing time.
But I agree, you do have to design the other threads running in that enclave with the awareness that T might never run. The semantics are different from running threads on a platform you control. If you want to be absolutely sure that the condition variable is checked, you have to do so on a platform you control.
Each proxy function (e.g. when making an ECALL into an enclave) returns an sgx_status_t, which can be SGX_ERROR_OUT_OF_TCS. So the SDK should handle all threading for you - just make ECALLs from two different ("untrusted") threads A and B outside the enclave, and execution will continue in two ("trusted") threads inside the enclave, each bound to a separate TCS (assuming 2 unused TCS are available).
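A sketch of that pattern on the untrusted side, assuming a hypothetical ECALL proxy ecall_do_work generated by sgx_edger8r from your own EDL, and an enclave already created with sgx_create_enclave():

/* Untrusted host: two POSIX threads enter the enclave concurrently. */
#include <pthread.h>
#include <stdio.h>
#include <sgx_urts.h>

extern sgx_status_t ecall_do_work(sgx_enclave_id_t eid);  /* hypothetical proxy */

static sgx_enclave_id_t g_eid;  /* assume set by sgx_create_enclave() earlier */

static void *worker(void *arg)
{
    (void)arg;
    if (ecall_do_work(g_eid) == SGX_ERROR_OUT_OF_TCS)
        fprintf(stderr, "no free TCS for this thread\n");  /* too few TCS slots */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);  /* second TCS needed here */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}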

MPI - Add/remove node while program is running

Is there an MPI implementation that allows nodes to be dynamically added/removed at runtime? Do any recover from complete hardware failure of a node, allowing the node to be repaired and relaunched without restarting the program?
Is there an MPI implementation that allows nodes to be dynamically added/removed at runtime?
This is actually two questions. Nodes usually can be dynamically added at runtime using calls like MPI_Comm_spawn. As @Hristo pointed out in the comments, you should set the correct info key in Open MPI. It may also be possible in other implementations.
As for removing nodes, that is a big area of research at the moment. Most MPI implementations currently have varying levels of success surviving a total node failure. In the current releases of Open MPI, I don't believe there is any support for that sort of failure [citation needed], though there is ongoing work to improve that. In the current version of MPICH, you can pass the flag -disable-auto-cleanup to mpiexec and it will not automatically clean up your application after a process/node failure. However, you'll still have to modify your MPI application to handle this situation. The various derivatives of MPICH (Intel MPI, Cray MPI, IBM MPI, MVAPICH, etc.) all don't support this feature AFAIK.
There are other research implementations that extend the support of the MPI Standard. User Level Failure Mitigation (ULFM) is currently being considered by the standardization body as a way of letting the user handle process failures. There is a research implementation based on Open MPI available at the website linked, and an experimental prototype will also be in the next version of MPICH (3.2).
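A hedged sketch of the spawn side; the "add-host" info key is Open MPI specific (presumably the key @Hristo means), and the worker binary and host name are made up:

/* Dynamically add one process at runtime with MPI_Comm_spawn. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "add-host", "newnode");   /* Open MPI: extend the host list */

    MPI_Comm children;
    int errcode;
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 1, info, 0,
                   MPI_COMM_WORLD, &children, &errcode);  /* one new rank */

    MPI_Info_free(&info);
    MPI_Comm_disconnect(&children);
    MPI_Finalize();
    return 0;
}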
Do any recover from complete hardware failure of a node, allowing the node to be repaired and relaunched without restarting the program?
This is essentially the same process as above. You would need to use the APIs to remove a process, then somehow find out that the node is available again and add it back using spawn. These calls have to be made from inside the application, though, not externally.
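For completeness, a rough sketch of what such a recovery step could look like with the ULFM research prototype mentioned above; the MPIX_* call is a prototype extension, not standard MPI, and ./worker is a made-up replacement binary:

/* ULFM-style recovery: shrink away failed ranks, then respawn a replacement. */
#include <mpi.h>
#include <mpi-ext.h>   /* MPIX_* extensions of the ULFM prototype */

void recover(MPI_Comm *comm)
{
    MPI_Comm shrunk;
    MPIX_Comm_shrink(*comm, &shrunk);            /* drop the failed processes */

    MPI_Comm children;
    int errcode;
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                   0, shrunk, &children, &errcode);  /* relaunch on a repaired node */

    MPI_Comm merged;                             /* fold the new rank back in */
    MPI_Intercomm_merge(children, 0, &merged);

    MPI_Comm_free(&shrunk);
    *comm = merged;
}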
