Intel SGX HeapMaxSize and EPC page swapping

The enclave configuration file (Enclave.config.xml) contains a HeapMaxSize entry. The SDK User Guide states that this is because
Enclave memory is a limited resource. Maximum heap size is set at enclave creation.
But doesn't the SGX specification allow EPC page swapping (EPA, EBLOCK, ETRACK, EWB)?
Or in a more practical sense: is there a disadvantage to setting HeapMaxSize=2^64 bytes?
Maybe EPC page swapping is not yet supported by the SDK, or maybe the trusted enclave code has to manually trigger such swapping?
Edit
As ab. points out, with SGXv1 all EPC pages have to be EADDed prior to enclave execution. Does the SDK at this point support only SGXv1 instructions?
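
For reference, here is a minimal sketch of what such a configuration file looks like, assuming the Intel SGX SDK's XML format (the values are illustrative, not recommendations):

    <EnclaveConfiguration>
      <ProdID>0</ProdID>
      <ISVSVN>0</ISVSVN>
      <StackMaxSize>0x40000</StackMaxSize>
      <!-- HeapMaxSize is a byte count. Under SGXv1 semantics, every page
           implied by it must be EADDed at enclave creation, which is why
           an enormous value is not free. -->
      <HeapMaxSize>0x100000</HeapMaxSize>
      <TCSNum>1</TCSNum>
      <TCSPolicy>1</TCSPolicy>
      <DisableDebug>0</DisableDebug>
    </EnclaveConfiguration>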

I'm not familiar with the SGX SDK, but note that the SGXv1 paging instructions (EWB/ELDU and friends) still require you to have EADDed all the pages in the first place, and to keep their encrypted contents around somewhere in case they are used. Even if the SDK did support this, it would take your enclave a lot longer to start up, and it would consume a ton of storage space somewhere while running for all the paged-out pages.
SGXv2 addresses this with EAUG/EACCEPT.

Related

Guidelines for providing large downloads in IIS + ASP.NET (MVC)

We want to allow users to download large files from our ASP.NET MVC2 system.
We are providing the files through the Controller.File method, which streams from FileStream to Response.OutputStream.
The reason we use Controller.File instead of providing a direct link is that we need to verify security rules for the logged-in (Forms authentication) user.
What would be the largest areas of concern when doing this?
Security: we'll probably need to increase executionTimeout. Does this expose security issues?
Memory: I assume that, since Controller.File streams the contents directly from disk, the memory implications are minimal.
CPU: I read on various blogs that serving large downloads is heavy on the CPU, but these were unconfirmed statements, and I did not find any recommendations from MS.
Network: how many concurrent downloads are possible? Can we throttle, so that other traffic is not hindered by this?
Other?
What would be your recommendations?
What other options are there besides going through the ASP.NET pipeline that would still provide us with the data we need to validate the logged-in user? ISAPI is said to reduce CPU and memory usage; maybe there are other advantages here?
Are there any (official) guidelines or best practices available concerning this?
I would look to do it asynchronously. Make sure buffering is switched off, so that your data is sent to the client rather than ASP.NET waiting for you to finish. If you're already streaming, that's a good thing. I'm assuming you mean you read x bytes from the FileStream and write those bytes to the output stream, repeating until EOF.
I'm not aware of any guidelines I can point you at. The above comes from my own experience.
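
To make that loop concrete, here is a minimal sketch of the pattern in an MVC controller action. ResolveAndAuthorize is a hypothetical placeholder for your own Forms-authentication checks, and the chunk size is just an assumption:

    using System.IO;
    using System.Web.Mvc;

    public class DownloadController : Controller
    {
        public void LargeFile(string fileName)
        {
            // Hypothetical placeholder: apply your own security rules first.
            string path = ResolveAndAuthorize(fileName, User.Identity.Name);

            Response.BufferOutput = false; // stream as we go, don't buffer the whole file
            Response.ContentType = "application/octet-stream";
            Response.AddHeader("Content-Disposition",
                "attachment; filename=" + Path.GetFileName(path));

            using (var fs = new FileStream(path, FileMode.Open,
                                           FileAccess.Read, FileShare.Read))
            {
                var buffer = new byte[64 * 1024]; // 64 KB chunks, an assumed size
                int read;
                while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                {
                    if (!Response.IsClientConnected) // stop if the client went away
                        break;
                    Response.OutputStream.Write(buffer, 0, read);
                }
            }
        }

        // Hypothetical helper, shown only to mark where validation belongs.
        private string ResolveAndAuthorize(string fileName, string userName)
        {
            // Look up the file and verify the user may download it.
            return Path.Combine(@"C:\Files", Path.GetFileName(fileName));
        }
    }

Checking Response.IsClientConnected on each iteration also keeps abandoned downloads from holding a worker longer than necessary.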

How much space does IIS allocate for sessions in an ASP.NET application?

How much space do IIS 6 and IIS 7 allocate to an ASP.NET application for sessions?
Is there a space limitation?
Thanks
AFAIK, In-Process Mode is limited only by the RAM available on the server. Do you have specific concerns that you might exceed available RAM? If so, either increase your RAM or use an alternative session state mode:
http://msdn.microsoft.com/en-us/library/ms178586.aspx
There are so-called session state providers, so you can actually choose. The ones I'm aware of are:
In memory
Session state service
SQL server
Velocity
Velocity is a distributed cache that can be used as a session state container. I think it's now called, or is part of, AppFabric.
The size of a session depends on your app. There is no real hard limit that I know of. The best thing is to keep sessions as small as possible, but even large sessions can be fine if you plan for them.
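
For completeness, picking one of these providers is a web.config setting. A minimal sketch of the SQL Server mode, with the connection string left as a placeholder:

    <system.web>
      <!-- mode can be InProc, StateServer, SQLServer, or Custom;
           a custom provider is how Velocity/AppFabric plugs in. -->
      <sessionState mode="SQLServer"
                    sqlConnectionString="Data Source=YOUR_SERVER;Integrated Security=SSPI"
                    timeout="20" />
    </system.web>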

Caching Data .Net 4.0 (Asp.NET)

Can anybody explain to me in detail what "CacheSpecificEviction" is and how to avoid it?
I am getting this in CacheEntryRemovedArguments.RemovedReason.
CacheSpecificEviction as the reason for removing a cache entry means "the item was removed because the cache provider's eviction policy determined it should be removed". I know that is pretty unspecific, but it can hardly be more specific, because there are many possible cache-engine implementations with different eviction policies (often configurable, for example in AppFabric Cache, aka Velocity). In general, eviction means "there is a risk of running out of memory, so we should remove some items - for example the Least Recently Used ones (LRU eviction policy), or maybe the Least Frequently Used with Dynamic Aging (LFUDA), etc.". So to get rid of eviction problems, you should check your cache memory usage, limits, and eviction configuration options.
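
If you want to see removal reasons as they happen, a minimal sketch with .NET 4.0's MemoryCache (assuming that is the cache provider in play) is to attach a RemovedCallback to the policy:

    using System;
    using System.Runtime.Caching;

    class EvictionDemo
    {
        static void Main()
        {
            ObjectCache cache = MemoryCache.Default;
            var policy = new CacheItemPolicy
            {
                // Fires on every removal; logging the reason shows how often
                // the provider's eviction policy (vs. expiry or explicit
                // removal) is responsible.
                RemovedCallback = args => Console.WriteLine(
                    "Removed '{0}': {1}", args.CacheItem.Key, args.RemovedReason)
            };
            cache.Set("doc1", "payload", policy);
            cache.Remove("doc1"); // explicit removal reports reason "Removed"
        }
    }

If eviction is firing because of memory pressure, MemoryCache's limits can be raised via the cacheMemoryLimitMegabytes and physicalMemoryLimitPercentage attributes in the <system.runtime.caching> section of your config file.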

How to determine the size in bytes of the ASP.NET Cache?

I'm in active development of an ASP.NET web application that uses server-side caching, and I'm trying to understand how I can monitor the size of this cache during some scale testing. The cache stores XML documents of various sizes, some of which are multi-megabyte.
On the System.Web.Caching.Cache object of System.Web.Caching namespace I see various properties, including Count, which gets "the number of items stored in the cache" and EffectivePrivateBytesLimit, which gets "the number of bytes available for the cache." Nothing tells me the size in bytes of the cache.
In the Understanding Caching Technologies section of the "Caching Architecture Guide for .NET Framework Applications" guide, there is a "Managing the Cache Object" section with a table (Table 2.2: Application performance counters for monitoring a cache) listing a set of application performance counters, but I don't see any that give me the current size of the cache.
What is a good way to find the size of this cache? Do I have to set a byte limit on the cache and look at one of the turnover rates? Am I thinking about this problem in the wrong way? Is the answer to How to determine total size of ASP.Net cache really the best way to go?
I was about to give a less detailed account of the answer you refer to in your question, until I read it - it seems spot on to me, so I would refer you to it. There is no better way than seeing the physical size on the server; anything else might not be accurate.
You might want to set up some monitoring, for which a PowerShell script might be handy, to record the numbers and send them to yourself in a report. That way you could run various tests overnight, say, and summarise the results.
On a side note, those sound like very large documents to be putting in a memory cache. Have you considered a disk-based cache for the larger items, leaving the memory for the smaller items it is better suited to? If your disks are reasonably fast, this should be fairly performant.
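
If an in-process approximation is still useful alongside measuring physical size on the server, one rough sketch is to enumerate the cache (System.Web.Caching.Cache yields DictionaryEntry items) and binary-serialize each serializable value. Note this measures serialized size, which is not the same as the true in-memory footprint:

    using System.Collections;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;
    using System.Web.Caching;

    public static class CacheSizeEstimator
    {
        // Rough estimate only: serialized size != actual heap usage,
        // and non-serializable entries are skipped entirely.
        public static long EstimateBytes(Cache cache)
        {
            long total = 0;
            var formatter = new BinaryFormatter();
            foreach (DictionaryEntry entry in cache)
            {
                if (entry.Value == null || !entry.Value.GetType().IsSerializable)
                    continue;
                using (var ms = new MemoryStream())
                {
                    formatter.Serialize(ms, entry.Value);
                    total += ms.Length;
                }
            }
            return total;
        }
    }

You would call it as CacheSizeEstimator.EstimateBytes(HttpRuntime.Cache), for example from a diagnostics page, and treat the number as an order-of-magnitude indicator rather than a precise measurement.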

Output cache versus application cache?

I have an application that uses the application cache to store the responses generated by custom HTTP handlers. The same response is always returned for requests to the same URL, and the entire response is inserted whole into the cache.
If an application is caching per-URL, is there any advantage to using the application cache? Or should I just be using the output cache?
Note that because I'm using a custom HTTP handler, all of this is being done in C#, not in page directives.
Assuming you do not use authorization and do not serve dynamic content, the lower level you go, the better the results. The lowest level is kernel-mode caching: http://learn.iis.net/page.aspx/154/walkthrough-iis-70-output-caching/
Think of it in terms of an office.
Technically, the request chain is: the boss, a secretary, an answering machine, and the phone line provider.
Imagine an office without a secretary. The boss has to answer every call. This is the scenario without any cache at all.
The application cache is a secretary. It handles the calls so the boss (the application) doesn't have to answer just to say the same thing over and over.
The secretary sits between the boss and the external world and can handle the most common simple scenarios. The boss only gets bothered when there's no secretary at work (low memory).
But the secretary is human, so she goes home at some point in the evening (the ASP.NET application recycles at some point and the application cache gets dropped; in ASP.NET terms, the secretary shares the same app domain as the boss).
This is where the answering machine comes into play. Not only can it screen the secretary from answering the same simple questions over and over again, it also covers for the boss when no secretary is available. It's just a machine, and the client listens to a nice prerecorded voice or some music (the cached item) when neither the secretary nor the boss can answer.
IIS kernel-mode caching is the answering machine for your ASP.NET "office". An answering machine is much cheaper than a secretary. It's just a microcontroller with a tape; it does not even consume coffee, it just plays back a tape or something of that kind.
It does run on the same box, but it performs much better, because it only does the simple task of handing out content at maximum speed, with its own low-level system resource management.
That said, kernel-mode caching is the preferred way to cache semi-dynamic content in terms of performance.
First I'll state the usual caveat that it depends on the specific case - factors such as available web server memory, load, page size, and data size all matter.
That said, if there is not a huge number of URLs and they don't have to be very fresh, then I believe the output cache has the edge, especially if you cache publicly, that is, encourage caching at the ISP and browser level. That saves load on your server and shortens the trip for returning users, or users behind the same ISP or proxy.
I would think it comes down to whether or not you need to adjust the settings of the cache programmatically at runtime from within your code. If you don't, then setting up the output cache declaratively would be fine.
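
Since the handlers set everything up in code rather than page directives, here is a minimal sketch of enabling output caching programmatically from a custom IHttpHandler via the Response.Cache API (the duration and cacheability values are illustrative):

    using System;
    using System.Web;

    public class CachedResponseHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            // Equivalent in spirit to an OutputCache directive: let
            // downstream caches (and, where configured, the kernel-mode
            // cache) serve the same response for this URL for 10 minutes.
            HttpCachePolicy cache = context.Response.Cache;
            cache.SetCacheability(HttpCacheability.Public);
            cache.SetExpires(DateTime.UtcNow.AddMinutes(10));
            cache.SetMaxAge(TimeSpan.FromMinutes(10));
            cache.SetValidUntilExpires(true);

            context.Response.ContentType = "text/plain";
            context.Response.Write("response generated at " + DateTime.UtcNow);
        }
    }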
