I read that the TPM takes measurements of all critical components and writes their hashes to its PCRs at boot time.
Does the TPM also take measurements at run time, while these components are in operation?
The TPM itself does not take any measurements at all, not even at boot time. It is a place where a trust-enabled piece of code can store measurements in a tamper-proof way.
During boot, the measurements are taken by the firmware (BIOS, UEFI) and stored in the TPM. It is possible to configure your system so that additional measurements are also taken after the firmware has finished, for example by a trusted boot loader.
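To make "storing measurements in a tamper-proof way" concrete: a PCR can never be overwritten, only extended, which means the new measurement is folded into the old value with a hash. Below is a minimal sketch of that hash chain, simulated in software with OpenSSL's SHA-1 (the digest TPM 1.2 PCRs use); in reality the operation happens inside the TPM chip, and the sample strings in main are purely illustrative.

```c
/* Sketch: how a PCR accumulates measurements (simulated outside the TPM).
 * new_PCR = SHA1( old_PCR || SHA1(component) )
 * Build: gcc pcr_extend.c -lcrypto */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

#define PCR_LEN SHA_DIGEST_LENGTH   /* 20 bytes for SHA-1 (TPM 1.2) */

/* Fold one measurement into the PCR value. */
static void pcr_extend(unsigned char pcr[PCR_LEN],
                       const unsigned char *component, size_t len)
{
    unsigned char measurement[PCR_LEN];
    unsigned char buf[2 * PCR_LEN];

    SHA1(component, len, measurement);            /* hash of the measured code/data */
    memcpy(buf, pcr, PCR_LEN);                    /* old PCR value ... */
    memcpy(buf + PCR_LEN, measurement, PCR_LEN);  /* ... concatenated with the measurement */
    SHA1(buf, sizeof buf, pcr);                   /* becomes the new PCR value */
}

int main(void)
{
    unsigned char pcr[PCR_LEN] = {0};             /* PCRs start out as all zeros */
    pcr_extend(pcr, (const unsigned char *)"bootloader image", 16);
    pcr_extend(pcr, (const unsigned char *)"kernel image", 12);

    for (int i = 0; i < PCR_LEN; i++) printf("%02x", pcr[i]);
    printf("\n");
    return 0;
}
```

Because each new value depends on the previous one, you cannot set a PCR to an arbitrary value after the fact; you can only append further measurements.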
If you are interested in extending the chain of trust further to every executed bit of code, projects like IBM's Integrity Measurement Architecture are worth looking at. However, I consider those measurements pointless. What do you do with them? There are rarely any cases where you can actually verify that a certain chain of measurements is trusted.
You may also write your own piece of software that stores measurements at any given time or use tools like jTSS, TrouSerS or IBM's libtpm tools.
Related
Is it possible to prove to a remote party, using DRTM or SRTM, that the application running on my system is the one I claim to be running? If yes, how?
Theoretically: yes. The concept is called remote attestation.
The basic idea is: First you have a sound chain of trust built on your platform, like:
BIOS ==> Boot loader ==> OS ==> Applications
The resulting measurements are stored in the PCRs.
Now you can have the TPM sign this set of PCRs; that is called a quote.
You can submit this quote to a remote entity. Here the problems start:
How can you prove that the quote was signed by a hardware TPM and not an emulator?
Possible solutions: pre-shared keys or some kind of CA.
How can you be sure that the PCR values represent a trusted system state?
That's not so easy. If you have SRTM, you have to consider every possible combination of how your system loads the components. E.g. in the BIOS phase, in which order are the option ROMs loaded?
Here DRTM comes to the rescue, but it makes the matter only slightly easier. With DRTM you can forget about all the pre-DRTM stuff. If you have a small trusted environment, like Flicker, then you'll have a manageable set of trusted configurations. If you have a full-featured OS, then it's hard.
First, you have to find an OS that measures everything. IBM's IMA for the Linux kernel is one example.
Then, the slightest difference in the order of loaded components will lead to different PCR values. Furthermore, consider all the combinations of states the different installed software packages might be in.
A possible solution is to restrict the set of PCR values that represent a valid configuration. For example, you can measure a whole OS image instead of each binary. An example is the acTvSM platform published a few years ago.
Conclusion: There is no easy, off-the-shelf solution, but you can design a system such that it fits your requirements.
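To make the verification step more concrete: a verifier typically replays a measurement log (e.g. the IMA log or a TCG event log) and checks that the replayed extends reproduce the PCR value contained in the quote. Here is a minimal sketch of that replay, assuming a SHA-1 PCR and a log whose entries already contain the 20-byte measurements; checking the quote's signature against the TPM's attestation key (and the nonce) is a separate step omitted here.

```c
/* Sketch: replaying a measurement log to check a quoted PCR value.
 * Assumes each log entry is the 20-byte SHA-1 measurement that was extended.
 * Build: gcc -c replay.c (link with -lcrypto) */
#include <openssl/sha.h>
#include <string.h>

#define PCR_LEN SHA_DIGEST_LENGTH

/* Replay: PCR = SHA1(PCR || measurement) for every entry, starting from zeros. */
static void replay_log(const unsigned char entries[][PCR_LEN], size_t n,
                       unsigned char out[PCR_LEN])
{
    unsigned char buf[2 * PCR_LEN];
    memset(out, 0, PCR_LEN);
    for (size_t i = 0; i < n; i++) {
        memcpy(buf, out, PCR_LEN);
        memcpy(buf + PCR_LEN, entries[i], PCR_LEN);
        SHA1(buf, sizeof buf, out);
    }
}

/* Returns 1 if the quoted PCR matches the replayed log, 0 otherwise.
 * A real verifier would also check the quote's AIK signature and the nonce,
 * and would then decide whether each logged component is on its whitelist. */
int pcr_matches_log(const unsigned char quoted_pcr[PCR_LEN],
                    const unsigned char entries[][PCR_LEN], size_t n)
{
    unsigned char expected[PCR_LEN];
    replay_log(entries, n, expected);
    return memcmp(expected, quoted_pcr, PCR_LEN) == 0;
}
```

Note that a matching PCR only tells you the log is authentic; deciding whether the logged components themselves are trustworthy is exactly the hard part described above.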
I have a question related to MPI.
In order to keep track of the communication volume used by my implementation, I would like to get the amount of data transferred from the MPI process's start up to the current measurement point.
I checked the specification as well as the mpi.h header file of mpich and did not find a matching function to call or variable that keeps track of the network transfer costs. It would, of course, be possible to implement a small traffic registry or define a macro for tracking communication sizes, but maybe it can be read out from somewhere.
Do you know a method to obtain the current transfer size? Maybe it is also possible to get this number using a system call that reports the network traffic of the process?
Is it perhaps possible to access the proc information of the current process? Maybe /proc/net is maintained per process as well, such as /proc/self/net?
Thank you in advance,
Martin
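One way to implement the "small traffic registry" idea without touching the application code is the standard MPI profiling interface (PMPI): every MPI function has a PMPI_ twin, so you can interpose your own MPI_Send that counts bytes before forwarding the call. Below is a minimal sketch for the blocking send path only; a real tracker would also wrap MPI_Isend, MPI_Recv, the collectives, etc. The const qualifier on buf assumes an MPI-3 implementation such as recent MPICH, and get_bytes_sent is just an illustrative helper name.

```c
/* comm_counter.c - link this into (or ahead of) your application so that
 * this MPI_Send shadows the library's, then forward to PMPI_Send. */
#include <mpi.h>

static long long bytes_sent = 0;   /* assumes single-threaded use; add atomics otherwise */

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    int type_size;
    MPI_Type_size(datatype, &type_size);          /* size of one element in bytes */
    bytes_sent += (long long)count * type_size;   /* record the payload size */
    return PMPI_Send(buf, count, datatype, dest, tag, comm);  /* real send */
}

/* Call this at any measurement point to read the counter. */
long long get_bytes_sent(void)
{
    return bytes_sent;
}
```

This counts application payload bytes rather than actual wire traffic (headers, retransmissions, shared-memory transfers are invisible here), which is usually what you want for comparing implementations.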
I need to complete some performance tests on SOA appliances with side cache.
I have developed a simple application to generate SOAP/HTTP traffic, but now I need some way of monitoring the end-to-end application performance.
One vital metric I require is an accurate figure for transactions per second (TPS), as well as end-to-end response time.
I've used SoapUI and LoadUI but just do not believe the reported TPS figures, as they seem very high, e.g. > 1300 TPS.
Can anyone recommend a method to measure TPS that is "foolproof"?
I'd suggest cross-checking SoapUI's numbers against the logs from your server (count the number of lines with the same second), or cross-check this way:
Time the test run yourself.
Verify that the number of transactions SoapUI cites is accurate (logs or another measure on the server itself).
Divide trans count by seconds.
In the past I've done this and found SoapUI to be pretty reliable.
One thing to keep in mind, in terms of whether your numbers are as good as they can be, is whether you might need to run SoapUI from more than one machine simultaneously. I suggest monitoring the CPU, memory, bandwidth, etc. on the SoapUI machine. If any of these get rather high, run the test on two machines simultaneously with very close to the same start and stop times; then you can safely add the two TPS numbers.
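If you want to automate the log-based cross-check mentioned above, the core of it is just bucketing log lines by second. Here is a rough sketch; it assumes each access-log line contains a timestamp of the form HH:MM:SS and that the log covers only the test window, so adjust the parsing to your server's log format.

```c
/* tps_from_log.c - rough TPS cross-check from an access log read on stdin.
 * Build: gcc -o tps_from_log tps_from_log.c
 * Run:   ./tps_from_log < access.log */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[4096], prev[16] = "", cur[16];
    long total = 0, in_second = 0, peak = 0, seconds = 0;

    while (fgets(line, sizeof line, stdin)) {
        int h, m, s, found = 0;

        /* find the first HH:MM:SS token anywhere on the line */
        for (const char *p = line; *p; p++) {
            if (sscanf(p, "%2d:%2d:%2d", &h, &m, &s) == 3 &&
                h < 24 && m < 60 && s < 60) {
                snprintf(cur, sizeof cur, "%02d:%02d:%02d", h, m, s);
                found = 1;
                break;
            }
        }
        if (!found) continue;            /* line without a timestamp */

        total++;
        if (strcmp(cur, prev) != 0) {    /* a new second has started */
            if (in_second > peak) peak = in_second;
            strcpy(prev, cur);
            in_second = 0;
            seconds++;
        }
        in_second++;
    }
    if (in_second > peak) peak = in_second;

    printf("transactions: %ld over %ld distinct seconds\n", total, seconds);
    printf("average TPS: %.1f   peak TPS: %ld\n",
           seconds ? (double)total / seconds : 0.0, peak);
    return 0;
}
```

The "average" here divides by distinct seconds that actually appear in the log, so it is a rough figure for bursty traffic, but it is an independent number you can compare against what SoapUI reports.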
I have a specific scenario for which we want to use Coherence as a distributed cache, which I am going to describe here.
I have 20+ standalone processes which are going to put data into the cache continuously. Their frequencies differ, though that's not a concern.
And 2 processes which will be reading data from that cache.
I don't need any underlying DB beyond what Coherence provides. Data will be written to the cache and read from the cache.
I have a 4-node cluster at my disposal (cost constraint), the Coherence cluster will be on different boxes (infra constraint), and both the cache-populating processes and the reading processes will be on different machines.
The daily peak memory size of the cache will hover around 6 GB max, with a minimum of 2 GB.
The cache will hold daily data only, and I will have separate archiving processes that simultaneously keep archiving it. The point is that, for now, the cache will stay around this size. Let's say I am going to keep the date out of the key equation.
Still, I would like to explore whether I can store more in those 4 nodes. Right now it's simple serialization; I could explore other binary formats. Or should I definitely do that at this cache size?
My read and write operations are fairly spread out over the day, meaning reads and writes keep happening from those 2 reading clients and 20+ writing clients; it's not as if one of them dominates. There is a startup batch phase in each of the background processes which pushes more to the cache than the continuous pushing afterwards, but the continuous pushing pushes a fair amount of data too.
Now my questions regarding the points above (and also because of some confusion):
The biggest one: somebody told me that I can only have a limited number of connections depending on the number of nodes we have bought. He said that if it's 4, you should ideally have at most 4 connections, so develop a gatekeeper kind of application and so on, even if we use TCP Extend. From my reading so far, I don't think so. Is that right? The point is I don't want to go that way if it really is not a constraint.
In other words, is there a limit on connections through the proxy service depending on the number of nodes in the cluster?
Somewhat related to the above: at most, I am going to take some performance penalty while pushing to the cache if I go the Extend way, right?
Partitioned cache vs. near cache, given that both read latency and having the most up-to-date cache are extremely critical (the most important question I have).
I really want to see the benefit that can be obtained from going to POF instead of, let's say, Java serialization/Externalizable/protobuf. Can Coherence support protobuf out of the box? (Maybe for later on.)
There's no technical limitation to the number of connections a Coherence Extend proxy can support except normal network and hardware resource constraints. You will have to ask an Oracle sales person if there are licensing limitations.
There is some performance impact from using a proxy because you are adding an additional network hop (client to proxy to cluster). If you use POF serialization then the proxy does not have to serialize/deserialize values; it can just pass the object through in its serialized form. In most applications the performance impact of using a proxy is tiny because Coherence is highly optimized for network speed. You are not required to use a proxy unless your clients are .NET or C++, but there are advantages to isolating clients so that their behavior cannot impact the cache.
A near cache will improve retrieval performance dramatically if there are a number of frequently retrieved items for a client, since they will be found in-process.
POF offers performance improvements based on faster serialization/deserialization and more compact storage. It is always best to try with test data based on your real production data and measure the difference yourself. Coherence does not support protobuf out of the box.
I've been tasked with creating a WCF service that will query a db and return a collection of composite types. Not a complex task in itself, but the service is going to be accessed by several web sites which in total average maybe 500,000 views a day.
Are there any special considerations I need to take into account when designing this?
Thanks!
No special problems for the development side.
Well-designed WCF services can serve thousands of requests per second. Here's a benchmark for WCF showing 22,000 requests per second, using a blade system with 4x HP ProLiant BL460c blades, each with a single quad-core Xeon E5450 CPU. I haven't looked at the complexity or size of the messages being sent, but it sure seems that on a mainstream server from HP you're going to be able to get 1,000 messages per second or more, and with good design, scale-out will just work. At that rate, 500k per day is not particularly stressful for the communications layer built on WCF.
At the message volume you are working with, you do have to consider operational aspects.
Logging
Most system ops people I have spoken to who oversee WCF systems (and other .NET systems) use an approach where, in the morning, they want to look at basic vital signs of the system:
moving averages of request volume: 1min, 1hr, 1day.
comparison of those quantities with historical averages
error/exception rate: 1min, 1hr, 1day
comparison of those quantities with historical averages
If your exceptions are low enough in volume (in most cases they should be), you may wish to log every one of them into a special application event log, or some other audit log. This requires some thought - planning for storage of the audits and so on. The reason it's tricky is that in some cases, highly exceptional conditions can lead to very high-volume logging, which exacerbates the exceptional conditions - a snowball effect. You definitely want some throttling on the exception logging to avoid this: a "pop-off valve", if you know what I mean.
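The "pop-off valve" itself is simple to sketch, and the idea is language-agnostic; the windowed-counter logic below is shown in C only for brevity and carries straight over to a .NET logging wrapper. The window length, limit, and function names here are made up for illustration.

```c
/* Sketch of throttled exception logging: allow at most MAX_PER_WINDOW
 * log writes per window, drop (and count) the rest to avoid a snowball. */
#include <stdio.h>
#include <time.h>

#define WINDOW_SECONDS   60    /* illustrative window length */
#define MAX_PER_WINDOW  100    /* illustrative per-window limit */

static time_t window_start = 0;
static int    logged_in_window = 0;
static long   dropped = 0;

void log_exception(const char *msg)
{
    time_t now = time(NULL);

    if (now - window_start >= WINDOW_SECONDS) {      /* start a new window */
        if (dropped > 0)
            fprintf(stderr, "[throttle] dropped %ld exception logs in last window\n",
                    dropped);
        window_start = now;
        logged_in_window = 0;
        dropped = 0;
    }

    if (logged_in_window < MAX_PER_WINDOW) {
        logged_in_window++;
        fprintf(stderr, "EXCEPTION: %s\n", msg);      /* write to the audit log here */
    } else {
        dropped++;                                    /* suppressed, but still counted */
    }
}
```

Keeping a count of the dropped entries means the morning report can still show that an exception storm happened, even though most of the individual entries were suppressed.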
Data store
And of course you need to ensure that the data source, whatever it is, can support the volume of queries you are throwing at it. As a matter of good citizenship, you may want to implement caching on the service to relieve load on the data store.
Network
With the benchmark I cited, the network was a pretty wide open gigabit ethernet. In your environment, the network may be shared, and you'll have to check that the additional load is reasonable.