Remotely Verifying the Application in Execution (Intel)

Is it possible to prove to a remote party that the application running on my system is the one I claim to be running, using DRTM or SRTM? If yes, how?

Theoretically: yes. The concept is called remote attestation.
The basic idea is: First you have a sound chain of trust built on your platform, like:
BIOS ==> Boot loader ==> OS ==> Applications
The resulting measurements are stored in the PCRs.
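To make the measurement step concrete, here is a minimal sketch of the extend operation in Python. It is illustrative only: the component names are placeholders, and TPM 1.2 PCRs actually use SHA-1 rather than SHA-256.

    import hashlib

    def extend(pcr, measurement):
        # TPM extend: PCR_new = H(PCR_old || H(component)).
        # A PCR can only be extended, never written directly, so the final
        # value encodes every loaded component and the order of loading.
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    pcr = bytes(32)  # PCRs are reset to zero at boot
    for component in (b"bootloader", b"kernel", b"application"):
        pcr = extend(pcr, component)
    print(pcr.hex())  # changes if any component changes, or the order does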
Now you can let the TPM sign this set of PCRs; that is called a quote.
You can submit this quote to a remote entity. Here the problems start:
How can you prove that the quote was signed by a hardware TPM and not an emulator?
Possible solutions: pre-shared keys or some kind of CA.
How can you be sure that the PCR values represent a trusted system state?
That's not so easy. With SRTM, you have to consider every possible combination of
how your system loads its components. E.g., in the BIOS phase, in which order are
the option ROMs loaded?
Here DRTM comes to the rescue, but it makes matters only slightly easier. With DRTM
you can forget about all the pre-DRTM stuff. If you have a small trusted environment,
like Flicker, then you'll have a manageable set of trusted configurations.
If you have a full-featured OS, then it's hard.
First, you have to find an OS that measures everything. IBM's IMA (Integrity
Measurement Architecture) for the Linux kernel is one example.
Then, the slightest difference in the order of loaded components will lead to
different PCR values. Furthermore, consider all the combinations of states that the
different installed software packages might be in.
A possible solution is to restrict the set of PCR values that represent a valid
configuration, for example by measuring a whole OS image instead of each binary.
One example is the acTvSM platform, published a few years ago.
Conclusion: There is no easy, off-the-shelf solution, but you can design a system such that it fits your requirements.
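Putting the pieces together, here is a rough verifier-side sketch in Python. It assumes the quoted blob, its signature, and the reported PCR digest have already been extracted from the quote (the TPM structures are not parsed here), that the AIK public key was exchanged out of band (the pre-shared-key/CA problem above), and that RSA PKCS#1 v1.5 with SHA-256 is used; all of these are assumptions for illustration.

    import hashlib
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # Golden values: PCR contents of a configuration you trust (hypothetical).
    GOLDEN_PCRS = {0: bytes.fromhex("aa" * 32), 4: bytes.fromhex("bb" * 32)}

    def expected_pcr_digest(pcrs):
        # Digest over the selected PCR values, concatenated in index order.
        h = hashlib.sha256()
        for index in sorted(pcrs):
            h.update(pcrs[index])
        return h.digest()

    def verify_quote(aik_pem_path, quoted_blob, signature, reported_pcr_digest):
        with open(aik_pem_path, "rb") as f:
            aik = serialization.load_pem_public_key(f.read())
        # 1. The signature proves the blob came from the TPM holding the AIK
        #    (raises InvalidSignature otherwise).
        aik.verify(signature, quoted_blob, padding.PKCS1v15(), hashes.SHA256())
        # 2. The PCR digest must match a configuration we consider trusted.
        if reported_pcr_digest != expected_pcr_digest(GOLDEN_PCRS):
            raise ValueError("PCR values do not match a trusted configuration")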


How to customize BlueZ?

I will be asking a very subjective question, but it is important, as I am looking to recover from a failure to use BlueZ programmatically.
Basically, I envision an IoT edge device that runs on a miniature computer (e.g., a Raspberry Pi or Intel Compute Stick). The device would run Alpine Linux and interact with the cloud.
Since this is an IoT environment, the importance of Bluetooth LE over the ISM band goes without saying; hence the central importance of being able to customize and work with BlueZ.
I am looking to do several things with BlueZ BLE, including but not limited to:
Advertising
Pairing
Characteristics
Broadcast
Secure transport of data, etc.
Since I will need full control over the data, for data processing and interaction with the cloud (edge AI or data science in the cloud), I am looking at three ways of using BlueZ:
1. Make D-Bus API calls to BlueZ methods.
2. Modify the BlueZ codebase and install a custom binary (so that callback handlers can be registered and a wealth of other BlueZ methods can be invoked).
3. Invoke BlueZ using command-line utilities like hcitool/bluetoothctl inside a program via system() calls.
No. 1 is where I have failed. It takes an exorbitant amount of effort to construct and export D-Bus objects and then invoke BlueZ methods. Plus, there is no guarantee that you will be able to take care of all BLE issues.
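For concreteness, this is roughly what option 1 looks like with the pydbus binding (one of several Python D-Bus libraries; the adapter object path /org/bluez/hci0 is system-dependent):

    from pydbus import SystemBus

    bus = SystemBus()
    # org.bluez.Adapter1 is the standard BlueZ adapter interface
    adapter = bus.get("org.bluez", "/org/bluez/hci0")

    adapter.Powered = True    # D-Bus property write
    adapter.StartDiscovery()  # D-Bus method call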
No. 2 looks very promising, and I want to fully explore how feasible it is to modify the BlueZ code to my needs.
No. 3 is the least desirable option, but I want to keep it as a fallback nevertheless.
Given my problem statement, what is the most viable strategy forward? I am asking this out loud so that I do not make more missteps and cost myself time and effort.
Your best strategy is to start with the second way (which you already found promising), as it is a viable solution and many developers go this route to create their BlueZ programs. Here is what I would do:
Write all the functionality of the system in some sort of flowchart or state machine. This helps you visualise your whole system and what needs to be done to reach your end goal.
Try to perform all the above functionality manually using bluetoothctl and btmgmt. This includes advertising, pairing, etc. I recommend steering away from legacy commands such as hcitool and hciconfig as these have been deprecated and have a very different code structure.
When you stumble upon something that is not the default in bluetoothctl/btmgmt, or you want to tweak the functionality, update the source to do so.
Finally, once you manually get the system to perform the functionality that you need (it doesn't have to be all, it can just be a subset of the functions), you can move to automating the whole process. This involves modifying the source for bluetoothctl/btmgmt commands so that instead of manual intervention, everything would be event-driven.
As a bonus, if you can create automated tests using Python or some other scripting language, this will ensure that your system is robust and that existing functionality doesn't break when you add new features.
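As a sketch of that bonus step, here is a test that drives bluetoothctl from Python via subprocess (the expected output string is an assumption and may vary between BlueZ versions):

    import subprocess

    def test_adapter_powered():
        result = subprocess.run(
            ["bluetoothctl", "show"], capture_output=True, text=True, timeout=10
        )
        # "Powered: yes" appears in bluetoothctl's adapter summary
        assert "Powered: yes" in result.stdout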
By the end of this process, you'll have a much better understanding of the internals of bluetoothctl/btmgmt and the D-Bus APIs, to the point that you might be able to completely detach your code from the original bluetoothctl/btmgmt or create your program from scratch.
You probably already know this, but when modifying the tools, these are the starting points for the source code:
bluetoothctl - client/main.c
btmgmt - tools/btmgmt.c
For more references on using bluetoothctl and btmgmt commands, please see the links below:
BlueZ D-Bus C or C++ Sample
Bluetoothctl set passkey
https://stackoverflow.com/a/51876272/2215147
Bluez Programming
Linux command line howto accept pairing for bluetooth device without pin
https://stackoverflow.com/a/52982329/2215147
Bluetooth Low Energy in C - using Bluez to create a GATT server
I hope this helps.

FreeSWITCH minimal installation and module selection

As someone who is very new to open-source PBX projects like Asterisk and FreeSWITCH, I am grappling with some information overload. I have read the basic FreeSWITCH docs on the wiki, but still have a few questions. Since I am not very familiar with the terminology, I will try to use close approximations.
I am trying to create a small/minimalistic build of FreeSWITCH that needs to run on a rather old laptop (Celeron 1 GHz, 512 MB RAM, 20 GB HDD, already running Debian "Wheezy"), set up as a 6-port GSM-SIP/Jabber gateway. By "small" and "minimalistic" I mean a build that doesn't include modules or optional software that isn't absolutely necessary (e.g., no need for IVR announcements or Skype integration), to keep the memory footprint as small as possible and occupy less hard-disk real estate.
The rough idea is to have 6 GSM ports (via the GSMopen module, similar to chan_dongle) towards the public telephony network and about 60 SIP extensions, and to support up to 6 calls involving GSM ports and about 6 SIP-SIP calls (intra-PBX) on this setup. I have read that the CPU overhead of the GSMopen module is pretty low, so I am guessing this is possible.
Can someone confirm this to be a realistic goal?
What might be the minimum set of modules to select for minimalistic build?
For modules not chosen during the initial build, can those be added later? If so, would that require me to rebuild FreeSWITCH completely, or only the modules, or would everything be built already so that only configuration changes are needed to ensure the modules are loaded and configured?
Is there any rough estimate of the maximum call rate that could be supported in such a configuration? For SIP-SIP calls? Given the underpowered processor and little RAM (by modern standards), I am guessing that both will be bottlenecks, but adding RAM might still be possible (even if costly and difficult).
I have read that "hooks" can be created using Lua/Python/Java, etc. However, if someone could share a few examples of what is possible using such hooks, it would make the concept clearer. Can one hope to write an application like a "missed-call log" or "redirect on no answer" using these hooks?
Can someone confirm this to be a realistic goal?
Yes, this is quite realistic. You should aim for as little transcoding as possible, because that's where CPU resources are spent. But even with a 1 GHz Celeron, 6 transcoded sessions seem quite realistic. It needs testing, though :)
What might be the minimum set of modules to select for minimalistic build?
Just start with the default list of modules and add GSMopen (I have no experience with GSM gateways, so I can't help with that part). The memory footprint is pretty low, and you may need some of those modules later.
For modules not chosen during initial build, can those be added later?
As far as I remember, the wiki describes this process: you edit modules.conf and make the specific module.
Is there any rough estimate of the maximum call rate that could be supported in such a configuration? For SIP-SIP calls? Given the underpowered processor and little RAM (by modern standards), I am guessing that both will be bottlenecks, but adding RAM might still be possible (even if costly and difficult).
It really depends on the complexity of your dialplan. Each context consists of a number of conditions, which perform regex matches on channel variables. So the more complex your dialplan is, the fewer calls per second you get. But for a 6-channel gateway, I don't see this being a problem. The GSM network will be much slower than your box :)
I have read that "hooks" can be created using Lua/Python/Java, etc. However, if someone could share a few examples of what is possible using such hooks, it would make the concept clearer. Can one hope to write an application like a "missed-call log" or "redirect on no answer" using these hooks?
You can control every aspect of FreeSWITCH's behavior through such hooks. There are even examples where the complete dialplan is re-implemented by an external program (Kazoo does that).
The simplest mode of operation is when your Lua/JS/Perl/Python script is launched from within the dialplan: it receives a "session" object, and you can do whatever you want with the call: play sounds, bridge, forward, make a new call and bridge the two together, and so on. There is a little practical example in my blog.
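For illustration, here is a sketch of that mode using mod_python (assuming FreeSWITCH was built with it; mod_lua works the same way). The extension user/1001 and the voicemail arguments are hypothetical; the script would be invoked from the dialplan with <action application="python" data="redirect_on_no_answer"/>.

    # redirect_on_no_answer.py - a "redirect on no answer" sketch
    def handler(session, args):
        session.answer()
        session.execute("bridge", "user/1001")  # try the desk phone first
        if session.getVariable("originate_disposition") != "SUCCESS":
            # nobody answered: send the caller to voicemail instead
            session.execute("voicemail", "default example.com 1001")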
Then, you can build an external application that connects to the FreeSWITCH socket, monitors events, and performs actions on active calls.
Also, it can be done in the opposite direction: you run a server, and FS connects to it with its socket library.
Also, you can have an HTTP service that delivers pieces of XML configuration to FreeSWITCH, which requests them on every call (this would be the most CPU-intensive approach). This way, you can feed FreeSWITCH from some internal database and build fault-tolerant systems.
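A sketch of that approach (mod_xml_curl in FreeSWITCH terms), here using Flask: FreeSWITCH POSTs form fields describing the call and expects a freeswitch/xml document back. The endpoint path and the catch-all dialplan below are hypothetical.

    from flask import Flask, Response

    app = Flask(__name__)

    DIALPLAN = """<document type="freeswitch/xml">
      <section name="dialplan">
        <context name="default">
          <extension name="catch-all">
            <condition field="destination_number" expression="^(\\d+)$">
              <action application="bridge" data="user/$1"/>
            </condition>
          </extension>
        </context>
      </section>
    </document>"""

    @app.route("/fsxml", methods=["POST"])
    def fsxml():
        # A real service would inspect the POSTed form fields and build
        # the XML from a database lookup here.
        return Response(DIALPLAN, mimetype="text/xml")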
I hope this helps :)
You can also find me on Skype if needed.
FreeSWITCH is not really memory-hungry, and you can simply start with the default set of modules (best to use the prebuilt Debian packages). For example, on my 64-bit machine, the FreeSWITCH process occupies only about 35 MB of memory:
freeswitch@vx03:~$ uname -a
Linux vx03 2.6.32-5-xen-amd64 #1 SMP Thu Nov 3 05:42:31 UTC 2011 x86_64 GNU/Linux
freeswitch@vx03:~$ ps -p 11873 v
  PID TTY      STAT   TIME MAJFL   TRS    DRS   RSS %MEM COMMAND
11873 ?        S<l   10:29     0     0 258136 36852  2.3 /opt/freeswitch/bin/freeswitch -nc -rp -nonat -u freeswitch -g freeswitch
I will go through the rest of your questions later today.

Difference between gLite and OpenNebula

What is the difference between gLite and OpenNebula?
I mean differences in structure and in usage: when should one use one, and when the other?
I think G.Lite is a DSL standard that offers a slower download speed than other forms (a maximum of 1.5 Mbps),
whereas OpenNebula is a cloud-computing toolkit for managing heterogeneous distributed data-center infrastructures.
gLite is a grid middleware which -- very simply put -- takes care of the (bulk) submission and processing of grid jobs (plus related tasks like scheduling, data handling, monitoring, information services, and others).
OpenNebula is a cloud-management framework which -- again, simply put -- provides you with virtual machines on demand, based on available templates.
So, you will use gLite if your workload comes in the form of jobs (think scripts) that can run on the target system (gLite is currently supported only on Debian 6 and Scientific Linux 5 & 6). You will use OpenNebula if you need on-demand virtual machines running your own (or publicly available vanilla) system images. You can, of course, preinstall any software you need in those images.
I promise to give a better-aimed answer if you say more about your situation and what you want to achieve. The two products are really intended to provide different kinds of service.

Peer-to-peer replication of a sqlite database

I am looking for a way to replicate a small and simple relational database (like SQLite) across peers. This should work in an environment with unstable network connections, hence the need for each peer to have a full copy of the database. This should allow a peer to continue working off-line in the event of network failure.
To keep things simple, replication should only have to support the replication of addition of data, i.e. only INSERTs, not DELETEs or UPDATEs.
Does anyone know of a good - and ideally cross-platform - technology or method for creating such a system? I am currently looking at JXTA and JXSE, but I am put off by their complexity and the apparent lack of life in the community after Oracle's takeover of Sun.
Thanks!
Frans
rqlite uses the Raft consensus algorithm, so it should be fairly resilient to unstable network connections.
Also, it seems possible to configure rqlite to accept reads even in the case of a network failure.
A similar project, dqlite, exists as a library available in various languages, but it is less explicit about what happens in the event of a network failure.
You may want to explore JGroups for the communication layer if you don't like JXTA. For the replication, I think you will have to implement your own code.
I am working on something similar (though the code is far from ready). I'll describe a little of my intended approach, but whether it is suitable for you depends on some key design points you'd need to consider. Unfortunately, I am not aware of any ready-built projects that will do this.
In particular we'd need to know what language you wish to use, or which languages you'd rather avoid.
Also, consider how you intend to do peer discovery: can you set up trust between node pairs manually, or do you want them to auto-discover?
Presumably all peers may insert data?
If you are able to use PHP and are happy manually peering node pairs, then my approach may be of interest. Set up an ORM such as Doctrine, Propel, or NotORM, and get each node to regularly sync with an internet time source. For each new row in the database, grab the data (either as an array or an ORM object), serialise it, and push it out to all nodes you have a trust relationship with. Where a push fails, keep a note of it and retry at periodic intervals (potentially giving up after a remote node fails to answer a large number of retries).
Pushes can be kicked off either by the application that creates the row or by whatever scheduler is available on each machine. A push message can be XML, or for simplicity just a POST message containing the new row and whatever metadata you need (e.g. the timestamp of the save, so as to resolve INSERT order across several nodes).
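The suggestion above uses PHP; here is the same push idea sketched in Python with only the standard library (the table name, peer URLs, and /replicate endpoint are hypothetical):

    import json
    import sqlite3
    import urllib.request

    PEERS = ["http://peer1.example/replicate", "http://peer2.example/replicate"]

    def push_new_rows(db_path, last_synced_rowid):
        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row
        rows = conn.execute(
            "SELECT rowid, * FROM entries WHERE rowid > ?", (last_synced_rowid,)
        ).fetchall()
        for row in rows:
            payload = json.dumps(dict(row)).encode()
            for peer in PEERS:
                request = urllib.request.Request(
                    peer, data=payload, headers={"Content-Type": "application/json"}
                )
                try:
                    urllib.request.urlopen(request, timeout=5)
                except OSError:
                    # Peer unreachable: note the failure and retry later
                    # from a scheduler, as described above.
                    record_failed_push(peer, payload)
        return rows[-1]["rowid"] if rows else last_synced_rowid

    def record_failed_push(peer, payload):
        pass  # persist to a retry queue (left out of this sketch)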
If your nodes do not have static IP addresses, they could register with a dynamic DNS service so that each node can stay in touch with its peers even if its IP changes. You might also consider adding a message-signing system to ensure that messages between nodes are genuine.

Host ASP.Net WebSite - What would be an Ideal Hardware Configuration?

I am studying various ASP.NET deployment approaches, and a basic question came up: is there any rule of thumb about environment definition? What could be called a 'good' setup if I have to support 1,000 concurrent users (requests)?
I understand that there are many factors, like how the application is designed, etc. But assuming that everything else is great, what configuration should I look for, e.g. which processor, how much RAM, etc.?
Also, how many concurrent users should the configuration below be able to support?
CPU: Dual 3.40 GHz Intel Xeon (Hyper-Threaded)
Memory: 3 GB
OS: Windows Server 2003 SP2
Thanks for the help.
Having been on both sides of the equation (web developer and hardware engineer), my current opinion is that the answer involves both of those sides as well.
Your hardware needs to be not only sufficient for general usage, but it also has to cope with reasonable unexpected peaks and failures - which means that it needs to be redundant, and in excess of your capacity planning.
Your software needs to be designed so it's easily made redundant - there's no point in speccing a tiered hardware architecture (now or for future planning) if the software will require a significant amount of change to handle it.
Your software also needs to be designed so that sudden unexpected peaks in resource usage don't become a regular occurrence without an external reason (e.g. a marketing campaign).
I know that you say you understand the non-hardware factors, but the real answer to your question is that there is no real way to answer it without knowing the other factors - each situation and circumstance is unique, and requires a unique solution.
However, in an effort to add generalised recommendations, try these:
CPU - choose something with a lot of cache, and individual cache per core as well. This does wonders for system speed. I typically go for dual-core, dual-processor at a minimum (for a total of four cores on two separate physical CPUs). Processor speed ratings don't really matter as much as you might think these days.
Memory - fast memory, and a minimum of 8 GB of it. Use the smallest DIMMs possible for the server.
Hard disks - SAS 15K RPM at a minimum; RAID 6 for the data partition on one controller, RAID 1 or 6 for the system partition on another controller. Choose a good-quality controller backed by a good support or warranty package - your controller is no good if it dies in three years' time and you can't get a replacement.
But above all, don't just install the OS and the app and leave it be. Profile the setup as much as possible, and don't be afraid of making changes to optimise for your individual setup (within reason). Move your ASP.NET temporary files to a fast disk (or a RAM disk - if they are going to be rebuilt anyway, there's no need to worry about losing them). Move the database to a second server, with a crossover 1 Gbit link between the two. Turn off disk maintenance in the OS, and turn off services you do not need.
Good luck!
