What is the difference between Intel TXT and TPM?

What is the difference between Intel TXT and TPM? What more does Intel TXT offer compared to a TPM alone?
Basically, I want to know how TXT works. Any easy-to-follow literature for beginners will be highly appreciated!

Scolytus is right, but let me explain a bit more.
As he said, a TPM is a dependency of TXT, but not the other way around. The TPM is where TXT stores the measurements (hashes of components) of the platform. If TXT is not supported by a platform but a TPM is still present, you still have all of these features:
Integrity measurement – securely measure the platform's components (hashes stored within the TPM)
Authenticated boot – a process by which a platform's state (the sum of its components) is reliably measured and stored (SRTM, Static Root of Trust for Measurements)
Sealed storage – encrypt data based on the current state of the platform, i.e., what has been measured (the PCR hash values stored in the TPM); the seal operation
Attestation – securely report the platform's state to other parties, e.g., the quote operation, aka remote attestation
As such you could use trustedgrub (SRTM, Static Root of Trust for Measurements) but not tboot, which implements a DRTM (Dynamic Root of Trust for Measurements), aka TXT.
As for "how TXT works", see this question.

It's like asking "What's the difference between a car and an engine?"
The TPM is a vital part of Intel TXT; without it, Intel TXT does not work.

Related

Kafka Connector for Oracle Database Source

I want to build a Kafka connector in order to retrieve records from a database in near real time. My database is Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 and the tables have millions of records. First of all, I would like to add the minimum load to my database by using CDC. Secondly, I would like to retrieve records based on a LastUpdate field whose value is after a certain date.
Searching the Confluent site, the only open-source connector that I found was the "Kafka Connect JDBC" connector. I think this connector doesn't have a CDC mechanism, and it isn't possible to retrieve millions of records when the connector starts for the first time. The alternative solution I considered is Debezium, but there is no Debezium Oracle connector on the Confluent site, and I believe it is still in beta.
Which solution would you suggest? Is something wrong with my assumptions about Kafka Connect JDBC or the Debezium connector? Is there any other solution?
For query-based CDC, which is less efficient, you can use the JDBC source connector (a minimal configuration sketch follows this list).
For log-based CDC I am aware of a couple of options; however, some of them require a license:
1) Attunity Replicate, which allows users to use a graphical interface to create real-time data pipelines from producer systems into Apache Kafka without having to do any manual coding or scripting. I have been using Attunity Replicate for Oracle -> Kafka for a couple of years and have been very satisfied.
2) Oracle GoldenGate, which requires a license.
3) Oracle LogMiner, which does not require any license and is used by both Attunity and kafka-connect-oracle, a Kafka source connector for capturing all row-based DML changes from an Oracle database and streaming them to Kafka. Its change data capture logic is based on the Oracle LogMiner solution.
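For illustration, a minimal properties file for the JDBC source connector doing query-based CDC on a LastUpdate column might look like the following. The connection details, schema, table, and topic names are placeholders; depending on the connector version, a timestamp.initial setting may also be available so the first poll does not pull all historical rows.

name=oracle-jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:oracle:thin:@//dbhost:1521/ORCL
connection.user=kafka_user
connection.password=********
# Query-based CDC: poll for rows whose timestamp column has advanced
mode=timestamp
timestamp.column.name=LASTUPDATE
table.whitelist=MYSCHEMA.MYTABLE
topic.prefix=oracle-
poll.interval.ms=5000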
We have numerous customers using IBM's IIDR (InfoSphere Data Replication) product to replicate data from Oracle databases (as well as z mainframe, iSeries, SQL Server, etc.) into Kafka.
Regardless of which source is used, data can be normalized into one of many formats in Kafka. An example of an included, selectable format is:
https://www.ibm.com/support/knowledgecenter/en/SSTRGZ_11.4.0/com.ibm.cdcdoc.cdckafka.doc/tasks/kcopauditavrosinglerow.html
The solution is highly scalable and has been measured replicating changes at rates in the hundreds of thousands of rows per second.
We also have a proprietary ability to reconstitute data written in parallel to Kafka back into its original source order. So, despite data having been written to numerous partitions and topics, the original total order can be known. This functionality is known as the TCC (transactionally consistent consumer).
See the video and slides here:
https://kafka-summit.org/sessions/exactly-once-replication-database-kafka-cloud/

Can we use a single database for two different IBM BPM Std 8.5.7 environments?

We want to set up a DR server for IBM BPM Std 8.5.7 and are planning to use the Prod DB (Oracle), so that if for some reason the Prod BPM environment becomes unavailable we can use the Prod DB data in the DR IBM BPM. Is this possible? What factors need to be considered for this?
At present we take a snapshot of the Prod DB and use this snapshot for COB. All servers start, but when we open the Process Admin console we don't see the "Installed App" option or the menus on the left side to manage users. It seems the DR BPM admin ID does not have the required roles to get the details.
First of all, I'd like to point you to the article below:
Disaster recovery guidance for IBM Business Process Manager
Please note the difference between configuration data and runtime data as defined in this article. Since some configuration data resides in the profile folders of your servers, not in the database, it is not enough to just move a snapshot of the production database to DR. You must also synchronise the configuration data on your file system. This is probably why you can't use your DR BPM as you expected: you moved the runtime data to DR, but you're missing the configuration data.
As for your question on what configurations are possible and what factors to consider, unfortunately the answer is not so simple, as you have many alternatives.
The article above highlights which factors to consider in your DR design and describes seven different alternatives for the DR topology. These are evaluated against the abovementioned factors, and their advantages and disadvantages are explained. You must choose one of them according to your specific requirements and resource availability.

Check if a movie/zip/audio file is legal

I am running a server with ownCloud for a bunch of users.
However, I totally forbid the usage of this cloud for illegal content, like movies/audio/albums/zips containing TV shows, etc.
How can I be sure that users won't store files downloaded from torrent websites?
Is there a Unix binary that can fetch some screenshots from an mkv/avi file and check whether there is a watermark or a known picture (20th Century Fox, Warner, etc.)?
I cannot search for suspicious names like 'DVDRIP', since filenames are encrypted.
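For the screenshot half of this, a minimal sketch using ffmpeg (assuming it is available on the server; the file name and timestamp are illustrative, and the watermark-recognition step would still need a separate tool):

# Grab a single frame at the 60-second mark from a suspect file
ffmpeg -ss 00:01:00 -i suspect.mkv -frames:v 1 frame.png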

Where does OpenStack Swift store the rings?

Does anybody know where OpenStack Swift stores the "rings"? Is there a distributed algorithm, or is it just one table somewhere on some of the storage nodes with information about all (!) the physical object locations? I cannot believe the latter, because from my understanding of object storage it should scale to exabytes, and that would need lots of entries in such a table...
This page could not help me: http://docs.openstack.org/developer/swift/overview_ring.html
Thanks in advance for your help!
Ring Builder
The rings are built and managed manually by a utility called the ring-builder. The ring-builder assigns partitions to devices and writes an optimized Python structure to a gzipped, serialized file on disk for shipping out to the servers. The server processes just check the modification time of the file occasionally and reload their in-memory copies of the ring structure as needed.
So the ring is stored on all servers.
If you were asking about the path of the ring.gz files, they are under /etc/swift by default.
These ring files can also be regenerated from the .builder files when a rebalance is run.
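For illustration, the rings are typically built and shipped like this (the values and device address here are examples only):

# Create a builder file: 2^10 partitions, 3 replicas, 1-hour min move interval
swift-ring-builder object.builder create 10 3 1
# Add a storage device: region 1, zone 1, IP:port, device name, weight
swift-ring-builder object.builder add r1z1-10.0.0.1:6200/sdb1 100
# Assign partitions to devices and write object.ring.gz for distribution
swift-ring-builder object.builder rebalance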

What are the well-known UIDs?

According to the useradd manpage, UIDs below 1000 are typically reserved for system accounts.
I'm developing a service that will run as its own user. I know that well-known ports can be found in /etc/services.
Is there a place where I can find out what well-known UIDs are out there? I would like to avoid clashing with someone else's UID.
getpwent(3) iterates through the password database (usually /etc/passwd, but not necessarily; for example, the system may be in a NIS domain). Any UID known to the system should be represented there.
For demonstration, the following shell fragment and C program should both print all UIDs known to the system.
$ getent passwd | cut -d: -f3
#include <pwd.h>
#include <stdio.h>

int main(void) {
    struct passwd *pw;

    /* Iterate over every entry in the password database */
    while ((pw = getpwent()) != NULL)
        printf("%u\n", (unsigned)pw->pw_uid);

    endpwent();  /* close the password database */
    return 0;
}
UID 0 is always root and conventionally UID 65534 is nobody, but you shouldn't count on that, nor anything else. What UIDs are in use varies by OS, distribution, and even system -- for example, many system services on Gentoo allocate UIDs as they are installed. There is no central database of UIDs in use.
Also, /etc/login.defs defines what "system UIDs" are. On my desktop, it is configured so that UIDs 100-999 are treated as system accounts, and UIDs 1000-60000 are user accounts, but this can easily be changed.
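For illustration, the relevant ranges look like this in /etc/login.defs (exact values vary by distribution):

# System accounts created with useradd --system draw from this range
SYS_UID_MIN  100
SYS_UID_MAX  999
# Ordinary user accounts draw from this range
UID_MIN     1000
UID_MAX    60000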
If you are writing a service, I would suggest that the package installation be scripted to allocate a UID as needed, and that your software be configurable to use any UID/username.
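A minimal sketch of such an allocation (the account name is illustrative; --system lets useradd pick a free UID from the system range for you):

# Create a service account without a home directory or login shell
useradd --system --no-create-home --shell /usr/sbin/nologin mydaemon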
I know this is an old post, but since I am here in 2017, still trying to answer a similar question I thought this additional information was relevant for anyone else in the same position.
The concept of "well-known UIDs" dates back to the early days of Unix, before there were multitudes of distributions and Unix variants. "Well-known" UIDs were considered to be those for system users like adm, daemon, lp, sync, operator, news, mail, etc., and were standard across all the various systems in order to avoid UID clashes. These users are still present in modern Unix-like operating systems.
Standardising UIDs across an organisation is the key to avoiding these problems. As was pointed out in a comment above, these days any UID you choose is likely to be in use 'somewhere', so the best a sysadmin can aim for is to ensure that UIDs are standard across all the systems they maintain; allocating a new UID for an application then becomes simple.
To that end, for many years I have found the post linked below invaluable; sadly there are not a lot of similar posts on the topic, and what is out there is hard to find.
UNIX/Linux: Analyzing user/group UID/GID conflicts
If you search that blog under the 'uid' tag there are other relevant posts, including a script to automate the process of standardising UIDs across multiple hosts under Linux.
This User ID Definition is also an invaluable resource.
The short answer is that it doesn't really matter which UIDs you use, as long as they are unique and standard across your organisation, to avoid clashes.
I'm not sure such a list exists. How about just noting which UIDs are in use via the /etc/passwd file, the /etc/shadow file, and the NIS global user list, then using one that isn't?
In Linux, that is configured in /etc/login.defs. Sometimes, when I install a Debian-based system, I change the "UID start" option (UID_MIN) from 1000 to 500 for consistency with the other, Red Hat-y machines.
man login.defs should give you all the info you want.
