Simple question here. Airflow's UI shows Is Encrypted and Is Extra Encrypted columns in the connections list.
What do Is Encrypted and Is Extra Encrypted stand for? Is there clear documentation on this?
Thanks
'Is Encrypted' means a password value was set on the connection and is encrypted using a Fernet key (see the Airflow docs on Fernet for details). When an Apache Airflow environment runs with no encryption, connection passwords can even end up printed in the logs, for instance.
'Is Extra Encrypted' indicates whether whatever was set in the connection's Extra dictionary is encrypted as well.
At first glance, you might imagine that 'Is Extra Encrypted' is an extra security layer. This is not the case.
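For intuition, here is a minimal sketch of the Fernet scheme Airflow uses for connection passwords, using the cryptography package (the key and password below are placeholders, not Airflow's actual code):

    from cryptography.fernet import Fernet

    # Airflow reads the key from the fernet_key setting
    # (AIRFLOW__CORE__FERNET_KEY); here we just generate one.
    key = Fernet.generate_key()
    f = Fernet(key)

    token = f.encrypt(b"my-db-password")  # ciphertext stored in the metadata DB
    print(f.decrypt(token))               # b'my-db-password'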
So I created an encrypted key using ansible-vault create my.key.
Then I use it as a var:
my_key: "{{ lookup('file','{{ inventory_dir }}/group_vars/my.key') }}"
And then, when running my playbook, I use it like this:
- name: Create My Private Key
  ansible.builtin.copy:
    content: "{{ secrets.my_key }}"
    dest: "{{ secrets_key }}"
  no_log: true
It does properly create the key on the remote host, and the file there is unencrypted. But I'm wondering whether this is the right way to do it. Does it decrypt at the right time, and am I not exposing sensitive data where I shouldn't?
I thought encrypted variables must also have the !vault keyword specified. But if I do this for my my_key, I get this error:
fatal: [v14-test]: FAILED! => {"msg": "input is not vault encrypted data. "}
So this got me worried that the file is decrypted at the wrong time, or maybe the message is misleading or something.
Is this the right way to do it? Or should I do it differently?
Firstly, a definitive answer as to whether this approach is appropriate is directly linked to what you want to achieve from the encryption. Therefore, all the answers here can do is describe how Vault works, and then you can decide whether it is right for your requirements.
Fundamentally what you are doing is a 'correct' usage of Ansible Vault, although I have not previously seen it used in quite this workflow (typically I have seen create used for encrypting YAML files of vars).
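For reference, that more typical workflow looks something like this (the file and variable names are illustrative):

    ansible-vault create group_vars/all/vault.yml

which opens an editor in which you write ordinary YAML vars that are then stored encrypted:

    ---
    vault_db_password: "s3cr3t"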
Using your method, your secret is turned into ciphertext and stored in my.key (which can be confirmed using basic text tools such as cat, less or more). You will see that the first line of the file contains a bunch of metadata that allows Ansible to understand the file contents and decrypt them on demand.
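For example, a freshly encrypted file begins like this (the hex payload below is illustrative and will differ):

    $ANSIBLE_VAULT;1.1;AES256
    6638643965323633646262656665306333...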
At runtime, Ansible will then use the password/key for the encrypted file (accessed through a number of methods) to decrypt the file contents into plain text and store them in the variable my_key for use during the play.
A non-exhaustive list of things to consider when determining if Ansible Vault is the right approach for you:
Ansible Vault encryption is purely designed to protect secrets at rest (i.e. when they are stored on your hard disk)
At run time, the secrets are converted into plain text and treated like any other variable/string data; the file on disk still contains ciphertext, so the plaintext is only accessible within the running Ansible process. (On a multi-user system, at no point can anybody view the plaintext simply by looking inside the my.key file. However, depending on their level of access, their skills, and what your Ansible tasks are doing, they may be able to access the plaintext from the running process.)
Given that inside the process the data is just plain text, it is vulnerable to leakage (for example, by writing the contents out into a log file - check out the Ansible no_log option)
At run time, Ansible needs some way to access the key necessary to decrypt the ciphertext. It provides a variety of methods: prompting the user, reading it from a file stored on disk, reading it from an env var, or using scripts/integrations to pull it from another secrets-management tool (a few are sketched after this list). Careful thought needs to be given to which option you choose, relative to what you are looking to achieve from the encryption (e.g. if your goal is to protect your data in the event that your laptop gets stolen, then storing the key in a file on the same system renders the whole operation pointless). Quite often, with the more sophisticated methods, you can still end up in a 'chicken and egg' situation, once more relative to what your goal from using encryption is
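To illustrate, a few of the stock options (site.yml and the file/script names are placeholders; the flags are real ansible-playbook options):

    ansible-playbook site.yml --ask-vault-pass                    # prompt the user
    ansible-playbook site.yml --vault-password-file ~/.vault_pass # key in a file on disk
    ansible-playbook site.yml --vault-id prod@get-pass-script.py  # script/integration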
I might be talking complete cobblers or be a nefarious individual trying to sow disinformation, so read the docs thoroughly if the value of the secrets is significant to you :)
Unfortunately, there is no getting away from the fact that good security is generally harder to achieve than the illusion of good security :|
I have set up table-level InnoDB database encryption on MariaDB.
I'd like to know if there is any way to confirm that the data is truly encrypted. I've tried searching /var/lib/mysql/ibdata1 for sample data in the tables, but I don't know if that's a reliable test or not.
I posted this question on mariadb.com, and the suggestion there was to perform a grep for some known data.
A DBA at Rackspace suggested using the strings command instead, to better handle the binary data, for example:
strings /var/lib/mysql/sample_table/user.ibd | grep "knownuser"
This approach returns no results on an encrypted table and does return results on an unencrypted table (assuming both have "knownuser" loaded into them).
You can query information_schema.INNODB_TABLESPACES_ENCRYPTION. When an InnoDB tablespace is encrypted, it is present in that table.
SELECT * FROM information_schema.INNODB_TABLESPACES_ENCRYPTION
WHERE NAME LIKE 'db_encrypt%';
My advice for testing is to copy the full dataset to another node without the encryption keys in place, then try to start MySQL and query the encrypted tables. I'm making a (big) assumption that they will not be readable, since the valid encryption keys are missing.
Parsing the files on disk as they lie may prove difficult unless you have a special tool to do it. Maybe something like Jeremy Cole's innodb_ruby would be another litmus test: https://github.com/jeremycole/innodb_ruby.
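A hedged sketch of that litmus test (the mode name is as I recall it from the innodb_ruby README; verify against the project docs):

    gem install innodb_ruby
    innodb_space -f /var/lib/mysql/sample_table/user.ibd space-page-type-summary

On an unencrypted tablespace this prints a sensible breakdown of page types (INDEX, ALLOCATED, ...); on an encrypted one, the parser should fail or report unrecognisable pages.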
[This probably doesn't work if you change the key which encrypts the log.]
1. Stop the database server.
2. Back up the keyfile.
3. Change a key in the keyfile. (Don't delete it - it still has to remain a valid key, otherwise the server can't restart.)
4. Start MariaDB again.
5. Try to read the table (e.g. with phpMyAdmin).
6. If it is encrypted correctly, the answer "The table is encrypted..." appears when you try to read the encrypted table.
7. Stop MariaDB.
8. Restore the backup.
9. Restart MariaDB.
During VM creation in OpenStack, one can specify a keypair name so that the specified public key gets injected into the newly created VM.
I would like to know at which machine state the key injection is completely done. Given that the machine is in the ACTIVE state, does that guarantee that the key injection is completed?
Details:
I have a limited quota for key pairs, and I would like to delete each keypair from OpenStack immediately after it gets injected into the target machine. I only have access to the OpenStack REST API and NOT to the target VM.
UPDATE
Looking at the nova instances table, I can see that "key name" and "key data" exist there too. I think the key is copied into this table and the original key is not referenced any more, so deleting the key shouldn't cause any issue. Am I wrong?
What you can do is try an ssh connection and, once that succeeds, proceed to delete the keypair.
To answer your question directly, the key is added via cloud-init. You can grep for ssh in /var/log/cloud-init.log to see when exactly it happens. (It happens pretty early in the cloud-init process.)
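For example, on a VM where you do have shell access, that check is simply (assuming cloud-init's default log path):

    grep -i ssh /var/log/cloud-init.log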
I don't think there is any API way of figuring out when exactly the key injection happens. A machine in the ACTIVE state is no guarantee that the cloud-init part of key injection is done (though for practical purposes, it does happen pretty early).
You could try checking it via nova console-log, though the console log has a limited buffer, so it may have scrolled past the key-addition part and you may not see it there.
So, I think checking via an actual ssh connection is the only surefire way.
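A rough sketch of that approach (the address, key path, keypair name, and login user are placeholders; assumes the openstack CLI is configured against your cloud):

    # Poll until SSH with the injected key succeeds, then free the quota.
    until ssh -o BatchMode=yes -o ConnectTimeout=5 -i my.key ubuntu@$VM_IP true
    do
        sleep 5
    done
    openstack keypair delete my-keypair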
Could someone please explain how to obtain a list of all existing databases on a PostgreSQL server, to which the user already has access, using Qt? The PostgreSQL documentation suggests the following query:
SELECT datname FROM pg_database WHERE datistemplate = false;
What are the correct parameters to the following functions:
QSqlDatabase::setDatabaseName(const QString & name) //"postgres" or "pg_database"?
QSqlDatabase::setUserName(const QString & name) //actual user name?
QSqlDatabase::setPassword(const QString & password) //no password? or user password?
Much appreciated. Thank you in advance.
You appear to have already answered the first part of your question. Connect to the postgres or template1 database and issue the query you've quoted above to get a list of databases. I'm guessing - reading between the lines - that you don't know how to connect to PostgreSQL to send that query, and that's what the second part of your question is about. Right?
If so, the QSqlDatabase accessor functions you've mentioned are used to set connection parameters, so the "correct" values depend on your environment.
If you want to issue the query above - to list databases - then you would probably want to connect to the postgres database, as it always exists and isn't generally used for anything specific; it's there just to be connected to. That means you'd call setDatabaseName("postgres");. Passing pg_database to setDatabaseName would be nonsensical, since pg_database is the pg_catalog.pg_database table; it isn't a database you can connect to. pg_database is one of those odd tables that exist in every database, which might be what confused you.
With the other two accessors, specify the appropriate username and password for your environment, the same as you'd use for psql; there's no possible way I could tell you which ones to use.
Note that if you set a password but one isn't required (because authentication is done over unix socket ident, trust, or another non-password scheme), the password will be ignored.
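Putting that together, a minimal sketch (host, username, and password are placeholders for your environment; assumes the QPSQL driver plugin is available):

    #include <QCoreApplication>
    #include <QSqlDatabase>
    #include <QSqlQuery>
    #include <QSqlError>
    #include <QDebug>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        // Connect to the always-present maintenance database.
        QSqlDatabase db = QSqlDatabase::addDatabase("QPSQL");
        db.setHostName("localhost");
        db.setDatabaseName("postgres");
        db.setUserName("myuser");      // your PostgreSQL role
        db.setPassword("mypassword");  // ignored under trust/ident auth

        if (!db.open()) {
            qDebug() << "Connection failed:" << db.lastError().text();
            return 1;
        }

        // List the user-visible databases.
        QSqlQuery query("SELECT datname FROM pg_database WHERE datistemplate = false;");
        while (query.next())
            qDebug() << query.value(0).toString();

        return 0;
    }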
If this doesn't cover your question, consider editing it and explaining your problem in more detail. What've you tried? What didn't work how you expected? Error messages? Qt version?
I am connecting to a Teradata database through ODBC with Stata on an Ubuntu server (12.04 LTS). Everything works fine, except that I have my TD userid and password stored in the .odbc.ini file, which seems like a terrible idea. The alternative is to enter them in Stata, which seems even worse and is awkward. Is there a way to do this more securely? The login info that I use to ssh into the server is synced with the TD database. It seems that it should be possible to pass that information along.
In ODBC terms, you do not need to store usernames/passwords in any of your ODBC ini files. Both the ODBC SQLConnect and SQLDriverConnect functions support passing in the username/password at the time they are called.
SQLDriverConnect would need something in your InConnectionString like "DSN=YourDataSourceName;UID=username;PWD=password".
You could go one step further and pass in the whole DSN as a command-line argument, meaning that you would not need an ODBC data source in an ini file at all. I'm sure one of the forum readers can post a Teradata sample for you.
As for passing in the username and password from your SSH login: your application would need to capture that and pass it to ODBC.
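For instance, from Stata the credentials can be supplied at call time through odbc's connectionstring() option (option name to the best of my recollection; check help odbc for your version), so nothing sensitive has to live in .odbc.ini:

    * The DSN, table, and credentials below are placeholders.
    odbc load, exec("SELECT * FROM mytable") ///
        connectionstring("DSN=tdata;UID=jsmith;PWD=secret")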
If you want to establish a finer grain of security around your odbc.ini file, or other files on your Ubuntu server that may contain user credentials, I would strongly suggest the use of Access Control Lists (ACLs). Beyond the typical Owner::Group::World permissions, you can specify permissions down to a specific user, allowing or denying an explicit permission on a given file.
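For example, with the standard Linux ACL tools (assumes the acl package is installed; the user names are placeholders):

    chmod 600 ~/.odbc.ini              # baseline: owner-only access
    setfacl -m u:jsmith:r ~/.odbc.ini  # explicitly grant one user read access
    setfacl -m u:intern:- ~/.odbc.ini  # explicitly deny another user
    getfacl ~/.odbc.ini                # review the resulting ACL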
Other options regarding security on Teradata include the use of LDAP authentication, if your environment supports it. Configuring LDAP on Teradata is beyond the scope of SO and is, in many cases, a billable professional services engagement with Teradata's Information Security CoE.