Wazuh: monitoring all commands in Kibana

How can I monitor every command executed by a user, even at the sudo level?
I have configured audit rules and the events appear in audit.log, but I want each command forwarded promptly from the server to the Wazuh manager so I can view it in Kibana.

Auditd shares the complete command and the user's UID with Wazuh if configured properly.
So I just added those columns from the field list in Kibana, and the data is now appearing fine.
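For reference, a minimal sketch of the audit rules that make auditd record every executed command, sudo-level included (the rules-file path varies by distro, and the key name audit-wazuh-c is just a label for Wazuh's decoders to match on; adjust both as needed):

# Append to /etc/audit/rules.d/audit.rules (assumed path):
-a always,exit -F arch=b64 -S execve -k audit-wazuh-c
-a always,exit -F arch=b32 -S execve -k audit-wazuh-c
# Reload the rules:
auditctl -R /etc/audit/rules.d/audit.rules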

Why does `airflow connections list` show unencrypted results?

Airflow version: 2.1.0
I set FERNET_KEY and verified that the login/password fields are encrypted when I add connections via the web UI.
However, when I add a connection via CLI:
airflow connections add 'site_account' \
--conn-type 'login' \
--conn-login 'my_user_id' \
--conn-password 'wowpassword'
When I run airflow connections list, it shows everything as raw values (not encrypted at all).
I think this could be dangerous if I manage all connections through CLI commands. (I want to make my Airflow infrastructure restorable; that's why I tried to use the CLI to manage connections.)
How can I solve this?
Airflow decrypts the connection passwords while processing your CLI commands.
You can use airflow connections list --output yaml to see whether your record was actually encrypted in the database or not.
Furthermore, if you are able to access the CLI, you are also able to access the config, meaning you can always extract the database connection and fernet_key and recover the full password on your own.
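Since the stated goal is a restorable setup, here is a hedged sketch using the Airflow 2.x CLI (flag names per the 2.x CLI reference; verify with airflow connections --help on your version):

# Dump all stored fields, including is_encrypted, to inspect what sits in the DB:
airflow connections list --output yaml
# Export all connections to a file for backup (values come out decrypted):
airflow connections export /tmp/connections.yaml
# Recent Airflow versions can restore them with: airflow connections import /tmp/connections.yaml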
Jorrick's answer is correct; however, I want to elaborate on the background, as I feel it will bridge the gap between the question and the answer.
It's very understandable that Airflow needs to be able to decrypt a connection when a DAG/user asks it to. This is needed for normal operation of the app, so Airflow must assume that a user who can author DAGs is permitted to use the system's resources (Connections, Variables).
The security measures operate on a different level. If you use them (Fernet), Airflow encrypts the sensitive information (like connection passwords), meaning the value is encrypted in the database itself. The security concerns here are: where is the fernet_key stored? Is it rotated? etc.
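On the rotation point, a hedged sketch of how Fernet key rotation typically goes in Airflow (the rotate-fernet-key command is documented for Airflow 2.x; verify for your version):

# Accept both keys while re-encrypting: new key first, old key second.
export AIRFLOW__CORE__FERNET_KEY="<new_key>,<old_key>"
# Re-encrypt existing connection/variable rows with the new key:
airflow rotate-fernet-key
# Afterwards, drop the old key:
export AIRFLOW__CORE__FERNET_KEY="<new_key>"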
There are many other security layers that handle different aspects, like access control and hiding sensitive information in the UI, but that's a different topic.
I think the important thing to understand is that security handles two types of users:
A user who is permitted, but whose actions and visibility you want to limit. (This is more what Airflow itself handles; see the security docs.)
A user who is malicious and wants to do damage. While Airflow does provide some features in that area, this is more a question of where you set up Airflow and how well you protect it (IP allow-lists, etc.).
Keep in mind that if a malicious user gains access to the Airflow server, there is little you can do about it. That user can simply use admin privileges to do anything. This is no different from a user who hacked into any other server you own.

Is there a way to connect Google Play data to Metabase?

I want to connect Metabase to the Google Play Console to retrieve data such as the number of installs, uninstalls, and active users, and do some analysis on top of it. Can you please help with the process of linking Google Play to Metabase?

Storing and Retrieving Published APIs in WSO2 AM

I have a Docker instance of wso2-am running with published APIs, which work fine. However, when the Docker instance is shut down and started up again, the published APIs together with their configurations are lost.
How can I persist the published APIs, and have them mapped and displayed correctly, when the wso2-am Docker instance is started again?
This is a basic issue with Docker: once the container is removed or recreated, all data written inside it is lost with it.
To save the data, I had to use the docker commit command to preserve the previous working state.
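A hedged sketch of that approach (the container and image names are placeholders):

# Snapshot the running container's filesystem into a reusable image:
docker commit wso2am-container my-wso2am:with-apis
# Start future instances from the snapshot instead of the base image:
docker run -d -p 9443:9443 my-wso2am:with-apis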
APIM-related data is stored in the database (API-related metadata) and on the filesystem (Synapse APIs, throttling policies, etc.). By default, APIM uses an H2 database. To persist the data, you will have to point this to an RDBMS (MySQL, Oracle, etc.). See https://docs.wso2.com/display/AM260/Changing+the+Default+API-M+Databases
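A hedged sketch of the MySQL route (the database name is a placeholder; the schema script ships in the APIM distribution under dbscripts/apimgt/):

# Create the API manager database and load WSO2's schema script:
mysql -u root -p -e "CREATE DATABASE apim_db;"
mysql -u root -p apim_db < <APIM_HOME>/dbscripts/apimgt/mysql.sql
# Then point the WSO2AM_DB datasource at it in repository/conf/datasources/master-datasources.xml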
To persist API-related artifacts (Synapse files, etc.), you have to preserve the content under the repository/deployment/server location. For this you could use an NFS mount.
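In Docker specifically, a named volume achieves the same thing; a hedged sketch (the image tag and in-container path are assumptions, so check your image's layout):

docker run -d --name wso2am \
  -v wso2am-artifacts:/home/wso2carbon/wso2am-2.6.0/repository/deployment/server \
  -p 9443:9443 -p 8243:8243 -p 8280:8280 \
  wso2/wso2am:2.6.0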
Also, see https://docs.wso2.com/display/AM260/Deploying+API+Manager+using+Single+Node+Instances for information on doing a single-node deployment.

Permission error on user management

We just set up Elasticsearch, Logstash and Kibana on our Swisscom Application Cloud instance. When I log in to Kibana with the full_access_username and full_access_password, I can do almost everything, except add new users or manage existing ones under Settings - User Management.
There I always get a message saying:
You do not have permission to manage users. Please contact your administrator.
Does any of you have an idea how to fix that?
We'd like to have different users and grant them permissions on only certain indices and attributes.
Thanks in advance for your help.
As Swisscom provides their Elasticsearch offering as a managed service, you have some limitations in terms of administrative functions. At the time of writing, this includes cluster and user management as well as watches.
You can provision new users by creating service keys: cf create-service-key <service-instance-name> <service-key-name>.
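A hedged sketch with the Cloud Foundry CLI (the instance and key names are placeholders):

# Create a new set of credentials bound to the service instance:
cf create-service-key my-elasticsearch team-member-1
# Print the generated credentials (host, username, password) to hand out:
cf service-key my-elasticsearch team-member-1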

How to allow multiple users to manage application running on server?

I'm not sure if the title makes sense. Hard question to ask.
I have an application running on a server under my network account, and it's scheduled to run daily.
I can remote in with my user credentials and check on the application.
What if I want more than one person to be able to remote in and check it? I can create a new account on the server, but it wouldn't have network rights, and the application needs access to network folders.
What would be the best approach?
Thanks! :-)
P.S. Feel free to edit the tags. I can't figure out what to pick.
I would recommend that your application write out log files or status messages to a place the necessary users can see. They can check the status via the logs or output and don't need access to the scheduled task itself.
Well, in Unix you'd create a group and add users to that group. I'm fairly certain you can do the same on a Windows server (make sure, of course, that the group has permission to execute the app and read its log files).
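A hedged sketch with built-in Windows commands (the group and account names are placeholders):

rem Create a local group for the people who should check on the app:
net localgroup AppOperators /add
rem Add a coworker's domain account to it, and let that account use Remote Desktop:
net localgroup AppOperators MYDOMAIN\alice /add
net localgroup "Remote Desktop Users" MYDOMAIN\alice /add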
