Issue: .siem-security-space1 docs are getting added under the .siem-security-space2 index - Kibana

I need some more insight into how Kibana creates the .siem-security indices.
We have created 3 spaces in Kibana and enabled the same Security -> Elastic rules under all 3 spaces (e.g. Whitespace Padding in Process Command Line).
We have also assigned roles and users per space so that each space only sees its own data. This setup works fine with all the integrations we installed through the centralized Fleet agent. However, we noticed that the siem-security data is getting written to all spaces, i.e.
(Space1 generates alerts with DataStream.namespace=Space1 under siem-security-Space1-*, siem-security-Space2-* and siem-security-Space3-*.)
Based on my understanding, Fleet is smart enough to filter data by namespace on the Elastic side, but Kibana is not.
Is that understanding correct? Or am I missing something that is causing Kibana to add docs to all siem-security-* indexes {the spaces where the same Elastic rule is enabled}?
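For reference, here is how the distribution could be checked directly in Elasticsearch (a rough sketch only; the host, credentials and index pattern are placeholders, and it assumes data_stream.namespace is mapped as a keyword on the alert documents):

import requests

# Placeholders: adjust the Elasticsearch URL, credentials and index pattern to your cluster.
ES_URL = "https://localhost:9200"
AUTH = ("elastic", "changeme")

# Terms aggregation: for every concrete siem-security index, list the
# data_stream.namespace values of the alert docs stored in it.
query = {
    "size": 0,
    "aggs": {
        "per_index": {
            "terms": {"field": "_index"},
            "aggs": {
                "namespaces": {"terms": {"field": "data_stream.namespace"}}
            }
        }
    }
}

# verify=False only for a quick test against a self-signed dev cluster.
resp = requests.post(f"{ES_URL}/.siem-security-*/_search", json=query, auth=AUTH, verify=False)
for bucket in resp.json()["aggregations"]["per_index"]["buckets"]:
    namespaces = [b["key"] for b in bucket["namespaces"]["buckets"]]
    print(bucket["key"], "->", namespaces)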
Thanks in advance!
Regards,
Nivedita

Related

Accessing User ID as a variable in Google Tag Manager for mobile

I'm getting started with Google Tag Manager for Android/iOS, and can't find a way to access the User ID as a variable. I can access Firebase User Properties and Event Parameters just fine.
So far, I've tried setting it using FirebaseAnalytics.setUserId and trying to access it as a User Property called user_id / userId.
Some workarounds I've thought of:
Using a CustomVariableProvider (preferred)
Setting the User ID as an Event Parameter (this wouldn't work with built-in events)
I'm just trying to make sure there's no built-in way of doing this before I resort to workarounds. Thanks!
I was not able to find the User ID (or UID) in the list of built-in variables, see this screenshot
There is a built-in way, but it requires quite a sophisticated setup.
In GA version 4 the path has changed compared to the previous version, where the same thing could be done much more easily via "Tracking Info".
Here are the starting steps in GA4:
Open https://analytics.google.com/analytics/web
Bottom-left corner -> Admin -> Setup Assistant -> Advanced setup -> User ID
Follow the instructions
After that, the User ID will be available in GTM.
Video guide for exact steps: https://www.youtube.com/watch?v=TVJMFVOXFUQ

Identify which region a Cosmos DB query ran against

I am currently using CosmosClient V3. I am trying to figure out which region I am querying against when I have a multi-region setup. Previously, in V2 (DocumentClient), we were able to do it using connectionPolicy.PreferredLocations, where we set the preferred locations and then used client.ReadEndpoint to verify the current read endpoint chosen based on availability.
But in V3 I am able to set the preferred location using ApplicationRegion/LimitToEndpoint, yet there is no option to validate which region the SDK chose for the query, like client.ReadEndpoint. Are there any equivalent options available in SDK V3 (CosmosClient)?
You don't need to use LimitToEndpoint. As per the comment on that property:
When the value of this property is false, the SDK will automatically discover write and read regions, and use them when the configured application region is not available.
When set to true, availability is limited to the endpoint specified on the CosmosClient constructor.
Defining the ApplicationRegion is not allowed when setting the value to true.
You need to set the ApplicationRegion, which will make the SDK connect to the closest endpoint based on the account's regions. If one of the account's regions matches the region the application is running in, the SDK will use that one; otherwise, it will pick the closest one.
You can check the Diagnostics in the query FeedResponse to see which was the region used (please update to the latest SDK).
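For what it's worth, here is a rough sketch of the same idea using the Python azure-cosmos SDK as a stand-in for the .NET V3 client discussed above. The endpoint, key, database and container names are placeholders, and the response_hook callback is assumed to receive the response headers as its first argument; dumping them lets you inspect which endpoint served the request.

from azure.cosmos import CosmosClient

def show_headers(headers, _items):
    # Dump all response metadata; the headers indicate which endpoint/region
    # actually served this page of results.
    print(dict(headers))

# preferred_locations is the Python-side analogue of preferring a region
# (ApplicationRegion in the .NET V3 client); the SDK falls back automatically
# if the first region is unavailable.
client = CosmosClient(
    "https://<account>.documents.azure.com:443/",
    credential="<account-key>",
    preferred_locations=["West Europe", "East US"],
)

container = client.get_database_client("<database>").get_container_client("<container>")

items = list(container.query_items(
    query="SELECT TOP 1 * FROM c",
    enable_cross_partition_query=True,
    response_hook=show_headers,
))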

Azure Cosmos DB Entity Insert and Data Explorer Error

Just this morning, when trying to view the Data Explorer UI for an Azure Cosmos DB table, the window is totally blank and I see no rows (the table should not be empty). The only connection to this table is a Python script that pushes in simple rows with only a few variables; however, this has also stopped working just this morning.
I am still able to connect to the table service properly, and I've even been able to create a new table through my Python script. However, as soon as I call table_service.insert_or_replace_entity('traps', task) ('traps' is the name of my table and task is the row I'm trying to push up), I receive back an HTTP Error 400: "The request URL is invalid."
For reference, my connection in Python is as follows where Account_Name = my personal account name and Account_Key = my personal account key.
# Requires the azure-cosmosdb-table package (older scripts used azure.storage.table).
from azure.cosmosdb.table.tableservice import TableService

table_service = TableService(connection_string="DefaultEndpointsProtocol=https;AccountName=Account_Name;AccountKey=Account_Key;TableEndpoint=https://Account_Name.table.cosmosdb.azure.com:443/;")

for i in range(len(times)):
    print(len(tags))
    print(len(times))
    print(len(locations))
    # One entity per reading; RowKey must be unique within the partition.
    task = {'PartitionKey': '1', 'RowKey': '{}'.format(tags[i]),
            'Date_Time': '{}'.format(times[i]), 'Location': '{}'.format(locations[i])}
    table_service.insert_or_replace_entity('traps', task)
UPDATE
In reference to the HTTP Error 400, I discovered that I was trying to push a \n at the end of each of the tags strings (i.e. tags[0] = 'ab123\n'). Stripping out the \n has resolved the HTTP 400 error, but I am now receiving a "The specified resource does not exist." message when I attempt to upload, which makes more sense as to why my Data Explorer is blank. I have tried uploading to a new table but it's the same thing.
Second Update
The silly mistake behind the resource-not-found error was that my table is called "Traps", not "traps". Data appears to be uploading correctly now on the API side. However, the table is still not displaying at all in the Data Explorer page of the Azure portal. If anyone has insight on this it would be appreciated, because the explorer is super helpful while we are still in development.
Third Update
I am able to connect to the table/database through Python and query data effectively. It all seems to be in there and up to date. The only thing I'm left unsure about is why the Data Explorer is not displaying properly. Aside from that, my recommendation is obviously to check your capitalization (my usual mistake, haha) and DO NOT try to push up line feeds (\n) in the task/payload.
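Putting the two fixes from the updates together, the corrected upload loop might look like this (still a sketch; the tags/times/locations lists, table_service connection and the 'Traps' table name come from the original script):

# Strip stray line feeds from the tag values before using them as RowKeys,
# and use the actual table name ('Traps', capital T).
clean_tags = [t.strip() for t in tags]

for i in range(len(times)):
    task = {'PartitionKey': '1', 'RowKey': '{}'.format(clean_tags[i]),
            'Date_Time': '{}'.format(times[i]), 'Location': '{}'.format(locations[i])}
    table_service.insert_or_replace_entity('Traps', task)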
I want to provide an official update and response to your issue. This issue is being hotfixed, with the fix expected to roll out by Monday (09/24/2018).

gitlab: get all projects/groups of a member

I'm trying to find inactive members in my GitLab-CE instance via the Gitlab API (v4).
One of the criteria for "(in)activity" is, whether a given user is member of any project or group.
While this information seems to be readily available via the web interface (the Groups and projects tab on the user's overview page in the admin area), I cannot find that information via the API.
The only way I have found so far is to iterate over all projects (and groups) and check whether the user is a member of each.
This strikes me as very slow (as there are probably zillions of projects), so I'm looking for a more direct way to query the system for all projects a user is a member of.
As per the docs (https://docs.gitlab.com/ce/api/members.html), you can use:
GET /groups/:id/members
GET /projects/:id/members
to get only the members added directly to a group/project,
or:
GET /groups/:id/members/all
GET /projects/:id/members/all
to get all members (even those inherited from parent groups).
--- EDIT regarding Nico's question ---
To know whether a user is a member of a project, the approach tested by umläute is to iterate over the project's members and then over each parent group until it reaches the user:
Given the project fu/bar/project_p
with project_p.id = 1
bar.id = 10
fu.id = 100
Is user 'Nico' a member of project_p?
GET /projects/1/members returns ('Paul') / No
GET /groups/10/members returns ('Marc', 'Jean') / No
GET /groups/100/members returns ('Nico') / Yes
Instead, GitLab provides another API:
GET /projects/1/members/all returns ('Paul', 'Marc', 'Jean', 'Nico') / Yes
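As a quick illustration of the difference, here is a small sketch that checks membership through both endpoints from Python (the GitLab URL, token and IDs are placeholders, and it only inspects the first page of up to 100 members; paginate for larger groups):

import requests

GITLAB = "https://gitlab.example.com/api/v4"   # placeholder instance URL
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}

def is_member(kind, item_id, username, direct_only=False):
    # kind is "projects" or "groups"; /members lists direct members only,
    # /members/all also includes members inherited from ancestor groups.
    path = "members" if direct_only else "members/all"
    resp = requests.get(f"{GITLAB}/{kind}/{item_id}/{path}",
                        headers=HEADERS, params={"per_page": 100})
    resp.raise_for_status()
    return any(m["username"] == username for m in resp.json())

print(is_member("projects", 1, "Nico", direct_only=True))  # False: not a direct member
print(is_member("projects", 1, "Nico"))                    # True: inherited via the parent groups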

Two WordPress databases with the same users

I want to have the same WordPress users in two different databases.
For example, if a user registers on SiteA, then he can log in to SiteB, and vice versa.
I also want to create the same cookie for both after login.
mywebsite.com/ (SiteA_DB)
mywebsite.com/blog/ (SiteB_DB)
I've never done this before and maybe WordPress has hooks to achieve this, but I prefer using MySQL for such a trick.
You could try ..
.. using 'federated storage' ( https://stackoverflow.com/a/24532395/10362812 ). This is my favorite, because you don't even have to share a database or even the MySQL server. The downside is that it doesn't work with the db cache and uses an additional connection.
.. creating a 'view' ( https://stackoverflow.com/a/1890165/10362812 ). This should be possible when using the database name in the query itself, and it would be the simplest solution if it works. Downside: the two tables have to share the same MySQL server and have to be assigned to the same user, as far as I know.
-- **Backup your database before trying!** --
DROP TABLE `second_database`.`wp_users`;
DROP TABLE `second_database`.`wp_usermeta`;
CREATE VIEW `second_database`.`wp_users` AS SELECT * FROM `first_database`.`wp_users`;
CREATE VIEW `second_database`.`wp_usermeta` AS SELECT * FROM `first_database`.`wp_usermeta`;
This should work, according to: Creating view across different databases
.. creating a 'shadow copy' ( https://stackoverflow.com/a/1890166/10362812 ). This works with caching and is a standalone table. Downsides: the same as the second solution, plus a bit of setup, and I think it might be the worst option in terms of performance.
These were answers to this question: How do I create a table alias in MySQL
I merged them together for you and made them fit your use-case.
Please also note that solutions 1 and 2 will replace your current user tables on "second_database", because you write directly into "first_database" when querying the federated storage or the view. This can lead to problems with user-role plugins. You should also take care of syncing the plugin options if you use such a plugin, in case it relies on different tables or 'wp_options' values.
Let me know if this works, I have to do a similar task next week. While researching I found the linked answers.
EDIT: I was missing the point about "cookie sharing" in my answer. Your example shows a blog on the same domain, so you should be able to change the way WordPress sets its cookies to make them domain-wide. What I did once for 2 different domains was to hook into the backend (is_admin) and add a JavaScript snippet which did a POST request to siteB, receiving a token which was stored but marked as 'invalid' on siteB. This token was then passed back to my plugin on siteA, which checked whether the user is logged in and (in my case) has admin rights (current_user_can()), and if so, it sent the token back to siteB, which marked it as valid for login. (Make sure only siteA can tell siteB to make this token valid!) Once a user is seen with this token in a cookie on siteB, the user is logged in automatically in the background. I also made this bidirectional. I am sorry that I can't share the code with you; I don't have access to it anymore.
Greetings, Eric!
