Permission denied for schema when syncing and scanning Metabase - metabase

I'm seeing thousands of 'permission denied' errors every time Database syncing and Scanning for Filter Values run. For some reason the Metabase user is trying to access schemas other than the ones listed in Data Model -> Data -> DB -> Schemas, and then fails because it doesn't have permissions on them. Why is this happening?
Thank you in advance for all the help!

Related

terraform GCP VPC connector creation issue

Overview
I tried creating a VPC network with a subnet and adding a Serverless VPC connector with Terraform in GCP. I was following the official guide (https://cloud.google.com/vpc/docs/configure-serverless-vpc-access#terraform) and initially everything was working well. After that I accidentally committed my JSON key to GitHub, someone stole it and used it for crypto mining; the project was disabled but shortly after that reinstated.
After that my Terraform VPC connector creations started to fail. I tried a lot of different things but nothing seems to work (running destroy, changing service accounts, changing names, deleting all of the Terraform subfolders, deleting EVERY resource and restarting the process).
The errors I am getting are:
│ Error: Error waiting to create Connector: Error waiting for Creating Connector: Error code 13, message: An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually.
│
or
│ Error: Error creating Connector: googleapi: Error 409: Requested entity already exists
Today I tried to create the VPC connector from the command line (gcloud) and from the UI. The errors persisted:
Unknown error. Original error message: Operation failed: Insufficient CPU quota in region.
Max throughput of the connector per day over last seven days.
or
An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually.
Errors while deleting:
│ Error: Error waiting for Deleting Network: The network resource 'projects/static-emblem-327016/global/networks/sun-serverless-network' is already being used by 'projects/static-emblem-327016/global/routes/default-route-5cbc9de02e21bb35'
│
I was looking at this issue: https://issuetracker.google.com/issues/164378672. It describes problems with us-central1, but I tried a couple of different regions and I still have the same issue.
Questions:
I am running out of ideas. I was wondering if this is an infrastructure issue; maybe I should dump the project and create a new one? Where can I check if there are infra issues? How can I resolve my issue?
I recently got this error: Error: Error creating Connector: googleapi: Error 409: Requested entity already exists. So I can explain the root cause and its fix.
What I was doing was creating a GCP resource (a Pub/Sub topic) using Terraform (plan and then apply).
But before executing terraform apply, I had created the resource manually, some time back, with the same name. I expected that terraform plan or terraform apply would not try to create it again since the resource name is the same. But instead of refreshing state, I found it was trying to create the resource. The reason is that Terraform does not know about your resource's history. Either import the existing resource into state using the terraform import command, or delete the manually created resource and then run terraform apply.
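For illustration, a minimal sketch of the import route, assuming the topic is declared as google_pubsub_topic.my_topic in your configuration (the resource address, project and topic names are placeholders):

# Adopt the manually created topic into Terraform state instead of recreating it.
terraform import google_pubsub_topic.my_topic projects/my-project/topics/my-topic
# A subsequent plan should now show no pending creation for this resource.
terraform plan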
The message “An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually” can indicate that you don't have enough resources in your project to create the connector. Please make sure you have enough resource quota available in your GCP project.
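If it helps, here is a hedged sketch of checking regional quota headroom from the command line (the region name is a placeholder; use the region of your connector):

# Show each quota metric with its current usage and limit for the region.
gcloud compute regions describe us-central1 --flatten=quotas --format="table(quotas.metric,quotas.usage,quotas.limit)"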
The message “googleapi: Error 409: Requested entity already exists” indicates that the resource the client tried to create already exists.
If you want to know what the root cause is, you can check the logs of the VPC Connector creation in the System Event Audit Logs.
System Event audit logs contain log entries for Google Cloud actions that modify the configuration of resources. System Event audit logs are generated by Google systems; they aren't driven by direct user action. System Event audit logs are always written; you can't configure, exclude, or disable them. The instructions to access them are here.
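For example, a minimal sketch of pulling recent System Event entries with gcloud (the project ID is a placeholder):

# Read the latest System Event audit log entries for the project.
gcloud logging read 'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fsystem_event"' --project=my-project --limit=20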
On the other hand, generating and distributing service account keys poses severe security risks to your organization. They are long-lived credentials that are not automatically rotated. If leaked, accidentally or maliciously, these keys can allow attackers to gain access to your sensitive GCP resources. If you accidentally compromised your JSON key, please read the recommendations in this link.
If you want to know more about the risks of and alternatives to downloading service account keys, please follow this link. Please note that this is not official GCP documentation, so I cannot vouch for its accuracy.
I was able to resolve my issue. It turns out that I had deleted my default Compute Engine service account in a panic. I was able to recover it and everything worked from there. For more info go here: https://cloud.google.com/iam/docs/creating-managing-service-accounts#undeleting_a_service_account
You have to identify the default Compute Engine service account and undelete it:
gcloud beta iam service-accounts undelete ACCOUNT_ID
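If you don't know the ACCOUNT_ID (the undelete command wants the account's numeric unique ID, not its email), one way to recover it is from the Admin Activity audit logs; a sketch, with a placeholder project ID:

# Find the DeleteServiceAccount entry; its payload contains the deleted
# account's numeric unique ID to pass to the undelete command above.
gcloud logging read 'protoPayload.methodName="google.iam.admin.v1.DeleteServiceAccount"' --project=my-project --limit=5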

neo4j failing to reload - possible causes for failed store_lock permissions

Until yesterday a site using neo4j + flask + nginx was working ok.
This morning I found error 500.
From the logs, it seems that the culprit is neo4j:
tail -100 /var/log/neo4j/neo4j.log
Caused by: org.neo4j.kernel.StoreLockException: Unable to obtain lock on store lock file: /usr/share/neo4j/data/databases/target_association.db/store_lock. Please ensure no other process is using this database, and that the directory is writable (required even for read-only access)
at org.neo4j.kernel.internal.StoreLocker.storeLockException(StoreLocker.java:94)
at org.neo4j.kernel.internal.StoreLocker.checkLock(StoreLocker.java:86)
at org.neo4j.kernel.internal.StoreLockerLifecycleAdapter.start(StoreLockerLifecycleAdapter.java:40)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:433)
... 11 more
Caused by: java.io.FileNotFoundException: /usr/share/neo4j/data/databases/graph.db/store_lock (Permission denied)
This is puzzling to me: it was working fine yesterday, all queries are GETs, and the db folder is owned by testuser:staff.
I have not changed permissions: what could have happened?
testuser also runs the Flask app that interacts with the db, so I don't understand why there are now user permission problems.
The app is also owned by the www-data group.
I looked at:
https://groups.google.com/forum/#!topic/neo4j/k2c6PgV6WFE
Could you please show a good procedure (Ubuntu commands) to debug and correct the user that is running neo4j and the permissions on the db folder? Should I change to root?
My app permissions are set as:
/var/www/app/
testuser:www-data
py2neo and connectors are in
/var/www/app/virtualenvironment/ ..
and
/usr/share/neo4j/data/databases/graph.db
testuser:staff
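Not an authoritative procedure, but a minimal diagnostic sketch along these lines, assuming the service is managed by systemd and using the paths from the error above (adjust names to your install):

# 1. See which user the running neo4j (java) process belongs to.
ps -eo user,pid,cmd | grep '[n]eo4j'
# 2. Compare that with the ownership of the store directory and lock file.
ls -la /usr/share/neo4j/data/databases/graph.db
# 3. If they differ, give the data directory back to the service user
#    (testuser:staff in this setup); avoid running the service as root.
sudo chown -R testuser:staff /usr/share/neo4j/data
# 4. Check that no stale process still holds the store lock, then restart.
sudo lsof /usr/share/neo4j/data/databases/graph.db/store_lock
sudo systemctl restart neo4j

The usual causes of this exception are a second neo4j process still holding the lock, or the data directory's ownership changing (for example after running neo4j once as root, or after a package upgrade), so steps 1 and 2 will normally reveal the mismatch.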

OpenLdap with Jxplorer: unable to list dc=maxcrc,dc=com

I am a newbie to LDAP and I am facing an issue while accessing an OpenLDAP server with JXplorer. I am running OpenLDAP on my laptop with Windows 7. I have configured OpenLDAP with the BDB database. When I try accessing the LDAP server from JXplorer, I get an error that says:
unable to list dc=maxcrc,dc=com.
unable to perform read entry operation.
In the slapd.conf file, the below entries are present:
database bdb
suffix "dc=maxcrc,dc=com"
rootdn "cn=Manager,dc=maxcrc,dc=com"
Also, when I try adding entries to LDAP, it fails saying:
"Unable to perform modify operation".
I went through the OpenLDAP README PDF, but it was of no help.
Any suggestions?
Quick help would be greatly appreciated :) Thanks in advance.
Below are the error details I saw:
javax.naming.NameNotFoundException: [LDAP: error code 32 - No Such Object]; remaining name 'dc=maxcrc,dc=com'.
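Error code 32 (No Such Object) reported for the suffix itself usually means the base entry dc=maxcrc,dc=com was never actually created in the directory, so there is nothing to list or to add children under. A minimal sketch of creating it, assuming a rootpw is configured for cn=Manager in slapd.conf: put the following in a file maxcrc.ldif

dn: dc=maxcrc,dc=com
objectClass: dcObject
objectClass: organization
dc: maxcrc
o: MaxCRC

and load it as the rootdn:

# -x simple bind, -D bind DN, -W prompt for the rootpw, -f the LDIF file
ldapadd -x -H ldap://localhost -D "cn=Manager,dc=maxcrc,dc=com" -W -f maxcrc.ldif

After that, both listing the base in JXplorer and adding entries beneath it should work.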

Issue with XPage access to documents from another database

I have 2 databases on one server: a web app db containing XPages only, and another database containing documents. When I tried to open a document in an XPage, an error appeared saying that I don't have access to the document. (I did a check using db.queryAccess(myUserName) and found out that I don't have access to the document database, even though my user name is specified directly as Manager.) I created a new copy of the document database, then pointed my web app db to that, and there I have access to the documents! I had implemented this before and this is the first time I've had this problem. What are the probable problem(s) with my original document database? I already did a fixup and compact, but to no avail. Please help me... Thanks!
Please check the "Maximum Internet name and password" option in the ACL settings. This option overrides every ACL entry: if you are Manager but the option is set to "No Access", you have no access.

Restoring a database into a different instance of tridion

I have got most of the way but there seems to be a permissions issue somewhere:
Before the restore everything is working fine in my target environment: the target has a server login account TCMDBUser which is mapped to my tridion_cm database user TCMDBUser.
My source tridion_cm database has user TCMDBUser_DEV.
After restoring the source .bak into my target, TCMDBUser_DEV is orphaned.
I edit the TRUSTEES table to correct the MTSUser and admin login accounts for my target environment, and run the following to fix up my orphaned database user:
EXEC sp_change_users_login @Action='update_one',
    @UserNamePattern='TCMDBUser_DEV',
    @LoginName='TCMDBUser'
GO
I can log back in to Tridion Explorer, see the expected list of publications, and walk through the tree structure, but when I come to a folder which should contain items I see nothing, with an error.
The corresponding event log error is:
Unable to get list of SDL Tridion Content Manager items.
DESCRIPTION
Error Code:
0x80040000 (-2147221504)
Call stack:
System.Data.ProviderBase.FieldNameLookup.GetOrdinal(String)
System.Data.SqlClient.SqlDataReader.GetOrdinal(String)
System.Data.SqlClient.SqlDataReader.get_Item(String)
Tridion.ContentManager.Data.AdoNet.DatabaseUtilities.ConvertToFieldDictionary(IDataRecord,IDictionary`2)
Tridion.ContentManager.Data.AdoNet.IdentifiableObjectDataMapper.Read(TcmUri,IDataRecord,IDictionary`2)
Tridion.ContentManager.Data.AdoNet.ContentManagement.OrganizationalItemDataMapper.GetListItemsPost(IDataReader,TcmUri,OrganizationalItemItemsFilterData)
Tridion.ContentManager.Data.AdoNet.ContentManagement.OrganizationalItemDataMapper.Tridion.ContentManager.Data.ContentManagement.IOrganizationalItemDataMapper.GetListItems(TcmUri,OrganizationalItemItemsFilterData)
Tridion.ContentManager.ContentManagement.OrganizationalItem.GetListItemsData(OrganizationalItemItemsFilter)
Tridion.ContentManager.ContentManagement.OrganizationalItem.GetListItemsStream(OrganizationalItemItemsFilter)
Tridion.ContentManager.BLFacade.ContentManagement.OrganizationalItemFacade.GetListItemsXml(UserContext,String,ListFilter,ListColumnFilter)
Tridion.ContentManager.BLFacade.ContentManagement.OrganizationalItemFacade.GetListData(UserContext,String,EnumListKind,ListColumnFilter,String)
Folder.GetListItems
You will need to delete/drop the TCMDBUser_DEV user from the DB and then create a new one with the same name and password (or reattach it to your CM DB). That should fix your problem.
I normally use the delete method with MS SQL Server. I believe this occurs due to the ownership status that the TCMDBUser has on the database schema.
When complete, your TCMDBUser user should have the following permissions on your Tridion_CM database:
Like Chris mentioned, I always drop the user from the database and then assign the existing TCMDBUser in SQL Server the rights to the restored database. You can drop the user with the following command (on the restored database):
EXEC sp_dropuser TCMDBUser
Then, through SQL Server > Security > Logins, open the properties of your TCMDBUser and in the User Mapping add the following database roles: db_datareader, db_datawriter and db_ddladmin.
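For reference, the same mapping can be scripted instead of clicking through the GUI; a sketch run against the restored database (MYSERVER is a placeholder; this uses ALTER ROLE, which needs SQL Server 2012+, while older versions use sp_addrolemember):

# Recreate the database user for the existing login, then grant the three roles.
sqlcmd -S MYSERVER -d Tridion_CM -Q "CREATE USER TCMDBUser FOR LOGIN TCMDBUser"
sqlcmd -S MYSERVER -d Tridion_CM -Q "ALTER ROLE db_datareader ADD MEMBER TCMDBUser; ALTER ROLE db_datawriter ADD MEMBER TCMDBUser; ALTER ROLE db_ddladmin ADD MEMBER TCMDBUser"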
That's what I've always done in the past and it works for me. I'm not sure if it's all required, but it's worth a try, I guess.
Try creating a new user TCMDBUser in the database and run the following command:
EXEC sp_change_users_login 'Update_One', 'TCMDBUser', 'TCMDBUser'
