How to add an existing S3 bucket (not AWS, but a custom provider) to Nexus storage

How can I add an S3 storage solution from another cloud provider (not AWS) to Nexus? I have my own S3-compatible service (a custom S3 solution), not AWS, so I would like to know how to add this other storage to Nexus.
I only have a public link for the S3 bucket and have granted all rights on it, but adding it to Nexus fails with:
An error occurred saving data. Unable to initialize blob store bucket: test, Cause: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
I have tried changing the region from the default to eu2 (European Union (Germany)), since that is the region my S3 solution uses.

Related

Using Airflow's S3Hook, is there a way to copy objects between buckets with different connection ids?

I'm copying files from an external company's bucket; they've sent me an access key/secret that I've set up as an environment variable. I want to be able to copy objects from their bucket. I've used the code below, but that only moves objects within the same connection. How do I use S3Hook to copy objects with a different connection id?
s3 = S3Hook(self.aws_conn_id)
s3_conn = s3.get_conn()

ext_s3 = S3Hook(self.ext_aws_conn_id)
ext_s3_conn = ext_s3.get_conn()

# this moves objects with the same connection...
s3_conn.copy_object(
    Bucket="bucket",
    Key=f'dest_key',
    CopySource={
        'Bucket': self.partition.bucket,
        'Key': key
    },
    ContentEncoding='csv')
From my point of view this is impossible. First of all, you can only declare one URL endpoint.
Secondly, Airflow's S3Hook uses Boto3 under the hood, and your two connections will presumably have different access_key and secret_key values for creating the boto3 resource/client. As explained in this post, if you wish to copy between different buckets, you will need a single set of credentials that has:
GetObject permission on the source bucket
PutObject permission on the destination bucket
Again, in the S3Hook you can only declare a single set of credentials. You could perhaps use the credentials given by your client and declare a bucket in your account with PutObject permission, but that assumes you are allowed to do this in your enterprise (not very wise in terms of security), and even then your S3Hook would still reference only one endpoint.
To sum up: I have been dealing with the same problem and ended up creating two S3 connections, using the first one to download from the original bucket and the second to upload to my enterprise bucket.
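A minimal sketch of that two-connection approach (the connection ids, bucket names, and keys below are hypothetical, and the import path assumes the Amazon provider package):

from airflow.providers.amazon.aws.hooks.s3 import S3Hook

# Hypothetical connection ids: one holding the external company's credentials,
# one holding our own account's credentials.
source_hook = S3Hook(aws_conn_id="external_company_s3")
dest_hook = S3Hook(aws_conn_id="our_s3")

# Read the object with the source credentials...
data = source_hook.read_key(key="exports/data.csv", bucket_name="their-bucket")

# ...and write it back out with the destination credentials.
dest_hook.load_string(
    string_data=data,
    key="imports/data.csv",
    bucket_name="our-bucket",
    replace=True,
)

Note that this buffers the whole object in memory; for large files the hook's file-based download/upload methods would be a better fit.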

How to encrypt CloudTrail storing trails in another account

I've got two accounts:
1111111 - it's my main account
2222222 - used for audit purposes
I have created an S3 bucket in 2222222 (and called it 'my-audit-bucket').
In 1111111 I have created a trail in CloudTrail and set it to store all data in 'my-audit-bucket'. That has been working perfectly fine for quite a while (additionally I've got SQS configured, which is used to push logs to Splunk).
Now I want to encrypt the CloudTrail logs using a CMK. I changed the configuration but wasn't able to save it, as I got:
"You don't have adequate permissions in S3 to perform this operation"
Any idea what should I add to the configuration?

VPC creation problem in AWS via Terraform

I have been trying to create VPC infrastructure in AWS through Terraform, but I am unable to run the "terraform apply" command. Has anyone had a similar problem while using a free trial account?
Error: Error creating VPC: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: 4HZVo3-eWCS-YLhRy55P_0T13F_fPtA29TYrJrSe5_dyPxIcqRbh7_wCcrCZr2cpmb-B5--_fxVaOngBfHD_7yfnPH7NLf1rrqpb7ge1mvQrK8P0Ltfpgpm37nZXezZUoYf1t4peB25aCxnbfeboHpgJjcFnHvqvf5so5G2PufnGZSB4FUZMfdaqppnJ-sNT7b36TonHUDNbLhBVUl5Fwd8d02R-6ZraRYvDx-o4lDfP9xSWs6PMUFXNr1qzruYaeMYMxIe-9kGOQptgBLYZXsxr966ajor-p6aLJAKlIwPGN7Iz7v893oGpGgz_8wxTv4oEb5GnfYOuPOqSyEMLKI69b2JUvVU1m4tCcjKBaHJARP5sIiFSGhh4lb_E0_cKkmmFfKzyET2h8YkSD8U9Lm4rRtGbAEJvIoDZYDkNxlW7W2XvsccmLnQFeSxpLolVhguExkP7DT9uXffJzFEjQn-VkhqKnWlwv0vxIcOcoLP04Li5WAqRRr3l7yK2bYznfg
│ status code: 403, request id: 5c297a4d-7bcf-4bb4-b311-37480e1f26b8
Make sure you have properly set up your AWS credentials and permissions.
Check these two files:
~/.aws/credentials
~/.aws/config
These docs can help:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
Did you configure your access keys?
provider "aws" {
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
There are multiple ways to do it (described here).
My example above can be a good start, but you don't want to commit those keys, so I recommend configuring them in ~/.aws/credentials (as you would for the AWS CLI). The aws provider will pick them up automatically, so you don't need to define them anywhere in your Terraform code.
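For reference, a minimal ~/.aws/credentials file looks something like this (the values are placeholders):

[default]
aws_access_key_id = my-access-key
aws_secret_access_key = my-secret-key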

Airflow logs in S3 bucket

I would like to write the Airflow logs to S3. These are the parameters that we need to set according to the docs:
remote_logging = True
remote_base_log_folder =
remote_log_conn_id =
If Airflow is running in AWS, why do I have to pass the AWS keys? Shouldn't the boto3 API be able to write/read to S3 if the correct permissions are set on the IAM role attached to the instance?
Fair point, but I think it allows for more flexibility if Airflow is not running on AWS, or if you want to use a specific set of credentials rather than give the entire instance access. It may also have been an easier implementation, because the underlying code for writing logs to S3 uses the S3Hook (https://github.com/apache/airflow/blob/1.10.9/airflow/utils/log/s3_task_handler.py#L47), which requires a connection id.
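For illustration, a sketch of what those settings might look like in airflow.cfg (the bucket path and connection id are hypothetical; the section is [core] in Airflow 1.10 and [logging] in Airflow 2.x). If the connection referenced by remote_log_conn_id has no credentials stored, boto3 should fall back to its default credential chain, which includes the instance's IAM role:

[core]
remote_logging = True
remote_base_log_folder = s3://my-airflow-logs/logs
remote_log_conn_id = aws_s3_logs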

How to create a database link in Oracle 11g

How to create a database link in Oracle 11g to access tables.
You seem to have copied the example in the documentation without really understanding it.
The USING 'local' part of the statement is creating a link to 'the local database', where local is the service name of a database. (The example is a bit confusing, to be fair).
When the link is used it tries to interpret local as a service name, appending the current database's domain, as the docs say:
USING 'connect string'
Specify the service name of a remote database. If you specify only the database name, then Oracle Database implicitly appends the database domain to the connect string to create a complete service name. Therefore, if the database domain of the remote database is different from that of the current database, then you must specify the complete service name.
If you're trying to create a link back into the same database - which would be a bit odd, but I've seen it done in place of granting access across schemas, and that seems to be what the example is hinting at - then you can replace 'local' in the USING clause with the service name of your current database (e.g. USING 'orcl', or whatever).
You can also use a TNS alias: if your tnsnames.ora has an entry for SOME_DB which points to the SID or service name of another database, you can have USING 'some_db'. You should be able to use any connect string, I think; certainly Easy Connect is allowed. There's more in the net services admin guide.
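For illustration, a minimal sketch of the syntax (the link name, user, password, and service name are placeholders, not values from the question):

-- create a link that connects as a specific remote user via the 'some_db' service/alias
CREATE DATABASE LINK my_link
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING 'some_db';

-- query a table on the remote database through the link
SELECT * FROM some_table@my_link;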
