I have a problem. The OpenStack community provides MySQL database dumps with the complete datasets.
I want to identify the same developers across the different data schemas. I find that each schema contains a table named 'people_uidentities', which provides the person's id and their uuid.
The problem is that I find many developers who have the same name but different uuids in different schemas. Take 'Thai Tran' as an example: his uuids in openstack_sourcecode and openstack_tickets are different.
uuid in openstack_sourcecode:
uuid in openstack_tickets:
My question is: what is the generation mechanism of the uuid in OpenStack?
Can one person have several uuids?
Thank you for your help!
I am not sure about the general mechanism of uuid generation in OpenStack, but one user account can only have one uuid that refers to that account, and you can use the uuid to get the username. For example, if you are using the keystone client library (https://docs.openstack.org/python-keystoneclient/latest/using-api-v3.html):
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client
auth = v3.Password(auth_url='https://my.keystone.com:5000/v3',
                   user_id='myuserid',
                   password='mypassword',
                   project_id='myprojectid')
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)
You have to have an admin role in OpenStack to run the following commands:
user = keystone.users.get("###") # user's uuid to replace "###"
user.name  # the username associated with that uuid
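Conversely, if you only know the name, you should be able to filter users by name. This is a small untested sketch; the name below is just the example from the question:

users = keystone.users.list(name="Thai Tran")  # filter users by name; may match several accounts
for u in users:
    print(u.id, u.name)  # u.id is the uuid of that account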
HTH.
I would like to write the Airflow logs to S3. Following are the parameters that we need to set according to the docs:
remote_logging = True
remote_base_log_folder =
remote_log_conn_id =
If Airflow is running in AWS, why do I have to pass the AWS keys? Shouldn't the boto3 API be able to write/read to S3 if the correct permissions are set on the IAM role attached to the instance?
Fair point, but I think it allows for more flexibility if Airflow is not running on AWS, or if you want to use a specific set of credentials rather than give the entire instance access. It may also have been the easier implementation, because the underlying code for writing logs to S3 uses the S3Hook (https://github.com/apache/airflow/blob/1.10.9/airflow/utils/log/s3_task_handler.py#L47), which requires a connection id.
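That said, if the instance role already has the required S3 permissions, one option (a minimal sketch, assuming Airflow 1.10.x; the connection id my_s3_conn is a placeholder) is to create an AWS connection with no credentials, so the S3Hook lets boto3 fall back to its default credential chain, which includes the instance profile:

from airflow import settings
from airflow.models import Connection

# An "aws" connection with no login/password: boto3 then resolves credentials
# through its default chain, including the IAM role attached to the instance.
conn = Connection(conn_id="my_s3_conn", conn_type="aws")
session = settings.Session()
session.add(conn)
session.commit()

You would then point remote_log_conn_id at my_s3_conn (and set remote_base_log_folder to your s3:// path) in airflow.cfg.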
I want to use Accounts in my CorDapp and I would like to check if an account already exists before creating a new one.
How can I query the Vault to see if an account with a specific name exists or not?
// accountInfo(name) returns the list of existing accounts with that name;
// a size of 0 means no account with that name exists yet.
int numberOfResults = getServiceHub().cordaService(KeyManagementBackedAccountService.class)
        .accountInfo("USER_NAME").size();
Btw, this service KeyManagementBackedAccountService has a lot of useful functions; it comes with the Accounts library. I recommend exploring it.
I am trying to test whether I can retrieve the references of a paper by its DOI using the rscopus package.
I use this:
library(rscopus)
library(dplyr)
auth_token_header("please_add")
akey="please_add"
object_retrieval("10.1109/ISCSLP.2014.6936630", ref = "doi")
but I receive this error:
Error in get_api_key(api_key, error = api_key_error) :
API key not found, please set option('elsevier_api_key_filename') or option('elsevier_api_key') for general use or set environment variable Elsevier_API, to be accessed by Sys.getenv('Elsevier_API')
Why do I receive it?
Please follow the steps I outlined in the "Steps to get API key" section of https://github.com/muschellij2/rscopus#steps-to-get-api-key. They are posted below:
In order to use this package, you need an API key from https://dev.elsevier.com/sc_apis.html. You should log in from your institution, go to Create API Key, provide a website URL and a label (the website can be your personal website), and agree to the terms of service.
Go to https://dev.elsevier.com/user/login. Log in or create a free account.
Click "Create API Key". Put in a label, such as rscopus key. Add a website. http://example.com is fine if you do not have a site.
Read and agree to the TOS if you do indeed agree.
Add Elsevier_API = "API KEY GOES HERE" to your ~/.Renviron file, or add export Elsevier_API=API KEY GOES HERE to your ~/.bash_profile.
Alternatively, you can set the API key using rscopus::set_api_key or with options("elsevier_api_key" = api_key). You can access the API key using rscopus::get_api_key.
You should be able to test out the API key using the interactive Scopus APIs.
A note about API keys and IP addresses
The API key is bound to a set of IP addresses, usually those of your institution. Therefore, if you are using this for a Shiny application, you must host the Shiny application from your institution's servers in some way. Also, you cannot access the Scopus API with this key if you are offsite; you must VPN into the server or use a computing cluster with an institution IP.
See https://dev.elsevier.com/tecdoc_api_authentication.html
I'm trying to use the R AzureDSVM package to create a Linux DSVM through R. I am reading the guide https://raw.githubusercontent.com/Azure/AzureDSVM/master/vignettes/10Deploy.Rmd (Azure DSVM guide)
First, the guide asks you to create an Azure Active Directory application, which provides a "tenant ID", "client ID" and "user key", following the guidelines described in http://htmlpreview.github.io/?https://github.com/Microsoft/AzureSMR/blob/master/inst/doc/Authentication.html (Azure SMR Auth guide).
As I understand it, this creates an app registered in Azure Active Directory, creates an "authentication key" for the app (which is the user key), and associates the app with a resource group. I've done this successfully.
The Azure DSVM guide then creates a VM with public key authentication in a similar way to what follows:
library(AzureSMR)
library(AzureDSVM)
TID <- "123abc" # Tenant ID
CID <- "456def" # Client ID
KEY <- "789ghi" # User key
context <- createAzureContext(tenantID=TID, clientID=CID, authKey=KEY)
resourceGroup<-"myResouceGroup"
location<-"myAzureLocation"
vmUsername<-"myVmUsername"
size<-"Standard_D1_v2"
mrsVmPassword<-"myVmPassword"
hostname<-"myVmHostname"
ldsvm <- deployDSVM(context,
                    resource.group = resourceGroup,
                    location = location,
                    hostname = hostname,
                    username = vmUsername,
                    size = size,
                    os = "Ubuntu",
                    pubkey = PUBKEY)
The guide vaguely describes creating a public key (PUBKEY) from the user's private key, which is sent to the VM to allow SSH authentication:
To get started we need to load our Azure credentials as well as the
user’s ssh public key. Public keys on Linux are typically created on
the users desktop/laptop machine and will be found within
~/.ssh/id_rsa.pub. It will be convenient to create a credentials file
to contain this information. The contents of the credentials file will
be something like the following and we assume the user creates such a
file in the current working directory, naming the file _credentials.R.
Replace with the user’s username.
TID <- "72f9....db47" # Tenant ID
CID <- "9c52....074a" # Client ID
KEY <- "9Efb....4nwV....ASa8=" # User key
PUBKEY <- readLines("~/.ssh/id_rsa.pub") # For Linux DSVM
My question:
Is this public key PUBKEY generated from the authentication/user key created by setting up the Azure Active Directory application in the Azure SMR Auth guide (the KEY variable in the above script)? If so, how? I've tried using the R sodium library's pubkey(charToRaw(KEY)) to do this, but I get "Invalid key, must be exactly 32 bytes".
If PUBKEY isn't generated from KEY, from what is it generated? And how does the package know how to authenticate with the private key to this public key?
The AAD key is used for authenticating to AAD. The public/private keypair is separate and is used for authenticating to the VM. If you do not have a public key (in the file ~/.ssh/id_rsa.pub), you can create one using ssh-keygen on Linux.
SSH connections use the private key (in ~/.ssh/id_rsa) by default.
A couple of things in addition to Paul Shealy's (correct) answer:
ssh-keygen is also installed on recent versions of Windows 10 Pro, along with ssh, scp and curl. Otherwise, you probably have the Putty ssh client installed, in which case you can use puttygen to save a public/private key pair.
AzureDSVM is rather old and depends on AzureSMR, which is no longer actively maintained. If you want to deploy a DSVM, I'd recommend using the AzureVM package, which is on CRAN and GitHub. This in turn builds on the AzureRMR package which provides a general framework for managing Azure resources.
library(AzureVM)
az <- AzureRMR::az_rm$new(tenant="youraadtenant", app="yourapp_id", password="password")
sub <- az$get_subscription("subscription_id")
rg <- sub$get_resource_group("rgname")
vm <- rg$create_vm(os="Ubuntu",
                   username="yourname",
                   passkey=readLines("~/.ssh/id_rsa.pub"),
                   userauth_type="key")
Have a look at the AzureRMR and AzureVM vignettes for more information.
Disclaimer: I'm the author of AzureRMR and AzureVM.
Context: We are trying to load some CSV-format data into GCP BigQuery using GCP Dataflow (Apache Beam). As part of this, we are creating the BQ tables for the first time (for each table) through the BigQueryIO API. One of the customer requirements is that the data on GCP needs to be encrypted using customer-supplied/managed encryption keys.
Problem Statement: We are not able to find any way to specify the custom encryption keys through the API while creating tables. The GCP documentation details how to specify custom encryption keys through the GCP BigQuery console, but we could not find anything about specifying them through the API from within the Dataflow code.
Code Snippet:
String tableSpec = new StringBuilder().append(PipelineConstants.PROJECT_ID).append(":")
        .append(dataValue.getKey().target_dataset).append(".").append(dataValue.getKey().target_table_name)
        .toString();

ValueProvider<String> valueProvider = StaticValueProvider.of("gs://bucket/folder/");

dataValue.getValue().apply(Count.globally()).apply(ParDo.of(new RowCount(dataValue.getKey())))
        .apply(ParDo.of(new SourceAudit(runId)));

dataValue.getValue().apply(ParDo.of(new PreProcessing(dataValue.getKey())))
        .apply(ParDo.of(new FixedToDelimited(dataValue.getKey())))
        .apply(ParDo.of(new CreateTableRow(dataValue.getKey(), runId, timeStamp)))
        .apply(BigQueryIO.writeTableRows().to(tableSpec)
                .withSchema(CreateTableRow.getSchema(dataValue.getKey()))
                .withCustomGcsTempLocation(valueProvider)
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
Query: Could anybody let us know:
Is it possible to provide an encryption key through the Beam API?
If it is not possible with the current version, what could be a possible workaround?
Kindly let us know if additional information is required.
Customer-supplied encryption keys are a new feature; not all libraries have been updated to support them yet.
If you know the table name in advance, you can use the UI, CLI, or API to create the table, then run your normal flow to load data into that table. That might be a workaround for you.
https://cloud.google.com/bigquery/docs/customer-managed-encryption#create_table
API to create table: https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/insert
You need to set this section on the table object:
"encryptionConfiguration": {
"kmsKeyName": string
}
More details on table: https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource
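For example, here is a minimal sketch of pre-creating such a table with the google-cloud-bigquery Python client, as one way to do the out-of-band step; the project, dataset, table, schema, and key names below are placeholders:

from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Placeholder schema; it should match what CreateTableRow.getSchema() produces.
schema = [bigquery.SchemaField("some_field", "STRING")]

table = bigquery.Table("my-project.my_dataset.my_table", schema=schema)
table.encryption_configuration = bigquery.EncryptionConfiguration(
    kms_key_name="projects/my-project/locations/us/keyRings/my_ring/cryptoKeys/my_key"
)
client.create_table(table)  # sets encryptionConfiguration.kmsKeyName on the new table

Because the table then already exists, the pipeline's CREATE_IF_NEEDED/WRITE_APPEND settings should simply append into the encrypted table.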