How to set up pub/sub in MQ v8 in distributed mode from one QM to another QM

I am able to create a pub/sub setup on a local QM.
But I want to create a pub/sub setup from the TEST queue manager to the TEST1 queue manager. Can someone help me with this?

Maybe this cookbook in the Knowledge Center helps you.

Related

Managing DBT profile file in MWAA

I would like to use dbt in an MWAA Airflow environment. To achieve this, I need to install dbt in the managed environment and from there run the dbt commands via the Airflow operators or CLI (BashOperator).
My problem with this solution is that I need to store the dbt profile file(s) - which contain the target/source database credentials - in S3. Otherwise the file is not going to be deployed to the Airflow worker nodes and hence cannot be used by dbt.
Is there any other option? I feel this is a big security risk and also undermines the use of Airflow (because I would like to use its built-in password manager).
My ideas:
Create the profile file on the fly in the Airflow DAG as a task and write it out to local disk. I do not think this is a feasible workaround, because there is no guarantee that the dbt task is going to run on the same worker node on which my code created the file.
Move the profile file manually to S3 (exclude it from CI/CD). Again, I see a security risk, as I am storing credentials on S3.
Create a custom operator which builds the profile file on the same machine the command will run on. Maintenance nightmare.
Use MWAA environment variables (https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-env-variables.html) and combine them with dbt's env_var command (https://docs.getdbt.com/reference/dbt-jinja-functions/env_var). Storing credentials in system-wide environment variables feels awkward.
Any good ideas or best practices?
@PeterRing, in our case we use dbt Cloud. Once the connection is set up in the Airflow UI, you call dbt Cloud job IDs to trigger the job (and then use a sensor to monitor it until it completes).
If you can't use dbt Cloud, perhaps you can use AWS Secrets Manager to store your db profile/creds: Configuring an Apache Airflow connection using a Secrets Manager secret
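If you go the Secrets Manager route without dbt Cloud, one hedged option is to build profiles.yml inside the same task that runs dbt, so the file only ever exists on the worker that needs it. The secret name, adapter type, and key names below are placeholders, not anything from the original question - a minimal sketch only:

# Hypothetical sketch: fetch dbt credentials from AWS Secrets Manager and
# write profiles.yml just before running dbt, all inside one Airflow task
# so everything happens on the same MWAA worker.
import json
import subprocess
import tempfile
from pathlib import Path

import boto3


def run_dbt_with_generated_profile(**context):
    # "dbt/warehouse-creds" is a placeholder secret name
    secret = boto3.client("secretsmanager").get_secret_value(
        SecretId="dbt/warehouse-creds"
    )
    creds = json.loads(secret["SecretString"])

    # Minimal profiles.yml; the "redshift" adapter and key names are assumptions
    profiles = f"""
my_project:
  target: prod
  outputs:
    prod:
      type: redshift
      host: {creds['host']}
      user: {creds['user']}
      password: {creds['password']}
      dbname: {creds['dbname']}
      schema: analytics
      port: 5439
"""
    profiles_dir = Path(tempfile.mkdtemp())
    (profiles_dir / "profiles.yml").write_text(profiles)

    # dbt reads the profile from --profiles-dir, so nothing is stored in S3
    subprocess.run(["dbt", "run", "--profiles-dir", str(profiles_dir)], check=True)

Wired into a PythonOperator, this avoids shipping credentials through the repo or S3; the trade-off is that the dbt CLI still has to be installed on the MWAA workers (for example via requirements.txt).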

Export CSV file to OneDrive Using R Scripting in Power BI Service

I am trying to build a fully automated and sustainable reporting tool in Power BI. I have built a report in Power BI that, among other things, uses R scripting at one point to create a data export to my local C: drive with the following code:
# 'dataset' holds the input data for this script
.libPaths(.libPaths()[3])
require(gdata)
write.table(trim(dataset), file="C:\\Users\\Username\\OneDrive\\Folder\\Inventory Log.csv", sep=",", row.names=FALSE, append=TRUE, col.names=FALSE)
plot(dataset);
While all my other data is connected to Power BI via OneDrive or online sources, this is still connected to my local machine. I have a personal gateway set up, but that still requires my local machine to be physically on during the scheduled refresh in the Power BI service.
I have tried using the Microsoft365R package, but my R knowledge and experience are still limited, so I wasn't able to come up with a solution that would allow file="OneDrive Path" within the write.table() function to execute successfully in Power BI Desktop, let alone the Power BI service.
The goal is to fully automate this and not require my computer to be on during weekends or non-work days.
Is it possible to write a CSV to a OneDrive file? If so, what are some ways that have worked successfully?
Any ideas? Thank you for any help you can provide!
Microsoft365R author here. Disclaimer: I'm not familiar with PowerBI, but I assume you can run R scripts inside it and connect to the Internet etc.
There are a few things needed to get this to run unattended.
A function to upload a data frame as CSV to OneDrive, without requiring you to log in, is as follows:
upload <- function(dataset, upload_path, drive_id, ...)
{
    # write the data frame to a temporary CSV, deleted when the function exits
    outfile <- tempfile()
    on.exit(unlink(outfile))
    write.table(dataset, outfile, ...)

    library(Microsoft365R)
    library(AzureGraph)

    # authenticate as an app registration (client credentials flow),
    # so no interactive login is required
    gr <- create_graph_login(
        tenant="{yourtenant}",
        app="{client_id}",
        password="{client_secret}",
        auth_type="client_credentials"
    )
    gr$get_drive(drive_id)$upload_file(outfile, upload_path)
}
On the AAD side, create an app registration and give it a client secret (password). You then give it the Microsoft Graph application permissions necessary to read and write drives, most likely "Files.ReadWrite.All".
Note down the client ID and client secret for your app registration. These are the values you plug into the R function above.
You can get your drive ID with the following R code.
drv <- get_business_onedrive(tenant="yourtenant")
drv$properties$id
You'll probably need the help of your friendly local admin to get all this done, if only because most orgs lock down the ability to register apps.

Showing Volume Details using python openstacksdk, python novaclient, python cinderclient

Hello, and I hope you're having a great day. I have a question about using the OpenStack API in Python.
I'm using python-novaclient to get server details and flavor details, and I want to get the volume details too, but I don't know how to do it. I've tried to collect the volume details, but it failed somehow, so I'd like to ask whether you have any ideas.
This is the information I want to get:
volume_id, attached to (which server each volume is attached to), name, status, and volume_type (CEPH or LVM)
I used python-cinderclient, but I only got the volume_id.
Here's the code:
volumes = cinder.volumes.list()
Can someone help me get the other data? Other than running the OpenStack command line on the server, I just need a Python module to get this data.
Thanks in advance.
I've finally figured it out, and I'm going to answer this for anyone who is interested in the OpenStack SDK or other Python APIs for OpenStack.
First, for authentication you need to use the Keystone API; the documentation is all over the internet, so no need to worry - you can just look up the required credentials in your OpenStack deployment.
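For reference, this is roughly what the authentication step can look like with openstacksdk; the auth URL, project, user, and password values below are placeholders, not values from the question:

import openstack

# Connect via Keystone v3; all credential values here are placeholders
conn = openstack.connect(
    auth_url="https://controller:5000/v3",
    project_name="myproject",
    username="myuser",
    password="mypassword",
    user_domain_name="Default",
    project_domain_name="Default",
)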
For my question, I use the get_volume function from the Connection class; please see its documentation for this (you can read the other documentation on the internet as well). Here is an example of how to get volume details:
vol = conn.get_volume(volume_id)
print(vol)
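To pull out just the fields asked about (volume ID, attachments, name, status, and volume type), a sketch along these lines should also work; the cloud name and the attachment key names are assumptions based on the cloud-layer dictionary format:

import openstack

conn = openstack.connect(cloud="mycloud")  # or the explicit arguments shown earlier

# List all volumes and print the fields from the question
for vol in conn.list_volumes():
    attached_to = [a["server_id"] for a in vol["attachments"]]  # servers the volume is attached to
    print(vol["id"], vol["name"], vol["status"], vol["volume_type"], attached_to)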

Is there a way to use both test keys on localhost and live keys remotely with Firebase Functions

I have a project where I set up keys as follows.
Live keys
functions:config:set stripe.secret="sk_live_..." stripe.publishable="pk_live_..."
Test keys
functions:config:set stripe.secret="sk_test_..." stripe.publishable="pk_test_..."
The application is in its beta stage but live, so there are still a lot of changes being made in the code.
I want to avoid setting the keys each time I want to test out a new feature on localhost.
Is there a way to configure Firebase Functions to correspond to different environments?
When on localhost it should validate with the test keys, and when remote, with the live keys.
There isn't a special per-environment configuration. What you can do instead is use the unique ID of the project to determine which settings it should apply. Functions can read the deployed project ID out of the process environment with GCP_PROJECT:
const project_id = process.env.GCP_PROJECT
The values you should use during development are a matter of opinion - do whatever suits you best.
I believe you can make a .runtimeconfig.json file in your functions directory, which the emulators will read.
For example, first set your local values with firebase functions:config:set stripe.secret="sk_test_...".
Then run firebase functions:config:get > .runtimeconfig.json.
When that file is present, in my experience, the Firebase emulators will read from it, and you won't keep overwriting your production config variables.
Docs: https://firebase.google.com/docs/functions/local-emulator#set_up_functions_configuration_optional

Running AWS commands from the command line in a ShellCommandActivity

My original problem was that I want to increase my DynamoDB write throughput before I run the pipeline, and then decrease it when I'm done uploading (doing this at most once a day, so I'm fine with the limitations on decreases).
The only way I found to do it is through a shell script that issues the API commands to alter the throughput. How does that work with my AWS access_key and secret_key when it's a resource that the pipeline creates for me? (I can't log in to set the ~/.aws/config file and don't really want to create an AMI just for this.)
Should I write the script in bash? Can I use the Ruby/Python AWS SDK packages, for example? (I prefer the latter.)
How do I pass my credentials to the script? Are there runtime variables (like #startedDate) that I can pass as arguments to the activity with my key and secret? Do I have any other way to authenticate with either the command-line tools or the SDK packages?
If there is another way to solve my original problem, please let me know. I only got to the ShellCommandActivity solution because I couldn't find anything else in the documentation/forums.
Thanks!
OK, found it: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-roles.html
The resourceRole in the default object of your pipeline is the one assigned to resources (Ec2Resource) that are created as part of the pipeline activation.
The default one is configured to have all your permissions, and the AWS command-line tools and SDK packages automatically look for those credentials, so there is no need to update ~/.aws/config or pass credentials manually.
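As a concrete illustration of that last point, a script run by the ShellCommandActivity could adjust the throughput with boto3 and let the SDK pick up the resourceRole's credentials on its own; the table name, region, and capacity numbers below are placeholders:

import boto3

# No explicit keys: boto3 falls back to the Ec2Resource's instance role,
# i.e. the pipeline's resourceRole.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Raise write capacity before the upload
dynamodb.update_table(
    TableName="my-table",
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 1000},
)

# Wait until the table is ACTIVE again before the heavy writes start
dynamodb.get_waiter("table_exists").wait(TableName="my-table")

After the load, the same update_table call with a lower WriteCapacityUnits value brings the throughput back down.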
