While configuring the BPS DB in WSO2 IS 5.9.0, which scripts do I have to import into MySQL? - wso2-api-manager

I am following this document: https://is.docs.wso2.com/en/5.9.0/setup/changing-datasource-bpsds/
deployment.toml Configurations.
[bps_database.config]
url = "jdbc:mysql://localhost:3306/IAMtest?useSSL=false"
username = "root"
password = "root"
driver = "com.mysql.jdbc.Driver"
Executing database scripts.
Navigate to <IS-HOME>/dbscripts. Execute the scripts in the following files, against the database created.
<IS-HOME>/dbscripts/bps/bpel/create/mysql.sql
<IS-HOME>/dbscripts/bps/bpel/drop/mysql-drop.sql
<IS-HOME>/dbscripts/bps/bpel/truncate/mysql-truncate.sql
Now, create/mysql.sql creates the tables, while the other two files drop and truncate those same tables. So what do I do?
Can anyone also tell me the use case of the BPS datasource?
Please help.

You should only change your BPS database if you have a requirement to use the workflow feature [1] in the WSO2 Identity Server. It is mentioned in this documentation: https://is.docs.wso2.com/en/5.9.0/setup/changing-to-mysql/
The documentation is supposed to mention only the relevant DB script, but it is misleading because it asks you to execute all three. If you are using the workflow feature, just run the
/dbscripts/bps/bpel/create/mysql.sql
script to create the tables in your MySQL database.
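For example, assuming the IAMtest database and root credentials from the deployment.toml above, running only the create script from the command line would look something like this (IS_HOME is assumed to point at your Identity Server installation):
# Run only the create script against the BPS database (IAMtest, from the config above)
mysql -u root -p IAMtest < "$IS_HOME/dbscripts/bps/bpel/create/mysql.sql"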
[1]. https://is.docs.wso2.com/en/5.9.0/learn/workflow-management/

Related

Using R and paws: How to set credentials using profile in config file?

I use SSO and a profile as defined in ~/.aws/config (MacOS) to access AWS services, for instance:
aws s3 ls --profile myprofilename
I would like to access AWS services from within R, using the paws package. In order to do this, I need to set my credentials in the R code. I want to do this by referencing the profile in the ~/.aws/config file (as opposed to listing access keys in the code), but I haven't been able to figure out how to do this.
I looked at the extensive documentation here, but it doesn't seem to cover my use case.
The best I've been able to come up with is:
x = s3(config = list(credentials = list(profile = "myprofilename")))
x$list_objects()
... which throws an error: "Error in f(): No credentials provided", suggesting that the first line of code above does not connect to my profile as stored in ~/.aws/config.
An alternative is to generate a user/key with programmatic access to your S3 data. Then, assuming that ~/.aws/env contains the values of the generated key:
AWS_ACCESS_KEY_ID=abc
AWS_SECRET_ACCESS_KEY=123
AWS_REGION=us-east-1
insert the following line at the beginning of your R script:
readRenviron("~/.aws/env")
This AWS blog provides details about how to get temporary credentials for programmatic access. If you can get the credentials and set the appropriate environment variables, then the code should work fine without the profile name.
Alternatively, you can try the following if you are able to get temporary credentials using the AWS CLI.
Check whether you can generate temporary credentials:
aws sts assume-role --role-arn <value> --role-session-name <some-meaningful-session-name> --profile myprofilename
If you can execute the above successfully, then you can use this method to automate the process of generating credentials before your code runs.
Put the above command in a bash script get-temp-credentials.sh and have it emit a JSON document containing the temporary credentials, as per the documentation (a rough sketch is shown after these steps).
Add a new profile programmatic-access in the ~/.aws/config
[profile programmatic-access]
credential_process = "/path/to/get-temp-credentials.sh"
Finally update the code to use the profile name as programmatic-access
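As a rough sketch only (the role ARN and session name are placeholders, and jq is assumed to be installed), get-temp-credentials.sh could look like this; the keys in the output follow the credential_process JSON contract:
#!/usr/bin/env bash
# Assume the role and reshape the output into the JSON that credential_process expects.
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/my-role \
  --role-session-name r-paws-session \
  --profile myprofilename \
| jq '{Version: 1,
       AccessKeyId: .Credentials.AccessKeyId,
       SecretAccessKey: .Credentials.SecretAccessKey,
       SessionToken: .Credentials.SessionToken,
       Expiration: .Credentials.Expiration}'
Remember to make the script executable (chmod +x /path/to/get-temp-credentials.sh).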
If you have AWS CLI credentials set up as a named profile, e.g. in ~/.aws/config:
[profile myprof]
region=eu-west-2
output=json
.. and credentials, e.g. in ~/.aws/credentials:
[myprof]
aws_access_key_id = XXX
aws_secret_access_key = xxx
.. paws will use these if you add a line to ~/.Renviron:
AWS_PROFILE=myprof
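If you prefer to set that from a shell rather than editing the file by hand, appending the variable to the user-level .Renviron (assumed to be in your home directory) is enough:
echo 'AWS_PROFILE=myprof' >> ~/.Renviron
Restart the R session afterwards so the new environment variable is picked up.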

create cluster for existing mariadb database

I have an existing database for which I was looking to create a new clustered environment. I tried the following steps:
Create a new database instance (OS & DB Server).
Take a backup / snapshot from existing database server for all the databases.
Import the snapshot to the new server.
Configure the cluster - I referred to various sites, but they all give the same solution. Example reference: https://vexxhost.com/resources/tutorials/how-to-configure-a-galera-cluster-with-mariadb-on-ubuntu-12-04/
Ran the command (sudo galera_new_cluster) on the primary server. The primary server started with no issue, but when we tried starting the secondary server, it crashed.
Unfortunately, at this point I don't have the logs from the failure stored or backed up. But it seemed like it tried to sync with the primary server and failed somewhere in that process.
Some additional details about the setup: both servers use the same username/password, a passwordless SSH connection was created between the two machines, and the sync method is set to rsync.
Am I missing something or doing it wrong? Is there a better way to do this?
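For reference, the startup sequence was roughly the following (the exact service name may differ by distribution):
# Bootstrap the first node
sudo galera_new_cluster
# Check that the cluster is up (should report wsrep_cluster_size = 1 at this point)
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
# Start MariaDB normally on the secondary node so it joins and state-transfers via rsync
sudo systemctl start mariadb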

wso2 api analytics schema

I have installed WSO2 API Manager and API Analytics, and I want to change the API Analytics datasource from H2 to MySQL.
In the tutorial they mention creating the equivalent database schema for WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB.
Where can I find the schemas for these databases?
Thanks
prabhat
You need to perform the following steps to configure your wso2am-analytics on MySQL.
Step 1: Create all the required databases in MySQL. Just create them, nothing else:
create database wso2am_stats_db; --use existing one of wso2-am
create database wso2metrics_db; --use existing one of wso2-am
create database wso2_processed_data_store;
create database wso2carbon_db;
create database wso2_geolocation_db;
create database wso2_event_store;
Step 2: Configure these databases in the configuration files under /wso2am-analytics-2.0.0/repository/conf/datasources/. There are 5 of them that you need to edit with the new database details:
analytics-datasources.xml
geolocation-datasources.xml
master-datasources.xml
metrics-datasources.xml
stats-datasources.xml
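If the MySQL user you put in those files is not root, it will also need privileges on the databases created in step 1. A rough sketch (the user name and password are placeholders; CREATE USER IF NOT EXISTS needs MySQL 5.7+):
mysql -u root -p <<'SQL'
-- hypothetical user; match whatever credentials you set in the datasource files
CREATE USER IF NOT EXISTS 'wso2user'@'%' IDENTIFIED BY 'wso2password';
GRANT ALL PRIVILEGES ON wso2am_stats_db.* TO 'wso2user'@'%';
GRANT ALL PRIVILEGES ON wso2metrics_db.* TO 'wso2user'@'%';
GRANT ALL PRIVILEGES ON wso2_processed_data_store.* TO 'wso2user'@'%';
GRANT ALL PRIVILEGES ON wso2carbon_db.* TO 'wso2user'@'%';
GRANT ALL PRIVILEGES ON wso2_geolocation_db.* TO 'wso2user'@'%';
GRANT ALL PRIVILEGES ON wso2_event_store.* TO 'wso2user'@'%';
FLUSH PRIVILEGES;
SQL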
Step 3: Once you are done, start your server with -Dsetup (only once); this will create all the tables needed. For example: sh bin/wso2server.sh -Dsetup
For more, read this article: Setting up MySQL

how to take the backup of sql compact database

I have created a C# windows application with vs2010 and I'm using a SQL Server CE database. I'm looking for a way to backup my database programmatically.
Is there a way to export/import my entire database (as a .sdf file), or can I just copy it to another location and then import it to replace the current one? Could anyone provide me the code to do this?
I'm relatively new to this but I'm guessing this is not something as difficult as it sounds. I couldn't find a clear answer anywhere so any help would be appreciated!
For this task we use the SQL Server Management Objects (SMO), which expose backup/restore functions you can call from .NET code.
Based on the article How to: Back Up Databases and Transaction Logs, here is the part of the sample that makes the backup:
// SMO types used below live in this namespace (add a reference to the SMO assemblies)
using Microsoft.SqlServer.Management.Smo;

// Connect to the SQL Server instance (instance name, database name and path are example values)
Server server = new Server(@"localhost\SQLEXPRESS");
string databaseName = "MyDatabase";
string backupPath = @"C:\Backups\MyDatabase.bak";

// Create the Backup SMO object to manage the execution
Backup backup = new Backup();
// Add the file to back up to
backup.Devices.Add(new BackupDeviceItem(backupPath, DeviceType.File));
// Set the name of the database to back up
backup.Database = databaseName;
// Tell SMO that we are backing up a database
backup.Action = BackupActionType.Database;
backup.Incremental = false;
// Specify that the log must be truncated after the backup is complete
backup.LogTruncation = BackupTruncateLogType.Truncate;
// Begin execution of the backup
backup.SqlBackup(server);

How To Query A Database That's Being Used By Asp.Net

I have a Sql Server 2008 Express database file that's currently being used by an ASP.NET application, and I'm not sure how to query the database without taking the website down.
I'm unable to copy the database files (.mdf and .ldf files) to another directory, since they're in use by the web server. Also, if I attach the databases to an instance of the sql server (using the 'Create Database [DB name] on (filename = '[DB filename.mdf]') for attach;' command at the sqlcmd prompt), then the application pool user becomes unable to access the database (i.e. the webpages start producing http 500 errors. I think this might have to do with the username for the application pool becoming somehow divorced from the login credentials in the sql server database).
Any suggestions? I realize this is probably a newbie question, since it seems like a rather fundamental task. However, due to my inexperience, I really don't know what the answer is, and I'm pretty stumped at this point, since I've tried a couple of different things.
Thanks!
Andrew
if I attach the databases to an instance of the sql server (using the 'Create Database [DB name] on (filename = '[DB filename.mdf]') for attach;' command at the sqlcmd prompt),
Don't do this to a live database - you'd be setting up an MDF to be written to by two different database instances...
Use Backup/Restore
As you've found, Attach/ReAttach requires the database to be offline - use the Backup/Restore functionality:
MSDN: Using SSMS to Backup the Database
MSDN: Using SSMS to Restore the Backup
Be aware that backup/restore doesn't carry over logins (or jobs, if you have any associated with the database) - you'll have to recreate and re-sync them if you're using an account other than one with full access.
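If you'd rather do this from the command line than SSMS, the same idea with sqlcmd looks roughly like this (instance names, database name, and paths are placeholders):
On the instance holding the live database (COPY_ONLY avoids disturbing any existing backup chain):
sqlcmd -S .\SQLEXPRESS -Q "BACKUP DATABASE [MyDb] TO DISK = N'C:\Backups\MyDb.bak' WITH COPY_ONLY"
Then on the instance you want to query from (add WITH MOVE clauses if the file paths differ):
sqlcmd -S .\OTHERINSTANCE -Q "RESTORE DATABASE [MyDb] FROM DISK = N'C:\Backups\MyDb.bak'"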
Maybe Linked Server would work?
Another alternative would be to set up another SQL Server Express (or similar) instance on a different box, and use the Linked Server functionality to create a connection to the live/prod data. Use a different account than the one used for the ASP application...
