I am setting up an identity provider on WSO2 AM to use the tokens generated by WSO2 IS and, as far as I know, it needs to have the same name as IDTokenIssuerID. In old versions I usually changed the value of IDTokenIssuerID in <IS_HOME>/repository/conf/identity/identity.xml.
How can I change it using deployment.toml in the newer versions?
I'm running everything in a Docker environment, and if I change the value in identity.xml inside the volume, it gets overwritten when the container starts.
Just put this in deployment.toml:
[oauth.oidc]
id_token.issuer = "ID_NAME"
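If you want to double-check that the change took effect, here is a minimal sketch, assuming $IS_HOME points at your installation and that identity.xml is regenerated from deployment.toml on startup (as you observed in Docker):
# After restarting the server, the regenerated identity.xml should carry the new issuer
grep IDTokenIssuerID "$IS_HOME"/repository/conf/identity/identity.xml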
When I try to run terraform apply, I see this error:
aws_glue_catalog_database.test: Provider doesn't support resource: aws_glue_catalog_database
It looks like my provider is old, because terraform version shows provider.aws v1.6.0; in fact, I can create Glue resources in another folder with the same Terraform version but a newer provider.aws v2.0.0:
Terraform v0.11.10
+ provider.archive v1.0.0
+ provider.aws v1.6.0
+ provider.null v1.0.0
+ provider.template v1.0.0
I tried to upgrade the provider while keeping Terraform at v0.11.10. To do that, I ran terraform init -upgrade but got the warning below:
terraform init -upgrade
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Do you want to migrate all workspaces to "s3"?
Both the existing "s3" backend and the newly configured "s3" backend
support workspaces. When migrating between backends, Terraform will copy
all workspaces (with the same names). THIS WILL OVERWRITE any conflicting
states in the destination.
Terraform initialization doesn't currently migrate only select workspaces.
If you want to migrate a select number of workspaces, you must manually
pull and push those states.
If you answer "yes", Terraform will migrate all states. If you answer
"no", Terraform will abort.
I decided to say "no" because the above warning scared me.
I do have a backend "s3" block to store the state remotely in S3, and I use several workspaces. I do not understand why or how the backend and workspaces would be changed by upgrading the provider with the above command, or whether it will break my setup.
Does anybody know whether it is OK for me to answer yes without messing things up, or which Terraform command I should run to upgrade the provider without changing the Terraform version? Thanks.
You're doing a major release upgrade, so there is always some risk, I'm afraid.
Here are some links that could help you to highlight risks (if you haven't seen them already):
https://www.terraform.io/docs/providers/aws/guides/version-2-upgrade.html#provider-version-configuration
https://github.com/terraform-providers/terraform-provider-aws/blob/master/CHANGELOG.md
Before doing anything, I would recommend backing up the state files stored in the remote S3 bucket. If your infrastructure is fairly static (or you have a mechanism to make it static), you can always put the old state backups back into S3 if the upgrade goes horribly wrong, without causing any issue, since no applies would have occurred during the upgrade process.
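For example, a rough sketch of taking those backups before the upgrade; the bucket and key below are placeholders, not your actual backend settings:
# Pull the state of the current workspace into a local backup file
terraform state pull > "backup-$(terraform workspace show).tfstate"
# Or copy the state object straight out of the backend bucket (placeholder names)
aws s3 cp s3://YOUR-STATE-BUCKET/path/to/terraform.tfstate ./terraform.tfstate.backup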
I don't know your setup but ideally you would be doing this in a development environment first which should hopefully ease your nerves about changing state files.
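If you do proceed, one way to control which provider release terraform init -upgrade installs, while staying on Terraform 0.11.10, is a version constraint in the provider block; the constraint below is only an example:
# In your provider "aws" block, add e.g.:  version = "~> 2.0"
# then re-run the upgrade so init fetches a plugin matching that constraint
terraform init -upgrade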
I had created a message mediation policy on an API published in WSO2 API Manager 1.10.0. Due to new requirements, I modified the policy in /synapse/default/sequences/API-Name.xml. It works as expected, but it gets reverted to the initial version when wso2am is restarted.
I am facing an issue using the WSO2 plugin with Eclipse, which is why I am updating the sequence this way. Is this the right way to update it, or is there some other change I am missing?
It seems like the file is getting replaced with the value in the registry. This looks like a bug.
As a workaround, you can edit the sequence saved in the registry:
Navigate to the Carbon console.
Home -> Resources -> Browse.
Go to the path /_system/governance/apimgt/customsequences/in/
Edit the sequence as XML.
Save and restart.
I am following https://docs.wso2.com/display/IS530/Upgrading+from+a+Previous+Release#UpgradingfromaPreviousRelease-step11 to upgrade Identity Server from 5.2.0 to 5.3.0.
In the old version [5.2.0], a custom database was used: I pointed the conf/datasources/master-datasources.xml and repository/conf/user-mgt.xml configurations to my own cloud database.
Shouldn't I be doing the same during the migration? Do the same files have to be pointed to my cloud database?
Should I do that before I run
sh wso2server.sh -Dmigrate -Dcomponent=identity
One more question: do I always have to start the server with the -Dmigrate -Dcomponent=identity options, or is it just a one-time thing?
Also, should we go through https://docs.wso2.com/display/AM210/Configuring+WSO2+Identity+Server+as+a+Key+Manager#ConfiguringWSO2IdentityServerasaKeyManager-Step2-DownloadWSO2API-MandWSO2IS and do each step even if we are migrating?
I think you would only run -Dmigrate once. To my knowledge, you would need to configure your master-datasources.xml and user-mgt.xml in the new pack to point to the same databases that you initially defined in v5.2.0. There aren't a lot of changes in a minor update, so it should be fine.
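In other words, roughly, assuming you run the command from the question in the bin directory of the new 5.3.0 pack:
# One-time start with the migration flags
sh wso2server.sh -Dmigrate -Dcomponent=identity
# Subsequent restarts: plain startup, no migration flags needed
sh wso2server.sh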
I've created a VM with a VNET attached in OpenNebula. After a while I changed the parameters of the VNET, but those changes do not persist on the VM after my (physical) host is restarted.
I've changed the /var/lib/one/vms/{$VM_ID}/context.sh file, but still had no luck persisting the changes.
Do you know what it could be?
I'm using OpenNebula with KVM on a Debian8 host.
After a while I figured out how to do this myself.
It seems that when the VM is started, the file /var/lib/one/datastores/0/$VM_ID/disk.1 is attached as /dev/sr0.
During the boot process, /usr/sbin/one-contextd mounts this unit and uses the variables inside it; they usually look like this:
DISK_ID='1'
ETH0_IP='192.168.168.217'
ETH0_MAC='02:00:c0:a8:a8:d9'
ETH0_DNS='192.168.168.217'
ETH0_GATEWAY='192.168.168.254'
This info is used to export environment variables (the exported variables can be found in /tmp/one_env), which are used by the script /etc/one-context.d/00-network to set the network configuration.
OpenNebula doesn't provide a simple way of replacing these configs after the VM is created, but you can do the following:
Edit /var/lib/one/datastores/0/$VM_ID/disk.1 and make the required changes
Restart opennebula service
Restart the VM
Hope this is useful to someone :)
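For what it's worth, step 1 above might look roughly like this; the VM id (42) is just an example, and I'm assuming disk.1 is the usual ISO9660 context image with the CONTEXT volume label:
# Mount the context ISO read-only to inspect its current contents
sudo mount -o loop,ro /var/lib/one/datastores/0/42/disk.1 /mnt
# Copy the files out, edit context.sh, then unmount
mkdir -p /tmp/context && cp /mnt/* /tmp/context/ && sudo umount /mnt
vi /tmp/context/context.sh
# Rebuild the image in place with the same volume label
sudo genisoimage -o /var/lib/one/datastores/0/42/disk.1 -V CONTEXT -J -R /tmp/context/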
Yes, the issue is that this functionality is not supported in current versions of OpenNebula. This will be supported in the upcoming 5.0 version.
You can power off the VM and change most of the parameters (not network parameters, as they are linked to a VNET) in the Conf tab of the VM.
For a network-specific change only, you can simply log in to the VM and mv the file /etc/one-context.d/00-network somewhere else; your changes to the VM's network configuration then won't be overwritten by the network context script.
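A one-liner sketch of that, run inside the VM (the destination path is just an example):
# Move the network context script aside so it stops rewriting the config on boot
sudo mv /etc/one-context.d/00-network /root/00-network.disabled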
I love using RStudio for its built-in integration with version control systems. However, with RStudio on Windows, is there a way to change the Git protocol from HTTPS to SSH (or vice versa) for a project already under version control, without first having to delete and recreate the project?
I might be missing something, but I originally cloned my repo using HTTPS, which I subsequently found to be a massive pain because every time I want to push project changes to GitHub I have to re-enter my username and password. So I removed the project from version control (Project -> Project Options -> Git/SVN -> Version Control System: none) and then tried to re-add version control, hoping to use SSH, but it only allows you to go back to the original protocol you selected when creating the project in the first place.
The only way I have found to change the protocol is to delete the project and then create a new project from GitHub using the correct SSH parameters. I'd really like to be able to change a project's version control protocol from HTTPS to SSH without deleting and re-cloning it first.
Is this possible?
Check out git config and the whole configuration stuff. You can configure several remotes to make the "distributed" aspect of git work.
You can try just copying the whole repository (or just .git/config, keep a copy!) and check what happens with your specific case when you change the configuration. It depends on lots of things that aren't under git's control, like firewall configurations en route, and the configuration on the other end.
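In practice, a common way to do this is to point the existing remote at the SSH URL with git remote set-url; a sketch, with user/repo as placeholders for your own repository:
# See the current remote URL(s)
git remote -v
# Switch origin from the HTTPS URL to the SSH form
git remote set-url origin git@github.com:user/repo.git
# Confirm the change
git remote -v
Provided your SSH keys are already set up for GitHub, RStudio should then push and pull over SSH without prompting for the username and password.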