I am following https://docs.wso2.com/display/IS530/Upgrading+from+a+Previous+Release#UpgradingfromaPreviousRelease-step11 to upgrade Identity Server from 5.2.0 to 5.3.0.
In the old version (5.2.0), I used a custom database: I pointed conf/datasources/master-datasources.xml and repository/conf/user-mgt.xml to my own cloud database.
Shouldn't I be doing the same in the migration? Do the same files have to be pointed to my cloud database?
Should I do that before I run
sh wso2server.sh -Dmigrate -Dcomponent=identity
One more question: do I always have to start the server with the options -Dmigrate -Dcomponent=identity, or is it just one time?
Also, should we go through https://docs.wso2.com/display/AM210/Configuring+WSO2+Identity+Server+as+a+Key+Manager#ConfiguringWSO2IdentityServerasaKeyManager-Step2-DownloadWSO2API-MandWSO2IS and do each step even if we are migrating?
I think you would only run -Dmigrate once. To my knowledge, you would need to configure your master-datasources.xml and user-mgt.xml to point to the same database settings that you initially defined in v5.2.0. There aren't a lot of changes in a minor update, so it should be fine.
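In case it helps, this is the rough order I would follow, as a sketch only (the exact paths and whether the migration needs anything beyond these flags are assumptions on my part, so check them against the migration guide):
# 1. In the new 5.3.0 pack, point repository/conf/datasources/master-datasources.xml
#    and repository/conf/user-mgt.xml at the same cloud database you used in 5.2.0.
# 2. Start the server once with the migration flags so the migration runs:
sh wso2server.sh -Dmigrate -Dcomponent=identity
# 3. After the migration finishes, restart without the flags for normal operation:
sh wso2server.sh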
I had a perfectly working instance of a WP-CLI WordPress plugin to upload files to S3 using the AmazonS3FullAccess policy. I migrated servers, and the copy started failing with "Failed to copy or write".
I even attached full administrator access to the IAM policy just to see what happens when there are no restrictions, and the copy is still failing. Any idea what might be wrong?
Things I have tried: ensuring the time on the new server is correct (via NTPD synchronization); cross-checking the environment (PHP version, etc.); confirming the application files are exactly the same. I also used the hosts-file method to point back at the previous server, and it still works.
Solved the problem by creating new access keys. For some reason, it seems that migrating a server will make the old access keys stop working? Ah, well.
P.S. I also downgraded the policies right back, to only what the application needs.
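If anyone needs it, this is roughly how I rotated the keys with the AWS CLI (the user name is just a placeholder for the IAM user the plugin uses):
# create a fresh key pair and drop the new id/secret into the plugin settings
aws iam create-access-key --user-name <wp-s3-user>
# once uploads work again, deactivate and then delete the old key
aws iam update-access-key --user-name <wp-s3-user> --access-key-id <OLD_KEY_ID> --status Inactive
aws iam delete-access-key --user-name <wp-s3-user> --access-key-id <OLD_KEY_ID>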
My Elastic Beanstalk installation won't deploy through Visual Studio due to this error:
2016-07-01 20:45:02,627 ERROR 1 AWSBeanstalkCfnDeploy.DeploymentUtils - Exception during deployment.
Microsoft.Web.Deployment.DeploymentDetailedClientServerException: Web Deploy cannot modify the file 'msvcr100.dll' on the destination because it is locked by an external process. In order to allow the publish operation to succeed, you may need to either restart your application to release the lock, or use the AppOffline rule handler for .Net applications on your next publish attempt. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_FILE_IN_USE.
The link suggests that I create a pubxml file with settings to enable AppOffline, but this file only seems to be relevant for publishing through VS using the built-in Publish feature. I haven't found any documentation suggesting that this will work for AWS.
How do I enable AppOffline for an Elastic Beanstalk deployment?
Thanks!
Sorry that this is only general advice and not the code you need, but the solution is to use hooks via .ebextensions. Please see http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref-hooks.html.
You can add a PowerShell script that creates app_offline.htm before the update is extracted and removes it once the update is deployed.
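Something along these lines is what I mean, as an untested sketch (the config file name, the site path, and the exact timing relative to the deploy are assumptions on my part; removing the file again after the deploy would need a separate post-deploy step that I haven't shown):
/.ebextensions/app_offline.config
container_commands:
  01_add_app_offline:
    command: powershell.exe -Command "Set-Content -Path 'C:\inetpub\wwwroot\app_offline.htm' -Value 'Updating...'"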
We had a similar issue, but the DLL in question (abcPDF, v9) was only blocked because we were initializing its licensing during Application_Start(), which EB did not like, so we moved applying the license elsewhere.
However, I think this approach would work.
--
Oh, maybe this container command will work for you. It recycles the IIS app pool right before the deployment. It didn't work for us because of the aforementioned licensing locking the DLL.
/.ebextensions/recycleapppool.config
container_commands:
  recycle_app_pool:
    command: c:\windows\system32\inetsrv\appcmd.exe recycle apppool DefaultAppPool
After quite a lot of experimentation, the only working solution I could find for this problem was
# in Project/.ebextensions/reset.config
container_commands:
  00_nuke:
    command: IISReset
    waitAfterCompletion: 0
The cost was about 4 seconds of downtime (on a t2.micro), during which you get a 503, which certainly isn't great.
Note there's a GitHub issue for this (open at the time of writing).
If you have the option, deploy your service to Azure rather than AWS; there are configuration options there to work around the issue (such as the MSDEPLOY_RENAME_LOCKED_FILES environment variable) - see the related Azure-specific question.
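For reference, that Azure setting is just an App Service app setting; with the Azure CLI it would look something like this sketch (resource names are placeholders):
az webapp config appsettings set --resource-group <my-rg> --name <my-app> --settings MSDEPLOY_RENAME_LOCKED_FILES=1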
How do I apply EF7 migrations on an Azure database?
According to this link, you simply tick a box in the Publish Profile settings. Well, I don't have that checkbox - I'm not sure if the profile configuration has changed since then but I don't even have a databases section.
According to this link, EF7 doesn't support database initializers and you have to use nuget package manager or k (dnx) migrations. I'm not sure about the nuget option, so going with the dnx option: how do I connect to my Azure (hosted) project/website using a dnx console window or the Package Manager Console in VS?
Are there any other options (hopefully easier!) for doing this?
Here's the 'new' way:
_context.Database.EnsureCreated();
_context.Database.Migrate();
Simple.
My Azure database somehow had some migrations applied, but nothing in the __EFMigrationsHistory table, so I dropped all other tables and then ran all the migrations to get it back to where I wanted it.
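For context, this is roughly where I call it, as a sketch (MyDbContext and the hosting setup are placeholders). As far as I know, Migrate() will create the database if it doesn't exist and records each applied migration in __EFMigrationsHistory, so EnsureCreated() usually isn't needed alongside it:
public void Configure(IApplicationBuilder app, MyDbContext context)
{
    // Apply any pending migrations against whatever connection string the
    // current environment provides (the Azure one when deployed).
    context.Database.Migrate();

    // ... rest of the usual pipeline configuration
}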
I've managed to apply the migrations by changing my connection string on my local project, opening the firewall on Azure to my IP address, and running a dnx . ef migration apply command.
However, that doesn't seem like a good solution to me: now I have to store my live connection string in my dev project and keep switching between the two. There must be a better way...?
I develop a Meteor application on my local computer and deploy it to meteor.com. I want to be able to use the remote production MongoDB database for local development.
So I get the URL to my DB with meteor mongo --url myapp.meteor.com, then I put it into my MONGO_URL environment variable:
export MONGO_URL=mongodb://client-5345a08c:5f63edff-8cec-a818-7f35-c05021bb6d91@production-db-d1.meteor.io:27017/34377_ru
The inconvenience is that this URL becomes invalid after one minute, so I need to generate another one and modify my MONGO_URL every time I want to start my application. I suspect there is some permanent URL to my MongoDB out there. I ran meteor mongo myapp.meteor.com and noticed the greeting:
MongoDB shell version: 2.4.9
connecting to: production-db-d1.meteor.io:27017/34377_ru
I tried to use this url:
export MONGO_URL=mongodb://production-db-d1.meteor.io:27017/34377_ru
and even
export MONGO_URL=mongodb://mymeteorcomusername:mymeteorcompass@production-db-d1.meteor.io:27017/34377_ru
but I had no luck.
Are there ways to simplify my workflow and make meteor use my remote database by default?
I guess that, for scaling purposes, there is never a dedicated Mongo instance for each subdomain. I could be wrong, so it's better to ask this question over at the Meteor Talk Google group.
I am testing moving all my development to the Nitrous.io IDE, but with limited space in my Nitrous box I want to permanently host my Mongo databases at MongoHQ.com. Currently, each day I need to set my MONGO_URL by running:
export MONGO_URL='mongodb://<user>:<pass>@paulo.mongohq.com:12345/<db>'
If I fire up another console or log out of Nitrous, my MONGO_URL needs to be set again.
How can I set the development MONGO_URL permanently per Meteor app? I cannot find a config file anywhere.
Nitrous support helped me find a quick solution. Just wanted to answer it here for others with the same issue.
Open ~/.bash_profile and enter your DB information.
example:
export MONGO_URL='mongodb://jimmy:criket@paulo.mongohq.com:12345/mynitrobox'
Next in the console run source ~/.bash_profile to load the settings.
This sets the DB for your entire node.js box, not individual meteor apps, so you may want to structure your Mongo collections accordingly with subcollections.
You can do this in one line, like so:
MONGO_URL='mongodb://<user>:<pass>@paulo.mongohq.com:12345/<db>' meteor
I don't know much about Nitrous.io but in AWS EC2 I have an upstart job that runs this for me when the server starts.
I gisted my approach a while back; I've since changed it a bit, but this still works:
https://gist.github.com/davidworkman9/6466734
I don't know that this will help you in Nitrous.io though, good luck!
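For what it's worth, the upstart job in that gist boils down to something like this sketch (the job name, paths, and URLs are assumptions for a standard bundled Meteor deploy):
# /etc/init/myapp.conf
description "my meteor app"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on shutdown

script
    # set the permanent DB and app settings once, then launch the bundled app
    export MONGO_URL='mongodb://<user>:<pass>@paulo.mongohq.com:12345/<db>'
    export ROOT_URL='http://example.com'
    export PORT=3000
    exec node /home/ubuntu/bundle/main.js
end script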