I'm relatively new to the OpenStack/MicroStack platforms.
I have a Microstack environment with three projects:
Two projects that were created by default: "admin" and "service".
One project that I created on my own: "myProject".
I noticed that when I create a new server using the MicroStack CLI, the server is automatically placed in the "admin" project, even if I specify ports (using --nic port-id) that belong to "myProject".
Unlike the commands for creating new networks, subnets or ports, "microstack.openstack server create" has no "--project" flag.
Is there any way to create a new server in the "myProject" project using the MicroStack CLI?
Thanks
You need to take on the myProject identity. First, you need a user that has a role in myProject; let's call it myUser.
You can then log on to the GUI as myUser and you will automatically be in myProject, if this is the only project where this user has a role.
In case myUser has roles in other projects, the GUI has a small menu, by default in the upper left corner, that allows you to switch projects.
On the command line, you need to change a few environment variables, namely OS_USERNAME, OS_PROJECT_NAME and of course OS_PASSWORD, before launching your server. Reference: https://docs.openstack.org/python-openstackclient/pike/cli/authentication.html.
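A minimal sketch of what that looks like on the command line (the password is a placeholder, and the flavor, image and port ID are assumptions purely for illustration):
export OS_USERNAME=myUser
export OS_PROJECT_NAME=myProject
export OS_PASSWORD=<myUser-password>
microstack.openstack server create --flavor m1.small --image cirros --nic port-id=<port-uuid-from-myProject> myServer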
We are implementing our first project in Next.js and need some recommendations on the below scenario.
We build Docker images and use Kubernetes for our deployments. We also follow branch-based deployments.
Code from the develop branch --> deploys to --> the DEV cluster, and it needs to be built with 'dev'-specific environment variables.
For example, backend service domain variable like NEXT_PUBLIC_BACKEND_API_DEV.
Similarly, code from the stage branch --> deploys to --> the STAGE cluster and must be built with NEXT_PUBLIC_BACKEND_API_STAGE.
Currently, we build the app using the next build command while building the Docker container image.
Looking at the documentation, I see that Next.js supports multiple .env files like .env.local, .env.dev, .env.stage, etc. But I'm not clear on how to access the branch name of the code being built inside the build steps of my Dockerfile and then pass the appropriate .env file to the next build command.
Any thoughts/suggestions?
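One possible approach, sketched under the assumption that the CI job knows the branch name and that you keep one env file per environment (the file and image names are placeholders):
# In the CI build script, before the image build:
BRANCH=$(git rev-parse --abbrev-ref HEAD)      # e.g. "develop" or "stage"
case "$BRANCH" in
  develop) cp .env.dev   .env.production ;;    # next build reads .env.production
  stage)   cp .env.stage .env.production ;;
esac
docker build -t my-next-app:"$BRANCH" .
Alternatively, the branch name can be passed into the image as a Docker build argument and the copy done inside the Dockerfile; either way the selection has to happen at image build time, because NEXT_PUBLIC_* variables are inlined by next build.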
The Amplify CLI does not support the --profile option. It always uses the profile specified when the application was generated. Different team members use different AWS profiles.
How can I change/configure/use a profile other than the one used during application generation?
The aim is to publish changes from a different computer. The final goal is to use a CI server to publish the application to different regions.
Amplify does not work like "other" development tools, where the tool is detached from Git. Amplify goes hand in hand with Git and requires initialization after cloning. By running amplify init and choosing an existing environment (one pushed by another developer), it is possible to select a different AWS profile.
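Roughly, the flow on a fresh machine looks like this (the exact prompt wording varies between CLI versions, and the environment and profile names are placeholders):
git clone <repo-url> && cd <repo>
amplify init
# ? Do you want to use an existing environment? Yes
# ? Choose the environment you would like to use: dev
# ? Select the authentication method you want to use: AWS profile
# ? Please choose the profile you want to use: my-other-profile
amplify push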
If you install the .NET Core 3 SDK and create the React project from the template (dotnet new react -au Individual), it compiles and runs fine. Modifications to use external identity providers are straightforward and work as documented. You will need to add packages for the providers you wish to support, such as Microsoft.AspNetCore.Authentication.MicrosoftAccount.
At this point you might try dotnet publish but the resultant package produces the following (truncated) stack trace:
info: IdentityServer4.Startup[0]
Starting IdentityServer4 version 3.0.0.0
crit: Microsoft.AspNetCore.Hosting.Diagnostics[6]
Application startup exception
System.InvalidOperationException: Key type not specified.
at Microsoft.AspNetCore.ApiAuthorization.IdentityServer.ConfigureSigningCredentials.LoadKey()
at Microsoft.AspNetCore.ApiAuthorization.IdentityServer.ConfigureSigningCredentials.Configure(ApiAuthorizationOptions options)
Service Worker
The template is set up with a service worker. This is a jolly nuisance while debugging our configuration, so turn it off by commenting out registerServiceWorker(); in ClientApp/src/index.js. If you've already run the app, you will need to flush your cache to dislodge it.
Certificate
A certificate is required. The project template uses OIDC implemented with IdentityServer4, and therefore requires a PFX. On Windows you can create one of these using CertReq. It would be poor security practice to add this to the project, so I made the PFX file a sibling of the project folder. The registration in appSettings.json looks like this:
"IdentityServer": {
"Key": {
"Type": "File",
"FilePath": "../cert-name.pfx",
"Password": "cert-password"
}
},
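The post uses CertReq on Windows; purely as a reference alternative, an equivalent self-signed PFX can be produced with openssl (the file name and password match the example above, while the subject and validity period are arbitrary):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=IdentityServerSigning" -keyout signing-key.pem -out signing-cert.pem
openssl pkcs12 -export -out ../cert-name.pfx -inkey signing-key.pem -in signing-cert.pem -passout pass:cert-password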
Secrets
dotnet user-secrets is strictly a development-mode thing. We are expected to manually transcribe all the secrets to Azure environment variables and modify the program to include them in its configuration-loading process.
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureAppConfiguration((hostingContext, config) =>
{
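// Azure App Service application settings are surfaced to the app as environment variables.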
config.AddEnvironmentVariables();
})
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
});
The names in dotnet user-secrets are hierarchical and full of colons. When transcribing them to environment variables, you'll need to replace these colons with double underscores for cross-platform compatibility.
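For example (the key here is the standard configuration name for the Microsoft account provider; the value is a placeholder):
dotnet user-secrets set "Authentication:Microsoft:ClientSecret" "<secret>"
becomes, as an Azure application setting / environment variable:
Authentication__Microsoft__ClientSecret=<secret>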
Since dotnet user-secrets isn't exactly the most convenient tool ever, it occurs to me that it might be less bother to just use environment variables all the way through.
Core version madness
Silly me trying to use the LTS version (3.1).
Creating a Classic CI pipeline from the Azure portal, it is impossible to select dotnet core 3.1 because it's not in the list. The list does contain LTS and Latest, but both of these selections produce validation errors when you try to finalise the deployment. Choosing 3.0 allows finalisation and the deployment runs, but although it manages to publish to the Web App on Azure, the Web App is set to dotnet core 3.0, and since the project specifies 3.1 it won't start.
You can manually change this in the Web App Configuration blade in the Azure portal, but it just gets mangled on every deployment. Changing the project to use 3.0 and compatible packages seems to work.
Am I using the tools incorrectly, or is the Azure CICD set up really crap?
npm
And now it starts but can't find 'npm'. Installing npm over SSH looks like this (the session is already root, so sudo is not involved):
curl -sL https://deb.nodesource.com/setup_13.x | bash
apt-get install -y nodejs
and this seems to work, but it doesn't survive a restart of the Web App (presumably because it is installed outside /home).
Everything works without auth
If I deploy a project created with dotnet new react without the -au Individual qualifier, it works perfectly. The site loads, the web APIs are called, the data returns etc.
What's the difference? There are a few:
IdentityServer4
SQLite
Generation of the SQLite database
Rummaging in the .csproj I find this
<PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="3.0.0" />
<PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="3.0.0">
and this is the first thing used in ConfigureServices
services.AddDbContext<ApplicationDbContext>(options =>
options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));
but this doesn't trigger the exception. That occurs later when IdentityServer is created. Further up the stack trace we find this:
Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore
.MigrationsEndPointMiddleware.Invoke(HttpContext context)
from which I conclude that EF uses Node to do migration, at least for SQLite.
I don't think it would help to add npm to package.json because that would just bundle it for delivery to the browser. It appears that npm is required at the server for the migration process.
But Node and npm are simply not part of the dotnet core Web App stack.
One suggestion (via Reddit) is to use a Node stack Web App and deploy a self-contained build of the dotnet core server code. This is my next port of call. In the spirit of solving one problem at a time I shall first learn to do self-contained build deployment with a minimal Core/React project (no auth).
This almost works. Using SSH I was able to run the app, and it started without throwing any errors, but it listened on port 5000 rather than 8080, which is where it needs to be if you want it surfaced on port 80 on the public interface.
On the Node stack, the startup script is unsurprisingly configured to launch a Node app, and it barfs before it gets to the startup command you supply. Because it's a Node startup script, it also doesn't set ASPNETCORE_URLS=http://*:$PORT, which is required to make the core project serve on port 8080.
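For reference, the missing pieces amount to something like this (the path and assembly name are placeholders, not taken from the project):
export ASPNETCORE_URLS="http://*:${PORT:-8080}"
dotnet /home/site/wwwroot/MyApp.dll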
Taking a step back, npm is a development thing. Why would anyone deliberately introduce it as a production dependency? That would be crazy, it would create mayhem.
The key word in that question is "deliberately". What if it weren't deliberate? How could you do it accidentally? Well, you could write a script to gather all your environment variables and plonk them into Azure, and this might capture ASPNETCORE_ENVIRONMENT=Development.
Lo and behold, there it was. Deleting it restarts the app and HURRAH! No more demands for npm. Looks like the stack isn't broken after all. Which makes me a happy camper, since I didn't want to give up CICD.
This could also be defined in appsettings.json.
The important takeaway is that if you see demands for npm after deployment to Azure, your app is trying to run in development mode in a production environment.
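If you'd rather check and fix this from the command line than the portal, something like this works with the Azure CLI (the app and resource group names are placeholders):
az webapp config appsettings list --name my-app --resource-group my-rg
az webapp config appsettings delete --name my-app --resource-group my-rg --setting-names ASPNETCORE_ENVIRONMENT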
I want to create a new node in a Corda environment without redeploying my existing nodes. Is it possible to add another node from within the application without deploying it again?
If yes, then how do we specify its ports for RPC and the database?
For example: in my application I have a system in which there are different merchants, and I want to add a new merchant to the system without redeployment.
Yes, it is possible (imagine a configuration where nodes/actors couldn't join or leave the distributed ledger on demand? That would be madness, right?). All the active nodes communicate with the network map service, so all your new node needs to do is announce itself to it, and voila - the existing nodes are now informed.
I'm simplifying the process a little bit as we have gone through a revision of how this is done recently (and I don't want to give you the wrong answer), but depending on what milestone release you are running I can elucidate further.
Yes. Prior to Corda 3, you would do this as follows:
Create a new folder containing the Corda jar and node.conf file, or make a copy of an existing node folder
Modify the node.conf file to have its own web, RPC and P2P ports (a sketch of the relevant settings follows this list). Make sure you don't change the network map information.
Start the node by running java -jar corda.jar
You can optionally also start a node webserver by placing the corda-webserver jar in the same folder and running java -jar corda-webserver.jar
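A sketch of the per-node values you would typically change in node.conf (the legal name and port numbers are made up, and exact key names vary between Corda versions):
myLegalName = "O=NewMerchant,L=London,C=GB"
p2pAddress = "localhost:10011"
rpcAddress = "localhost:10012"
webAddress = "localhost:10013"
devMode = true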
As long as your nodes are in dev mode, they'll auto-generate certificates if none are provided in their certificates folder. They'll connect to the same network map and be able to speak to the other nodes.
In Corda 3, you need to stop all the nodes and re-run the bootstrapper after adding a node or modifying a node's node.conf file. See the instructions here
Can you share some pointers on the best way / best practices to install a web application on Unix systems?
Like:
where to place the app and its databases and so forth,
how to configure it to be secure and easy to back up,
etc.
For example, I know one such suggestion -- to set up a unique user for each app.
The app in question is JIRA on FreeBSD, but more general suggestions are also welcome.
Here's what I did for my JIRA install on Fedora Linux:
Created a separate user to run JIRA
Installed JIRA under the JIRA user's home directory
Made a soft link "/home/jira/jira" pointing to the JIRA installation directory (the directory as installed contains the version number, something like /home/jira/atlassian-jira-enterprise-4.0-standalone)
Created an /etc/init.d script to run JIRA as a service, and added it to chkconfig so that it runs at system startup - see these instructions
Created a MySQL database for JIRA on a separate data volume
Set up scheduled XML backups via the JIRA admin interface
Set up a remote backup script to dump the MySQL database and copy the DB dump and XML backups to a separate backup server
In order to avoid having to open extra firewall ports, set up an Apache virtual host "jira.myhost.com" and used mod_proxy to forward requests to the JIRA URL (a sketch of the virtual host follows this list).
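A minimal sketch of what that virtual host might look like (the hostname comes from the step above; the port assumes JIRA's default Tomcat port 8080, and the proxy modules are assumed to be enabled):
<VirtualHost *:80>
    ServerName jira.myhost.com
    ProxyPreserveHost On
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>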
I set everything up on a virtual machine (an Amazon EC2 instance in my case) and cloned the machine image so that I can easily restart a new instance if the current one goes down.