Remote Apache Karaf bundle management via Jolokia? - apache-karaf

I need to remotely manage the bundles running on my Karaf instances, ideally via HTTP calls or Python scripts.
I set up my Karaf instance and can access it at http://mykarafserver:8040/jolokia.
I found just one example of usage on the Jolokia website:
{
  "type": "read",
  "mbean": "java.lang:type=Memory",
  "attribute": "HeapMemoryUsage",
  "path": "used"
}
and I get a result, but I can't find the URLs or JSON syntax to start, stop, restart and get the status of my bundles. I believe this is possible, since tools like Hawtio can manage Camel and Karaf stuff.
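For what it's worth, Jolokia's protocol also has an "exec" request type for invoking JMX operations, so I assume bundle lifecycle calls look something like the sketch below, using the standard OSGi MBeans (the version suffix seems to vary between Karaf releases, so a list request can confirm the exact names first):

# discover the exact MBean names and versions exposed by this Karaf instance
curl -s http://mykarafserver:8040/jolokia/list/osgi.core

# stop, then start, the bundle with id 42 (the version=1.7 suffix is an assumption)
curl -s -X POST http://mykarafserver:8040/jolokia \
  -d '{"type":"exec","mbean":"osgi.core:type=framework,version=1.7","operation":"stopBundle(long)","arguments":[42]}'
curl -s -X POST http://mykarafserver:8040/jolokia \
  -d '{"type":"exec","mbean":"osgi.core:type=framework,version=1.7","operation":"startBundle(long)","arguments":[42]}'

# read the state of all bundles via the bundleState MBean
curl -s -X POST http://mykarafserver:8040/jolokia \
  -d '{"type":"exec","mbean":"osgi.core:type=bundleState,version=1.7","operation":"listBundles()"}'

The same JSON bodies could be POSTed from a Python script just as easily.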

Related

How to deploy dotnetcore/react -au Individual to Azure

If you install the dotnetcore3 SDK and create the dotnetcore/react project, it compiles and runs fine. Modifications to use external identity providers are straightforward and work as documented. You will need to add packages for the providers you wish to support, such as Microsoft.AspNetCore.Authentication.MicrosoftAccount.
At this point you might try dotnet publish but the resultant package produces the following (truncated) stack trace:
info: IdentityServer4.Startup[0]
Starting IdentityServer4 version 3.0.0.0
crit: Microsoft.AspNetCore.Hosting.Diagnostics[6]
Application startup exception
System.InvalidOperationException: Key type not specified.
at Microsoft.AspNetCore.ApiAuthorization.IdentityServer.ConfigureSigningCredentials.LoadKey()
at Microsoft.AspNetCore.ApiAuthorization.IdentityServer.ConfigureSigningCredentials.Configure(ApiAuthorizationOptions options)
Service Worker
The template is set up with a service worker. This is a jolly nuisance while debugging our configuration, so turn it off by commenting out registerServiceWorker(); in ClientApp/src/index.js. If you've already run the app, you will need to flush your cache to dislodge it.
Certificate
A certificate is required. The project template uses OIDC implemented with IdentityServer4, and therefore requires a PFX. On Windows you can create one of these using CertReq. It would be poor security practice to add this to the project, so I made the PFX file a sibling of the project folder. The registration in appSettings.json looks like this:
"IdentityServer": {
"Key": {
"Type": "File",
"FilePath": "../cert-name.pfx",
"Password": "cert-password"
}
},
Secrets
dotnet user-secrets is strictly a development-mode thing. We are expected to manually transcribe all the secrets to Azure environment variables and modify the program to include them in its configuration loading process.
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            // pull the secrets transcribed into Azure application settings
            // into the configuration pipeline
            config.AddEnvironmentVariables();
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
The names in dotnet secrets are full of colons. In environment variable names you'll need to escape these colons as double underscores for cross-platform compatibility.
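For example, the certificate password key from the appSettings.json fragment above would be transcribed like this (value illustrative):

# dotnet secrets key:               IdentityServer:Key:Password
# environment variable equivalent:  IdentityServer__Key__Password
export IdentityServer__Key__Password="cert-password"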
Since dotnet user-secrets isn't exactly the most convenient tool ever, it occurs to me that it might be less bother to just use environment variables all the way through.
Core version madness
Silly me trying to use the LTS version (3.1).
Creating a Classic CI pipeline from the Azure portal, it is impossible to select dotnet core 3.1 because it's not in the list. The list does contain LTS and Latest, but both of these selections produce validation errors when you try to finalise the deployment. Choosing 3.0 allows finalisation, and the deployment runs; but although it manages to publish to the Web App on Azure, the Web App is set to dotnet core 3.0, and since the project specifies 3.1 it won't start.
You can manually change this in the Web App Configuration blade in the Azure portal, but it just gets mangled on every deployment. Changing the project to use 3.0 and compatible packages seems to work.
Am I using the tools incorrectly, or is the Azure CICD set up really crap?
npm
And now it starts but can't find 'npm'. Installing npm using SSH looks like this (the session is already root, so sudo is not involved):
curl -sL https://deb.nodesource.com/setup_13.x | bash
apt-get install -y nodejs
and this seems to work, but it doesn't survive a restart of the Web App (presumably because it is installed outside /home).
Everything works without auth
If I deploy a project created with dotnet new react without the -au Individual qualifier, it works perfectly. The site loads, the web APIs are called, the data returns etc.
What's the difference? There are a few:
IdentityServer4
SQLite
Generation of the SQLite database
Rummaging in the .csproj I find this
<PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="3.0.0" />
<PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="3.0.0">
and this is the first thing used in ConfigureServices
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));
but this doesn't trigger the exception. That occurs later when IdentityServer is created. Further up the stack trace we find this:
Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore
.MigrationsEndPointMiddleware.Invoke(HttpContext context)
from which I conclude that EF uses Node to do migrations, at least for SQLite.
I don't think it would help to add npm to package.json, because that would just bundle it for delivery to the browser. It appears that npm is required on the server for the migration process.
But Node and npm are simply not part of the dotnet core Web App stack.
One suggestion (via Reddit) is to use a Node stack Web App and deploy a self-contained build of the dotnet core server code. This is my next port of call. In the spirit of solving one problem at a time I shall first learn to do self-contained build deployment with a minimal Core/React project (no auth).
This almost works. Using SSH I was able to run the app, and it started without throwing any errors, but it listened on port 5000 rather than 8080, which is where it needs to be if you want it surfaced on port 80 on the public interface.
On the Node stack, the startup script is unsurprisingly configured to launch a Node app, and it barfs before it gets to the startup command you supply. Because it's a Node startup script, it also doesn't set up ASPNETCORE_URLS=http://*:$PORT, which is required to make the Core project serve on port 8080.
Taking a step back, npm is a development thing. Why would anyone deliberately introduce it as a production dependency? That would be crazy; it would create mayhem.
The key word in that question is "deliberately". What if it weren't deliberate? How could you do it accidentally? Well, you could write a script to gather all your environment variables and plonk them into Azure, and this might capture ASPNETCORE_ENVIRONMENT=Development.
Lo and behold, there it was. Deleting it restarts the app and HURRAH! No more demands for npm. Looks like the stack isn't broken after all, which makes me a happy camper since I didn't want to give up CICD.
This could also be defined in appsettings.json.
The important takeaway is that if you see demands for npm after deployment to Azure, your app is trying to run in development mode in a production environment.
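If you want to check a Web App for the same mistake, something like the following Azure CLI calls should surface and remove the offending setting (app and resource-group names are placeholders):

# list the Web App's application settings and look for stray overrides
az webapp config appsettings list --name my-app --resource-group my-rg

# remove the development-mode override captured by the transcription script
az webapp config appsettings delete --name my-app --resource-group my-rg \
  --setting-names ASPNETCORE_ENVIRONMENT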

How to get list of installed features in Karaf using REST API?

I know that from the command line it can be obtained by running feature:list -i, but is there any API/JSON available to fetch this?
You can use Jolokia and hawtio to retrieve that information quite easily. I believe you can easily add the hawtio repo from the native Karaf feature repositories (repo-add hawtio). Then you need to install jolokia, hawtio, and the Karaf web console. From the Karaf web console alone you can see a full list of features, but I find the hawtio interface to be a godsend.
A REST API can be installed without the need for Hawtio, which itself uses Jolokia under the hood to access the bundle list.
The Jolokia project provides web applications called agents that serve a REST API. For quick experiments you can deploy the WAR jolokia-war-unsecured into the hot-deploy folder of a running Karaf instance. This installs a REST web service at e.g. http://localhost/jolokia-war-unsecured/ which does not require any authentication.
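To go straight at the features list over plain HTTP, a Jolokia read against Karaf's own features MBean should work. The MBean name below is an assumption based on Karaf 4 (and 8181 is the default Karaf HTTP port), so discover the real names first:

# discover the Karaf MBeans actually registered on this instance
curl -s "http://localhost:8181/jolokia/list/org.apache.karaf"

# read the installed features (MBean name assumed; adjust to what list reports)
curl -s "http://localhost:8181/jolokia/read/org.apache.karaf:type=feature,name=root/Features"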

Why do we need to deploy a meteor app instead of just starting it?

As we all know, we can run a Meteor app by just typing meteor in a terminal.
By default it will start a server and use port 3000.
So why do I need to deploy it using MUP etc.?
I can configure it to use port 80 or use nginx to route to port 80 for the app. So the port is not the point.
Edit:
Assume meteor is running on a VPS or cloud server with public IP address, not a personal computer.
MUP does a few extra things you can do yourself:
it 'bundles' the code into a single archive, using meteor build
the JavaScript is one file and the CSS another; both are minified and obfuscated, so they're smaller, faster to load, and less easy to decipher on the client.
some packages are also meant to be removed when running in production. For example meteorToys, the utility toolset to look up collections and much more, is not bundled into the production bundle, as per the instructions in its package. This ensures you don't deploy code with security vulnerabilities (Meteor Toys basically opens up client-side deletes/updates etc. if you're not careful).
So, in short, it installs a minimal version of your site, making sure that what's meant for development only doesn't get pushed to a production environment.
EDIT: One other reason to do this is that you don't need all the Meteor build tools on your production server; that can add up to a lot of stuff, especially if you keep caches going for a while...
I believe it also takes care of hooking up to a remote MongoDB instance (at least it used to be the case on the free meteor site), which is more scalable and fault tolerant than running on the same instance as the web server, as well as provisioning storage etc. if needed.
Basically, to deploy a Meteor app yourself manually, you need to do the following (a sketch of the commands follows the list):
on your dev box:
bundle your app to a tar file with meteor build (using the architecture flag corresponding to the OS you will use)
on the server:
install Node v0.10 (or whatever version of Node the current Meteor release requires)
you might have to install Fibers#1.0.5 (but I believe this is now part of the Meteor install already)
untar the bundle, get into bundle/programs/server/ and run npm install
run the server with node main.js in the bundle folder.
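A minimal sketch of those steps, assuming a Linux target and illustrative names (adjust the architecture flag, archive name, and URLs to your setup):

# on the dev box: bundle the app for the server's OS
meteor build ../output --architecture os.linux.x86_64

# on the server: unpack the bundle and install the server dependencies
tar -xzf myapp.tar.gz
(cd bundle/programs/server && npm install)

# run the server from the bundle folder with the environment Meteor expects
cd bundle
PORT=80 ROOT_URL=http://example.com MONGO_URL=mongodb://localhost:27017/myapp node main.js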
The purpose of deploying an application is to situate your project on hardware outside of your local machine. For example, if you deploy an application on Heroku, you create a repository on Heroku's systems and that code base is used to serve your application from their servers.
If you just start an application on your personal system, you will suffer a lack of network and resource availability, as well as under-use of computer time at off-peak hours, since your system must remain attentive for additional users without having alternative tasks. Hosting providers supply resources as needed, and their diverse client base allows their systems to work around the clock on a global scale.

Is nexus able to provide artifacts of not configured repositories?

I am using Nexus on our company's build server as a proxy. Sometimes developers add new dependencies to their projects without telling me. Hence, the list of proxy repositories is sometimes not in sync with what is really required. As a result, the jobs on our Jenkins build server fail because of missing artifacts. Jenkins is configured to use the Nexus proxy repositories.
Is it possible to tell nexus to download the artifacts from the original repository if it is not found in the proxied ones?
I assume you mean that developers add repository entries to their Maven POM files to pull in further dependencies, and/or modify their settings.xml.
On the other hand, the CI server is configured to get everything from Nexus with a mirrorOf * setting.
There is no automatic addition of repositories based on this setup. IMHO you can do two things:
create scripts that do that for you using the Nexus REST API
or educate your developers to tell you to add the proxy repos to Nexus
Potentially you can even use a Maven Enforcer rule to disallow repositories in the POM, set up an explicit message, and allow developers to request proxy repositories in Nexus. Just don't forget to have them added to the group you are using on the CI server.
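For the scripted option, the call might look roughly like this. The endpoint and payload assume a recent Nexus 3 (Nexus 2 has a completely different API), and the host, credentials, and names are all placeholders:

# sketch: create a missing Maven proxy repository via the Nexus 3 REST API
curl -u admin:secret -X POST \
  "http://nexus.example.com/service/rest/v1/repositories/maven/proxy" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "missing-remote",
    "online": true,
    "storage": {"blobStoreName": "default", "strictContentTypeValidation": true},
    "proxy": {"remoteUrl": "https://repo.example.com/maven2",
              "contentMaxAge": 1440, "metadataMaxAge": 1440},
    "negativeCache": {"enabled": true, "timeToLive": 1440},
    "httpClient": {"blocked": false, "autoBlock": true},
    "maven": {"versionPolicy": "RELEASE", "layoutPolicy": "STRICT"}
  }'
# remember to also add the new repo to the group the CI server resolves from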

Deploying an ASP.NET web site to a remote VPS with Jenkins

I am just starting to get my head wrapped around continuous deployment with Jenkins, but I am running into some roadblocks and I haven't really found many good, definitive resources on the topic in regard to ASP.NET applications.
I have set up a local build server that successfully pulls down code from an SVN repo and builds it OK with MSBuild. This works well so far, but now I'd like to automate pushing this compiled code to a development server.
My problem is this: from what I gather based on what I've read (which may be an incorrect assumption...), the staging server is typically within the same network as the build server, meaning you can share network resources, servers, etc.
In my case, I want to run the Jenkins server on a remote VPS, then deploy to other remote VPSes (so, essentially individual isolated machines communicating with each other).
I have seen a lot of terms, but I am very new in my sysadmin/DevOps-type skills.
So, my question is this:
Is it even possible, using Jenkins on a VPS, to deploy to any particular server I choose? (I have full access to all of them, so if it's a security thing, I can fix that... but they are not within the same network/domain.)
What is the method to achieve this? I've seen xcopy, Web Deployment Packages (msdeploy), batch scripts, etc. mentioned, but not much guidance on what to use in which situations. Are any of these methods useful for my goal?
Thanks for any help or guidance!
How is your PowerShell? ;) You should check out psake.
psake is a build automation tool written in PowerShell. It avoids the angle-bracket tax associated with executable XML by leveraging the PowerShell syntax in your build scripts. psake has a syntax inspired by rake (aka make in Ruby) and bake (aka make in Boo), but is easier to script because it leverages your existent command-line knowledge. psake is pronounced sake, as in Japanese rice wine. It does NOT rhyme with make, bake, or rake.
You can deploy your files to the target server through SSH, and Jenkins does support transfers through SSH. All you need to do is set up an SSH server (e.g. CopSSH) and a user account with admin permissions on the target, then configure Jenkins to transfer through SSH:
Create host configurations in the main Jenkins configuration
Add an SSH Server
Add the public key to the remote server (the build server)
Click "Test Configuration"
Save
Configure a job to Publish Over SSH (Post Build Action)
Add Transfer Set.
Refer to the Publish Over SSH plugin documentation for more details.
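For reference, what the plugin does boils down to something like the following, which you could also script yourself; the host, paths, and restart command are placeholders:

# copy the MSBuild output to the target VPS over SSH
scp -r ./build/_PublishedWebsites/MySite deploy@target-vps:/var/www/mysite

# run a post-transfer command on the target, e.g. recycling the app pool
ssh deploy@target-vps "powershell -Command \"Restart-WebAppPool -Name 'MySiteAppPool'\""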
