How to execute the JAR file of a Micronaut application, through Picocli, with a different environment - jar

I'm trying to run this code from the Picocli tutorial: https://micronaut-projects.github.io/micronaut-picocli/latest/guide/#generate
Everything works fine, but the application always runs with the "cli" environment. I need to specify which environment I want: "MOCK" or "PROD", for example.
Similar to Spring, which has spring.profiles.active, Micronaut has MICRONAUT_ENVIRONMENTS, but how do I set this through Picocli?
Thanks!

Similar to Spring, which has spring.profiles.active, Micronaut has MICRONAUT_ENVIRONMENTS, but how do I set this through Picocli?
See the project at github.com/jeffbrown/douglasclienv.
src/main/java/douglasclienv/SomeResolver.java
package douglasclienv;

public interface SomeResolver {
    String getDescription();
}
src/main/java/douglasclienv/DefaultResolver.java
package douglasclienv;

import jakarta.inject.Singleton;

@Singleton
public class DefaultResolver implements SomeResolver {
    @Override
    public String getDescription() {
        return "Default Resolver";
    }
}
src/main/java/douglasclienv/AlphaResolver.java
package douglasclienv;

import io.micronaut.context.annotation.Requires;
import jakarta.inject.Singleton;

@Singleton
@Requires(env = "alpha")
public class AlphaResolver implements SomeResolver {
    @Override
    public String getDescription() {
        return "Alpha Resolver";
    }
}
src/main/java/douglasclienv/BetaResolver.java
package douglasclienv;

import io.micronaut.context.annotation.Requires;
import jakarta.inject.Singleton;

@Singleton
@Requires(env = "beta")
public class BetaResolver implements SomeResolver {
    @Override
    public String getDescription() {
        return "Beta Resolver";
    }
}
src/main/java/douglasclienv/DouglasclienvCommand.java
package douglasclienv;

import io.micronaut.configuration.picocli.PicocliRunner;
import io.micronaut.context.ApplicationContext;
import jakarta.inject.Inject;
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;
import picocli.CommandLine.Parameters;

import java.util.List;

@Command(name = "douglasclienv", description = "...",
        mixinStandardHelpOptions = true)
public class DouglasclienvCommand implements Runnable {

    @Inject
    List<SomeResolver> resolvers;

    public static void main(String[] args) throws Exception {
        PicocliRunner.run(DouglasclienvCommand.class, args);
    }

    public void run() {
        System.out.println("Resolvers:");
        resolvers.stream().forEach(someResolver -> System.out.println("\t" + someResolver.getDescription()));
    }
}
That all appears to work as expected.
~ $ mkdir demo
~ $ cd demo
demo $
demo $ git clone git@github.com:jeffbrown/douglasclienv.git
demo $
demo $ cd douglasclienv
douglasclienv (main)$ ./gradlew assemble
douglasclienv (main)$
douglasclienv (main)$ cd build/libs
libs (main)$
libs (main)$
libs (main)$ java -jar douglasclienv-0.1-all.jar
09:18:32.238 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [cli]
Resolvers:
Default Resolver
libs (main)$
libs (main)$
libs (main)$ export MICRONAUT_ENVIRONMENTS=alpha
libs (main)$ java -jar douglasclienv-0.1-all.jar
09:18:44.623 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [alpha, cli]
Resolvers:
Alpha Resolver
Default Resolver
libs (main)$
libs (main)$
libs (main)$ export MICRONAUT_ENVIRONMENTS=beta
libs (main)$ java -jar douglasclienv-0.1-all.jar
09:18:52.535 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [beta, cli]
Resolvers:
Default Resolver
Beta Resolver
libs (main)$
libs (main)$
libs (main)$ export MICRONAUT_ENVIRONMENTS=alpha,beta
libs (main)$ java -jar douglasclienv-0.1-all.jar
09:19:03.881 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [alpha, beta, cli]
Resolvers:
Alpha Resolver
Beta Resolver
Default Resolver
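As a side note (not shown in the transcript above), the same environments can also be supplied with the micronaut.environments system property instead of the MICRONAUT_ENVIRONMENTS environment variable, for example:
libs (main)$ java -Dmicronaut.environments=alpha -jar douglasclienv-0.1-all.jar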
EDIT:
Note that if you want to set the environment programmatically, you could make your command class look something like this:
package douglasclienv;

import io.micronaut.configuration.picocli.MicronautFactory;
import io.micronaut.context.ApplicationContext;
import jakarta.inject.Inject;
import picocli.CommandLine;
import picocli.CommandLine.Command;

import java.util.List;
import java.util.concurrent.Callable;

@Command(name = "douglasclienv", description = "...",
        mixinStandardHelpOptions = true)
public class DouglasclienvCommand implements Callable<Object> {

    @Inject
    List<SomeResolver> resolvers;

    public static void main(String[] args) {
        try (ApplicationContext context = ApplicationContext.builder().environments("beta").start()) {
            new CommandLine(DouglasclienvCommand.class, new MicronautFactory(context))
                    .setCaseInsensitiveEnumValuesAllowed(true)
                    .setUsageHelpAutoWidth(true)
                    .execute(args);
        }
    }

    @Override
    public Object call() throws Exception {
        System.out.println("Resolvers:");
        resolvers.stream().forEach(someResolver -> System.out.println("\t" + someResolver.getDescription()));
        return null;
    }
}
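If you want the environment to be selectable from the command line itself rather than hard-coded, one possible approach (a sketch, not part of the linked project; the --env option name is made up here) is to pre-scan the raw arguments for that option, build the ApplicationContext with those environments, and then hand the same arguments to picocli:
package douglasclienv;

import io.micronaut.configuration.picocli.MicronautFactory;
import io.micronaut.context.ApplicationContext;
import jakarta.inject.Inject;
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

import java.util.List;
import java.util.concurrent.Callable;

@Command(name = "douglasclienv", description = "...",
        mixinStandardHelpOptions = true)
public class DouglasclienvCommand implements Callable<Object> {

    // Declared so picocli accepts the flag and lists it in --help; the value that
    // actually matters is read in main() before the ApplicationContext is built.
    @Option(names = "--env", split = ",", description = "Micronaut environment(s) to activate")
    String[] environments = new String[0];

    @Inject
    List<SomeResolver> resolvers;

    public static void main(String[] args) {
        // Pre-scan the raw arguments for "--env <value>" (this simple loop does not
        // handle the "--env=value" form) so the environments are known before the
        // context, and therefore the bean graph, is created.
        String[] envs = new String[0];
        for (int i = 0; i < args.length - 1; i++) {
            if ("--env".equals(args[i])) {
                envs = args[i + 1].split(",");
            }
        }
        try (ApplicationContext context = ApplicationContext.builder().environments(envs).start()) {
            new CommandLine(DouglasclienvCommand.class, new MicronautFactory(context)).execute(args);
        }
    }

    @Override
    public Object call() {
        System.out.println("Resolvers:");
        resolvers.forEach(someResolver -> System.out.println("\t" + someResolver.getDescription()));
        return null;
    }
}
With that in place, something like java -jar douglasclienv-0.1-all.jar --env alpha,beta should activate the corresponding resolvers.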

Related

How can I use a Webdriver Testcontainer in Bitbucket Pipelines?

When trying to use a Webdriver Testcontainer in Bitbucket Pipelines, I get the following error messages:
[main] WARN 🐳 [selenium/standalone-chrome:4.1.1] - Unable to mount a file from test host into a running container. This may be a misconfiguration or limitation of your Docker environment. Some features might not work.
[main] ERROR 🐳 [selenium/standalone-chrome:4.1.1] - Could not start container
com.github.dockerjava.api.exception.DockerException: Status 403: {"message":"authorization denied by plugin pipelines: -v only supports $BITBUCKET_CLONE_DIR and its subdirectories"}
My testcontainers version is 1.17.6
Here is the code I'm using while trying to troubleshoot:
package com.byzpass.demo;

import org.junit.*;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testcontainers.containers.BrowserWebDriverContainer;

public class SeleniumTest {
    public ChromeOptions chromeOptions = new ChromeOptions()
            .addArguments("--no-sandbox")
            .addArguments("--headless")
            .addArguments("--disable-dev-shm-usage");

    @Rule
    public BrowserWebDriverContainer<?> driverContainer = new BrowserWebDriverContainer<>().withCapabilities(chromeOptions);

    @Test
    public void openWikipedia() {
        WebDriver driver = new RemoteWebDriver(driverContainer.getSeleniumAddress(), chromeOptions);
        driver.navigate().to("https://www.wikipedia.org/");
        String subtitleText = driver.findElement(By.cssSelector("#www-wikipedia-org h1 strong")).getText();
        assert subtitleText.equals("The Free Encyclopedia");
        driver.quit();
        System.out.println("Finished opening wikipedia. 📖 🤓 🔍 👍 ✨");
    }
}
Here is my bitbucket-pipelines.yml:
pipelines:
  default:
    - step:
        image: amazoncorretto:11
        services:
          - docker
        script:
          - export TESTCONTAINERS_RYUK_DISABLED=true
          - cd selenium-test ; bash ./mvnw --no-transfer-progress test

definitions:
  services:
    docker:
      memory: 2048
By setting a breakpoint in my test method and using docker inspect -f '{{ .Mounts }}', I was able to discover that the container for the selenium/standalone-chrome:4.1.1 image has the mount [{bind /dev/shm /dev/shm rw true rprivate}].
I thought that using the --disable-dev-shm-usage argument in my chrome options would prevent that, but it didn't. I don't know whether that's what's causing my issue in Bitbucket Pipelines though.
I found that it worked after setting shm size to zero. Here's the code that worked:
package com.byzpass.demo;

import org.junit.*;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testcontainers.containers.BrowserWebDriverContainer;

import java.util.ArrayList;

public class SeleniumTest {
    @Test
    public void openWikipedia() {
        ChromeOptions chromeOptions = new ChromeOptions()
                .addArguments("--no-sandbox")
                .addArguments("--headless")
                .addArguments("--disable-dev-shm-usage");

        BrowserWebDriverContainer driverContainer = new BrowserWebDriverContainer<>()
                .withRecordingMode(BrowserWebDriverContainer.VncRecordingMode.SKIP, null);
        driverContainer.setShmSize(0L);
        driverContainer.start();

        WebDriver driver = new RemoteWebDriver(driverContainer.getSeleniumAddress(), chromeOptions);
        driver.navigate().to("https://www.wikipedia.org/");
        String subtitleText = driver.findElement(By.cssSelector("#www-wikipedia-org h1 strong")).getText();
        assert subtitleText.equals("The Free Encyclopedia");
        driver.quit();
        driverContainer.stop();
        System.out.println("Finished opening wikipedia. 📖 🤓 🔍 👍 ✨");
    }
}
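For what it's worth, the same shm-size workaround can presumably also be kept in the original @Rule-based setup (reusing the chromeOptions field from the first listing) by configuring the container before JUnit starts it; a rough, untested sketch, where withSharedMemorySize comes from Testcontainers' GenericContainer:
// Sketch only: same workaround, but as a @Rule so JUnit manages the container lifecycle.
@Rule
public BrowserWebDriverContainer<?> driverContainer =
        new BrowserWebDriverContainer<>()
                .withCapabilities(chromeOptions)
                .withRecordingMode(BrowserWebDriverContainer.VncRecordingMode.SKIP, null)
                .withSharedMemorySize(0L);  // equivalent to setShmSize(0L) above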

GraalVM native-image failing to compile a simple JavaFX app on Windows 10

Summary: GraalVM native-image reports the following error when compiling a simple JavaFX application:
Error occurred during initialization of boot layer
java.lang.module.FindException: Module javafx.controls not found
Error: Image build request failed with exit status 1
Command line used:
"%GRAAL_BIN%"\native-image.cmd --module-path "c:\eclipse-javafx-sdk-19\lib" --add-modules javafx.controls,javafx.fxml,. -cp . application.GraalvmFX
Details:
Simple JavaFX application
package application;

import javafx.application.Application;
import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Separator;
import javafx.scene.control.ToolBar;
import javafx.scene.layout.BorderPane;

public class GraalvmFX extends Application {
    @Override
    public void start(Stage primaryStage) {
        try {
            ToolBar toolbar = new ToolBar();
            Button b1 = new Button("New");
            Button b2 = new Button("Open");
            Button b3 = new Button("Save");
            Button b4 = new Button("Exit");
            toolbar.getItems().addAll(b1, b2, b3, new Separator(), b4);

            BorderPane root = new BorderPane();
            root.setTop(toolbar);

            Scene scene = new Scene(root, 400, 400);
            primaryStage.setScene(scene);
            primaryStage.show();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        launch(args);
    }
}
Running this app from the command line with java.exe or javac.exe is not an issue:
C:\eclipse-workspace2\graalvmFX\src>java --version
java 17.0.4.1 2022-08-18 LTS
Java(TM) SE Runtime Environment (build 17.0.4.1+1-LTS-2)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.4.1+1-LTS-2, mixed mode, sharing)
C:\eclipse-workspace2\graalvmFX\src>javac --module-path "C:\eclipse-javafx-sdk-19\lib" --add-modules javafx.controls,javafx.fxml -cp . application\GraalvmFX.java
C:\eclipse-workspace2\graalvmFX\src>java --module-path "C:\eclipse-javafx-sdk-19\lib" --add-modules javafx.controls,javafx.fxml -cp . application.GraalvmFX
But trying to compile the app using native-image fails:
C:\eclipse-workspace2\graalvmFX\src>"%GRAAL_BIN%"\native-image.cmd --version
GraalVM 22.2.0 Java 17 CE (Java Version 17.0.4+8-jvmci-22.2-b06)
C:\eclipse-workspace2\graalvmFX\src>"%GRAAL_BIN%"\native-image.cmd --module-path "c:\eclipse-javafx-sdk-19\lib" --add-modules javafx.controls,javafx.fxml application.GraalvmFX
Error occurred during initialization of boot layer
java.lang.module.FindException: Module javafx.controls not found
Error: Image build request failed with exit status 1
I have tried everything that I can think of!

Infinispan cluster with Karaf instances

We are very new to Infinispan and also quite new to Apache Karaf. Installing Infinispan in Karaf was easy, and we wrote two OSGi bundles to form a cluster with two nodes that run on one host. We tried it with the tutorial for a distributed cache from the Infinispan website (tutorial). Unfortunately the cluster does not seem to be formed, and we can't determine why. Any help or push in the right direction would be very much appreciated.
The code of the bundle that writes something into the cache looks like this:
import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.context.Flag;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CacheProducer implements BundleActivator {

    private static Logger LOG = LoggerFactory.getLogger(CacheProducer.class);

    private static DefaultCacheManager cacheManager;

    @Override
    public void start(BundleContext context) throws Exception {
        LOG.info("Start Producer");

        GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
        global.transport().clusterName("ClusterTest");

        // Make the default cache a distributed synchronous one
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.clustering().cacheMode(CacheMode.DIST_SYNC);

        // Initialize the cache manager
        cacheManager = new DefaultCacheManager(global.build(), builder.build());

        // Obtain the default cache
        Cache<String, String> cache = cacheManager.getCache();
        cache.put("message", "Hello World!");

        LOG.info("Producer: whole cluster content!");
        cache.entrySet().forEach(entry -> LOG.info(entry.getKey() + ": " + entry.getValue()));

        LOG.info("Producer: current cache content!");
        cache.getAdvancedCache().withFlags(Flag.SKIP_REMOTE_LOOKUP)
                .entrySet().forEach(entry -> LOG.info(entry.getKey() + ": " + entry.getValue()));
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        cacheManager.stop();
    }
}
And the one that tries to print out what is in the cache looks like this:
package metdoc81.listener;

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Activator implements BundleActivator {

    private static Logger LOG = LoggerFactory.getLogger(Activator.class);

    private static DefaultCacheManager cacheManager;

    public void start(BundleContext bundleContext) throws Exception {
        LOG.info("start cluster listener");

        GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
        global.transport().clusterName("ClusterTest");

        // Make the default cache a distributed synchronous one
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.clustering().cacheMode(CacheMode.DIST_SYNC);

        // Initialize the cache manager
        cacheManager = new DefaultCacheManager(global.build(), builder.build());

        // Obtain the default cache
        Cache<String, String> cache = cacheManager.getCache();

        LOG.info("After configuration");
        cache.entrySet().forEach(entry -> LOG.info(entry.getKey() + ": " + entry.getValue()));
        LOG.info("After logging");
    }

    public void stop(BundleContext bundleContext) throws Exception {
    }
}
The printing from the CacheProducer works, printing from the Listener does not.
We found the solution ourselves.
The problem only occurs when you run the code on macOS; on Windows it works. According to a discussion at JBossDeveloper, there was a problem with the multicast routing on macOS. Even though they added a workaround into the example code, you still have to add the -Djava.net.preferIPv4Stack=true flag when running it, or you have to add these two lines of code:
Properties properties = System.getProperties();
properties.setProperty( "java.net.preferIPv4Stack", "true" );
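For clarity, those two lines (or a plain System.setProperty call) have to run before the clustered cache manager is created, so that the property is already set when the JGroups transport starts. A minimal sketch of where that would go in the activator above (my placement, not from the original answer):
public void start(BundleContext bundleContext) throws Exception {
    // Must be set before DefaultCacheManager starts the JGroups transport
    System.setProperty("java.net.preferIPv4Stack", "true");

    GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
    global.transport().clusterName("ClusterTest");
    // ... rest of the start() method as shown above ...
}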

react-native-firebase install - package io.invertase.firebase does not exist?

I am getting the following issues when running "react-native run-android" in my project. I've gone through the react-native-firebase setup as normal, but in this case I can't quite see what I might have done wrong.
:app:incrementalDebugJavaCompilationSafeguard UP-TO-DATE
:app:compileDebugJavaWithJavac
:app:compileDebugJavaWithJavac - is not incremental (e.g. outputs have changed, no previous execution, etc.).
/source/myapp/android/app/src/main/java/com/gctodo/MainApplication.java:11: error: package io.invertase.firebase does not exist
import io.invertase.firebase.RNFirebasePackage;
^
/source/myapp/android/app/src/main/java/com/gctodo/MainApplication.java:12: error: package io.invertase.firebase.auth does not exist
import io.invertase.firebase.auth.RNFirebaseAuthPackage;
^
/source/myapp/android/app/src/main/java/com/gctodo/MainApplication.java:13: error: package io.invertase.firebase.firestore does not exist
import io.invertase.firebase.firestore.RNFirebaseFirestorePackage;
^
/source/myapp/android/app/src/main/java/com/gctodo/MainApplication.java:30: error: cannot find symbol
new RNFirebasePackage(),
^
symbol: class RNFirebasePackage
/source/myapp/android/app/src/main/java/com/gctodo/MainApplication.java:31: error: cannot find symbol
new RNFirebaseAuthPackage(),
^
symbol: class RNFirebaseAuthPackage
/source/myapp/android/app/src/main/java/com/gctodo/MainApplication.java:32: error: cannot find symbol
new RNFirebaseFirestorePackage()
^
symbol: class RNFirebaseFirestorePackage
6 errors
:app:compileDebugJavaWithJavac FAILED
MainApplication.java is, for example:
package com.gctodo;

import android.app.Application;

import com.facebook.react.ReactApplication;
import com.facebook.react.ReactNativeHost;
import com.facebook.react.ReactPackage;
import com.facebook.react.shell.MainReactPackage;
import com.facebook.soloader.SoLoader;

import io.invertase.firebase.RNFirebasePackage;
import io.invertase.firebase.auth.RNFirebaseAuthPackage;
import io.invertase.firebase.firestore.RNFirebaseFirestorePackage;

import java.util.Arrays;
import java.util.List;

public class MainApplication extends Application implements ReactApplication {

    private final ReactNativeHost mReactNativeHost = new ReactNativeHost(this) {
        @Override
        public boolean getUseDeveloperSupport() {
            return BuildConfig.DEBUG;
        }

        @Override
        protected List<ReactPackage> getPackages() {
            return Arrays.<ReactPackage>asList(
                    new MainReactPackage(),
                    new RNFirebasePackage(),
                    new RNFirebaseAuthPackage(),
                    new RNFirebaseFirestorePackage()
            );
        }

        @Override
        protected String getJSMainModuleName() {
            return "index";
        }
    };

    @Override
    public ReactNativeHost getReactNativeHost() {
        return mReactNativeHost;
    }

    @Override
    public void onCreate() {
        super.onCreate();
        SoLoader.init(this, /* native exopackage */ false);
    }
}
It appears that RNFirebasePackage is missing and cannot be found during the compile step.
Try running
react-native link
after install, followed by a clean, and then run the build.
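For React Native versions before 0.60 (no autolinking), that would presumably look something like:
react-native link react-native-firebase
cd android && ./gradlew clean
cd .. && react-native run-android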
Running react-native link did not work for me. I had to delete the android and ios project folders and then run
react-native eject
which recreated both the ios and android folders. I was able to run the builds after that.
For anyone else that gets stuck here, I needed to run npm i @react-native-firebase/messaging.

How do I use a custom realm with GlassFish 3.1?

I would like to use a custom realm with GlassFish 3.1.
I took the two files below from this topic to try: Custom Glassfish Security Realm does not work (unable to find LoginModule).
The CustomRealm.java
package com.company.security.realm;

import com.sun.appserv.security.AppservRealm;
import com.sun.enterprise.security.auth.realm.BadRealmException;
import com.sun.enterprise.security.auth.realm.InvalidOperationException;
import com.sun.enterprise.security.auth.realm.NoSuchRealmException;
import com.sun.enterprise.security.auth.realm.NoSuchUserException;

import java.util.Enumeration;
import java.util.Properties;
import java.util.Vector;

public class CustomRealm extends AppservRealm {

    Vector<String> groups = new Vector<String>();
    private String jaasCtxName;
    private String startWith;

    @Override
    public void init(Properties properties) throws BadRealmException, NoSuchRealmException {
        jaasCtxName = properties.getProperty("jaas-context", "customRealm");
        startWith = properties.getProperty("startWith", "z");
        groups.add("dummy");
    }

    @Override
    public String getAuthType() {
        return "Custom Realm";
    }

    public String[] authenticate(String username, char[] password) {
        // if (isValidLogin(username, password))
        return (String[]) groups.toArray();
    }

    @Override
    public Enumeration getGroupNames(String username) throws InvalidOperationException, NoSuchUserException {
        return groups.elements();
    }

    @Override
    public String getJAASContext() {
        return jaasCtxName;
    }

    public String getStartWith() {
        return startWith;
    }
}
And the custom login module
package com.company.security.realm;

import com.sun.appserv.security.AppservPasswordLoginModule;
import com.sun.enterprise.security.auth.login.common.LoginException;

import java.util.Set;

import org.glassfish.security.common.PrincipalImpl;

public class CustomLoginModule extends AppservPasswordLoginModule {

    @Override
    protected void authenticateUser() throws LoginException {
        _logger.info("CustomRealm : authenticateUser for " + _username);

        final CustomRealm realm = (CustomRealm) _currentRealm;
        if ((_username == null) || (_username.length() == 0) || !_username.startsWith(realm.getStartWith())) {
            throw new LoginException("Invalid credentials");
        }

        String[] grpList = realm.authenticate(_username, getPasswordChar());
        if (grpList == null) {
            throw new LoginException("User not in groups");
        }

        _logger.info("CustomRealm : authenticateUser for " + _username);

        Set principals = _subject.getPrincipals();
        principals.add(new PrincipalImpl(_username));
        this.commitUserAuthentication(grpList);
    }
}
I also added the module to the login conf file:
customRealm {
    com.company.security.realm.CustomLoginModule required;
};
And I copied my two .class files into glassfish3/glassfish/domains/domain1/lib/classes/
as well as glassfish3/glassfish/lib.
Every time I try to create a new realm I get the same error:
./asadmin --port 4949 create-auth-realm --classname com.company.security.realm.CustomRealm --property jaas-context=customRealm:startWith=a customRealm
remote failure: Creation of Authrealm customRealm failed. com.sun.enterprise.security.auth.realm.BadRealmException: java.lang.ClassNotFoundException: com.company.security.realm.CustomRealm not found by org.glassfish.security [101]
com.sun.enterprise.security.auth.realm.BadRealmException: java.lang.ClassNotFoundException: com.company.security.realm.CustomRealm not found by org.glassfish.security [101]
Command create-auth-realm failed.
I think I don't really understand the proper way to add my two files to GlassFish.
These two files were created and compiled from Eclipse; I created a Java project for the custom login.
Can someone help?
Thanks a lot in advance,
loic
Did you package it as an OSGi module (see the answer in the post you referenced)? If so, don't copy the jar file into $GF_HOME/lib or anything; instead, deploy it as an OSGi module:
asadmin deploy --type osgi /path/to/CustomRealm.jar
Then add the login.conf settings. To be on the safe side, I'd restart GF (asadmin restart-domain); then you can create the realm with the command you have there.
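If the jar is not an OSGi bundle yet, it will need the usual OSGi metadata in its MANIFEST.MF before the OSGi deployment will work; roughly something like the following (header values here are illustrative, not taken from the referenced post), plus Import-Package entries for the GlassFish packages the classes use (com.sun.appserv.security, com.sun.enterprise.security.auth.realm, and so on), which a tool such as the maven-bundle-plugin or bnd normally generates for you:
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.company.security.realm
Bundle-Version: 1.0.0
Export-Package: com.company.security.realm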
