Before using a CLI I had a starter class that called my ApplicationPropertiesProvider class (which reads my properties file) and then kicked off the business logic. So there was a separation: ApplicationPropertiesProvider had just one job.
Now with picocli, the guide/documentation states that I have to use CommandLine.run(objectToPopulate, args) or CommandLine.call(objectToPopulate, args), so the class being populated with the CLI parameters (ApplicationPropertiesProvider) has to implement Runnable or Callable. I could just paste the kick-off code from my Starter class into the run() or call() method and abandon the Starter class.
But I don't like that; I want to keep the class that just holds the properties separate from my Starter class.
A somewhat dirty workaround, which I thought of and show in my example below, would be to pass the arguments from the main method to my Starter class's constructor, populate the ApplicationPropertiesProvider with CommandLine.run(), but implement only an empty run() or call() method there, so control immediately returns to my Starter class, where I then kick off the business logic.
That would give me the separation I'm asking for, but it seems really clumsy.
Another question that just came up: in the common case of multiple classes that contain business code along with their own properties (instead of a single property-providing class), is it possible to populate several different classes with one CLI call, i.e. calling "test.jar command --a --b" where parameter "a" goes straight to an instance of class "X" and "b" goes to an instance of class "Y"?
public class Starter {
    public static void main(String[] args) {
        new Starter(args);
    }

    public Starter(String[] args) {
        ApplicationPropertiesProvider app = ApplicationPropertiesProvider.getInstance();
        CommandLine.run(app, args);
        // then kick off the business logic of the application
    }
}
@Command(...)
public class ApplicationPropertiesProvider implements Runnable {
    // annotated properties
    @Option(...)
    private String x;

    @Override
    public void run() { }
}
The run and call methods are convenience methods that let applications reduce their boilerplate code. You don't need to use them; you can use the parse or parseArgs method instead. That looks something like this:
1 @Command(mixinStandardHelpOptions = true)
2 public class ApplicationPropertiesProvider { // not Runnable
3     // annotated properties
4     @Option(...)
5     private String x;
6     // ...
7 }
8
9 public class Starter {
10     public static void main(String[] args) {
11         ApplicationPropertiesProvider app = ApplicationPropertiesProvider.getInstance();
12         CommandLine cmd = new CommandLine(app);
13         try {
14             ParseResult result = cmd.parseArgs(args);
15             if (result.isUsageHelpRequested()) {
16                 cmd.usage(System.out);
17             } else if (result.isVersionHelpRequested()) {
18                 cmd.printVersionHelp(System.out);
19             } else {
20                 new Starter(app); // run the business logic
21             }
22         } catch (ParameterException ex) {
23             System.err.println(ex.getMessage());
24             ex.getCommandLine().usage(System.err);
25         }
26     }
27
28     public Starter(ApplicationPropertiesProvider app) {
29         // kick off the business logic of the application
30     }
31 }
This is fine; it's just that lines 11-25 are boilerplate code. You can omit all of that and let picocli do the work for you by having the annotated object implement Runnable or Callable.
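For comparison, a minimal sketch of what the Runnable variant could look like (assuming the getInstance() singleton from your example; picocli then handles parsing, help/version requests and error messages for you):
@Command(mixinStandardHelpOptions = true)
public class ApplicationPropertiesProvider implements Runnable {
    @Option(names = "-x")
    private String x;

    @Override
    public void run() {
        new Starter(this); // hand the populated properties to the business logic
    }
}

public class Starter {
    public static void main(String[] args) {
        // picocli parses the arguments, then invokes run() on the populated object
        CommandLine.run(ApplicationPropertiesProvider.getInstance(), args);
    }

    public Starter(ApplicationPropertiesProvider app) {
        // kick off the business logic of the application
    }
}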
I understand your point about separation of concerns and having different classes for the business logic and the class that holds the properties. I have a suggestion, but first let me answer your second question:
Is it possible to populate multiple different classes with one CLI call?
Picocli supports "Mixins" that allow you to do this. For example:
class A {
    @Option(names = "-a") int aValue;
}
class B {
    @Option(names = "-b") int bValue;
}
class C {
    @Mixin A a;
    @Mixin B b;
    @Option(names = "-c") int cValue;
}

// populate C's properties as well as the nested mixins
C c = CommandLine.populateCommand(new C(), "-a=11", "-b=22", "-c=33");
assert c.a.aValue == 11;
assert c.b.bValue == 22;
assert c.cValue == 33;
Now, let's put all this together:
class A {
    @Option(names = "-a") int aValue;
    @Option(names = "-b") int bValue;
    @Option(names = "-c") int cValue;
}
class B {
    @Option(names = "-x") int xValue;
    @Option(names = "-y") int yValue;
    @Option(names = "-z") int zValue;
}
class ApplicationPropertiesProvider {
    @Mixin A a;
    @Mixin B b;
}
class Starter implements Callable<Void> {
    @Mixin ApplicationPropertiesProvider properties = ApplicationPropertiesProvider.getInstance();

    public Void call() throws Exception {
        // business logic here
        return null;
    }

    public static void main(String... args) {
        CommandLine.call(new Starter(), args);
    }
}
This gives you separation of concerns: properties are located in the ApplicationPropertiesProvider, business logic is in the Starter class.
It also allows you to group properties that logically belong together into separate classes, instead of having a single dumping ground in ApplicationPropertiesProvider.
The Starter class implements Callable; this allows you to omit the boilerplate logic above and start your application in a single line of code in main.
I am working on a Mono.Cecil codegen utility, and I want to perform the following operation:
Loop through the types
If a type has the X attribute:
- Add an ITestInterface implementation (where ITestInterface defines some methods)
// For reference
public interface ITestInterface
{
    void Something();
    int IntSomething();
}

// Expected result, if type contains X attribute:

// Before codegen:
[X]
public class CodeGenExample
{
}

// After codegen:
[X]
public class CodeGenExample : ITestInterface
{
    public void Something()
    {
        // some stuff
    }

    public int IntSomething()
    {
        // do some stuff
        return 0;
    }
}
I have seen that .NET Reflection has an AddInterfaceImplementation method (https://learn.microsoft.com/pl-pl/dotnet/api/system.reflection.emit.typebuilder.addinterfaceimplementation?view=net-5.0).
Is there a Mono.Cecil equivalent or a workaround for this, and how do I use it?
That can be achieved by:
Iterating over all types defined in the assembly
Checking which types have the attribute applied to them
Injecting the methods.
As an example you can do something like:
using System;
using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;
namespace inject
{
    interface IMyInterface
    {
        int Something();
    }

    class MarkerAttribute : Attribute { }

    [Marker]
    class Foo
    {
    }

    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length == 1)
            {
                using var a = AssemblyDefinition.ReadAssembly(typeof(Program).Assembly.Location);
                var interfaceToImplement = a.MainModule.GetType("inject.IMyInterface");
                foreach (var t in a.MainModule.Types)
                {
                    if (t.HasCustomAttributes && t.CustomAttributes.Any(c => c.Constructor.DeclaringType.Name == "MarkerAttribute"))
                    {
                        System.Console.WriteLine($"Adding methods to : {t}");
                        var something = new MethodDefinition("Something", MethodAttributes.Public | MethodAttributes.HideBySig | MethodAttributes.NewSlot | MethodAttributes.Virtual, a.MainModule.TypeSystem.Int32);
                        something.Body = new MethodBody(something);
                        var il = something.Body.GetILProcessor();
                        il.Emit(OpCodes.Ldc_I4, 42);
                        il.Emit(OpCodes.Ret);
                        t.Methods.Add(something);

                        // Add the interface.
                        t.Interfaces.Add(new InterfaceImplementation(interfaceToImplement));

                        var path = typeof(Program).Assembly.Location + ".new";
                        a.Write(path);
                        System.Console.WriteLine($"Modified version written to {path}");
                    }
                }
            }
            else
            {
                object f = new Foo();
                IMyInterface itf = (IMyInterface)f;
                System.Console.WriteLine($"Something() == {itf.Something()}");
            }
        }
    }
}
Another potential solution is to implement the methods in an internal "template" class and copy over their method bodies.
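A rough sketch of that idea, assuming an internal template type named inject.TemplateImpl in the same module (the name is hypothetical, and branch targets or cross-module references would need extra care):
// Hypothetical: copy the methods of inject.TemplateImpl onto the target type `t`
var template = a.MainModule.GetType("inject.TemplateImpl");
foreach (var source in template.Methods)
{
    if (source.IsConstructor) continue;

    var copy = new MethodDefinition(source.Name, source.Attributes, source.ReturnType);
    foreach (var p in source.Parameters)
        copy.Parameters.Add(new ParameterDefinition(p.Name, p.Attributes, p.ParameterType));

    copy.Body = new MethodBody(copy);
    foreach (var v in source.Body.Variables)
        copy.Body.Variables.Add(new VariableDefinition(v.VariableType));

    // Naive instruction copy; good enough for simple bodies within one module
    var il = copy.Body.GetILProcessor();
    foreach (var instruction in source.Body.Instructions)
        il.Append(instruction);

    t.Methods.Add(copy);
}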
As a side note, these are 2 online tools you can use to explore/learn more about CIL, Mono.Cecil, C#:
Sharplab.io
Cecilifier (disclaimer: I'm the author of this one)
That being said, if you can use C# 9.0, you may be able to leverage the new Source Generators feature.
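For example, a bare-bones generator might look roughly like this (a sketch only: a real one would scan the compilation for the [X] attribute instead of hard-coding the type name, and it requires CodeGenExample to be declared partial):
using Microsoft.CodeAnalysis;

[Generator]
public class TestInterfaceGenerator : ISourceGenerator
{
    public void Initialize(GeneratorInitializationContext context) { }

    public void Execute(GeneratorExecutionContext context)
    {
        // A real implementation would locate all types marked with [X] here
        context.AddSource("CodeGenExample.ITestInterface.cs", @"
public partial class CodeGenExample : ITestInterface
{
    public void Something() { /* some stuff */ }
    public int IntSomething() { /* do some stuff */ return 0; }
}");
    }
}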
I have been browsing Stack Overflow for some days, trying to find out how to re-run a whole test class, and not just an @Test step. Many say that this is not supported with TestNG and IRetryAnalyzer, whereas some have posted workarounds that don't really work.
Has anyone managed to do it?
And just to clarify the reasons for this, in order to avoid answers saying that it is unsupported on purpose: TestNG is a tool not only for developers. It is also used by software testers for e2e testing, and e2e tests can have steps that each depend on the previous one. So yes, it is valid to re-run a whole test class rather than a single @Test, which can easily be done via IRetryAnalyzer.
An example of what I want to achieve would be:
public class DemoTest extends TestBase {

    @Test(alwaysRun = true, description = "Do this")
    public void testStep_1() {
        driver.navigate().to("http://www.stackoverflow.com");
        Assert.assertTrue(driver.getCurrentUrl().contains("stackoverflow"));
    }

    @Test(alwaysRun = true, dependsOnMethods = "testStep_1", description = "Do that")
    public void testStep_2() {
        driver.press("button");
        Assert.assertEquals(true, driver.elementIsVisible("button"));
    }

    @Test(alwaysRun = true, dependsOnMethods = "testStep_2", description = "Do something else")
    public void testStep_3() {
        driver.press("button2");
        Assert.assertEquals(true, driver.elementIsVisible("button"));
    }
}
Let's say that testStep_2 fails; I want to rerun the class DemoTest and not just testStep_2.
Okay, I know that you probably want some easy property you can specify in your @BeforeClass or something like that, but we might need to wait for that to be implemented. At least I couldn't find it either.
The following is ugly as hell, but I think it does the job, at least on a small scale; it remains to be seen how it behaves in more complex scenarios. With more time, this could probably be improved into something better.
Okay, so I created a test class similar to yours:
public class RetryTest extends TestConfig {

    Assertion assertion = new Assertion();

    @Test( enabled = true,
           groups = { "retryTest" },
           retryAnalyzer = TestRetry.class,
           ignoreMissingDependencies = false)
    public void testStep_1() {
    }

    @Test( enabled = true,
           groups = { "retryTest" },
           retryAnalyzer = TestRetry.class,
           dependsOnMethods = "testStep_1",
           ignoreMissingDependencies = false)
    public void testStep_2() {
        if (fail) assertion.fail("This will fail the first time and not the second.");
    }

    @Test( enabled = true,
           groups = { "retryTest" },
           retryAnalyzer = TestRetry.class,
           dependsOnMethods = "testStep_2",
           ignoreMissingDependencies = false)
    public void testStep_3() {
    }

    @Test( enabled = true)
    public void testStep_4() {
        assertion.fail("This should leave a failure in the end.");
    }
}
I have the Listener in the superclass just in case I'd like to extend this to other classes, but you can also set the listener directly in your test class.
@Listeners(TestListener.class)
public class TestConfig {
    protected static boolean retrySuccessful = false;
    protected static boolean fail = true;
}
Three of the 4 methods above have a RetryAnalyzer. I left testStep_4 without it to make sure that what I'm doing next doesn't mess with the rest of the execution. Said RetryAnalyzer won't actually retry (note that the method returns false), but it will do the following:
public class TestRetry implements IRetryAnalyzer {

    public static TestNG retryTestNG = null;

    @Override
    public boolean retry(ITestResult result) {
        Class[] classes = {RetryTest.class};
        retryTestNG = new TestNG();
        retryTestNG.setDefaultTestName("RETRY TEST");
        retryTestNG.setTestClasses(classes);
        retryTestNG.setGroups("retryTest");
        retryTestNG.addListener(new RetryAnnotationTransformer());
        retryTestNG.addListener(new TestListenerRetry());
        retryTestNG.run();
        return false;
    }
}
This will create an execution inside your execution. It won't mess with the report, and as soon as it finishes, it will continue with your main execution. But it will "retry" the methods within that group.
Yes, I know, I know. This would mean executing your test suite forever in an eternal loop. That's why we need the RetryAnnotationTransformer: in it, we remove the RetryAnalyzer from the second execution of those tests:
public class RetryAnnotationTransformer extends TestConfig implements IAnnotationTransformer {

    @SuppressWarnings("rawtypes")
    @Override
    public void transform(ITestAnnotation annotation, Class testClass, Constructor testConstructor, Method testMethod) {
        fail = false; // This is just for debugging. Will make testStep_2 pass in the second run.
        annotation.setRetryAnalyzer(null);
    }
}
Now we have the last of our problems: our original test suite knows nothing about that "retry" execution. This is where it gets really ugly, because we need to tell our Reporter what just happened. It is also the part I encourage you to improve; I lack the time to do something nicer, but if I can, I will edit it at some point.
First, we need to know whether the retryTestNG execution was successful. There are probably a million better ways to do this, but for now this works: I set up a listener just for the retrying execution. You can see it referenced in TestRetry above, and it consists of the following:
public class TestListenerRetry extends TestConfig implements ITestListener {

    (...)

    @Override
    public void onFinish(ITestContext context) {
        if (context.getFailedTests().size() == 0 && context.getSkippedTests().size() == 0) {
            retrySuccessful = true;
        }
    }
}
Now the listener of the main suite, the one you saw above in the superclass TestConfig, will check whether the retry run happened and whether it went well, and will update the report accordingly:
public class TestListener extends TestConfig implements ITestListener, ISuiteListener {

    (...)

    @Override
    public void onFinish(ISuite suite) {
        if (TestRetry.retryTestNG != null) {
            for (ITestNGMethod iTestNGMethod : suite.getMethodsByGroups().get("retryTest")) {
                Collection<ISuiteResult> iSuiteResultList = suite.getResults().values();
                for (ISuiteResult iSuiteResult : iSuiteResultList) {
                    ITestContext iTestContext = iSuiteResult.getTestContext();
                    List<ITestResult> unsuccessfulMethods = new ArrayList<ITestResult>();
                    for (ITestResult iTestResult : iTestContext.getFailedTests().getAllResults()) {
                        if (iTestResult.getMethod().equals(iTestNGMethod)) {
                            iTestContext.getFailedTests().removeResult(iTestResult);
                            unsuccessfulMethods.add(iTestResult);
                        }
                    }
                    for (ITestResult iTestResult : iTestContext.getSkippedTests().getAllResults()) {
                        if (iTestResult.getMethod().equals(iTestNGMethod)) {
                            iTestContext.getSkippedTests().removeResult(iTestResult);
                            unsuccessfulMethods.add(iTestResult);
                        }
                    }
                    for (ITestResult iTestResult : unsuccessfulMethods) {
                        iTestResult.setStatus(ITestResult.SUCCESS);
                        iTestContext.getPassedTests().addResult(iTestResult, iTestResult.getMethod());
                    }
                }
            }
        }
    }
}
The report should now show 3 tests passed (as they were retried) and one failed, because it wasn't part of the other 3 tests.
I know it's not exactly what you are looking for, but I hope it serves you until they add this functionality to TestNG.
I want to retrieve objects in reverse insertion order.
For example, I have a collection object into which I have inserted the following objects:
mango
apple
orange
When retrieving, they should come back in reverse insertion order, i.e. orange, apple, mango, and the collection class should also allow duplicate objects. Is there any built-in API in JDK 1.6 to do this? Otherwise, please tell me the logic to implement it.
Go for java.util.Stack, which uses a last-in-first-out (LIFO) policy. See the docs for Stack.
But read this too.
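For instance, a minimal sketch using java.util.ArrayDeque (available since Java 6 and generally recommended over Stack): pushing in insertion order makes iteration yield reverse insertion order, and duplicates are allowed.
import java.util.ArrayDeque;
import java.util.Deque;

public class ReverseOrderDemo {
    public static void main(String[] args) {
        Deque<String> fruits = new ArrayDeque<String>();
        fruits.push("mango");
        fruits.push("apple");
        fruits.push("orange");

        // Iteration is last-in-first-out: orange, apple, mango
        for (String fruit : fruits) {
            System.out.println(fruit);
        }
    }
}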
Here is an example for you. I hope it helps.
import java.util.ArrayList;
import java.util.Collections;

public class ReverseCollection {

    ArrayList<String> al = new ArrayList<String>();

    public static void main(String[] args) {
        ReverseCollection rc = new ReverseCollection();
        rc.createList();
        System.out.println(" ------ simple order ---------");
        rc.print();
        Collections.reverse(rc.getAl());
        System.out.println(" ------ reverse order -------- ");
        rc.print();
    }

    private void print() {
        for (int i = 0; i < al.size(); i++) {
            System.out.println(al.get(i));
        }
    }

    public ArrayList<String> getAl() {
        return al;
    }

    public void setAl(ArrayList<String> al) {
        this.al = al;
    }

    // add elements to the ArrayList
    private void createList() {
        al.add("JAVA");
        al.add("C++");
        al.add("PERL");
        al.add("PHP");
    }
}
Here I have used the built-in reverse method of the Collections class.
Note that this method reverses the list in place, so the original list is modified.
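If you need to keep the original insertion order intact, you can reverse a copy instead (a small sketch reusing the al list from the example above):
// Reverse a copy so the original list keeps its insertion order
ArrayList<String> reversed = new ArrayList<String>(al);
Collections.reverse(reversed);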
I have installed the academic version of Pex and Moles.
I wrote the following code in Visual Studio 2010, but Pex just gave a null pointer as the input. Doesn't Pex support class types? Please help me.
The method under test is Test.
Source code:
public class ClassForPex
{
    public int a;
    public int b;

    ClassForPex(int x, int y)
    {
        a = x;
        b = y;
    }
}

public static class StringExtensions
{
    public static int Test(ClassForPex cjh)
    {
        if (cjh.a > cjh.b)
            return cjh.a;
        else
        {
            return cjh.b;
        }
    }
}
You'll need to use a factory for supplying your ClassForPex instances to the tests. Take a look at this article to see how to do that:
Using a Factory in Pex - http://developers.de/blogs/damir_dobric/archive/2009/04/13/using-of-factory-in-pex.aspx
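A minimal sketch of what such a factory could look like (assuming the ClassForPex constructor is made public or otherwise accessible to the factory; as posted, it is private):
using Microsoft.Pex.Framework;

public static partial class ClassForPexFactory
{
    // Pex calls this method to construct ClassForPex inputs,
    // exploring values for x and y instead of passing null
    [PexFactoryMethod(typeof(ClassForPex))]
    public static ClassForPex Create(int x, int y)
    {
        return new ClassForPex(x, y);
    }
}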
Any idea how to do what the title says? The only thing I found was on the original Velocity site, and I don't think
ve.setProperty(RuntimeConstants.RUNTIME_LOG_LOGSYSTEM_CLASS,
    "org.apache.velocity.runtime.log.Log4JLogChute");
ve.setProperty("runtime.log.logsystem.log4j.logger", LOGGER_NAME);
will work wonderfully well on .NET. I am using log4net, which should make it quite easy, but the documentation on NVelocity is really a mess.
Implement NVelocity.Runtime.Log.ILogSystem (you could write a simple implementation that bridges to log4net) and set this implementation type in the RuntimeConstants.RUNTIME_LOG_LOGSYSTEM_CLASS property.
How I got this information:
Get the code.
Search for "log" in the codebase
Discover the classes in NVelocity.Runtime.Log.
Read those classes' source, they're very simple and thoroughly documented.
Update:
Currently, NVelocity does not support logging: the initializeLogger() and Log() methods in the RuntimeInstance class are commented out.
If you need to log, uncomment the two methods and add a private ILogSystem logSystem; field.
Here's our on-the-fly implementation:
public class RuntimeInstance : IRuntimeServices
{
    private ILogSystem logSystem;
    ...

    private void initializeLogger()
    {
        logSystem = LogManager.CreateLogSystem(this);
    }
    ...

    private void Log(LogLevel level, Object message)
    {
        String output = message.ToString();
        logSystem.LogVelocityMessage(level, output);
    }
    ...
}
Then we implemented ILogSystem for log4net:
using log4net;
using NVelocity.Runtime;
using NVelocity.Runtime.Log;

namespace Services.Templates
{
    public class Log4NetILogSystem : ILogSystem
    {
        private readonly ILog _log;

        public Log4NetILogSystem(ILog log)
        {
            _log = log;
        }

        public void Init(IRuntimeServices rs)
        {
        }

        public void LogVelocityMessage(LogLevel level, string message)
        {
            switch (level)
            {
                case LogLevel.Debug:
                    _log.Debug(message);
                    break;
                case LogLevel.Info:
                    _log.Info(message);
                    break;
                case LogLevel.Warn:
                    _log.Warn(message);
                    break;
                case LogLevel.Error:
                    _log.Error(message);
                    break;
            }
        }
    }
}
Then, when creating the engine:
var engine = new VelocityEngine();
var props = new ExtendedProperties();
props.SetProperty(RuntimeConstants.RUNTIME_LOG_LOGSYSTEM,
new Log4NetILogSystem(LogManager.GetLogger(typeof(NVelocityEngine))));
engine.Init(props);