Failed to meta-introspect annotation interface org.springframework.web.bind.annotation - spring-mvc

Error message:
21:13:46,666 DEBUG AnnotationUtils:1889 - Failed to meta-introspect annotation interface org.springframework.web.bind.annotation.RequestBody: java.lang.NullPointerException
com.alibaba.dubbo.rpc.RpcException: No provider available from registry 172.16.33.23:2181 for service com.itheima.service.CheckItemService on consumer 172.16.33.29 use dubbo version 2.6.0, may be providers disabled or not registered ?
at com.alibaba.dubbo.registry.integration.RegistryDirectory.doList(RegistryDirectory.java:572)
I have checked the Dubbo annotation package and my controller's imports, and they look correct:
import com.alibaba.dubbo.config.annotation.Reference;
import com.itheima.constant.MessageConstant;
import com.itheima.entity.Result;
import com.itheima.pojo.CheckItem;
import com.itheima.service.CheckItemService;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
@RequestMapping("/checkitem")
public class CheckItemController {

    @Reference
    private CheckItemService checkItemService;

    @RequestMapping("/add")
    public Result add(@RequestBody CheckItem checkItem) {
        try {
            checkItemService.add(checkItem);
        } catch (Exception e) {
            e.printStackTrace();
            return new Result(false, MessageConstant.ADD_CHECKITEM_FAIL);
        }
        return new Result(true, MessageConstant.ADD_CHECKITEM_SUCCESS);
    }
}
Also, my ZooKeeper server is started:
[root@localhost bin]# ./zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
And the front-end request goes through successfully. The registry node in ZooKeeper also seems to have no problems:
[zk: 127.0.0.1:2181(CONNECTED) 1] ls /dubbo/com.itheima.service.CheckItemService
[consumers, configurators, routers, providers]
Can anyone give me a direction? I can roughly see where the mistake is, but I don't know what to do about it.
RegistryDirectory.class
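In Dubbo, this RpcException normally means that nothing is registered under the providers child node for the interface, i.e. the provider application is not running, is not annotated with Dubbo's @Service, or is pointing at a different registry. As a rough sketch of what the provider side would need (the class and package names below are assumptions, not taken from the question):

// Hypothetical provider-side sketch; CheckItemServiceImpl and its package are assumed names.
// In Dubbo 2.6.x the implementation must carry Dubbo's @Service
// (com.alibaba.dubbo.config.annotation.Service), not Spring's, and the provider
// application must be running against the same ZooKeeper registry as the consumer.
package com.itheima.service.impl;

import com.alibaba.dubbo.config.annotation.Service;
import com.itheima.pojo.CheckItem;
import com.itheima.service.CheckItemService;

@Service(interfaceClass = CheckItemService.class)
public class CheckItemServiceImpl implements CheckItemService {

    @Override
    public void add(CheckItem checkItem) {
        // persistence logic omitted
    }
}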

Related

openqa selenium Session Not Created Exception. Flutter Automation

I am trying to automate a Flutter APK using value-key locators, with Appium and Flutter Finder. I used the following code:
package io.github.ashwith.flutter.example;

import java.net.MalformedURLException;
import java.net.URL;
import java.time.Duration;

import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

import io.appium.java_client.android.AndroidDriver;
import io.github.ashwith.flutter.FlutterFinder;

public class Flutter_Finder {

    public static RemoteWebDriver driver;

    public static void main(String[] args) throws MalformedURLException {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("deviceName", "Android");
        capabilities.setCapability("platformName", "Android");
        capabilities.setCapability("noReset", true);
        capabilities.setCapability("app", "E:\\Testsigma.apk");
        capabilities.setCapability("automationName", "flutter");

        driver = new AndroidDriver(new URL("http://localhost:4723/wd/hub"), capabilities);
        driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(30));

        FlutterFinder finder = new FlutterFinder(driver);
        WebElement element = finder.byValueKey("incrementButton");
        element.click();
    }
}
When I run the code, I get the following error:
Exception in thread "main" org.openqa.selenium.SessionNotCreatedException:
Could not start a new session.
Response code 500.
Message: An unknown server-side error occurred while processing the command.
Original error: Cannot read property 'match' of undefined
I am using the following Appium Java client dependency for this automation:
<dependency>
    <groupId>io.appium</groupId>
    <artifactId>java-client</artifactId>
    <version>8.3.0</version>
</dependency>
Please help me to resolve this error.
Thank you very much!
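One thing that might be worth ruling out (an assumption on my part, not something the error itself confirms): java-client 8.x speaks the W3C protocol, where non-standard capabilities are expected to carry a vendor prefix such as appium:. A sketch of the same capability block written that way, as a drop-in replacement inside main:

// Sketch only: same capabilities with W3C-style "appium:" prefixes,
// which java-client 8.x / Appium 2.x expect for non-standard entries.
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("platformName", "Android");           // standard W3C capability
capabilities.setCapability("appium:deviceName", "Android");
capabilities.setCapability("appium:noReset", true);
capabilities.setCapability("appium:app", "E:\\Testsigma.apk");
capabilities.setCapability("appium:automationName", "flutter");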

How can I use a Webdriver Testcontainer in Bitbucket Pipelines?

When trying to use a Webdriver Testcontainer in Bitbucket Pipelines, I get the following error messages:
[main] WARN 🐳 [selenium/standalone-chrome:4.1.1] - Unable to mount a file from test host into a running container. This may be a misconfiguration or limitation of your Docker environment. Some features might not work.
[main] ERROR 🐳 [selenium/standalone-chrome:4.1.1] - Could not start container
com.github.dockerjava.api.exception.DockerException: Status 403: {"message":"authorization denied by plugin pipelines: -v only supports $BITBUCKET_CLONE_DIR and its subdirectories"}
My testcontainers version is 1.17.6
Here is the code I'm using while trying to troubleshoot:
package com.byzpass.demo;

import org.junit.*;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testcontainers.containers.BrowserWebDriverContainer;

public class SeleniumTest {

    public ChromeOptions chromeOptions = new ChromeOptions()
            .addArguments("--no-sandbox")
            .addArguments("--headless")
            .addArguments("--disable-dev-shm-usage");

    @Rule
    public BrowserWebDriverContainer<?> driverContainer = new BrowserWebDriverContainer<>().withCapabilities(chromeOptions);

    @Test
    public void openWikipedia() {
        WebDriver driver = new RemoteWebDriver(driverContainer.getSeleniumAddress(), chromeOptions);
        driver.navigate().to("https://www.wikipedia.org/");
        String subtitleText = driver.findElement(By.cssSelector("#www-wikipedia-org h1 strong")).getText();
        assert subtitleText.equals("The Free Encyclopedia");
        driver.quit();
        System.out.println("Finished opening wikipedia. 📖 🤓 🔍 👍 ✨");
    }
}
Here is my bitbucket-pipelines.yml:
pipelines:
  default:
    - step:
        image: amazoncorretto:11
        services:
          - docker
        script:
          - export TESTCONTAINERS_RYUK_DISABLED=true
          - cd selenium-test ; bash ./mvnw --no-transfer-progress test

definitions:
  services:
    docker:
      memory: 2048
By setting a breakpoint in my test method and running docker inspect -f '{{ .Mounts }}', I discovered that the container for the selenium/standalone-chrome:4.1.1 image has the bind mount [{bind /dev/shm /dev/shm rw true rprivate}].
I thought that using the --disable-dev-shm-usage argument in my Chrome options would prevent that, but it didn't. I don't know whether that is what's causing my issue in Bitbucket Pipelines, though.
I found that it worked after setting shm size to zero. Here's the code that worked:
package com.byzpass.demo;

import org.junit.*;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testcontainers.containers.BrowserWebDriverContainer;

public class SeleniumTest {

    @Test
    public void openWikipedia() {
        ChromeOptions chromeOptions = new ChromeOptions()
                .addArguments("--no-sandbox")
                .addArguments("--headless")
                .addArguments("--disable-dev-shm-usage");

        BrowserWebDriverContainer driverContainer = new BrowserWebDriverContainer<>()
                .withRecordingMode(BrowserWebDriverContainer.VncRecordingMode.SKIP, null);
        driverContainer.setShmSize(0L);
        driverContainer.start();

        WebDriver driver = new RemoteWebDriver(driverContainer.getSeleniumAddress(), chromeOptions);
        driver.navigate().to("https://www.wikipedia.org/");
        String subtitleText = driver.findElement(By.cssSelector("#www-wikipedia-org h1 strong")).getText();
        assert subtitleText.equals("The Free Encyclopedia");
        driver.quit();
        driverContainer.stop();
        System.out.println("Finished opening wikipedia. 📖 🤓 🔍 👍 ✨");
    }
}
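Connecting this back to the docker inspect finding above (my reading, not something documented for Pipelines specifically): when no shm size is set, BrowserWebDriverContainer bind-mounts /dev/shm from the host, and a bind mount outside $BITBUCKET_CLONE_DIR is exactly what the Pipelines authorization plugin rejects; an explicit shm size avoids that mount. If you would rather keep the JUnit @Rule style from the first snippet, the same fix can be applied there, roughly like this (untested sketch):

// Untested sketch: keep the @Rule-managed container, but configure it through a
// factory method so setShmSize runs before the rule starts the container.
@Rule
public BrowserWebDriverContainer<?> driverContainer = createContainer();

private static BrowserWebDriverContainer<?> createContainer() {
    BrowserWebDriverContainer<?> container = new BrowserWebDriverContainer<>()
            .withRecordingMode(BrowserWebDriverContainer.VncRecordingMode.SKIP, null);
    container.setShmSize(0L); // avoids the /dev/shm bind mount that Pipelines rejects
    return container;
}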

Inject custom AmazonS3 client object into Spring Cloud Config server instead of using the one provided by S3 backend

With the Spring Cloud Config configuration below (S3 backend), I am able to fetch the config from S3 with my personal AWS account from my personal laptop. However, within our client environment I am unable to use this configuration, because the client has custom logic for creating the AmazonS3 object with a corporate proxy and other security configurations.
Questions:
1. Is there a way to inject a custom AmazonS3 object into the Spring Cloud Config server? If yes, please let me know how I could inject it.
2. If #1 is not possible, is there a way I can pass a custom AWSCredentialsProvider with an HTTP proxy?
cloud:
  aws:
    region:
      static: us-east-1
    stack:
      auto: false
    credentials:
      accessKey: XXXX
      secretKey: YYYY
      instance-profile: false
      useDefaultAwsCredentialsChain: true

spring:
  profiles:
    active: awss3
  cloud:
    config:
      server:
        awss3:
          region: us-east-1
          bucket: test-bucket-name/bucket-folder
I did it using a configuration bean, as shown below. Note that I am creating the AmazonS3 object programmatically, so you have full control over how this object is created (with proxy parameters if needed).
package com.mypackage.product.config.server.config;

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.config.server.config.ConfigServerProperties;
import org.springframework.cloud.config.server.environment.AwsS3EnvironmentProperties;
import org.springframework.cloud.config.server.environment.AwsS3EnvironmentRepository;
import org.springframework.cloud.config.server.environment.AwsS3EnvironmentRepositoryFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ScalityEnvironmentRepository {

    @Autowired
    private ConfigServerProperties server;

    @Autowired
    AwsS3EnvironmentRepositoryFactory awsS3EnvironmentRepositoryFactory;

    @Autowired
    AwsS3EnvironmentProperties awsS3EnvironmentProperties;

    // read aws/scality creds from vault
    @Value("${data.AccessKeyId}")
    String accessKeyId;

    @Value("${data.SecretAccessKey}")
    String secretAccessKey;

    @Bean
    AwsS3EnvironmentRepository getAwsS3EnvironmentRepository() {
        BasicAWSCredentials awsCreds = new BasicAWSCredentials(accessKeyId, secretAccessKey);

        AwsClientBuilder.EndpointConfiguration epc = new AwsClientBuilder.EndpointConfiguration(
                "https://corporate.scality.url.com",
                awsS3EnvironmentProperties.getRegion());

        AWSCredentialsProvider awsCredentialsProvider = new AWSStaticCredentialsProvider(awsCreds);

        ClientConfiguration clientConfiguration = new ClientConfiguration()
                .withConnectionTimeout(10000)
                .withMaxErrorRetry(3)
                .withSocketTimeout(50000)
                .withClientExecutionTimeout(50000);

        AmazonS3 client = AmazonS3ClientBuilder.standard()
                .withCredentials(awsCredentialsProvider)
                .withClientConfiguration(clientConfiguration)
                .withEndpointConfiguration(epc)
                .build();

        AwsS3EnvironmentRepository repository = new AwsS3EnvironmentRepository(client,
                awsS3EnvironmentProperties.getBucket(), server);
        return repository;
    }
}
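To cover the proxy part of the question as well: the ClientConfiguration built above is the place to add HTTP proxy settings. The host, port, and credentials below are placeholders, not values from this answer:

// Placeholder proxy values; substitute your corporate proxy host, port and credentials.
ClientConfiguration clientConfiguration = new ClientConfiguration()
        .withConnectionTimeout(10000)
        .withMaxErrorRetry(3)
        .withSocketTimeout(50000)
        .withClientExecutionTimeout(50000)
        .withProxyHost("proxy.example.com")
        .withProxyPort(8080)
        .withProxyUsername("proxyUser")       // only needed if the proxy requires auth
        .withProxyPassword("proxyPassword");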

How to fix remote ejb lookup in Websphere Liberty?

I am trying to access an EJB deployed on WebSphere Liberty 18.0.0.3.
The binding location is: java:global/ITSORemote/ITSORemoteEJB/HelloRemoteEJB!com.ibm.itso.ejbRemote.view.HelloRemoteEJBRemote
My ORB configuration in the server.xml is:
<orb nameService="corbaname::<ipaddress>:2809" iiopEndpointRef="defaultIiopEndpoint">
    <iiopEndpoint host= id="defaultIiopEndpoint" iiopPort="2809">
    </iiopEndpoint>
</orb>
I have also added ejbRemote-3.2 to the feature manager.
I have two scenarios:
1. Accessing the EJB from client code running on the same server - this works using the URL java:global/ITSORemote/ITSORemoteEJB/HelloRemoteEJB!com.ibm.itso.ejbRemote.view.HelloRemoteEJBRemote
2. Accessing the EJB from client code running on a different server - this does not work using the URL
corbaname::(ipaddress):2809#ejb/global/ITSORemote/ITSORemoteEJB/HelloRemoteEJB!com.ibm.itso.ejbRemote.view.HelloRemoteEJBRemote
I am using the following code for lookup:
package com.ibm.remoteaccess;

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Hashtable;

import javax.ejb.EJB;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.rmi.PortableRemoteObject;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.ibm.itso.ejbRemote.view.HelloRemoteEJBRemote;

/**
 * Servlet implementation class RemoteAccess
 */
@WebServlet("/RemoteAccess")
public class RemoteAccess extends HttpServlet {

    private static final long serialVersionUID = 1L;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        PrintWriter out = response.getWriter();
        try {
            out.println("Hi");
            Context ctx = new InitialContext();
            Object ejbBusIntf = ctx.lookup("java:global/ITSORemote/ITSORemoteEJB/HelloRemoteEJB!com.ibm.itso.ejbRemote.view.HelloRemoteEJBRemote");
            HelloRemoteEJBRemote bean = (HelloRemoteEJBRemote) PortableRemoteObject.narrow(ejbBusIntf, HelloRemoteEJBRemote.class);
            out.println(bean.hello());
        }
        catch (NamingException e) { // Error getting the business interface
            out.println(e);
        }
    }
}
No error is thrown in the console either. What could be the problem?
There is a functional acceptance test (FAT) in open-liberty that looks up a remote EJB from one liberty server to an EJB on a second liberty server. The specific test can be found here:
https://github.com/OpenLiberty/open-liberty/blob/master/dev/com.ibm.ws.ejbcontainer.remote_fat/test-applications/RemoteClientWeb.war/src/com/ibm/ws/ejbcontainer/remote/client/web/RemoteTxAttrServlet.java
Each server process includes the ejbRemote-3.2 feature and an iiopEndpoint configuration (different ports, since the test runs both servers on the same host).
https://github.com/OpenLiberty/open-liberty/blob/master/dev/com.ibm.ws.ejbcontainer.remote_fat/publish/servers/com.ibm.ws.ejbcontainer.remote.fat.RemoteServerClient/server.xml
If you are not seeing any errors, then perhaps the iiopEndpoint is not configured properly in the client-side server (the ORB will not start without it). For example, the default IIOP port is 2809, and if both servers are on the same host, they cannot both use that port. Setting both servers to the same port would result in the ORB not starting properly on one of them, and lookups would fail.
A lookup across servers would use corbaname, and the value you have specified appears to be correct.
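For completeness, a cross-server lookup would then pass the corbaname URL from the question to InitialContext.lookup instead of the java:global name. A minimal, untested sketch of that change inside the servlet ((ipaddress) kept as a placeholder, as in the question):

// Untested sketch: same servlet lookup, but against the remote server's corbaname URL.
Context ctx = new InitialContext();
Object ejbBusIntf = ctx.lookup(
        "corbaname::(ipaddress):2809#ejb/global/ITSORemote/ITSORemoteEJB/HelloRemoteEJB"
        + "!com.ibm.itso.ejbRemote.view.HelloRemoteEJBRemote");
HelloRemoteEJBRemote bean =
        (HelloRemoteEJBRemote) PortableRemoteObject.narrow(ejbBusIntf, HelloRemoteEJBRemote.class);
out.println(bean.hello());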

How can I enable spring boot 1.2.5, using jersey, to print the raw http request and response to the console?

I have a Spring Boot 1.2.5 service that uses Jersey 2. I see the requests in my own logs, but I'd like to see the raw HTTP request and response in the console as well. How can I turn on printing HTTP traffic to the console?
import java.util.logging.Logger;

import javax.ws.rs.ApplicationPath;

import org.glassfish.jersey.filter.LoggingFilter;
import org.glassfish.jersey.server.ResourceConfig;
import org.springframework.stereotype.Component;

@Component
@ApplicationPath("/")
public class JerseyConfiguration extends ResourceConfig {

    private static final Logger log = Logger.getLogger(JerseyConfiguration.class.getName());

    public JerseyConfiguration() {
        ...
        // The second argument (printEntity = true) also logs request/response bodies.
        register(new LoggingFilter(log, true));
    }
}
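As an aside (not applicable to the Jersey version bundled with Boot 1.2.5, so treat this as an assumption about your classpath): on Jersey 2.23 and newer, LoggingFilter is deprecated in favour of LoggingFeature, which would be registered along these lines:

import java.util.logging.Level;
import java.util.logging.Logger;

import javax.ws.rs.ApplicationPath;

import org.glassfish.jersey.logging.LoggingFeature;
import org.glassfish.jersey.server.ResourceConfig;
import org.springframework.stereotype.Component;

@Component
@ApplicationPath("/")
public class JerseyConfiguration extends ResourceConfig {

    private static final Logger log = Logger.getLogger(JerseyConfiguration.class.getName());

    public JerseyConfiguration() {
        // Logs request/response lines plus payloads (up to 8 KB) at INFO level.
        register(new LoggingFeature(log, Level.INFO, LoggingFeature.Verbosity.PAYLOAD_ANY, 8192));
    }
}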
