There is a project that will involve SFTP file transfers to a server. I tried using the FTP protocol and chose winscp.exe - sftp as my program to record. Events were being recorded, however nothing was generated after I finished recording.
If the FTP protocol is not the answer for SFTP file transfers, can anybody give me some insight or tips on how I can do load tests on SFTP file transfers using LR?
I am using LoadRunner 11 for this.
Thanks in advance.
LoadRunner's FTP protocol doesn't support SFTP. It only supports FTP and FTPS (FTP over SSL). You could try to load-test an SFTP server using PuTTY's psftp tool or a similar console client together with C functions like popen/write/read, which are available in any LoadRunner C script regardless of protocol. You might use the C Vuser protocol to write such a script.
Check this old but still somewhat useful forum discussion for further information.
Not everything can be recorded; some things have to be programmed. Seek an API solution for SFTP that runs in C, VB, JavaScript or Java, all of which can be implemented in LoadRunner.
You can do SFTP using the Java Vuser protocol. You need to write Java code to transfer the files. The following code snippet (using the JSch library) might be helpful.
import lrapi.lr;
import com.jcraft.jsch.*;
import java.io.File;
import java.io.FileInputStream;

public class Actions
{
    public int init() throws Throwable {
        return 0;
    }//end of init

    public int action() throws Throwable {
        JSch jsch = new JSch();
        Session session = null;
        try {
            // open an SSH session to the SFTP server
            session = jsch.getSession("userId", "HOST", PORT);
            session.setConfig("StrictHostKeyChecking", "no");
            session.setPassword("Password");
            session.connect();

            // open an SFTP channel on that session
            Channel channel = session.openChannel("sftp");
            channel.connect();
            ChannelSftp sftpChannel = (ChannelSftp) channel;
            sftpChannel.cd("Directory to upload files");
            System.out.println("Connection Established");

            // upload the local file to the current remote directory
            File file = new File("Local file to upload");
            FileInputStream fis = new FileInputStream(file);
            sftpChannel.put(fis, file.getName());
            fis.close();

            sftpChannel.exit();
            session.disconnect();
        }
        catch (JSchException e)
        {
            e.printStackTrace();
        }
        return 0;
    }//end of action

    public int end() throws Throwable {
        return 0;
    }//end of end
}
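Since this runs as a Java Vuser, you can also wrap the transfer in a LoadRunner transaction so the upload time shows up in the results. A minimal sketch of an alternative action() body; the transaction name "sftp_upload" is arbitrary:

    public int action() throws Throwable {
        // time the whole connect-and-upload sequence as one LoadRunner transaction
        lr.start_transaction("sftp_upload");
        try {
            // ... same JSch connect, cd and put calls as above ...
            lr.end_transaction("sftp_upload", lr.PASS);
        } catch (Exception e) {
            lr.end_transaction("sftp_upload", lr.FAIL);
            throw e;
        }
        return 0;
    }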
Related
This is a question about the project https://github.com/DiUS/pact-jvm.
Problem
When I am validating pacts I need to be able to use client-side authentication, as the providers actually require it. I'll prefix what I am saying with a declaration that I am not very familiar with Groovy: I mostly program in Scala, Java or JavaScript. Having looked at the code, I think that client-side authentication is not currently supported, so I'd like to make a pull request adding that support.
What I've done so far
I have managed to get HTTPS working with a truststore: I copied the HttpTarget and created an HttpsTarget, and in the HttpsTarget specified the truststore in the ProviderInfo. Unfortunately, looking at the code there doesn't seem to be a way of specifying the client certificate, so I need to change the ProviderInfo class to be able to specify where it is (in the same way that the truststore is provided).
My problem is that I've got the code compiling using the advice in the 'for contributors' guide, but when I publish locally, I am only publishing for Scala version 2_12. Because of version issues and binary incompatibilities between Scala versions, I need to publish for Scala 2_11. My skills with Gradle are even less than my skills with Groovy. I've searched for all the references to scalaVersion and found that there is quite a lot of logic around it, but I've not managed to track down where it is specified.
Question
If I can use client-side authentication with the current pact validator, could you let me know? If not, could you tell me how to publish the project with support for Scala 2_11?
Thanks
In the end I made my own HTTPS target. My need is to run from JUnit, not the general case, and this is good enough:
public class HttpsTarget extends HttpTarget {

    public HttpsTarget(final int port) {
        super("https", "localhost", port, "/", false);
    }

    static class HttpsClientFactory implements IHttpClientFactory {
        @NotNull
        @Override
        public CloseableHttpClient newClient(Object o) {
            SSLContext sslContext = null; // put here code to make the SSL context
            CloseableHttpClient httpClient = HttpClients
                    .custom()
                    .setSSLContext(sslContext)
                    .build();
            return httpClient;
        }
    }

    @Override
    public void testInteraction(final String consumerName, final Interaction interaction, PactSource source) {
        ProviderInfo provider = getProviderInfo(source);
        ConsumerInfo consumer = new ConsumerInfo(consumerName);
        ProviderVerifier verifier = setupVerifier(interaction, provider, consumer);
        Map<String, Object> failures = new HashMap<>();
        ProviderClient client = new ProviderClient(provider, new HttpsClientFactory());
        verifier.verifyResponseFromProvider(provider, interaction, interaction.getDescription(), failures, client);
        reportTestResult(failures.isEmpty(), verifier);
        try {
            if (!failures.isEmpty()) {
                verifier.displayFailures(failures);
                throw getAssertionError(failures);
            }
        } finally {
            verifier.finialiseReports();
        }
    }
}
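For the "put here code to make the SSL context" part, one possible approach (not pact-jvm specific, just the Apache HttpClient SSLContexts builder; the keystore file names and passwords below are made-up placeholders) is to load a client keystore plus a truststore:

    import java.io.File;
    import javax.net.ssl.SSLContext;
    import org.apache.http.ssl.SSLContexts;

    // note: these builder calls throw checked exceptions, so inside newClient(...)
    // you would need to wrap them in a try/catch
    SSLContext sslContext = SSLContexts.custom()
            // client certificate + private key used for client-side authentication
            .loadKeyMaterial(new File("client-keystore.jks"),
                    "storepass".toCharArray(), "keypass".toCharArray())
            // CA certificates used to trust the provider under test
            .loadTrustMaterial(new File("truststore.jks"), "storepass".toCharArray())
            .build();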
I have a socket handler in Vert.x and I know how to send data through the EventBus in client-to-server (from web browser to web server) and server-component-to-server-component fashion.
Now I have a JavaFX-Client connected to the Vert.x Socket handler through websockets:
public void start() {
    vertx.createHttpClient()
        .setHost(Main.SOCKET_SERVER)
        .setPort(8080)
        .connectWebsocket("/chat/service", new Handler<WebSocket>() {
            @Override
            public void handle(WebSocket websocket) {
                ws = websocket;
                websocket.dataHandler(new Handler<Buffer>() {
                    @Override
                    public void handle(Buffer data) {
                        System.out.println("Received Data");
                    }
                });
                //...
                // use ws for authentification
                ws.writeTextFrame("doAuthentification");
                //...
            }
        });
}
The Socket is connected to "/chat/service".
Now I want to use this websocket to call different services from Vert.x. I know that the EventBus does not work from the JavaFX client.
On the server:
ws.dataHandler(new Handler<Buffer>() {
    @Override
    public void handle(final Buffer data) {
        String text = data.toString();
        if (text.contentEquals("doAuthentification")) {
            logger.info("doAuthentification()");
            doAuthentification();
        }
        // ...
    }
});
I can now send "commands" like doAuthentification through the websocket; on the server side, when that command is received, I can use the EventBus to process it further.
What would be the correct way to use it from a client? Ideas?
Since your application is packaged as a standalone application and is not deployed inside a Vert.x instance, you won't be able to call the event bus directly, since it is a Vert.x-specific feature.
The way to go would be, as you already tried, to communicate with your Vert.x application in a standard way, through sockets or HTTP for example (I would recommend HTTP and a RESTful style), and send messages through an entry point that are later transferred to the appropriate verticles.
You may need to configure several path-based handlers, maybe using a regex capture group, and let each handler choose the appropriate scheme to delegate events, instead of having a single handler based on hardcoded messages.
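A minimal sketch of that dispatching idea on the server side, using the same Vert.x 2 style APIs as in the question; the event-bus address "chat.auth" and the JSON field names are made-up placeholders for illustration:

    ws.dataHandler(new Handler<Buffer>() {
        @Override
        public void handle(final Buffer data) {
            String command = data.toString();
            if (command.contentEquals("doAuthentification")) {
                // forward the command to whatever verticle listens on "chat.auth"
                JsonObject request = new JsonObject().putString("command", command);
                vertx.eventBus().send("chat.auth", request, new Handler<Message<JsonObject>>() {
                    @Override
                    public void handle(Message<JsonObject> reply) {
                        // push the verticle's answer back to the JavaFX client
                        ws.writeTextFrame(reply.body().encode());
                    }
                });
            }
        }
    });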
I am trying to implement Apple push notifications (dev sandbox) within my Spring project, using javapns. I have created all the certificates and the private key correctly (and have verified them from my local machine). When I upload "myck.p12" at the root of my Spring project on the EC2 instance and my code calls Push.alert(msg, "location of .p12 under root", "password", token), the program jumps straight to finally (without giving any error).
I also checked the connection from my EC2 instance to Apple's sandbox gateway with telnet, and the connection is fine.
Any help is appreciated. Could the issue be in locating the keystore (.p12 file)?
Code:
import javapns.Push;
import javapns.devices.Device;
import javapns.notification.PushNotificationPayload;
import javapns.notification.PushedNotification;
import javapns.notification.ResponsePacket;
import java.util.List;

@Override
public void executePush() throws NetworkIOException {
    try {
        List<PushedNotification> notifications = Push.alert(message,
                "/env/tomcat/apache-tomcat-6.0.32/webapps/tapcliqweb/pushtestq.p12",
                "mads", false,
                "fc382beb521a43859bdc8ce8ed9f636f3b2f20972c712d58f15e15704fe153f7");
    }
    catch (Exception e) {
        e.printStackTrace();
        e.getMessage();
    }
    finally {
        logger.debug("push attempt completed: finally");
    }
}
It turned out that I was missing a required jar file: bcprov-jdk15-146.jar.
We have created a test suite, and in order to run it we are using the embedded Grizzly web server with the JerseyTest framework.
We extend a custom class from JerseyTest; in its constructor we create an ApplicationDescriptor and then call the superclass setupTestEnvironment(), which essentially starts the embedded Grizzly web server.
A few of our test cases extend this custom class to start the Grizzly server directly. However, we are not stopping this embedded server anywhere in the code.
The test cases run fine on Windows, but on Unix they fail with java.net.BindException: port 9998 is in use by another process.
It seems these tests should fail with a similar error on Windows too if we are not stopping the embedded web server in the code. How are they running fine on Windows and failing on Unix? Does this have something to do with how Unix spawns threads or processes?
P.S. We also tested whether port 9998 is in use by some other process using netstat -a | grep 9998, but no other process using that port could be found.
I had a similar problem and fixed it by not using the default port if it was already in use. Just add the following code to your test case:
@Override
protected int getPort(int defaultPort) {
    ServerSocket server = null;
    int port = -1;
    try {
        // try to bind the default port; if that succeeds, the port is free
        server = new ServerSocket(defaultPort);
        port = server.getLocalPort();
    } catch (IOException e) {
        // ignore - the port is already in use
    } finally {
        if (server != null) {
            try {
                server.close();
            } catch (IOException e) {
                // ignore
            }
        }
    }
    if ((port != -1) || (defaultPort == 0)) {
        return port;
    }
    // the default port was taken: retry with 0 to get any free ephemeral port
    return getPort(0);
}
I had the same problem when I was writing my integration tests. I didn't get to test on a Windows machine, but on my Unix machine I found the problem was that by default the JerseyTest class relies on @After on its tearDown method to close the embedded server. Since I had overridden this method to do clean-up on my side, I had to call super.tearDown():
@After
public void tearDown() throws Exception {
    super.tearDown();
    ...
}
After doing this, everything worked as expected.
I'm using HttpClient to execute a PostMethod against a remote servlet, and for some reason a lot of my connections are hanging open and hogging all of my server's connections.
Here's more info about the architecture:
The GWT client calls into a GWT service.
The GWT service instantiates an HttpClient, creates a PostMethod and has the client execute the method.
It then gets the input stream by calling method.getResponseBodyAsStream() and writes it out to a byte array.
It then closes the input stream and flushes the byte array output stream, executes a few more lines of code and then calls method.releaseConnection().
There has to be something obvious I'm overlooking that's causing this. If I perform a GET in a browser to the same service, the connections close immediately, but something about HttpClient is causing them to hang open.
You need to call HttpMethodBase#releaseConnection(). If you return an InputStream to be used later, a simple way is to wrap it in an anonymous FilterInputStream overriding close():
final HttpMethodBase method = ...;
return new FilterInputStream(method.getResponseBodyAsStream())
{
    @Override
    public void close() throws IOException
    {
        try {
            super.close();
        } finally {
            // releasing the connection returns it to HttpClient's pool
            method.releaseConnection();
        }
    }
};
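On the calling side, the wrapped stream still has to be closed once it has been consumed; closing it is what releases the pooled connection. A hypothetical caller sketch (fetchResponseStream() stands in for whatever method returns the wrapper above):

    InputStream in = fetchResponseStream(); // returns the FilterInputStream wrapper above
    try {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        // ... use out.toByteArray() ...
    } finally {
        in.close(); // super.close() closes the body stream, then releaseConnection() frees the connection
    }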