Jmeter: Response code: Non HTTP response code: javax.net.ssl.SSLHandshakeException - css

I have an application URL and I need to run a login test using JMeter. I recorded the login steps using the BlazeMeter extension for Chrome, but when I run the script I get the error below. I know there have been similar questions; I have tried a few of the suggestions, but my case seems to be different.
I have tried:
Added these two lines to jmeter.bat:
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_65
set PATH=%JAVA_HOME%\bin;%PATH%
Ran JMeter using "Run as Administrator"
Downloaded the certificate from https://gist.github.com/borisguery/9ef114c53b83e553b635 and installed it as shown in this video:
https://www.youtube.com/watch?v=2k581jcWk9M
Restarted JMeter and tried again, but no luck.
When I expand the error in the JMeter View Results Tree listener, the failure is on this particular CSS file: https://abcurl.xyzsample.com/assets/loginpage/css/okta-sign-in.min.7c7cfd15fa939095d61912dd8000a2a8.css
Error:
Thread Name: Thread Group 1-1
Load time: 268
Connect Time: 0
Latency: 0
Size in bytes: 2256
Headers size in bytes: 0
Body size in bytes: 2256
Sample Count: 1
Error Count: 1
Response code: Non HTTP response code: javax.net.ssl.SSLHandshakeException
Response message: Non HTTP response message: Received fatal alert: handshake_failure
Response headers:
HTTPSampleResult fields:
ContentType:
DataEncoding: null

If you are getting the error for only one .css file and it does not belong to the application under test (i.e. it is an external stylesheet), the best thing you can do is simply exclude it from the load test via the "URLs must match" section, which lives under the "Advanced" tab of the HTTP Request Defaults configuration element.
If you need to load this .css file by any means, you could also try the following approaches:
Play with the https.default.protocol and https.socket.protocols properties (look for these lines in the jmeter.properties file), as sketched after this list
Install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files into the /jre/lib/security folder of your JRE or JDK home (replacing the existing files with the downloaded ones)
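A minimal sketch of what the first approach could look like in jmeter.properties (the TLS versions below are assumptions; they have to match what the server actually accepts):
# protocol used by default for HTTPS sampling (example value)
https.default.protocol=TLSv1.2
# space-separated list of protocols to enable on the SSL sockets (example values)
https.socket.protocols=TLSv1.1 TLSv1.2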

If your URL needs a client certificate, copy your certificate to the JMeter /bin folder, then in the JMeter GUI go to Options -> SSL Manager and select your certificate; it will prompt you for the certificate password. If you run your tests again after that, it should work.
Additionally, you can also set up a Keystore Configuration (http://jmeter.apache.org/usermanual/component_reference.html#Keystore_Configuration), if you haven't done so already; a sketch of the underlying system properties follows below.
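A minimal sketch of pointing JMeter at a client keystore via the system.properties file in the /bin folder (the file name, store type and password below are placeholder assumptions, not taken from the question):
# standard JSSE system properties, which JMeter and its Keystore Configuration rely on
javax.net.ssl.keyStore=client-cert.p12
javax.net.ssl.keyStoreType=pkcs12
javax.net.ssl.keyStorePassword=changeit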
Please note that my JMeter version is 4.0. Hope this helps.

Related

Robotframework and Saucelabs integration

I am trying to integrate my Robot Framework tests with Sauce Labs. I used the Open Browser keyword and passed the remote URL and desired capabilities values.
Open Browser https://saucelabs.com/ ie remote_url=${REMOTE_URL} desired_capabilities=browserName:internet explorer,version:9.0,platform:Windows 7
I got the error below:
Opening browser 'ie' to base url 'https://saucelabs.com/' through remote server at 'desired_capabilities=browserName:internet explorer,version:9.0,platform:Windows 7' failed
[ WARN ] Can't take screenshot. No open browser found
| FAIL |
java.net.MalformedURLException: no protocol: desired_capabilities=browserName:internet explorer,version:9.0,platform:Windows 7
I tried passing the desired capabilities in different formats and got the same error. The framework is set up in Eclipse using Jython.
Open Browser https://saucelabs.com/ browserName=ie remoteUrl=${REMOTE_URL} desiredCapabilities=platform:Windows 7,browserName:internet explorer,version:9.0,username:${username},accessKey:${accessKey}
Define/pass the values of the variables before using the above statement, for example as in the sketch below.
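A minimal sketch of what that could look like in a Robot Framework test file (the Sauce Labs hub URL and the credential values are assumptions; replace them with your own):
*** Variables ***
${username}       your-sauce-username
${accessKey}      your-sauce-access-key
# typical Sauce Labs remote WebDriver endpoint (assumption, verify for your account/data center)
${REMOTE_URL}     http://${username}:${accessKey}@ondemand.saucelabs.com:80/wd/hub

*** Test Cases ***
Open Sauce Labs Home Page
    Open Browser    https://saucelabs.com/    browserName=ie    remoteUrl=${REMOTE_URL}
    ...    desiredCapabilities=platform:Windows 7,browserName:internet explorer,version:9.0,username:${username},accessKey:${accessKey}
    Close Browser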

After upgrade attempting to get artifact results in "Could not process download request: Binary provider has no content for"

I recently upgraded our Artifactory repository from 2.6.5 to the current version, 5.4.6.
However, something seems to have gone wrong in the process. There are some artifacts that throw an HTTP 500 error when attempting to access them. Here is an example using wget:
wget http://xyz.server.com:8081/artifactory/gradle/org/jfrog/buildinfo/build-info-extractor-gradle/2.0.12/build-info-extractor-gradle-2.0.12.pom
--2017-09-12 12:17:13--  http://xyz.server.com:8081/artifactory/gradle/org/jfrog/buildinfo/build-info-extractor-gradle/2.0.12/build-info-extractor-gradle-2.0.12.pom
Resolving xyz.server.com (xyz.server.com)... 10.125.1.28
Connecting to xyz.server.com (xyz.server.com)|10.125.1.28|:8081... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2017-09-12 12:17:13 ERROR 500: Internal Server Error.
I verified this by going to the Artifactory site, browsing to the object in question, and trying to download it. The result was the following:
{
"errors" : [ {
"status" : 500,
"message" : "Could not process download request: Binary provider has no content for 'e52a9a9a58d6829b7b13dd841af4b027c88bb028'"
} ]
}
The problem seems to be in the final step of the upgrade process, upgrading from 3.9.5 to 5.4.6. The wget command above works on 3.9.5, but not on the 5.4.6 instance.
I found a reference to a "Zap Cache" function in older documentation and thought it might fix things, but I don't seem to be able to find that function in the current site.
Can anyone point me to a way to fix this issue, or to what I need to do or look for in the upgrade process to prevent it from occurring?
As a further data point, we're using an Oracle database for the full file store, if that matters in any way (using the tag <chain template="full-db"> in binarystore.xml, roughly as shown below).
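For reference, a minimal binarystore.xml using that template would look roughly like this (the wrapper element and its version attribute are assumptions based on default configurations; only the chain tag is taken from the question):
<config version="v1">
    <chain template="full-db"/>
</config>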
Thanks in advance....

Ethereum: could not open database: resource temporarily unavailable

I am getting started with Ethereum and building a Dapp (what the hell does this mean, by the way?). On the basic installation of the application (https://github.com/ethereum/wiki/wiki/Dapp-using-Meteor#connect-your-%C3%90app), I get this error when attempting to connect:
geth --rpc --rpccorsdomain "http://localhost:3000"
I0804 23:48:24.987448 ethdb/database.go:82] Alloted 128MB cache and 1024 file handles to /Users/( . )Y( . )/Library/Ethereum/chaindata
Fatal: Could not open database: resource temporarily unavailable
I literally just got started; I set up Ethereum through Homebrew and made an account with geth. I can't get past this point.
Thank you!
Your geth client is already running in the background. You can attach to it by typing:
$ geth attach
in your command line. This will allow you to run commands on the geth client console, for example:
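A quick sketch of such a session (the IPC path is the default macOS location and the output value is illustrative, not from the original question):
$ geth attach
# or, if the default endpoint is not picked up, point at the IPC file explicitly:
$ geth attach ipc:/Users/<you>/Library/Ethereum/geth.ipc
> eth.blockNumber
1234567
> exit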

Spring Integration Sftp Outbound Gateway to IBM mainframe

I have reviewed "spring integration sftp mainframe: failed to write file; nested exception is 3: Permission denied", but I still cannot sftp a file to a remote mainframe.
If I use a command line to sftp to my account, my login directory is:
/home/users/snoopy
From here, I can issue the command "put filename //#12345" and the file is transferred. I cannot figure out how to specify "//#12345" in my outbound-gateway. Are there some sftp options I can add to specify this same command? Does it get added to the expression, i.e. expression="payload.filename + ???"?
The current remote-directory is /home/users/snoopy, so I can put to that directory without any issues; I just can't get it to //#12345.
If I try setting remote-directory to /home/users/snoopy//#12345 or /home/users/snoopy/#12345, those paths do not exist.
Here is my gateway configuration:
<sftp:outbound-gateway id="sftpOutbound"
session-factory="sftpSessionFactory"
request-channel="sftpOut"
command="put"
expression="payload.filename"
remote-directory="/home/users/snoopy"
remote-filename-generator="fileNameGenerator"
use-temporary-file-name="false"
reply-channel="successChannel"/>
With this configuration, I can send the file to /home/users/snoopy; I just haven't figured out how to get it to //#12345.

DOMDocument->load external entity fail for local file with PHP-FPM

The following PHP fails with "failed to load external entity", even though it is trying to load a local XML file:
<?php
$path = "/usr/share/pear/www/horde/config";
#libxml_disable_entity_loader(false);
$dom = new DOMDocument();
$v = $dom->load($path . '/conf.xml');
echo "status = ".($v?'success':'error')."\n";
?>
The basic question is how can this be fixed?
Log file:
2014/03/10 20:07:10 [error] 26117#0: *24 FastCGI sent in stderr: "PHP message: PHP Warning: DOMDocument::load(): I/O warning : failed to load external entity "/usr/share/pear/www/horde/config/conf.xml" in /usr/share/nginx/html/test.php on line 5
PHP message: PHP Stack trace:
PHP message: PHP 1. {main}() /usr/share/nginx/html/test.php:0
PHP message: PHP 2. DOMDocument->load() /usr/share/nginx/html/test.php:5" while reading response header from upstream, client: x.x.x.x, server: example.com, request: "GET /test.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "example.com"
Uncommenting the libxml_disable_entity_loader line works, but this is not an acceptable solution for a few reasons, e.g. it is system-wide for php-fpm.
Running the PHP from the shell returns "status = success". Doing a file_get_contents() and then $dom->loadXML($string) also works (i.e. the file exists and it is not a permissions issue), as in the snippet below. This might be an acceptable workaround, but it shouldn't be necessary and doesn't explain why the error is occurring.
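A minimal sketch of that workaround, reusing $path from the script above:
// workaround mentioned above: read the file manually, then parse the string in memory
$xml = file_get_contents($path . '/conf.xml');
$dom = new DOMDocument();
$v = $dom->loadXML($xml);
echo "status = " . ($v ? 'success' : 'error') . "\n";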
The XML file itself is the Horde config, but the problem does not seem to be the contents of the file, since it also occurs with this XML content:
<?xml version="1.0"?>
<configuration></configuration>
The environment is PHP and PHP-FPM 5.3.3, nginx 1.4.6, libxml2 2.7.6. My first guess is something to do with php-fpm, but I can't find any config setting that affects this. Any pearls of wisdom appreciated!
EDIT TO ADD
Restarting php-fpm causes it to work briefly. Disabling APC did not seem to help. Seems like something with php-fpm - but what?
FURTHER TESTING
Some additional info:
I tried hitting the server repeatedly and get an error about 80% of the time. The pattern isn't random: a few seconds of successes followed by a series of errors;
I added a phpinfo() call to the end of the above PHP and diffed the success and failure runs; there is no difference;
If I call libxml_disable_entity_loader(true), I seem to always get an error, which suggests that bug #64938 is at work.
It seems I need to find out why the XML is considered to have external entities.
I think that, if the external entity loader is disabled, it should be obvious that external entities can't be loaded. The only solution is to enable loading of external entities with libxml_disable_entity_loader(false). Since this setting is not thread-safe, I can see two approaches:
Enable it globally and use some other feature to prevent loading of unwanted entities (typically from a network):
Register your own entity loader with libxml_set_external_entity_loader. I think that's the safest solution (see the sketch after this list).
Use the parse option LIBXML_NONET. This should be enough if you simply want to disable network access of libxml2. But you have to make sure to always pass it to calls like DOMDocument::load.
Use locks to protect calls to libxml_disable_entity_loader. This is probably impractical and potentially unsafe.
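A minimal sketch of the custom entity loader approach, assuming you only want to allow files under the Horde config directory (the whitelisted directory and the callback logic are assumptions, adjust them to your setup):
<?php
// enable external entity loading globally, then constrain it with our own resolver
libxml_disable_entity_loader(false);

// the resolver receives the public and system IDs plus a context array;
// returning a path lets libxml open it, returning null means "entity not found"
libxml_set_external_entity_loader(function ($public, $system, $context) {
    $allowed = '/usr/share/pear/www/horde/config';   // assumption: only allow this directory
    $real = realpath($system);
    if ($real !== false && strpos($real, $allowed) === 0) {
        return $real;
    }
    return null;                                     // refuse everything else, e.g. network URLs
});

$dom = new DOMDocument();
$v = $dom->load('/usr/share/pear/www/horde/config/conf.xml');
echo "status = " . ($v ? 'success' : 'error') . "\n";
With the LIBXML_NONET alternative you would instead keep the default loader enabled and pass the option on every call, e.g. $dom->load($file, LIBXML_NONET).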
