How to fix Unsafe implementation of TrustManager? - android-security

My app was rejected by Google Play because of an unsafe implementation of TrustManager.
But in my library I have only one implementation of TrustManager (this is my SSLUtil class).
import android.content.Context;
import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSession;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;
public class SSLUtil {
/**
* @param ctx
* @param certRaw File from /res/raw
* @return
* @throws Exception
*/
public static SSLSocketFactory trustCert(Context ctx, int certRaw) throws Exception {
// Load CAs from an InputStream
CertificateFactory cf = CertificateFactory.getInstance("X.509");
// File at /res/raw
InputStream caInput = FileUtils.readRawFile(ctx, certRaw);
Certificate ca;
try {
ca = cf.generateCertificate(caInput);
} finally {
caInput.close();
}
// Create a KeyStore containing our trusted CAs
String keyStoreType = KeyStore.getDefaultType();
KeyStore keyStore = KeyStore.getInstance(keyStoreType);
keyStore.load(null, null);
keyStore.setCertificateEntry("ca", ca);
// Log.d(TAG, "KeyStore: " + keyStore);
// Create a TrustManager that trusts the CAs in our KeyStore
String tmfAlgorithm = TrustManagerFactory.getDefaultAlgorithm();
TrustManagerFactory tmf = TrustManagerFactory.getInstance(tmfAlgorithm);
tmf.init(keyStore);
// Create an SSLContext that uses our TrustManager
SSLContext context = SSLContext.getInstance("TLS");
context.init(null, tmf.getTrustManagers(), null);
// Create all-trusting host name verifier
HostnameVerifier allHostsValid = new HostnameVerifier() {
public boolean verify(String hostname, SSLSession session) {
return true;
}
};
// Install the all-trusting host verifier
HttpsURLConnection.setDefaultHostnameVerifier(allHostsValid);
SSLSocketFactory socketFactory = context.getSocketFactory();
HttpsURLConnection.setDefaultSSLSocketFactory(socketFactory);
return socketFactory;
}
}
I wrote this class after reading the following docs from the Android developer site:
https://developer.android.com/training/articles/security-ssl.html
If I understand it correctly, this code is OK. Is this implementation of TrustManager right?
I don't understand why my application was rejected.

No, your code is not secure. As you can tell from the name allHostsValid, the code blindly accepts all hostnames, meaning that the connection can be man-in-the-middled. You should remove this class.
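If you still need to trust the bundled CA, here is a minimal sketch of the same method with the unsafe parts removed: no custom HostnameVerifier and no process-wide HttpsURLConnection defaults, so the platform's normal hostname verification stays in effect. It reuses the question's FileUtils.readRawFile helper and the imports already shown above; treat it as a sketch, not a drop-in replacement.
public static SSLSocketFactory trustCert(Context ctx, int certRaw) throws Exception {
    // Load the CA certificate bundled in /res/raw
    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    Certificate ca;
    InputStream caInput = FileUtils.readRawFile(ctx, certRaw);
    try {
        ca = cf.generateCertificate(caInput);
    } finally {
        caInput.close();
    }
    // KeyStore containing only that CA
    KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
    keyStore.load(null, null);
    keyStore.setCertificateEntry("ca", ca);
    // TrustManager backed by the KeyStore
    TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    tmf.init(keyStore);
    // SSLContext using that TrustManager; no HostnameVerifier override,
    // no HttpsURLConnection.setDefault* calls
    SSLContext context = SSLContext.getInstance("TLS");
    context.init(null, tmf.getTrustManagers(), null);
    return context.getSocketFactory();
}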

Related

How to change client TLS preferences in Java?

I'm trying to make a POST request to an endpoint in Java, and when I try to send the request, I get the following error:
Caused by: javax.net.ssl.SSLHandshakeException: The server selected protocol version TLS10 is not accepted by client preferences [TLS13, TLS12]
This is what I have so far:
Map<Object, Object> data = new HashMap<>();
data.put("username","foo");
data.put("password","bar");
String url = "https://google.com";
HttpRequest request = HttpRequest.newBuilder()
.POST(buildFormDataFromMap(data))
.uri(URI.create(url))
.build();
try{
HttpResponse<String> response = httpClient.send(request,
HttpResponse.BodyHandlers.ofString());
System.out.println(response.statusCode());
System.out.println(response.body());
} catch (Exception e){
e.printStackTrace();
}
Then when I run the code, the error is thrown while sending the request/creating the response object. My question is: if the TLS preferences of the server differ from the client's, how can I change the preferences within Java so it can still make the request?
To solve this problem on JDK 11, I had to create a javax.net.ssl.SSLParameters object to enable "TLSv1", etc.:
SSLParameters sslParameters = new SSLParameters();
sslParameters.setProtocols(new String[]{"TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3"});
Then create the HttpClient and add the sslParameters object:
HttpClient httpClient = HttpClient.newBuilder()
.sslParameters(sslParameters)
.build();
If you also want to disable hostname verification, add the following code BEFORE the HttpClient initialization:
final Properties props = System.getProperties();
props.setProperty("jdk.internal.httpclient.disableHostnameVerification", Boolean.TRUE.toString());
You can also add a new TrustManager to trust all certificates (self-signed).
To do so, add the following code to your class:
TrustManager[] trustAllCerts = new TrustManager[] {
new X509TrustManager() {
public java.security.cert.X509Certificate[] getAcceptedIssuers() {
return new X509Certificate[0];
}
public void checkClientTrusted(
java.security.cert.X509Certificate[] certs, String authType) {
}
public void checkServerTrusted(
java.security.cert.X509Certificate[] certs, String authType) {
}
}
};
After this, you have to create an SSLContext object and add the TrustManager object:
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(null, trustAllCerts, new java.security.SecureRandom());
And finally alter the HttpClient initialization like this:
httpClient = HttpClient.newBuilder()
.sslContext(sslContext)
.sslParameters(sslParameters)
.build();
Here is a complete class example:
import java.net.http.HttpClient;
import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException;
import java.security.cert.X509Certificate;
import java.util.Properties;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
public class HttpSSLClient {
private SSLContext sslContext;
private SSLParameters sslParameters;
private HttpClient httpClient;
public HttpSSLClient() throws KeyManagementException, NoSuchAlgorithmException {
sslParameters = new SSLParameters();
sslParameters.setProtocols(new String[]{"TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3"});
sslContext = SSLContext.getInstance("TLS");
sslContext.init(null, trustAllCerts, new java.security.SecureRandom());
final Properties props = System.getProperties();
props.setProperty("jdk.internal.httpclient.disableHostnameVerification", Boolean.TRUE.toString());
httpClient = HttpClient.newBuilder()
.sslContext(sslContext)
.sslParameters(sslParameters)
.build();
}
public HttpClient getHttplClient() {
return httpClient;
}
TrustManager[] trustAllCerts = new TrustManager[] {
new X509TrustManager() {
public java.security.cert.X509Certificate[] getAcceptedIssuers() {
return new X509Certificate[0];
}
public void checkClientTrusted(
java.security.cert.X509Certificate[] certs, String authType) {
}
public void checkServerTrusted(
java.security.cert.X509Certificate[] certs, String authType) {
}
}
};
}
You can use the getHttplClient() function when sending your HttpRequest.
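For completeness, a minimal usage sketch (the endpoint URL and form body below are placeholders, not from the original question; it also assumes imports of java.net.URI, java.net.http.HttpRequest and java.net.http.HttpResponse):
public static void main(String[] args) throws Exception {
    HttpSSLClient sslClient = new HttpSSLClient();
    HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://example.com/login"))  // placeholder endpoint
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString("username=foo&password=bar"))
            .build();
    // Send the request through the client that carries the relaxed SSL settings
    HttpResponse<String> response = sslClient.getHttplClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode());
    System.out.println(response.body());
}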
I had the same issue and this solution did not work for me.
Instead, I found the answer Android Enable TLSv1.2 in OKHttp and tried this code:
ConnectionSpec spec = new ConnectionSpec
.Builder(ConnectionSpec.MODERN_TLS)
.tlsVersions(TlsVersion.TLS_1_2,TlsVersion.TLS_1_0,TlsVersion.TLS_1_1,TlsVersion.TLS_1_3).build();
client =client.newBuilder().connectionSpecs(Collections.singletonList(spec)).build();
And it worked for me:)
I think mmo's answer should be highlighted in bold. I had a similar issue, but found out that the OpenJDK JVM I was using had TLSv1 and TLSv1.1 disabled in the jdk.tls.disabledAlgorithms line in java.security. As soon as I removed them and restarted the JVM, I was able to connect using the older TLS protocols.
But please pay ATTENTION: this is not advisable in production since it weakens the secure communication. So I'd say change it, if you want, at YOUR OWN RISK!!!
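If editing java.security is not an option, the same security property can be set programmatically, per JVM run, before any TLS connection is created. This is a hedged sketch; the property value is illustrative only (defaults differ between JDK builds, so copy your JDK's actual default and simply drop the TLSv1/TLSv1.1 entries):
import java.security.Security;

public class EnableLegacyTls {
    public static void main(String[] args) {
        // Must run before any SSLContext/HttpClient is initialized.
        // Illustrative value: a typical default list with "TLSv1" and "TLSv1.1" removed.
        Security.setProperty("jdk.tls.disabledAlgorithms",
                "SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024, 3DES_EDE_CBC, anon, NULL");
        // ... create the HttpClient and send requests here ...
    }
}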

How to post an image to azure-blob-storage from the front end (Angular 8) while passing the file to the backend (C#)

I am trying to allow users to upload an image for their profile pic. I want to store it in my azure-blob-storage. So after doing some research and going through different theories about doing this solely within the front end, I have decided to just pass the file to the backend and have the backend post to my Azure blob. However, upon doing so, I get a 500 Internal Server Error while attempting to upload a selected file.
I am using Angular 8 for my frontend code and C#/ASP.NET Core for my backend. I have been able to successfully post an image to my azure-blob-storage with just my backend by using Postman to check that my controller works. The main issue is getting my frontend code to pass the file to my controller, which will handle posting to the azure-blob-storage.
I am using a service to provide a linkage between my upload-picture-component and the backend controller.
Frontend (Angular 8)
'upload-profile-service.ts' snippet:
import { HttpClient, HttpEvent, HttpParams, HttpRequest } from '@angular/common/http';
import { Observable } from 'rxjs';
import { Injectable } from '@angular/core';
import { environment } from '@env/environment';
@Injectable({
providedIn: 'root'
})
export class UploadProfileImageService {
// dependency injection
constructor(private _httpClient: HttpClient) { }
private baseurl = environment.api + '/myupload';
// change from any to type image.
public getImages(): Observable<any> {
return this._httpClient.get(this.baseurl);
}
// Form Data as image to pass to API to be pushed to Azure Storage
public postImages(formData: FormData): Observable<any> {
const saveImageUrl = this.baseurl + '/SaveFile';
return this._httpClient.post<any>(saveImageUrl, formData);
}
'upload-profile-component.ts' snippet:
constructor(
private consultantStore: ConsultantStore,
private notification: NotificationsService,
private dialogRef: MatDialogRef<Upload-Picture-Component>,
private _uploadProfileImageService: UploadProfileImageService,
private formBuilder: FormBuilder
) { }
ngOnInit(){}
selectedFile: File = null;
imageChangedEvent: any = '';
fileChangeEvent(event: any): void {
this.imageChangedEvent = event;
this.selectedFile = <File>this.imageChangedEvent.target.files[0];
}
UploadImageToBlob(){
const formData = new FormData();
formData.append(this.selectedFile.name, this.selectedFile, this.selectedFile.name);
this._uploadProfileImageService.postImages(formData)
.subscribe(res => {
console.log(res);
})
}
Backend (C#)
'UploadPicController.cs' snippet
[Route("myupload")]
[ApiController]
public class FileUploadController : Controller
{
private string _conn = <my_key_to_azure_blob_storage>;
private CloudBlobClient _blobClient;
private CloudBlobContainer _container;
private CloudStorageAccount _storageAccount;
private CloudBlockBlob _blockBlob;
[HttpPost("[Action]")]
public async Task<IActionResult> SaveFile(IFormFile files)
{
_storageAccount = CloudStorageAccount.Parse(_conn);
_blobClient = _storageAccount.CreateCloudBlobClient();
_container = _blobClient.GetContainerReference("profileimages");
//Get a reference to a blob
_blockBlob = _container.GetBlockBlobReference(files.FileName);
//Create or overwrite the blob with contents of a local file
using (var fileStream = files.OpenReadStream())
{
await _blockBlob.UploadFromStreamAsync(fileStream);
}
return Json(new
{
name = _blockBlob.Name,
uri = _blockBlob.Uri,
size = _blockBlob.Properties.Length
});
}
}
I want my azure blob to be able to receive the image via httpPost when the UploadImageToBlob function is called, but instead, I receive this error...
zone.js:3372 POST http://localhost:5000/myupload/SaveFile 500 (Internal Server Error)
core.js:5847 ERROR HttpErrorResponse {headers: HttpHeaders, status: 500, statusText: "Internal Server Error", url: "http://localhost:5000/myupload/SaveFile", ok: false, error: "http://localhost:5000/myupload/SaveFile: 500 Internal Server Error", ...}
Update: in the error log in Developer Tools (the 'Preview' tab), I see:
'NullReferenceException: Object reference not set to an instance of an object.'
Here is my HttpPost method which worked for me:
[HttpPost]
public async Task<IActionResult> UploadFileAsync([FromForm]IFormFile file)
{
CloudStorageAccount storageAccount = null;
if (CloudStorageAccount.TryParse(_configuration.GetConnectionString("StorageAccount"), out storageAccount))
{
var client = storageAccount.CreateCloudBlobClient();
var container = client.GetContainerReference("fileupload");
await container.CreateIfNotExistsAsync();
CloudBlockBlob blob = container.GetBlockBlobReference(file.FileName);
await blob.UploadFromStreamAsync(file.OpenReadStream());
return Ok(blob.Uri);
}
return StatusCode(StatusCodes.Status500InternalServerError);
}
I was also getting the same issue when I tried to use the methods below:
GetBlockBlobReference() or GetBlobReferenceFromServerAsync()
I would also suggest adding the following line after getting the container reference:
await container.CreateIfNotExistsAsync();
When I debugged the code in the Network tab in developer tools while using GetBlobReferenceFromServerAsync(), I could see the reason that was causing the 500.
Please try to debug it from your end and see if it helps.
Let me know if you need any assistance; I will share my code base.

How to detect whether the endpoint (Kaa SDK) is connected to the Kaa server or not from the application

Is there any mechanism, method, or set of steps to detect the endpoint's (Kaa SDK) connectivity to the Kaa server from the application?
If not, how can we identify failed devices remotely? Or how can we identify devices that are not able to communicate with the Kaa server after deploying them in the field?
How can one achieve this requirement to unlock the power of IoT?
If your endpoint meets some problem connecting to the Kaa server, a "failover" will happen.
So you must define your own failover strategy and set it for your Kaa client. Every time a failover happens, the strategy's onFailover() method will be called.
Below you can see a code example for the Java SDK.
import org.kaaproject.kaa.client.DesktopKaaPlatformContext;
import org.kaaproject.kaa.client.Kaa;
import org.kaaproject.kaa.client.KaaClient;
import org.kaaproject.kaa.client.SimpleKaaClientStateListener;
import org.kaaproject.kaa.client.channel.failover.FailoverDecision;
import org.kaaproject.kaa.client.channel.failover.FailoverStatus;
import org.kaaproject.kaa.client.channel.failover.strategies.DefaultFailoverStrategy;
import org.kaaproject.kaa.client.exceptions.KaaRuntimeException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.concurrent.TimeUnit;
/**
* A demo application that shows how to use the Kaa credentials API.
*/
public class CredentialsDemo {
private static final Logger LOG = LoggerFactory.getLogger(CredentialsDemo.class);
private static KaaClient kaaClient;
public static void main(String[] args) throws InterruptedException, IOException {
LOG.info("Demo application started");
try {
// Create a Kaa client and add a startup listener
kaaClient = Kaa.newClient(new DesktopKaaPlatformContext(), new SimpleKaaClientStateListener() {
@Override
public void onStarted() {
super.onStarted();
LOG.info("Kaa client started");
}
}, true);
kaaClient.setFailoverStrategy(new CustomFailoverStrategy());
kaaClient.start();
// ... Do some work ...
LOG.info("Stopping application.");
kaaClient.stop();
} catch (KaaRuntimeException e) {
LOG.info("Cannot connect to server - no credentials found.");
LOG.info("Stopping application.");
}
}
// Give a possibility to manage device behavior when it loses connection
// or has other problems dealing with Kaa server.
private static class CustomFailoverStrategy extends DefaultFailoverStrategy {
@Override
public FailoverDecision onFailover(FailoverStatus failoverStatus) {
LOG.info("Failover happen. Failover type: " + failoverStatus);
// See enum DefaultFailoverStrategy from package org.kaaproject.kaa.client.channel.failover
// to list all possible values
switch (failoverStatus) {
case CURRENT_BOOTSTRAP_SERVER_NA:
LOG.info("Current Bootstrap server is not available. Trying connect to another one.");
// ... Do some recovery, send notification messages, etc. ...
// Trying to connect to another bootstrap node one-by-one every 5 seconds
return new FailoverDecision(FailoverDecision.FailoverAction.USE_NEXT_BOOTSTRAP, 5L, TimeUnit.SECONDS);
default:
return super.onFailover(failoverStatus);
}
}
}
}
UPDATED (2016/10/28)
From the server side you can check endpoint credentials status as shown in method checkCredentialsStatus() in code below. The status IN_USE shows that endpoint has at least one successful connection attempt.
Unfortunately in current Kaa version there are no ways to directly check if endpoint is connected to server or not. I describe them after code example.
package org.kaaproject.kaa.examples.credentials.kaa;
import org.kaaproject.kaa.common.dto.ApplicationDto;
import org.kaaproject.kaa.common.dto.admin.AuthResultDto;
import org.kaaproject.kaa.common.dto.credentials.CredentialsStatus;
import org.kaaproject.kaa.examples.credentials.utils.IOUtils;
import org.kaaproject.kaa.server.common.admin.AdminClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.List;
public class KaaAdminManager {
private static final Logger LOG = LoggerFactory.getLogger(KaaAdminManager.class);
private static final int DEFAULT_KAA_PORT = 8080;
private static final String APPLICATION_NAME = "Credentials demo";
public String tenantAdminUsername = "admin";
public String tenantAdminPassword = "admin123";
private AdminClient adminClient;
public KaaAdminManager(String sandboxIp) {
this.adminClient = new AdminClient(sandboxIp, DEFAULT_KAA_PORT);
}
// ...
/**
* Check credentials status for getting information
* @return credential status
*/
public void checkCredentialsStatus() {
LOG.info("Enter endpoint ID:");
// Reads endpoint ID (aka "endpoint key hash") from user input
String endpointId = IOUtils.getUserInput().trim();
LOG.info("Getting credentials status...");
try {
ApplicationDto app = getApplicationByName(APPLICATION_NAME);
String appToken = app.getApplicationToken();
// CredentialsStatus can be: AVAILABLE, IN_USE, REVOKED
// if endpoint is not found on Kaa server, exception will be thrown
CredentialsStatus status = adminClient.getCredentialsStatus(appToken, endpointId);
LOG.info("Credentials for endpoint ID = {} are now in status: {}", endpointId, status.toString());
} catch (Exception e) {
LOG.error("Get credentials status for endpoint ID = {} failed. Error: {}", endpointId, e.getMessage());
}
}
/**
* Get application object by specified application name
*/
private ApplicationDto getApplicationByName(String applicationName) {
checkAuthorizationAndLogin();
try {
List<ApplicationDto> applications = adminClient.getApplications();
for (ApplicationDto application : applications) {
if (application.getName().trim().equals(applicationName)) {
return application;
}
}
} catch (Exception e) {
LOG.error("Exception has occurred: " + e.getMessage());
}
return null;
}
/**
* Checks authorization and log in
*/
private void checkAuthorizationAndLogin() {
if (!checkAuth()) {
adminClient.login(tenantAdminUsername, tenantAdminPassword);
}
}
/**
* Do authorization check
* @return true if user is authorized, false otherwise
*/
private boolean checkAuth() {
AuthResultDto.Result authResult = null;
try {
authResult = adminClient.checkAuth().getAuthResult();
} catch (Exception e) {
LOG.error("Exception has occurred: " + e.getMessage());
}
return authResult == AuthResultDto.Result.OK;
}
}
You can see more examples of using AdminClient in the class KaaAdminManager in the Credentials Demo Application from the Kaa sample-apps project on GitHub.
Known workarounds
Using Kaa Notifications in conjunction with the Kaa Data Collection feature. The server sends a specific unicast notification to the endpoint (using the endpoint ID), then the endpoint replies by sending data with the Data Collection feature. The server waits a bit and checks the timestamp of the last appender record (typically in a database) for your endpoint (by endpoint ID). All messages go asynchronously, so you must select the response-wait time according to your real environment.
Using the Kaa Data Collection feature only (see the sketch after this list). This method is simpler but has certain performance drawbacks. You can use it if your endpoints must send data to the Kaa server by their nature (measuring sensors, etc.). The endpoint just sends data to the server at regular intervals. When the server needs to check whether an endpoint is "on-line", it queries the saved data logs (typically a database) to get the last record by endpoint ID (key hash) and analyzes the timestamp field.
* To make effective use of the Kaa Data Collection feature, you must add the following metadata in the settings of the selected log appender (in the Kaa Admin UI): "Endpoint key hash" (the same as "Endpoint ID") and "Timestamp". This will automatically add the needed fields to every log record received from endpoints.
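As a rough illustration of the second workaround, here is a hedged server-side sketch. The logsDao object and its findLastTimestampByEndpointKeyHash method are hypothetical names standing in for whatever storage your log appender writes to; they are not part of the Kaa API.
// Hypothetical helper: an endpoint is considered on-line if its last
// data-collection record is recent enough.
public boolean isEndpointOnline(String endpointKeyHash, long maxSilenceMillis) {
    // logsDao / findLastTimestampByEndpointKeyHash are made-up names (see note above)
    Long lastRecordTimestamp = logsDao.findLastTimestampByEndpointKeyHash(endpointKeyHash);
    if (lastRecordTimestamp == null) {
        return false; // no data ever received from this endpoint
    }
    return System.currentTimeMillis() - lastRecordTimestamp <= maxSilenceMillis;
}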
I'm new to Kaa myself and unsure whether there is a method to determine that directly in the SDK, but a workaround is that you could have an extra endpoint from which you periodically send an event to all the other endpoints and expect a reply. When an endpoint does not reply, you know there's a problem.

Enabling Cross Origin Requests for WebSockets in Spring

I have an OpenShift WildFly server. I am building a website with the Spring MVC framework. One of my webpages also uses a WebSocket connection. On the server side, I have used the @ServerEndpoint annotation and the javax.websocket.* library to create my WebSocket:
package com.myapp.spring.web.controller;
import java.io.IOException;
import javax.websocket.OnClose;
import javax.websocket.OnError;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import org.springframework.web.socket.server.standard.SpringConfigurator;
@ServerEndpoint(value="/serverendpoint", configurator = SpringConfigurator.class)
public class serverendpoint {
@OnOpen
public void handleOpen () {
System.out.println("JAVA: Client is now connected...");
}
@OnMessage
public String handleMessage (Session session, String message) throws IOException {
if (message.equals("ping")) {
// return "pong"
session.getBasicRemote().sendText("pong");
}
else if (message.equals("close")) {
handleClose();
return null;
}
System.out.println("JAVA: Received from client: "+ message);
MyClass mc = new MyClass(message);
String res = mc.action();
session.getBasicRemote().sendText(res);
return res;
}
@OnClose
public void handleClose() {
System.out.println("JAVA: Client is now disconnected...");
}
@OnError
public void handleError (Throwable t) {
t.printStackTrace();
}
}
OpenShift gives a default URL, so all of my webpages (HTML files) have the common (canonical) hostname. For the sake of simplicity, I am calling this URL URL A (projectname-domainname.rhcloud.com). I created an alias, a CNAME, of URL A, which is called URL B (say https://www.mywebsite.tech). URL B is secure, as it uses https.
I am using a JavaScript client to connect to the WebSocket at the path /serverendpoint. The URI I am using in my html webpage file, test.html, is the following:
var wsUri = "wss://" + "projectname-domainname.rhcloud.com" + ":8443" + "/serverendpoint";
When I open up URL A (projectname-domainname.rhcloud.com/test), the WebSocket connects and everything works fine. However, when I try to connect to the WebSocket using URL B (https://mywebsite.tech/test), the JavaScript client immediately connects and disconnects.
Here is the message from the console that I receive:
Here is my JavaScript code that connects to the WebSocket:
/****** BEGIN WEBSOCKET ******/
var connectedToWebSocket = false;
var responseMessage = '';
var webSocket = null;
function initWS() {
connectedToWebSocket = false;
var wsUri = "wss://" + "projectname-domainname.rhcloud.com" + ":8443" + "/serverendpoint";
webSocket = new WebSocket(wsUri); // Create a new instance of WebSocket using wsUri
webSocket.onopen = function(message) {
processOpen(message);
};
webSocket.onmessage = function(message) {
responseMessage = message.data;
if (responseMessage !== "pong") { // Ping-pong messages to keep a persistent connection between server and client
processResponse(responseMessage);
}
return false;
};
webSocket.onclose = function(message) {
processClose(message);
};
webSocket.onerror = function(message) {
processError(message);
};
console.log("Exiting initWS()");
}
initWS(); //Connect to websocket
function processOpen(message) {
connectedToWebSocket = true;
console.log("JS: Server Connected..."+message);
}
function sendMessage(toServer) { // Send message to server
if (toServer != "close") {
webSocket.send(toServer);
} else {
webSocket.close();
}
}
function processClose(message) {
connectedToWebSocket = false;
console.log("JS: Client disconnected..."+message);
}
function processError(message) {
userInfo("An error occurred. Please contact for assistance", true, true);
}
setInterval(function() {
if (connectedToWebSocket) {
webSocket.send("ping");
}
}, 4000); // Send ping-pong message to server
/****** END WEBSOCKET ******/
After a lot of debugging and trying various things, I concluded that this problem was occurring because of the Spring Framework. This is because before I introduced the Spring Framework into my project, URL B could connect to the WebSocket, but after introducing Spring, it cannot.
I read on Spring's website about WebSocket policy. I came across their same-origin policy, which states that an alias, URL B, cannot connect to the WebSocket because it is not the same origin as URL A. To solve this problem I disabled the same-origin policy for WebSockets as described in the documentation, so I added the following code. I thought that doing so would fix my error. Here is what I added:
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.socket.AbstractSecurityWebSocketMessageBrokerConfigurer;
@Configuration
public class WebSocketSecurityConfig extends AbstractSecurityWebSocketMessageBrokerConfigurer {
@Override
protected boolean sameOriginDisabled() {
return true;
}
}
However, this did not fix the problem, so I added the following method to my ApplicationConfig, which extends WebMvcConfigurerAdapter:
@Override
public void addCorsMappings(CorsRegistry registry) {
registry.addMapping("/**").allowedOrigins("https://www.mywebsite.com");
}
This also didn't work either. Then I tried this:
package com.myapp.spring.security.config;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.UrlBasedCorsConfigurationSource;
import org.springframework.web.filter.CorsFilter;
@Configuration
public class MyCorsFilter {
// @Bean
// public FilterRegistrationBean corsFilter() {
// System.out.println("Filchain");
// UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
// CorsConfiguration config = new CorsConfiguration();
// config.setAllowCredentials(true);
// config.addAllowedOrigin("https://www.mymt.tech");
// config.addAllowedHeader("*");
// config.addAllowedMethod("*");
// source.registerCorsConfiguration("/**", config);
// FilterRegistrationBean bean = new FilterRegistrationBean(new CorsFilter(source));
// bean.setOrder(0);
// System.out.println("Filchain");
// return bean;
// }
@Bean
public CorsFilter corsFilter() {
System.out.println("Filchain");
UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
CorsConfiguration config = new CorsConfiguration();
config.setAllowCredentials(true); // you USUALLY want this
config.addAllowedOrigin("*");
config.addAllowedHeader("*");
config.addAllowedMethod("*");
source.registerCorsConfiguration("/**", config);
System.out.println("Filchain");
return new CorsFilter(source);
}
}
This also did not work.
I even changed the var wsUri in the JS code to the following:
var wsUri = "wss://" + "www.mywebsite.com" + ":8443" + "/serverendpoint";
Then var wsUri = "wss://" + "mywebsite.com" + ":8443" + "/serverendpoint";
When I did this, Google Chrome gave me an error saying that the handshake failed. However, when I use this URL, var wsUri = "wss://" + "projectname-domainname.rhcloud.com" + ":8443" + "/serverendpoint";, I do not get the error that the handshake didn't occur, but I get a message that the connection opened and closed immediately (as seen above).
So how can I fix this?
Have you tried implementing WebMvcConfigurer and overriding the method addCorsMappings()? If not, try this and see.
@EnableWebMvc
@Configuration
@ComponentScan
public class WebConfig implements WebMvcConfigurer {
@Override
public void addCorsMappings(CorsRegistry registry) {
registry.addMapping("/**")
.allowedOrigins("*")
.allowedMethods("GET", "POST")
.allowedHeaders("Origin", "Accept", "Content-Type", "Authorization")
.allowCredentials(true)
.maxAge(3600);
}
}
I don't think it's a CORS issue, because it connects successfully before being disconnected. If it were CORS, you couldn't even connect.
I think it's a communication problem between your DNS and OpenShift, because WebSockets need a persistent (long-lived) connection which stays open between client and server. If your DNS provider (e.g. CloudFlare or something like that) does not support WebSockets, or is not configured to use them, the client will be disconnected immediately, as in your issue.

SPNEGO auth not working for response-code-driven HTTP client

I am trying to write an HTTP client which connects to a Kerberos-enabled Tomcat (tested to be correct using browsers). It first gets the response code (which will be 401) and continues with its work accordingly.
The code is
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.*;
public class SampleHTTP2 {
static final String kuser = "correctusername"; // your account name
static final String kpass = "correctpassword"; // your password for the account
static class MyAuthenticator extends Authenticator {
public PasswordAuthentication getPasswordAuthentication() {
//System.out.println("I am reaching here");
// I haven't checked getRequestingScheme() here, since for NTLM
// and Negotiate, the username and password are all the same.
System.err.println("Feeding username and password for "
+ getRequestingScheme());
return (new PasswordAuthentication(kuser, kpass.toCharArray()));
}
}
public static void main(String[] args) throws Exception {
URL url = new URL("http://mycompname:6008/examples/");
HttpURLConnection h1 = (HttpURLConnection) url.openConnection();
int rescode = h1.getResponseCode();
System.out.println(rescode);
System.setProperty("sun.security.krb5.debug", "true");
System.setProperty("java.security.auth.login.config", "C:\\login2.conf");
System.setProperty("javax.security.auth.useSubjectCredsOnly","false");
System.setProperty("java.security.krb5.conf", "C:\\krb5.ini");
if(rescode == 401){
Authenticator.setDefault(new MyAuthenticator());
URL url2 = new URL("http://mycompname/examples/");
URLConnection h2 = url2.openConnection();
InputStream ins2 = h2.getInputStream();
BufferedReader reader = new BufferedReader(new InputStreamReader(ins2));
String str;
while((str = reader.readLine()) != null)
System.out.println(str);
}
}
}
Now, when I comment out the line:
int rescode = h1.getResponseCode();
and put if(true) instead of if(rescode == 401), it works.
I am not sure what is going wrong. getResponseCode() internally calls getInputStream(), and thus I have used a separate URL connection. Even so, it does not work.
P.S. The server is perfectly set up and the Authenticator class is also correct.
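For clarity, here is a sketch of the variant described above as working; it only applies the two changes the question mentions (the getResponseCode() line commented out and if(true) in place of the response-code check), reusing the MyAuthenticator class from the listing:
public static void main(String[] args) throws Exception {
    URL url = new URL("http://mycompname:6008/examples/");
    HttpURLConnection h1 = (HttpURLConnection) url.openConnection();
    // int rescode = h1.getResponseCode();   // commented out, per the question

    System.setProperty("sun.security.krb5.debug", "true");
    System.setProperty("java.security.auth.login.config", "C:\\login2.conf");
    System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
    System.setProperty("java.security.krb5.conf", "C:\\krb5.ini");

    if (true) {   // was: if (rescode == 401)
        Authenticator.setDefault(new MyAuthenticator());
        URL url2 = new URL("http://mycompname/examples/");
        URLConnection h2 = url2.openConnection();
        BufferedReader reader = new BufferedReader(new InputStreamReader(h2.getInputStream()));
        String str;
        while ((str = reader.readLine()) != null) {
            System.out.println(str);
        }
    }
}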
