Graceful recovery from policy file load failure - apache-flex

First off: This is not another question about how to load a policy file.
I have an app in development that connects to a socket server, gets the policy file and works just dandy. However, when the socket server is down for whatever reason, I need to gracefully fallback to an alternative method of getting messages from the server (polling, basically).
This is not a problem, except for one thing:
Error: Request for resource at xmlsocket://[ip]:4770 by requestor from http://[ip]/cooking/Client.swf has failed because the server cannot be reached.
There doesn't appear to be a way to catch this. I have these event listeners on my socket:
addEventListener(Event.CLOSE, closeHandler);
addEventListener(Event.CONNECT, connectHandler);
addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler);
addEventListener(ProgressEvent.SOCKET_DATA, socketDataHandler);
SecurityErrorEvent is what you might think fires, but it doesn't. The docs say it fires for these reasons:
Local untrusted SWF files may not communicate with the Internet. You can work around this limitation by reclassifying the file as local-with-networking or as trusted.
You cannot specify a socket port higher than 65535.
In the HTML page that contains the SWF content, the allowNetworking parameter of the object and embed tags is set to "none".
So none of those apply. It appears what I really want to catch is the failure of the policy file to load, but even doing an explicit Security.loadPolicyFile() won't help, since that load is deferred to the first socket request AND doesn't fire any events.
For completeness, I also wrapped the call to connect() in a try {} catch (e:*) {}, with no result.
There's got to be a way to sort this. Any ideas? I simply need a way to tell when the connection has failed because of networking issues and try an alternate path.
EDIT: Despite my previous tests and the docs, it appears SecurityErrorEvent does fire - only about 20 seconds after the load fails, so it's not obvious. I guess that's as immediate as I'm going to get from Flash.
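For reference, a minimal sketch (mine, not the original poster's) of one way to act on that: keep the SecurityErrorEvent listener as a backstop, but also start a shorter timer of your own so the fallback to polling kicks in well before the roughly 20-second security timeout. The timer length and startPolling() are assumptions; it needs flash.utils.Timer and the flash.events classes.

// inside the document class; FALLBACK_MS and startPolling() are hypothetical
private static const FALLBACK_MS:int = 5000;
private var socket:Socket;
private var fallbackTimer:Timer;

private function connectWithFallback(host:String, port:int):void {
    fallbackTimer = new Timer(FALLBACK_MS, 1);
    fallbackTimer.addEventListener(TimerEvent.TIMER_COMPLETE, fallbackToPolling);
    fallbackTimer.start();
    socket = new Socket();
    socket.addEventListener(Event.CONNECT, connectHandler);
    socket.addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler);
    socket.connect(host, port);
}

private function connectHandler(e:Event):void {
    fallbackTimer.stop();                      // connected in time, no fallback needed
}

private function securityErrorHandler(e:SecurityErrorEvent):void {
    fallbackToPolling(null);                   // fires roughly 20 s after the policy load fails
}

private function fallbackToPolling(e:TimerEvent):void {
    fallbackTimer.stop();
    if (!socket.connected) {
        startPolling();                        // hypothetical HTTP polling path
    }
}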

Don't forget to retry connecting :)
private function onIOError(e:IOErrorEvent):void {
    e.stopPropagation();
    ++this.retryCount;
    if (this.retryCount >= 12) {
        this.connectTimer.stop();
        this.dispatchEvent(new Event('TIMEDOUT'));
    } else {
        // connectTimer is left running so the next tick retries connect()
        this.err = 'IO-ERROR-EVENT - ' + e.text + '\r\nAttempting to reconnect';
    }
}
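The snippet above references connectTimer and retryCount without showing how they are set up. A possible wiring (my assumption, not part of the original answer), where the timer keeps re-attempting connect() until it succeeds or the handler above gives up:

private var retryCount:int = 0;
private var connectTimer:Timer = new Timer(5000);   // assumed 5 s retry interval

private function startConnecting():void {
    connectTimer.addEventListener(TimerEvent.TIMER, onConnectTick);
    connectTimer.start();
    onConnectTick(null);                             // first attempt right away
}

private function onConnectTick(e:TimerEvent):void {
    if (!socket.connected) {
        socket.connect(host, port);                  // socket, host, port assumed defined elsewhere
    } else {
        connectTimer.stop();
    }
}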

Related

Failed to load resource: net::ERR_INSECURE_RESPONSE

Is there a way to trick the server so I don't get this error:
Content was blocked because it was not signed by a valid security certificate.
I'm pulling an iframe of an HTML website into another website, but I keep getting the Chrome console error in the title of this question, and in Internet Explorer it says:
Content was blocked because it was not signed by a valid security certificate.
Your resource probably uses a self-signed SSL certificate over HTTPS.
Chromium, and therefore Google Chrome, blocks this kind of resource by default because it is considered insecure.
You can bypass this as follows:
Assuming your frame's URL is https://www.domain.com, open a new tab in chrome and go to https://www.domain.com.
Chrome will ask you to accept the SSL certificate. Accept it.
Then, if you reload the page containing your frame, you should see that it now works.
The problem, as you can guess, is that every visitor to your website has to do this to access your frame.
Note that Chrome will block the URL again for each new browsing session, even though it is able to remember forever that you trust this domain.
If your frame can be accessed over HTTP rather than HTTPS, I suggest using that instead; this problem will then be solved.
Sometimes Google Chrome throws this error, even if it should not.
I experienced it when Chrome had a new version, and it needed to be restarted.
After restarting, the same page worked without any errors.
The error in the console was:
net::ERR_INSECURE_RESPONSE
I still experienced the problem described above on an Asus T100 Windows 10 test device for both (up to date) Edge and Chrome browsers.
Solution was in the date/time settings of the device; somehow the date was not set correctly (date in the past). Restoring this by setting the correct date (and restarting the browsers) solved the issue for me. I hope I save someone a headache debugging this problem.
Offering another potential solution to this error.
If you have a frontend application that makes API calls to the backend, make sure you reference the domain name that the certificate has been issued to.
e.g.
https://example.com/api/etc
and not
https://123.4.5.6/api/etc
In my case, I was making API calls to a secure server with a certificate, but using the IP instead of the domain name. This threw a Failed to load resource: net::ERR_INSECURE_RESPONSE.
Open your console and click the URL shown inside. It will take you to the API page; accept the SSL certificate there, then go back to your app page and reload.
Remember that the SSL certificate should already have been issued for your dev environment.
If you're developing, and you're developing with a Windows machine, simply add localhost as a Trusted Site.
And yes, per DarrylGriffiths' comment, although it may look like you're adding an Internet Explorer setting...
I believe those are Windows rather than IE settings. Although MS tend to assume that they're only IE (hence the alert next to "Enable Protected Mode" that it requires restarting IE)...
Try this code to watch for, and report, a possible net::ERR_INSECURE_RESPONSE
I was having this issue as well, using a self-signed certificate which I have chosen not to save into the Chrome settings. After accessing the HTTPS domain and accepting the certificate, the ajax call works fine. But once that acceptance has timed out, or before it has first been accepted, the jQuery.ajax() call fails silently: the timeout parameter does not seem to help and the error() function never gets called.
As such, my code never receives a success() or error() call and therefore hangs. I believe this is a bug in jquery's handling of this error. My solution is to force the error() call after a specified timeout.
This code does assume a jquery ajax call of the form jQuery.ajax({url: required, success: optional, error: optional, others_ajax_params: optional}).
Note: You will likely want to change the function within the setTimeout to integrate best with your UI: rather than calling alert().
const MS_FOR_HTTPS_FAILURE = 5000;
$.orig_ajax = $.ajax;
$.ajax = function(params) {
  var complete = false;
  var success = params.success;
  var error = params.error;

  params.success = function() {
    if (!complete) {
      complete = true;
      if (success) success.apply(this, arguments);
    }
  };
  params.error = function() {
    if (!complete) {
      complete = true;
      if (error) error.apply(this, arguments);
    }
  };
  setTimeout(function() {
    if (!complete) {
      complete = true;
      alert("Please ensure your self-signed HTTPS certificate has been accepted. "
        + params.url);
      // call the saved handler directly: the wrapped params.error would be a
      // no-op here because complete has already been set to true
      if (error)
        error({},
          "Connection failure",
          "Timed out while waiting to connect to remote resource. " +
          "Possibly could not authenticate HTTPS certificate.");
    }
  }, MS_FOR_HTTPS_FAILURE);
  $.orig_ajax(params);
};
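A hypothetical usage of the patched jQuery.ajax, in the form described above; the URL is a made-up self-signed endpoint:

jQuery.ajax({
  url: 'https://192.168.1.10/api/data',    // hypothetical self-signed endpoint
  success: function(data) {
    console.log('got data', data);
  },
  error: function(xhr, status, message) {
    // also reached about 5 s after a silent certificate failure
    console.warn('request failed:', status, message);
  }
});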
This problem is caused by HTTPS, that is, the SSL certificate. Try it on localhost.

Regarding AmazonClientException

I have to implement some error handling logic for DynamoDB errors. As stated in the AWS documentation, errors are divided into client and server errors.
Maybe I am missing something in the object browser, but I don't understand how to retrieve the "HttpStatusCode StatusCode" for client errors (AmazonClientException).
It seems to be available only for server errors (AmazonServiceException).
Since I need to do some logging based on the error code, it seems it cannot currently be obtained from client exceptions.
There is no status code for an AmazonClientException that is not also an AmazonServiceException. If you have one from the service, it will be of the second type and you can get the status code. If you have one of the first type, it could be because you don't have an internet connection, or the service responded with a malformed response (perhaps not even HTTP, who knows!).
It's a little confusing that they decided to extend AmazonClientException with AmazonServiceException, because it means that (in Java) you might have to do something like this:
try {
    // ... make some dynamo requests ...
} catch (AmazonServiceException e) {
    // aha, I can get at the status code!
} catch (AmazonClientException e) {
    // OK, something really bizarre happened... perhaps dynamo is
    // down, or I'm having internet issues.
}
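For the logging case in the question, a minimal sketch of what each branch can report (AWS SDK for Java v1; both exception types are in the com.amazonaws package, and the logging calls are placeholders):

try {
    // ... make some DynamoDB requests ...
} catch (AmazonServiceException e) {
    // service errors carry the HTTP status plus AWS error metadata
    System.err.println("status=" + e.getStatusCode()
            + " errorCode=" + e.getErrorCode()
            + " requestId=" + e.getRequestId());
} catch (AmazonClientException e) {
    // no HTTP status here: the request may never have reached the service
    System.err.println("client-side failure: " + e.getMessage());
}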

Slick2D KryoNet Applet

I'm using Kryonet with Slick2d to make a java game.
It works fine when running as a java application, however when running as an applet I get the following error:
00:00 INFO: [kryonet] Server opened.
00:04 DEBUG: [kryonet] Port 9991/TCP connected to: /(ip):55801
00:04 DEBUG: [kryo] Write: RegisterTCP
00:04 INFO: [kryonet] Connection 1 connected: /(ip)
00:04 INFO: [SERVER] Someone has connected.
00:04 ERROR: [kryonet] Error reading TCP from connection: Connection 1
com.esotericsoftware.kryonet.KryoNetException: Error during deserialization.
at com.esotericsoftware.kryonet.TcpConnection.readObject(TcpConnection.java:141)
at com.esotericsoftware.kryonet.Server.update(Server.java:192)
at com.esotericsoftware.kryonet.Server.run(Server.java:350)
at java.lang.Thread.run(Unknown Source)
Caused by: com.esotericsoftware.kryo.KryoException: Buffer underflow.
at com.esotericsoftware.kryo.io.Input.require(Input.java:162)
at com.esotericsoftware.kryo.io.Input.readLong(Input.java:621)
at com.esotericsoftware.kryo.io.Input.readDouble(Input.java:745)
at com.esotericsoftware.kryo.serializers.DefaultSerializers$DoubleSerializer.read(DefaultSerializers.java:141)
at com.esotericsoftware.kryo.serializers.DefaultSerializers$DoubleSerializer.read(DefaultSerializers.java:131)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:735)
at com.esotericsoftware.kryonet.KryoSerialization.read(KryoSerialization.java:57)
at com.esotericsoftware.kryonet.TcpConnection.readObject(TcpConnection.java:139)
... 3 more
00:04 INFO: [SERVER] Someone has disconnected.
00:04 INFO: [kryonet] Connection 1 disconnected.
The server is running locally as a runnable JAR, and the client applet is in an HTML file served locally as well, using XAMPP as the web server.
I've tried different serializers, buffer sizes, and sending just single Strings/Booleans etc.; it just doesn't seem to like anything.
The client connects to the server perfectly fine; however, when it comes to sending any packets, I get the above error, no matter which packet is sent.
Any help/advice would be really appreciated - I've been stumped on this for a while!
Thanks
I believe I have the same problem, or at least a similar one. I am using KryoNet for the server and client. The client is an applet, and when I run it through Eclipse's Applet Viewer it works fine. When I run it through a web server I get similar errors. Client and server connect, and the server receives the client's packets, but the client gives an error whenever it tries any deserialization. I found that the applet permissions are to blame. If you change the permissions of the Applet Viewer (if you are using Eclipse) to match those of a web page, you will get the same errors. The advantage is that you can then debug the problem.
To change the permissions for Eclipse:
Go to your project folder \bin\ and open "java.policy.applet". Inside you should have:
grant {
    permission java.security.AllPermission;
};
Change that to:
grant {
    permission java.io.FilePermission "<<ALL FILES>>", "read, write, execute, delete";
    permission java.net.SocketPermission "*", "accept, connect, listen, resolve";
    permission java.util.PropertyPermission "*", "read, write";
    permission java.lang.RuntimePermission "*";
    permission java.awt.AWTPermission "showWindowWithoutWarningBanner";
};
With this change I had the same behavior for Applet Viewer as with an embedded applet. This is not a full solution, but can help in finding the cause of the problem.
Update:
I have found what the problem is in my case. The problem is in the FieldSerializer and the other serializers that use it. When a class is registered, the FieldSerializer goes over its fields and sets them all to be accessible. This operation is not allowed for an applet. The result is incorrect registration and serialization/deserialization. I have found 2 workarounds:
1) Use another serializer. The default one is a FieldSerializer and can be changed using
public void setDefaultSerializer (Class<? extends Serializer> serializer)
Another option is to set the serializer when registering each class. Do not use serializers based on the FieldSerializer (see the sketch at the end of this answer).
2) Try to fix the FieldSerializer. What I am doing is not fully correct, but it works in my case. We will make the FieldSerializer continue the registration if setting the accessibility throws an exception. Another thing we need to do is make all fields of the classes we register public. To change the FieldSerializer you need the Kryo sources. Go to FieldSerializer.java, method rebuildCachedFields(). You will find the following code there:
if (!field.isAccessible()) {
    if (!setFieldsAsAccessible) continue;
    try {
        field.setAccessible(true);
    } catch (AccessControlException ex) {
        continue;
    }
}
You need to change that to:
if (!field.isAccessible()) {
    if (setFieldsAsAccessible)
        try {
            field.setAccessible(true);
        } catch (AccessControlException ex) {
        }
}
The other thing that needs to be changed is all of the registered classes to have only public fields.
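As a sketch of workaround 1 above (mine, not from the original answer): swap the default serializer on both KryoNet endpoints so FieldSerializer and its setAccessible() calls are never used. JavaSerializer requires the registered classes to implement java.io.Serializable; the ChatMessage class below is a made-up example.

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.serializers.JavaSerializer;
import com.esotericsoftware.kryonet.Client;
import com.esotericsoftware.kryonet.Server;

import java.io.Serializable;

public class AppletSafeRegistration {

    // Example message type; JavaSerializer needs Serializable classes.
    public static class ChatMessage implements Serializable {
        public String text;
    }

    // Call with both endpoints' Kryo instances so client and server agree.
    public static void register(Kryo kryo) {
        kryo.setDefaultSerializer(JavaSerializer.class);
        kryo.register(ChatMessage.class);
    }

    public static void main(String[] args) {
        Server server = new Server();
        Client client = new Client();
        register(server.getKryo());
        register(client.getKryo());
    }
}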
I had a similar problem in a Gradle build. Maybe you just need to increase the memory (either heap or PermSize) for the applet JVM.

Where to declare collections when files are split between server and client directories?

I am struggling with Meteor when using separate client and server directories and was hoping someone could help me.
My server code in the server subdirectory looks like:
Testing = new Meteor.Collection("testing");
Testing.insert({hello1:'world1'});
Testing.insert({hello2:'world2'});
Testing.insert({hello3:'world3'});
Meteor.publish("testing", function() {
console.log('server: ' + Testing.find().count());
return Testing.find();
});
My client code in the client subdirectory looks like:
Meteor.subscribe("testing");
var Testing = new Meteor.Collection("testing");
console.log('count: ' + Testing.find().count());
I have tried this with autopublish on and off.
In my terminal window, I can see the log statement output a number of items as I would expect. But for my client, in the browser console window I always see a count of 0.
Not sure if this is related, but when I modify my subscribe statement and save my changes, I see this error in my console window:
POST http://localhost:3000/sockjs/574/ukpxre9v/xhr 503 (Service Unavailable) sockjs-0.3.4.js:821
AbstractXHRObject._start sockjs-0.3.4.js:821
(anonymous function)
I'm sure I'm making some stupid mistake, but I haven't had any luck tracking it down. Any help would be greatly appreciated.
You're running console.log('count: ' + Testing.find().count()); too soon. Meteor syncs your server collection down to the client, but that takes a (very short) amount of time.
For instance, if you ran console.log('count: ' + Testing.find().count()); in your web console, it should give the proper result, because by then you would have waited the half second or so it takes to load the data down from the server.
You could put this code in a reactive context so it shows the live count correctly, such as Meteor.autorun or a Template helper (see the sketch below).
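A minimal client-side sketch of the reactive approach, assuming the collection and subscription are set up as in the question:

Meteor.subscribe("testing");
var Testing = new Meteor.Collection("testing");

// Re-runs automatically whenever documents arrive from the server,
// so the count updates from 0 to the real value once the sync completes.
Meteor.autorun(function () {
  console.log('count: ' + Testing.find().count());
});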
The reason you see that 503 XHR error is that when you modify your code and save it, Meteor restarts and serves up the new content as soon as possible, so the socket between the client and server is temporarily interrupted until the page refreshes. There is not really anything wrong with your code.

ORA-29270: too many open HTTP requests

Can someone help me with this problem? It occurs whenever the code runs in a TRIGGER, but it works in a normal PROCEDURE.
TRIGGER:
create or replace procedure testeHTTP(search varchar2)
IS
    req  sys.utl_http.req;
    resp sys.utl_http.resp;
    url  varchar2(500);
Begin
    url := 'http://www.google.com.br';
    dbms_output.put_line('abrindo');
    -- Open the connection and start the request
    req := sys.utl_http.begin_request(search);
    dbms_output.put_line('preparando');
    -- Prepare to read the response
    resp := sys.utl_http.get_response(req);
    dbms_output.put_line('finalizando response');
    -- End the request/response exchange
    sys.utl_http.end_response(resp);
Exception
    When Others Then
        dbms_output.put_line('excecao');
        dbms_output.put_line(sys.utl_http.GET_DETAILED_SQLERRM());
End;
Close your user session and the problem is fixed.
Internally there is a limit of 5 open HTTP requests.
The likely problem is a missing utl_http.end_response, or an exception in the app that prevents the resp object from being closed.
Modify the code like this:
EXCEPTION
    WHEN UTL_HTTP.TOO_MANY_REQUESTS THEN
        UTL_HTTP.END_RESPONSE(resp);
You need to close your requests once you are done with them; it does not happen automatically (unless you disconnect from the DB entirely).
It used to be utl_http.end_response, but I am not sure whether it is still the same API.
Usually we need UTL_HTTP.END_RESPONSE(resp); to avoid ORA-29270: too many open HTTP requests, but I think I reproduced the problem of #Clóvis Santos in Oracle 19c.
If the web service always returns status 200 (success), then "too many open HTTP requests" never happens. But if persistent connections are enabled and the web service returns status 404, the behavior becomes different.
Let's call something that always returns 404.
The first call of utl_http.begin_request returns normally and opens a new persistent connection. We can check it with select utl_http.get_persistent_conn_count() from dual;. The second call causes an exception inside utl_http.begin_request and the persistent connection is closed. (The exception is correctly handled with end_response/end_request.)
If I continue, then each odd execution returns 404 normally and each even execution gives an exception (handled correctly, of course).
After some iterations I get ORA-29270: too many open HTTP requests. If the web service returns status 200, everything goes normally.
I guess it happens because of the specific web service. Probably it drops the persistent connection after a 404 but doesn't after a 200. The second call tries to reuse a request on the persistent connection, but it no longer exists, which causes a request leak.
If I use utl_http.set_persistent_conn_support (false, 0); once in my session the problem disappears. I can call web-service as many times as I need.
Resolution:
Try switching off persistent connection support. Probably, on the HTTP server, persistent connections work differently for different requests. Looks like a bug.
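A minimal sketch (mine, not from the answers above) combining the two suggestions: always release the request or response, even on failure, and optionally disable persistent-connection support for the session. The URL is a placeholder.

declare
    req  utl_http.req;
    resp utl_http.resp;
begin
    -- optional workaround from above: no persistent connections in this session
    utl_http.set_persistent_conn_support(false, 0);

    req := utl_http.begin_request('http://example.com/resource');  -- placeholder URL
    begin
        resp := utl_http.get_response(req);
        utl_http.end_response(resp);
    exception
        when others then
            -- get_response failed: release the request so it is not left open
            utl_http.end_request(req);
            raise;
    end;
end;
/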
