We have an existing app that uses an Oracle Coherence cluster to put/get POF-serializable objects to/from a cluster cache. I'm trying to modify it to use an extend proxy to do the same thing. So far I've been able to start up a separate proxy process that connects to our cluster, and I have modified the cache config for the existing app to connect to the extend proxy. At startup, everything looks happy and connected. But when I try to put something into the cache, the objects end up with all of their fields uninitialized.
I haven't changed any client application code when it comes to putting key/value pairs into the cache, because as I understand it, using extend is supposed to be transparent to clients. Here's the code that puts objects into cache:
// Initialize POF object from DB; MyObject implements EvolvablePortableObject
MyObject o = new MyObject();
o.setField1(...);
o.setField2(...);
o.setField3(...);
CacheFactory.getCache("cache-name").put(key, o, expireTime);
The net result is that the object ends up in the cache with all of its fields uninitialized (i.e. all zeroes and nulls). I have used the debugger to confirm that inside the client, the object is fully initialized and all of its fields are populated as I expect. Likewise, I have used the debugger to show that the object is already uninitialized when MyObject.writeExternal() is called inside the proxy process. So something is breaking down between the client and the proxy, but I'm not sure what. Both point to the same POF config file and have the same classpath, so they should be seeing the same POF type definitions and Java classes. I have turned logging up to level 9 on both the client and the proxy but I don't see any messages that are out of the ordinary. When I run without the proxy, the client is able to put the same objects into the cache without any issue.
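For reference, MyObject's POF serialization is the usual EvolvablePortableObject boilerplate, roughly like this (the field names, types, and POF indexes here are illustrative, not the real ones):

import com.tangosol.io.AbstractEvolvable;
import com.tangosol.io.pof.EvolvablePortableObject;
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import java.io.IOException;

public class MyObject extends AbstractEvolvable implements EvolvablePortableObject {
    private String field1;
    private long field2;

    // getters/setters omitted

    public int getImplVersion() {
        return 1;
    }

    public void readExternal(PofReader in) throws IOException {
        field1 = in.readString(0);
        field2 = in.readLong(1);
    }

    public void writeExternal(PofWriter out) throws IOException {
        out.writeString(0, field1);
        out.writeLong(1, field2);
    }
}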
In case anyone else hits this, the root cause ended up being missing <serializer> tags. I already had the serializer defined under the <acceptor-config> for the extend proxy service:
<serializer>
  <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
  <init-params>
    <init-param>
      <param-type>java.lang.String</param-type>
      <param-value>pof-config.xml</param-value>
    </init-param>
  </init-params>
</serializer>
(same thing needed for Invocation Service if you define one).
But I also had to add a matching <serializer> block to the client-side cache config so that both ends agree:
<defaults>
  <serializer>
    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
    <init-params>
      <init-param>
        <param-type>java.lang.String</param-type>
        <param-value>pof-config.xml</param-value>
      </init-param>
    </init-params>
  </serializer>
</defaults>
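For context, here is a minimal sketch of where that <defaults> block sits in the client-side cache config (the host, port, and scheme/service names are illustrative; the serializer can also be repeated inside <initiator-config> instead):

<cache-config>
  <defaults>
    <serializer>
      <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
      <init-params>
        <init-param>
          <param-type>java.lang.String</param-type>
          <param-value>pof-config.xml</param-value>
        </init-param>
      </init-params>
    </serializer>
  </defaults>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>cache-name</cache-name>
      <scheme-name>extend-remote</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>extend-remote</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>proxy-host</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>

Once both sides serialize with the same ConfigurablePofContext and pof-config.xml, the put works just as it did when talking to the cluster directly.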
Environment
ASP.NET WebForms app over IIS
Docker container host
AWS ECS hosting platform
Each client hosting its own copy of the app with private database connection string
Background
In the non-docker environment, each copy is a virtual directory under IIS and thus has its own individual web.config pointing to a dedicated database. The underlying codebase is the same for each client, with no client-specific customization involved. Here the route is the virtual directory path rather than the site root.
In the docker environment (one container per client), each copy is deployed as the root application of its container.
Challenge
Since the root image is the same for every client, how do we get web.config overridden for each client deployment?
We don't want to create multiple images (one per client), as that would mean extra deployment jobs and losing the benefit of centralization. Ideally the connection strings would be stored in some kind of key-value store at the ECS level that supplies client-specific values when the corresponding containers are loaded.
Presenting the approach we used to solve this issue; hope it helps others stuck in similar cases.
With the problem statement tied to having a single root image and applying any customization at runtime, we knew there had to be a transformation of web.config at the time the corresponding container loads.
The solution was a PowerShell script that reads web.config and replaces the specific values whose keys carry a custom prefix. The values are passed in as custom environment variables on the ECS task, and web.config was updated so that the relevant keys carry that prefix.
Now, since a Docker container can have only a single entry point, a new base image was created that sets up an IIS server and calls a PowerShell script at startup. That startup script runs the transformation script and then starts ServiceMonitor against the IIS service (w3svc).
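The transformation script was roughly along these lines (the CLIENT_ prefix, paths, and XPath expressions here are illustrative, not our exact script):

# Overwrite web.config entries from environment variables that carry a known prefix
$configPath = 'C:\inetpub\wwwroot\web.config'
$prefix     = 'CLIENT_'
[xml]$config = Get-Content $configPath

Get-ChildItem env: | Where-Object { $_.Name -like "$prefix*" } | ForEach-Object {
    $key = $_.Name.Substring($prefix.Length)

    # Replace a matching <add key="..." value="..."/> under <appSettings>
    $setting = $config.SelectSingleNode("//appSettings/add[@key='$key']")
    if ($setting) { $setting.SetAttribute('value', $_.Value) }

    # Same idea for <connectionStrings>
    $conn = $config.SelectSingleNode("//connectionStrings/add[@name='$key']")
    if ($conn) { $conn.SetAttribute('connectionString', $_.Value) }
}

$config.Save($configPath)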
Thanks a lot for this article https://anthonychu.ca/post/overriding-web-config-settings-environment-variables-containerized-aspnet-apps/
I would use environment variables, as the OP suggests, with a startup transform. However, I want to make the point that you do not want sensitive information such as DB passwords sitting as plain ENV variables in your ECS task definition.
For that protected information, you should use ECS secrets coupled with Parameter Store in Systems Manager. These values can be stored encrypted in the Parameter Store (using a KMS key) and the ECS Agent will 'inject' them as ENV variables on task startup.
For me, to simplify matters, I simply use secrets for everything although you can choose to only encrypt the sensitive information and leave the others clear.
I dynamically add the secrets for the given application into my task definitions at deploy time by looking up the 'secrets' for the given app by 'namespace' (something that Parameter Store supports). Then, if I need to add a new parameter, I can just add a new secret to the store in the given namespace and re-deploy the app. It will pick up and inject into the task definition any newly defined secrets automatically (or remove ones that have been retired).
Sample Ruby code for building the task definition:
require 'aws-sdk-ssm'
ssm_client = Aws::SSM::Client.new
params = ssm_client.get_parameters_by_path(path: '/production/my_app/').parameters
# ENV var name = last path segment; the value comes from the parameter's ARN
secrets = params.map { |p| { name: p.name.split('/')[-1], value_from: p.arn } }
task_def.container_definitions[0].secrets = secrets
That last assignment injects the secrets so that each secret's 'name' becomes the ENV variable name, which ends up looking like this in the task definition:
"secrets": [
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_HOSTNAME",
"name": "DB_HOSTNAME"
},
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_PASSWORD",
"name": "DB_PASSWORD"
}
You can see there are no values now in the task definition. They are retrieved and injected when ECS starts up your task.
More information:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html
I am working with OpenResty and Lua to create a server for redirecting requests. Redirects are decided based on data held in a Lua tree structure (nested tables).
I am looking for a way to populate this data once on startup and then share it between workers.
ngx.ctx can hold arbitrary data, but it only lasts for the duration of a request.
A shared dict lasts for the lifetime of the server, but it can only store primitive values.
I've read that it is possible to share data across Lua modules, because modules get instantiated only once at startup. The code is something like this:
local _M = {}
local data = {
dog = {"value1", "value4"},
cat = {"value2", "value5"},
pig = {"value3", "value6"}
}
function _M.get_age(name)
return data[name]
end
return _M
and then in nginx.conf
location /lua {
content_by_lua_block {
local mydata = require "mydata"
ngx.say(mydata.get_age("dog"))
}
}
Is this third possibility thread safe?
Is there something else that can achieve this?
There is not a lot of documentation on this, which is why I posted it here.
Any info would help. Thank you!
You can populate your data in init_by_lua and access it later on. In your case, initialization of the mydata module can be achieved by:
init_by_lua_block {
require "mydata"
}
init_by_lua runs once during nginx startup; the process it runs in then forks into the workers, so each worker contains an independent copy of this data.
Workers are single-threaded, so you can safely access your data.
Now, if you want to modify your configuration at runtime, without reloading nginx, then it gets a bit more complicated. Each worker is independent, but we can use ngx.shared.DICT to propagate changes. Depending on your requirements there are two solutions you can use:
After each change, put your configuration into the shared dictionary, and create a timer that periodically reloads each worker's configuration from this shared cache.
After each change, put your configuration into the shared dictionary along with a current timestamp or version number. On each request, the worker checks whether this timestamp/version is newer than the one cached locally; if it is, it deserializes the configuration and caches it locally (sketched below).
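A minimal sketch of that second approach (the shared dict name config_cache, the key names, and the module layout are assumptions; it also needs a "lua_shared_dict config_cache 1m;" directive in nginx.conf):

-- config_store.lua: per-worker cache on top of ngx.shared.DICT
local cjson = require "cjson.safe"

local _M = {}

local cached_config            -- this worker's deserialized copy
local cached_version = -1      -- version of that copy

-- Called by whatever code changes the configuration.
function _M.publish(new_config)
    local dict = ngx.shared.config_cache
    dict:set("config_json", cjson.encode(new_config))
    dict:incr("config_version", 1, 0)   -- init to 0 if missing, then bump
end

-- Called on each request; only deserializes when the version changed.
function _M.get()
    local dict = ngx.shared.config_cache
    local version = dict:get("config_version") or 0
    if version ~= cached_version then
        cached_config = cjson.decode(dict:get("config_json") or "{}")
        cached_version = version
    end
    return cached_config
end

return _M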
If you expose an API for making these modifications, you can use lua-resty-lock to create cross-worker critical sections that synchronize the modifications.
I'm running into an issue integrating Spring Security with my Elastic Beanstalk app backed by a MySQL database. If I deploy my app, I'm able to log in correctly for some time, but eventually I start to receive login errors without any exception being thrown, so I'm unable to get useful information about the issue. I've downloaded the logs as well and can't see anything of value. I can see where the logs show accessing the public page, attempting to access the private section, returning the login page, and then the loginError page; however, there's nothing about any issue.
Even though I'm unable to log in through a browser, I am able to log in if I run the app from an IDE, and I can also view the DB in MySQL Workbench. This suggests to me that the problem is due to some persistent state on the server.
I've had a similar problem before with another Beanstalk app using Spring Security and was able to resolve it by setting application properties as follows:
spring.datasource.test-on-borrow=true
spring.datasource.validation-query=SELECT 1
I'm using a more recent version of Spring than that app, and the properties are now scoped to the specific datasource implementation, so I tried adding the following properties:
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.validation-query=SELECT 1
When that didn't work I added another based on an answer to a similar question here; now the properties are:
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.test-while-idle=true
spring.datasource.tomcat.validation-query=SELECT 1
That seemed to work (possibly due to lower login activity) but eventually resulted in the same behavior.
I've looked into the various properties available but before I spend a lot of time randomly setting and/or overriding default settings I wanted to see if there's a reliable way to deal with this.
How can I configure my datasource to avoid login errors after long periods of time?
This isn't a problem with specific configuration values but with where those configurations reside. The default location for application.properties (/resources in IntelliJ) is fine when deploying as a jar with an embedded Tomcat server, but not as a war with a provided server: the file isn't found/used, so no changes to it affect the datasource configuration given by AWS.
There are a number of ways to handle this; I chose to add an RDS configuration bean in my SpringBootServletInitializer:
@Bean
public RdsInstanceConfigurer instanceConfigurer() {
return () -> {
TomcatJdbcDataSourceFactory dataSourceFactory =
new TomcatJdbcDataSourceFactory();
// Abandoned connections...
dataSourceFactory.setRemoveAbandonedTimeout(60);
dataSourceFactory.setRemoveAbandoned(true);
dataSourceFactory.setLogAbandoned(true);
// Tests
dataSourceFactory.setTestOnBorrow(true);
dataSourceFactory.setTestOnReturn(false);
dataSourceFactory.setTestWhileIdle(false);
// Validations
dataSourceFactory.setValidationInterval(30000);
dataSourceFactory.setTimeBetweenEvictionRunsMillis(30000);
dataSourceFactory.setValidationQuery("SELECT 1");
return dataSourceFactory;
};
}
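For completeness, that bean lives in the servlet initializer class, roughly like this (the class name is illustrative, and the imports assume Spring Boot 2.x package locations; the configure override is the standard SpringBootServletInitializer hook for war deployments):

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;

@SpringBootApplication
public class Application extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        return builder.sources(Application.class);
    }

    // instanceConfigurer() bean from above goes here
}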
Below are the settings that worked for me.
From "Connection to Db dies after >4<24 in spring-boot jpa hibernate":
dataSourceFactory.setMaxActive(10);
dataSourceFactory.setInitialSize(10);
dataSourceFactory.setMaxIdle(10);
dataSourceFactory.setMinIdle(1);
dataSourceFactory.setTestWhileIdle(true);
dataSourceFactory.setTestOnBorrow(true);
dataSourceFactory.setValidationQuery("SELECT 1 FROM DUAL");
dataSourceFactory.setValidationInterval(10000);
dataSourceFactory.setTimeBetweenEvictionRunsMillis(20000);
dataSourceFactory.setMinEvictableIdleTimeMillis(60000);
My environment is WebLogic 10.3.5 on a Solaris box. The EJB is version 3 and there are annotations in the bean class. Sorry for the confusion: the code is new to me, and there is also a deployment descriptor that generates an EJB 2 client view for another client to call, so it's not straightforward.
I have a stateless session bean deployed to a cluster which has 2 server members say they are member1 and member2.
The session bean is deployed as clusterable, since this is in the annotation:
homeIsClusterable = Constants.Bool.TRUE
This is how my Stand alone Java client lookup and call the EJB methods:
private void testBean(){
bean.methodA();
bean.methodB();
}
In the provider URL I ONLY specify ONE server member:
env.put(Context.PROVIDER_URL, "t3://member1:7005");
Context ctx = new InitialContext(env);
ctx.lookup("remote#the.bean.qualified.remoteinterface");
The JNDI name above uses the "mapped name + qualified remote interface class name" form; the mapped name is defined in the annotation.
Now the problem: I found out that bean.methodA() gets invoked on member1 and bean.methodB() gets invoked on member2; I can see this in each server member's logs. It is always like this: member1's log only shows debug output from methodA, and member2's log only shows debug output from methodB.
So here is my conceptual question: is this possible at all? Aren't the two methods supposed to be called on member1 only? I know that when you look up through a home interface you could get a bean from either server, but in this case the EJB 3 lookup does not go through the home interface (as in EJB 2, where we get a home and then call create); it directly returns a remote object.
This causes an issue because our methodB depends on methodA (methodA does some cleanup, and then methodB re-initializes the cache), and we need this to happen on each cluster member.
This is just extra info; please focus on the question above from a conceptual perspective.
From the documentation:
When home-is-clusterable is True, the EJB can be deployed from multiple WebLogic Servers in a cluster. Calls to the home stub are load-balanced between the servers on which this bean is deployed, and if a server hosting the bean is unreachable, the call automatically fails over to another server hosting the bean.
I believe this is the case even when you explicitly connect to only a single member. This article has some pretty good info in the "Replica-Aware Home" section:
http://www.informit.com/articles/article.aspx?p=101737&seqNum=8
It's more or less the whole point of clustering... a cluster appears as if it's a single server instance to a client.
I have a flex application that communicates via BlazeDS with two webapps running inside a single instance of Tomcat.
The flex client is loaded by the browser from the first webapp and all is well. However on the initial call to the second webapp the client receives the following error:
Detected duplicate HTTP-based FlexSessions, generally due to the remote host disabling session cookies. Session cookies must be enabled to manage the client connection correctly.
Subsequent calls to the same service method succeed.
I've seen a few posts referring to the same error in the context of two Flex apps calling a single webapp from the same browser page, but nothing that seems to match my situation, so I'd be very grateful if anyone could help out.
Cheers, Mark
Three potential solutions for you:
I found once that if I hit a remote object before setting up a messaging channel, the ClientID would get screwed up. Try to establish an initial messaging channel once the application loads, before any remote object calls are made.
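One way to do that is to subscribe a throwaway Consumer when the application finishes loading, before any RemoteObject call (this goes in the main application's script block; the destination name is illustrative, any messaging destination defined on the server will do):

import mx.messaging.Consumer;

private var bootstrapConsumer:Consumer = new Consumer();

// Call this from applicationComplete, before any RemoteObject call.
private function setUpMessagingChannel():void {
    bootstrapConsumer.destination = "bootstrap";  // illustrative destination
    bootstrapConsumer.subscribe();                // establishes the channel/FlexClient up front
}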
Flash Builder's network monitoring tool can cause some problems with BlazeDS. I set up a configuration option, checked on application load (just before setting up the channel from #1), that detects whether I'm running in the dev environment; if I am, I assign a UID manually. For some reason this doesn't work well outside the dev environment... it's been a while since I set it all up, so I can't remember the finer points as to why:
if (!(AppSettingsModel.getInstance().dev))
FlexClient.getInstance().id = UIDUtil.createUID();
BlazeDS by default only allows a single HTTP session to be set up per client/browser. In my streaming channel definition I added the following to allow additional sessions per browser:
<channel-definition id="my-secure-amf-stream" class="mx.messaging.channels.SecureStreamingAMFChannel">
<endpoint url="https://{server.name}:{server.port}/FlexClient/messagebroker/securestreamingamf"
class="flex.messaging.endpoints.SecureStreamingAMFEndpoint"/>
<properties>
<add-no-cache-headers>false</add-no-cache-headers>
<idle-timeout-minutes>0</idle-timeout-minutes>
<max-streaming-clients>10</max-streaming-clients>
<server-to-client-heartbeat-millis>5000</server-to-client-heartbeat-millis>
<user-agent-settings>
<user-agent match-on="MSIE" kickstart-bytes="2048" max-streaming-connections-per-session="3" />
<user-agent match-on="Firefox" kickstart-bytes="2048" max-streaming-connections-per-session="3" />
</user-agent-settings>
</properties>
</channel-definition>
Problem: Duplicate session errors when flex.war and Livecycle.lca files are hosted in separate JVMs on WebSphere Server.
Solution:
Inside the command file for the event, set the FlexClient id to null in the execute method before calling the remote service (a Java method or LiveCycle process).
I guess this approach can be used in other scenarios as well to prevent duplicate-session errors.
EventCommand.as file
—————————–
import mx.messaging.FlexClient;
//other imports as per your code
public function execute(event:CairngormEvent):void
{
var evt:EventName = event as EventName;
var delegate:Delegate = new DelegateImpl(this as IResponder);
//***set client ID to null
FlexClient.getInstance().id = null;
delegate.functionName(evt.data);
}