WebLogic 11g - possible to deploy jdbc driver as shared library? - oracle11g

When configuring a data source on WebLogic 11g, does anyone know if it's possible for the class specified as the connection pool driver to be deployed as a shared library rather than installed in wlserver_10.3/server/lib?
The reason for wishing to do this is that we thought it might be more manageable to be able to deploy the driver in a complex production environment.
I've run some tests by deploying the jar file containing the driver with various deployment order values but always get "cannot load driver class" on server startup.
Thanks.

You can load the driver jar from virtually anywhere, keeping one major rule in mind: WebLogic must have permission to read it.
If that criterion is met, reference your jar in the WEBLOGIC_CLASSPATH variable in commEnv.sh and restart the server, and you should be good to go. This is handy for shared mounts with common libraries, but always, always make sure the server can read the file.
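For example, the added line in commEnv.sh might look like the following sketch (the jar path is a placeholder, and the exact layout of the script varies by installation):

    # In $WL_HOME/common/bin/commEnv.sh -- append the driver jar.
    # /mnt/shared/jdbc/ojdbc6.jar is an example path; any location readable
    # by the WebLogic OS user works (watch permissions on shared mounts).
    WEBLOGIC_CLASSPATH="${WEBLOGIC_CLASSPATH}${CLASSPATHSEP}/mnt/shared/jdbc/ojdbc6.jar"
    export WEBLOGIC_CLASSPATH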

Related

nopcommerce 4.0 datasettings.json transform

This may seem a bit trivial...but how do you go about transforming the db connection for a nopcommerce app as it is deployed to various environments?
The db connection is set in app_data\datasettings.json.
Normally this type of stuff is handled with web.config transforms.
How do you go about setting up build transforms for different environments (dev, test, prod)?
I am also looking into this topic.
In my humble opinion, the nopCommerce config is a pain, because it makes it really hard to do proper Continuous Integration/Continuous Delivery while keeping secrets safe.
At initial deployment you are greeted with the install page. The problem is that the installation process writes a bunch of files to the server, including datasettings.json, where the connection string to the DB is hard-coded.
This means that when I deploy nopCommerce to Azure App Service, for deployments after installation, I have to make sure NOT to delete "additional files on the server", or the config will be deleted, since these config files written by the installer are not in source control.
It is really impractical not to be able to use standard ASP.NET connection strings, environment variables, or Key Vault.
To answer your question on how you do transformation on the config file, one possibility is to use a PowerShell script to read, transform, and write the config file directly on the App Service instance. There is an API for that.
https://blogs.msdn.microsoft.com/gabeshapiro/2017/01/01/samples-for-using-the-azure-app-service-kudu-rest-api-to-programmatically-manage-files-in-your-site/
https://github.com/projectkudu/kudu/wiki/REST-API
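As a minimal sketch along those lines, assuming the nopCommerce 4.0 layout (App_Data\dataSettings.json with a DataConnectionString property); the site name, deployment credentials, and connection string below are placeholders:

    # Sketch only: fetch, edit, and re-upload dataSettings.json via the Kudu VFS API.
    $site = "my-nop-site"
    $cred = "deployUser:deployPassword"
    $auth = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($cred))
    $url  = "https://$site.scm.azurewebsites.net/api/vfs/site/wwwroot/App_Data/dataSettings.json"

    # Read the current settings from the App Service instance.
    $settings = Invoke-RestMethod -Uri $url -Headers @{ Authorization = $auth }

    # Point the site at the environment-specific database (placeholder string).
    $settings.DataConnectionString = "Data Source=prod-sql;Initial Catalog=nop;User Id=app;Password=placeholder"

    # Write it back; "If-Match: *" overwrites regardless of the file's ETag.
    Invoke-RestMethod -Uri $url -Method Put -ContentType "application/json" `
        -Headers @{ Authorization = $auth; "If-Match" = "*" } `
        -Body ($settings | ConvertTo-Json)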
Alternatively, you can modify the source to read from Web.Config:
Change the connection string of nopCommerce?

JAVAFX Derby Client Server, do I need two different builds?

I am building an application in JavaFX using the Derby embedded DB. Do I need to create two separate builds in order to run it as client/server?
The primary differences are:
different syntax for the JDBC Connection URL, and
different Derby jars in the CLASSPATH.
It is possible to include both sets of Derby jars in the CLASSPATH, meaning that the only thing your application must vary at runtime is the JDBC Connection URL.
derbyclient.jar is significantly smaller than derby.jar, however, so if executable package size is crucial to you, you might find it worth the effort to have two different modes of packaging.
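As a minimal sketch of that point (database name, host, and port are examples), the same code can pick a mode at runtime purely through the URL:

    // One build, two modes: only the JDBC URL differs between embedded
    // and client/server Derby.
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DerbyConnect {
        public static void main(String[] args) throws Exception {
            boolean embedded = args.length > 0 && "embedded".equals(args[0]);

            // Embedded: the database engine runs inside this JVM (derby.jar).
            // Client/server: connects to a Derby Network Server (derbyclient.jar).
            String url = embedded
                    ? "jdbc:derby:myDb;create=true"
                    : "jdbc:derby://localhost:1527/myDb;create=true";

            try (Connection conn = DriverManager.getConnection(url)) {
                System.out.println("Connected using " + url);
            }
        }
    }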

How to create an access database to be used by runtime

How do you save a database in Access 2010 so that the database can be connected to and the queries run from a machine that does not have Access installed? I have read that I can use runtime but can't find how to actually make it so that it can be used in runtime. Is this what the Package Solution Wizard is for or is just a certain file extension? If I do that, will the user have to install it? On my network I am not sure if that is allowed. Can you just email it as a file that doesn't need to be installed? I am really struggling to find much info.
You don't need to make any special preparations in your database for launching under the runtime. Launching Microsoft Access with your database works the same way as with the regular version of Access: simply launch msaccess.exe followed by the name of your database.
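For example (the Office14 path corresponds to Access 2010; both paths are illustrative):

    "C:\Program Files\Microsoft Office\Office14\MSACCESS.EXE" "C:\Data\Inventory.accdb"

With only the free Access 2010 Runtime installed, this opens the database in runtime mode automatically; on a machine with full Access, the /runtime switch forces the same behavior.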
You can read more about this for instance here.
The Access Runtime must be installed on the PC first.

FinderSync invalidated on El Capitan

We have an application written in Mono that needs to communicate with a Finder Sync app extension.
All was working fine until we tried our app on El Capitan instead of Yosemite.
We use a shared SQLite database to tell what paths are in which state and use NSDistributedNotificationCenter for communication between the two.
The shared SQLite database is outside of the sandboxed environment, so we have put an exception in our entitlements: com.apple.security.temporary-exception.files.home-relative-path.read-write.
If we remove this exception from the app extension, the extension works (but obviously we can't read our db).
Then we thought of putting the SQLite DB into memory, but a shared in-memory database isn't possible across multiple processes.
I can't find a way to create an NSFileHandle for an SQLite connection.
We could send all the info over to the app extension, but then it would have to keep everything in memory (preferably in SQLite, because we need to do some querying).
Does anyone have some pointers on what we could do?
Try looking into the Application Group Container Directory; it might do in your case. Basically, it allows you to have a shared folder between apps and their extensions.
App group container directories. A sandboxed app can specify an entitlement that gives it access to one or more app group container directories, each of which is shared among all apps with that entitlement.
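For example, both the app and the extension would declare the same group in their entitlements (the group identifier below is a placeholder and must be prefixed with your team ID):

    <!-- In both the app's and the extension's entitlements file;
         "TEAMID.com.example.shared" is a placeholder group identifier. -->
    <key>com.apple.security.application-groups</key>
    <array>
        <string>TEAMID.com.example.shared</string>
    </array>

Both processes can then resolve the shared folder with NSFileManager's containerURLForSecurityApplicationGroupIdentifier: and point SQLite at a database file inside it.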
After some research on a similar problem, I found it's much easier to have a simple TCP server in the main app that responds to the extension with file status. This way you can easily broadcast file-status changes to all extension instances, etc.

How to keep deployed code on multiple BizTalk front ends in sync?

We have multiple BizTalk 2006 application servers, and I find it almost impossible to keep the versions of our projects in sync on them. It's a tedious process of deploying the MSI packages, importing them, matching up files in the GAC, deploying some registry changes, and if one step is missed or somebody deployed an updated copy of a DLL directly to one server and not another, there's no easy way to tell.
How do others ensure that copies of software between the two servers are the same version?
Some Background:
Our environment has two (non-clustered) BizTalk front-end servers and a separate database back-end. Until recently, though we had both front ends configured, the host instances were stopped on the second server because of some troubleshooting. They had been disabled for a few months, and we'd deployed some updated code in the meantime.
This morning, I did a folder diff on the GAC, as well as the folder that holds the local disk copy of the DLLs for our deployed project (C:\OurProject\ on both servers), and everything matched - same file sizes, same timestamps. However, once I turned on the second set of services, it became obvious that Server2 was using an old version of the project DLL - of the next three files processed, two had normal results and one was clearly out of date.
Please help me avoid an aneurysm.
One thing you may want to look into is the BizTalk Deployment Framework.
We are currently building up a new environment with BizTalk 2009, and I started out with a set of MSBuild scripts that handle exporting sources from Subversion, then building and deploying assemblies using BTSTask.
Of course BTSTask lacks a lot of functionality (start/stop applications) but at least for BizTalk 2006 there is BTSControl.
We use an automated build script whose ultimate end is an MSI with binding files for Dev/Stage/Prod. All released binding files are stored on a share and used to load the BizTalk server by hand. First the app is stopped, then the MSI is executed on both servers, and then the MSI is imported. During the import, we specify the environment for the bindings, and voilà. We've had no issues with loss of sync.
So, I'd suggest taking all of your latest MSIs and re-executing them on the servers where you have differences. Otherwise, just try to put a process in place to create a repeatable load process by hand.
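For reference, a scripted by-hand load along those lines might look like this sketch (application name, version, and paths are placeholders; the BTSTask switches are as documented for BizTalk 2006):

    rem Run the MSI on every front-end server so local files and the GAC match.
    msiexec /i C:\Deploy\OurProject-1.2.0.msi /quiet
    rem Then import once per BizTalk group, selecting the binding environment.
    BTSTask ImportApp /Package:C:\Deploy\OurProject-1.2.0.msi /ApplicationName:OurProject /Environment:Prod /Overwrite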
