Sbt resource generation in runtime

I am trying to achieve what a resourceGenerator in Runtime would do: create a resource that is available on the classpath at runtime, but that would not be packaged under the main configuration.
In my specific case, I am trying to create an sbt plugin that facilitates dealing with JNI native libraries. The above-mentioned resource would be a "fat" jar containing a shared library, so it is not required for compilation but only at runtime.
My goal in the end is to publish the standard jar (in the Compile configuration) and publish the fat jar as an extra artifact (in the Runtime configuration). However, during local testing, I would like the shared libraries to be available on the classpath when simply calling run from sbt.
I tried implementing a resourceGenerator in Runtime, but with no success. An alternative approach I could imagine would be to modify runtime:exportedProducts or alter runtime:managedClasspath directly, but I first wanted to know whether there is already a way to include resources only in the runtime configuration?
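For reference, a minimal sketch of the attempted generator and of the classpath alternative (sbt 1.x slash syntax; the jar name and paths are illustrative, not from the question):

    // build.sbt -- illustrative sketch only
    // Attempt: a Runtime-only resource generator. The generated file should be
    // on the classpath for `run`, while `Compile / packageBin` ignores it
    // because packaging only picks up Compile resources.
    Runtime / resourceGenerators += Def.task {
      val out = (Runtime / resourceManaged).value / "native-fat.jar"
      IO.copyFile(baseDirectory.value / "native" / "native-fat.jar", out)
      Seq(out)
    }.taskValue

    // Alternative: append the fat jar to the runtime classpath directly.
    Runtime / unmanagedClasspath +=
      Attributed.blank(baseDirectory.value / "native" / "native-fat.jar")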

Related

Sbt fat jar (which excludes unused code)

So I've been trying to do some AWS Lambda work with Scala and sbt, and one of the recommendations is:
Minimize your deployment package size to its runtime necessities. This will reduce the amount of time that it takes for your deployment package to be downloaded and unpacked ahead of invocation. For functions authored in Java or .NET Core, avoid uploading the entire AWS SDK library as part of your deployment package.
I've been using sbt-assembly to create fat jars for my code and dependencies, but it seems like sbt-assembly will package all library dependencies even when I only use something like 10% of the aws-core library, which adds a lot of content. I was wondering whether there is something I can do to cut the dependencies down to what is actually imported in my code (and their dependencies).
As far as I know, there is no direct and safe way to selectively include the dependent classes in a fat jar generated by the sbt-assembly plugin.
First of all, you should understand that sbt plugins just provide the settings and jar files required to invoke methods in your project. This means that a dependent plugin is brought into your project as a pre-compiled jar file, determined by the version you specified in your build setting (e.g., plugins.sbt in your project directory).
For example, the jars of sbt-assembly are brought into your project from this link when you specify that you want to use sbt-assembly (although it is brought into your project by default when you use a recent sbt version).
Therefore, you have at least two choices for shrinking your jar files.
Compile the jar file from scratch
For sbt-aws, its source code is provided on this link, so you may selectively compile the source files to get the classes that your program is going to use.
Use a tool for shrinking the jar file
There are several tools that shrink a jar file based on its dependencies. The most popular tool is ProGuard, and there is ProGuard support for sbt; a sketch follows below.
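As an illustration only, here is roughly what wiring ProGuard into an sbt build via the community sbt-proguard plugin looks like. The plugin coordinates and setting keys below are assumptions based on that plugin's README, so verify them against the plugin version you actually use, and replace com.example.Main with your real entry point:

    // project/plugins.sbt -- coordinates are an assumption; check the plugin README
    addSbtPlugin("com.github.sbt" % "sbt-proguard" % "0.5.0")

    // build.sbt
    enablePlugins(SbtProguard)

    // Keep the entry point reachable and let ProGuard strip classes
    // that are not referenced from it.
    Proguard / proguardOptions += ProguardOptions.keepMain("com.example.Main")
    Proguard / proguardOptions ++= Seq("-dontoptimize", "-dontwarn")

Running the plugin's proguard task then writes the shrunk jar under target/.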
Warning
As mentioned in another Stack Overflow answer, selectively choosing some classes from the jar may cause your program to crash depending on the input values and several other conditions. You've said that only 10 percent of the jar file is used, but you cannot be sure that the other classes are never required by your code or by the libraries your project depends on. When you use a tool to help you shrink the jar file, be especially careful if the program is security-critical.

References in .NET Core (2.0)

I've tried to add a new reference to my .NET Core project. The strange thing is that I can also access the projects that are involved in my reference. For this example, I should be able to see the Repository project from Service, but should not be able to access the Entity project. However, I can still access the entity objects from Service.
How come?
References in SDK-based projects are fully transitive, so - similar to many other package managers like npm or Maven - all the transitive references are available in the project to make sure the app compiles and runs cleanly, e.g. there are no unresolved references when the dependency is referenced, and all assemblies are part of the build output and ready to run. (There may even be conflict resolution applied to conflicting versions of assemblies, resulting in the generation of binding redirects.)
In previous versions, you would need to install NuGet packages or add additional project references to other projects as well to not get build errors or type load exceptions.
Currently there is no perfect workaround if you want your project to do all the things needed to be able to run and resolve conflicts correctly but not pass transitive references to the compiler.
If you only need a dependency to build a project, but not to run it, you can mark a package or project reference with PrivateAssets="All" (added as an attribute to the reference in the .csproj file); see the fragment below.
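For illustration, a minimal .csproj fragment (the package and project names are placeholders, not from the question):

    <ItemGroup>
      <!-- These references are used to build this project, but do not
           flow through to projects that reference it. -->
      <PackageReference Include="Some.BuildOnly.Package" Version="1.0.0" PrivateAssets="All" />
      <ProjectReference Include="..\Repository\Repository.csproj" PrivateAssets="All" />
    </ItemGroup>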
If you want to enforce API usage - e.g. for layered APIs - consider writing a Roslyn analyzer that emits warnings if you reference APIs from places you don't want to. This may be suitable for large projects where tooling is needed to maintain the desired architecture.

java.lang.ClassNotFoundException: Could not load requested class : oracle.jdbc.driver.OracleDriver [duplicate]

How should I add JAR libraries to a WAR project in Eclipse without facing java.lang.ClassNotFoundException or java.lang.NoClassDefFoundError?
The CLASSPATH environment variable does not seem to work. In some cases we add JAR files to the Build Path property of the Eclipse project to make the code compile. We sometimes need to put JAR files inside the /WEB-INF/lib folder of the Java EE web application to make the code run against the classes inside those JARs.
I do not exactly understand why CLASSPATH does not work, in which cases we should add JARs to the Build Path, and when exactly those JARs should be placed in /WEB-INF/lib.
The CLASSPATH environment variable is only used by the java.exe command and even then only when the command is invoked without any of the -cp, -classpath, -jar arguments. The CLASSPATH environment variable is ignored by IDEs like Eclipse, Netbeans and IDEA. See also java.lang.ClassNotFoundException in spite of using CLASSPATH environment variable.
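To make that concrete (the paths and class name are made up for the example):

    # CLASSPATH is consulted only for a bare invocation like this:
    java com.example.Main

    # ...and ignored entirely as soon as -cp (or -classpath, or -jar) is passed:
    java -cp "classes:lib/*" com.example.Main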
The Build Path is only for libraries which are required to get the project's code to compile. Manually placing JAR in /WEB-INF/lib, or setting the Deployment Assembly, or letting an external build system like Maven place the <dependency> as JAR in /WEB-INF/lib of produced WAR during the build, is only for libraries which are required to get the code to deploy and run on the target environment too. Do note that you're not supposed to create subfolders in /WEB-INF/lib. The JARs have to be placed in the root.
Some libraries are already provided by the target Java EE server or servlet container, such as JSP, Servlet, EL, etc. So you do not need to put the JARs of those libraries in /WEB-INF/lib. Moreover, it would only cause classloading trouble. It's sufficient to (indirectly) specify them in the Build Path only. In Eclipse, you normally do that by setting the Targeted Runtime accordingly. It will automatically end up in the Build Path; you do not need to add it manually. See also How do I import the javax.servlet / jakarta.servlet API in my Eclipse project?
Other libraries, usually 3rd party ones like Apache Commons, JDBC drivers, and Java EE libraries which are not provided by the target servlet container (e.g. Tomcat doesn't support many Java EE libraries out of the box, such as JSF, JSTL, CDI, JPA, EJB, etc), need to end up in /WEB-INF/lib. You can just copy and paste the physical JAR files in there. You do not necessarily need to specify them in the Build Path. Only perhaps when you already have them as a User Library, but you should then use the Deployment Assembly setting for this instead. See also ClassNotFoundException when using User Libraries in Eclipse build path.
In case you're using Maven, then you need to make absolutely sure that you mark libraries with <scope>provided</scope> if they are already provided by the target runtime, such as Java EE, Servlet, EL, etc. when you deploy to WildFly, TomEE, etc. This way they won't end up in /WEB-INF/lib of the produced WAR (and potentially cause conflicts with server-bundled libraries), but they will still end up in Eclipse's Build Path (and get the project's code to compile). See also How to properly install and configure JSF libraries via Maven? An example follows below.
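For instance, a provided-scope dependency on the servlet API looks like this (coordinates shown for the Jakarta variant; pick the artifact and version matching your server):

    <dependency>
        <groupId>jakarta.servlet</groupId>
        <artifactId>jakarta.servlet-api</artifactId>
        <version>6.0.0</version>
        <!-- available at compile time, but not packaged into /WEB-INF/lib -->
        <scope>provided</scope>
    </dependency>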
Those JARs in the build path are referenced for the build (compile) process only. If you export your Web Application they are not included in the final WAR (give it a try).
If you need the JARs at runtime, you must place them in /WEB-INF/lib or on the server classpath. Placing your JARs on the server classpath only makes sense if several WARs share a common code base and need to access shared objects (e.g. a singleton).
If you are using Maven:
Open the project properties, and under Deployment Assembly click Add...
Then select Java Build Path Entries and select Maven Dependencies
Resolved by setting permissions.
I had a related issue using PySpark and an Oracle JDBC driver. The error does not state that the file cannot be accessed, just that the class cannot be loaded.
So if anyone is still struggling, check the file permissions. Some might find it obvious, though.
I want to give the answer for the following linked question: ClassNotFoundException oracle.jdbc.driver.OracleDriver only in servlet, using Eclipse
Answer: In MyEclipse, go to Server --> left-click MyEclipse Tomcat 7 --> Configure Server Connector --> (expand) MyEclipse Tomcat 7 --> Paths --> Prepend to classpath --> Add JAR (add the Oracle ojdbc14 jar) --> OK.

SOA MDS Target folder

I would like to understand what role the target folder plays in a SOA MDS project.
I am using JDeveloper, and the target folder keeps getting populated with two .jar files. I am not sure where these jar files are coming from, but they contain old data which should have changed.
Can somebody please help me understand what is behind the making of these files?
The target folder is the default build output directory used by Maven.
If things are working correctly, the builds should be generated there by Maven using the configuration specified in the pom.xml file. In your case, the Maven build might not have been run recently, which is why you see old content in the jars.
Have a look inside the pom.xml and see what build configuration has been specified there (it is likely to be no different from a SOA composite Maven build file/pom file). If it all builds correctly, you should be able to deploy that jar directly to the MDS runtime (either manually or via Maven).
In the pom file, you should be able to override most things, including the name, version, bundle type, target directory, etc.; see the fragment below.
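For illustration, the kind of pom.xml override meant here (the values are placeholders):

    <build>
        <!-- name of the generated artifact, e.g. target/my-mds-bundle.jar -->
        <finalName>my-mds-bundle</finalName>
        <!-- build output location; 'target' is the Maven default -->
        <directory>${project.basedir}/target</directory>
    </build>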
You can also use Maven to keep track of your MDS changes - i.e. version them like any other build artifact/SOA composite. The versioned jars can also be uploaded to an artifact repository (such as Nexus), in addition to being deployed to the MDS runtime, so you have a good level of traceability of MDS changes.
PS: this might help explain more: http://weblog.singhpora.com/2016/10/managing-shared-metadata-mds-in-ci.html

SBT How to disable Ivy cache for a specific groupid

I have several Scala modules that I'm building with SBT. Some of them (I'll call them dependant modules) are being published to Artifactory and then used by top-level modules.
All changes to the code are done in separate git branches. When the feature (or bugfix) is done, that branch is compiled in Jenkins and then deployed to the testing instance and handed off to the QA team.
So it's possible that there will be several git branches with different code in the dependant modules.
The problem is that Ivy is caching these modules locally, so it's possible that the top-level module will be built with the dependant module from a different branch (taken from the local cache).
I've tried adding the changing() directive to the dependency specification in build.sbt.
In that case, Ivy ignores the local cache and goes to Artifactory to download the POM file every time. It then parses the POM file but concludes that it already has the jar file for that version in the local cache, so it fetches the jar from the local cache and not from Artifactory, which isn't what I want.
Since the code in branches hasn't been integrated into the master branch at this point, it's perfectly valid that different feature branches have the same version number, but different code.
Is there a way to tell Ivy (via SBT) to ignore local cache for a certain groupid? Or a single dependency at least?
If you are using versioning for your dependant modules, then each codebase change must produce a different version. Ivy and Maven expect that once an artifact has been published with a specific version it will stay unchanged forever; that is why they use cached files. If you want a fresh version to be downloaded from the repository on every compile, you should add the -SNAPSHOT suffix to the dependant module's version number (e.g. dep-module-1.1.1-SNAPSHOT).
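A minimal sketch of both sides of that (the organization, module name, and version are made up):

    // build.sbt of the dependant module: publish work in progress as a SNAPSHOT
    version := "1.1.1-SNAPSHOT"

    // build.sbt of the top-level module: SNAPSHOT versions are re-resolved
    // against the repository, and marking the dependency as changing() tells
    // Ivy to revalidate its cached copy as well.
    libraryDependencies += ("com.example" %% "dep-module" % "1.1.1-SNAPSHOT").changing()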
