WebSphere 8 JNDI lookup of local EJB object - ejb

I'm trying to call an EJB3 service bean's local interface using a JNDI lookup. The sample code is:
localInterfaceObj = (Object) initialContext.lookup("ejblocal:com.infopro.eswx.service.ejb.NdcServiceControllerSBLocal")
but I get this error:
[10/29/13 16:08:59:609 GMT+08:00] 00000028 SystemErr R javax.naming.ConfigurationException: NamingManager.getURLContext cannot find the factory for this scheme: ejblocal
[10/29/13 16:08:59:609 GMT+08:00] 00000028 SystemErr R at com.infopro.servicelocator.ServiceLocator.getLocalInterface(ServiceLocator.java:257)
[10/29/13 16:08:59:609 GMT+08:00] 00000028 SystemErr R at com.infopro.servicelocator.ServiceLocator.getLocalInterface(ServiceLocator.java:273)
I've also tried other strings inside the lookup() method, such as the ejblocal:<binding-name>#<interface-name> form:
ejblocal:ESERV/ESWX302EJB_NDC.jar/NdcServiceControllerSB#com.infopro.eswx.service.ejb.NdcServiceControllerSBLocal
ejblocal:ESERV/ESWX302EJB_NDC/NdcServiceControllerSB#com.infopro.eswx.service.ejb.NdcServiceControllerSBLocal
ejb/com.infopro.eswx.service.ejb.NdcServiceControllerSBLocal
but I still encounter the same error.
Is there any way to solve this?
Thank you.
Regards,
Tan

Related

How do I register a custom JDBC dialect in RStudio?

I'm trying to analyze BigQuery data in RStudio Server running on a Google Dataproc cluster. However, due to the memory limitations of RStudio, I intend to run queries on this data in sparklyr, but I haven't had any success importing the data directly into the Spark cluster from BigQuery.
I'm using Google's official JDBC connectivity driver:
ODBC and JDBC drivers for BigQuery
I also have the following software versions running:
Google Dataproc: 2.0-Debian 10
Sparklyr: Spark 3.2.1 Hadoop 3.2
R version 4.2.1
I also had to replace the following Spark jars with the versions used by the JDBC connectivity driver above, or add them where they were missing:
failureaccess-1.0.1 was added
protobuf-java-3.19.4 replaced 2.5.0
guava 31.1-jre replaced 14.0.1
Below is my code, using the spark_read_jdbc function to retrieve a dataset from BigQuery:
conStr <- "jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=xxxx;OAuthType=3;AllowLargeResults=1;"
spark_read_jdbc(sc = spkc,
                name = "events_220210",
                memory = FALSE,
                options = list(url = conStr,
                               driver = "com.simba.googlebigquery.jdbc.Driver",
                               user = "rstudio",
                               password = "xxxxxx",
                               dbtable = "dataset.table"))
The table gets imported into the Spark cluster, but when I try to preview it, the following error message is received:
ERROR sparklyr: Gateway (551) failed calling collect on sparklyr.Utils: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4) (faucluster1-w-0.europe-west2-c.c.ga4-warehouse-342410.internal executor 2): java.sql.SQLDataException: [Simba][JDBC](10140) Error converting value to long.
at com.simba.googlebigquery.exceptions.ExceptionConverter.toSQLException(Unknown Source)
at com.simba.googlebigquery.utilities.conversion.TypeConverter.toLong(Unknown Source)
at com.simba.googlebigquery.jdbc.common.SForwardResultSet.getLong(Unknown Source)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$9(JdbcUtils.scala:446)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$9$adapted(JdbcUtils.scala:445)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:367)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:349)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:349)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
When I try to import the data via an SQL query, e.g.:
SELECT date, name, age FROM dataset.tablename
I end up with a table looking like this (every row just repeats the column names):

date | name | age
date | name | age
date | name | age
date | name | age
I've read in several posts that the solution to this is to register a custom JDBC dialect, but I have no idea how to do this, what platform to do it on, or whether it's possible to do it from within RStudio. Links to any materials that would help me solve this problem would be appreciated.
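Dialects live on the JVM side, so one cannot be written in R itself. One commonly suggested route (a sketch, not a tested recipe) is to compile a small JdbcDialect subclass into a jar, ship it to the cluster, and register it through sparklyr's invoke machinery. Here com.example.BigQueryDialect and the jar path are hypothetical:

library(sparklyr)

# Assumption: com.example.BigQueryDialect is a hypothetical JdbcDialect
# subclass compiled separately into a jar; dialects cannot be defined in R.
config <- spark_config()
config$sparklyr.jars.default <- "/path/to/bigquery-dialect.jar"
sc <- spark_connect(master = "yarn", config = config)

# Instantiate the dialect on the JVM and register it with Spark's JDBC layer.
dialect <- invoke_new(sc, "com.example.BigQueryDialect")
invoke_static(sc, "org.apache.spark.sql.jdbc.JdbcDialects", "registerDialect", dialect)

Once registered, spark_read_jdbc() calls in the same session should route type mapping through the custom dialect.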

Writing a package function that uses a dbplyr call to tbl()

I have a simple function that should fetch a table using a DBI::dbConnect() connection. I am having trouble with the call to tbl() that works fine in an interactive session.
My function:
a2_db_read <- function(con, tbl_name, schema = "dbo") {
  if (schema == "dbo") {
    dplyr::tbl(con, tbl_name)
  } else {
    dplyr::tbl(con, dbplyr::in_schema(schema, tbl_name))
  }
}
If I make the call dplyr::tbl() I get:
Error in UseMethod("tbl") :
no applicable method for 'tbl' applied to an object of class "Microsoft SQL Server"
If I make the call dbplyr::tbl() I get:
a2_db_read(a2_con_uat, "AVL Data")
Error: 'tbl' is not an exported object from 'namespace:dbplyr'
How can I get that call to succeed in a function? My package Imports is:
Imports:
    DBI,
    dbplyr,
    dplyr,
    ggplot2,
    odbc
I got it working with dplyr::tbl(), the correct usage.
The problem was that my connection was stored as an object in the package, when in fact the connection needs to be remade every time R restarts.
On a fresh R environment, the stored stale connection caused the error:
Error in UseMethod("tbl") :
no applicable method for 'tbl' applied to an object of class "Microsoft SQL Server"
When I regenerated the connection, it worked.
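A defensive variant of the function above (a sketch) uses DBI::dbIsValid() to fail fast on a stale handle instead of surfacing the confusing UseMethod error:

a2_db_read <- function(con, tbl_name, schema = "dbo") {
  # A stale handle (e.g. one saved into package data) reports FALSE here.
  if (!DBI::dbIsValid(con)) {
    stop("Connection is no longer valid; recreate it with DBI::dbConnect().")
  }
  if (schema == "dbo") {
    dplyr::tbl(con, tbl_name)
  } else {
    dplyr::tbl(con, dbplyr::in_schema(schema, tbl_name))
  }
}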

sparklyr spark_read_parquet from s3 error

When I read a Parquet file on S3 from a sparklyr context like this:
spark_read_parquet(sc, name = "parquet_test", path = "s3a://<path-to-file>")
it throws the following error:
Caused by: java.io.IOException: Could not read footer for file: FileStatus{path=s3a: .....
I was able to read the Parquet file in a SparkR session using the read.parquet() function, so there must be some difference in Spark context configuration between SparkR and sparklyr.
Any suggestions on this issue? Thanks.
In yarn-client mode, the file scheme you are using is not correct. You'll need to use s3://<path-to-file>.
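If that suggestion applies, the call becomes (a sketch, keeping the placeholder path from the question):

library(sparklyr)
sc <- spark_connect(master = "yarn-client")

# Same call as in the question, with the s3:// scheme instead of s3a://.
parquet_test <- spark_read_parquet(sc,
                                   name = "parquet_test",
                                   path = "s3://<path-to-file>")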

RMySQL::dbListConnections no longer exists?

So, I've just started messing around with the development version of RMySQL -- v0.11.0.9000 -- and I've noticed that when trying to check whether I had any open connections -- using DBI v0.3.1.9008 -- I get the error:
> DBI::dbListConnections(MySQL())
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function ‘dbListConnections’ for signature ‘"MySQLDriver"’
This indicates that RMySQL no longer extends dbListConnections for its driver (pardon my ad hoc jargon)... Am I right in interpreting this to mean we no longer need to clean up our DB connections?
If not, how are we supposed to clean up after ourselves?
You don't have to clean up after yourself, but it's still recommended. Just call dbDisconnect(con) when you're done with the connection. If you don't happen to have con in an easily accessible form, don't worry about it.
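A minimal sketch of that cleanup, with placeholder connection details:

library(DBI)
library(RMySQL)

# Placeholder credentials and database.
con <- dbConnect(MySQL(), user = "user", password = "pass",
                 dbname = "db", host = "localhost")

# ... run queries with dbGetQuery(con, ...) ...

# Recommended, though no longer strictly required, cleanup.
dbDisconnect(con)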

SystemErr R 373846 [WebContainer : 2] INFO org.apache.bval.jsr303.ConfigurationImpl - ignoreXmlConfiguration == true

I am migrating from WAS 7.0 to WAS 8.5, JSF 1.2 to JSF 2.0, and RichFaces 3.x to 4.x.
I have various RichFaces 4.3.x components in my application.
Whenever I use a component, on change I get the following line logged in my server console:
[3/8/15 7:44:26:986 EDT] 00000089 SystemErr R 373846 [WebContainer : 2] INFO org.apache.bval.jsr303.ConfigurationImpl - ignoreXmlConfiguration == true
The response to the on-change Ajax call is also very slow. I've searched different sites but could not find a solution.
Thanks!
