J Oliver EventStore - Examples exception - neventstore

When I run the EventStore example from the docs, it throws an exception in SqlPersistenceFactory.cs at line 39:
Value cannot be null
Parameter name: dialect
Is there an error in the connection string in app.config?

I added some code in MainPrograms.cs at line 37:
.UsingSqlPersistence("EventStore")
.WithDialect(new EventStore.Persistence.SqlPersistence.SqlDialects.MsSqlDialect())
.EnlistIn...

Related

RobotScript - Catch Python Code Exception

We have the following code in Python:
def function1():
    ...
    raise Exception(...)
    ...
    return 0
Robot script:
${STATUS} =    function1
Can anyone let me know how, in the Robot script, we can catch the return code / exception and branch accordingly?
Run Keyword And Return Status returns a boolean True/False indicating whether the enclosed keyword succeeded.
Run Keyword And Ignore Error returns a tuple of two values: the first is the string "PASS" or "FAIL", depending on whether your keyword succeeded; the second is the keyword's return value if it passed, or the error message if it did not.
So wrap your keyword in one of these two (it boils down to whether you care about the return value on success or the error on failure) and work with the returned values, as in the examples below.
${passed} =    Run Keyword And Return Status    function1
Run Keyword If    ${passed}    Action When Passed    ELSE    Different Action
${rc}    ${msg} =    Run Keyword And Ignore Error    function1
Run Keyword If    "${rc}" == 'PASS'    Log    The keyword returned the value: ${msg}
...    ELSE    Log    The keyword failed with the message: ${msg}
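For completeness, a minimal sketch of the Python side, assuming function1 lives in a keyword library file that the suite imports (the file name MyLibrary.py and the fail argument are illustrative, not from the question):
# MyLibrary.py -- a hypothetical keyword library; Robot exposes
# function1 as the keyword "Function1".
def function1(fail=False):
    # Simulates the original: raise on failure, return 0 on success.
    if fail:
        raise Exception("something went wrong")
    return 0
Import it in the suite settings with Library    MyLibrary.py and call it exactly as in the examples above.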

Resource 7bed8adc-9ed9-49dc-b15e-6660e2fc3285 transitioned to failure state ERROR when using openstacksdk create_server

When I create an OpenStack server, I get the exception below:
Resource 7bed8adc-9ed9-49dc-b15e-6660e2fc3285 transitioned to failure state ERROR
My code is below:
server_args = {
    "name": server_name,
    "image_id": image_id,
    "flavor_id": flavor_id,
    "networks": [{"uuid": network.id}],
    "admin_password": admin_password,
}
try:
    server = user_conn.conn.compute.create_server(**server_args)
    server = user_conn.conn.compute.wait_for_server(server)
except Exception as e:  # here I catch the exception
    raise e
When calling create_server, my server_args data is below:
{'flavor_id': 'd4424892-4165-494e-bedc-71dc97a73202', 'networks': [{'uuid': 'da4e3433-2b21-42bb-befa-6e1e26808a99'}], 'admin_password': '123456', 'name': '133456', 'image_id': '60f4005e-5daf-4aef-a018-4c6b2ff06b40'}
My openstacksdk version is 0.9.18.
In the end, I found that the flavor was too big for the OpenStack compute node, so I changed it to a smaller flavor, and then the server was created successfully.
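A minimal sketch of that fix with the openstacksdk compute proxy, assuming the same user_conn object as above; the vCPU/RAM limits are illustrative, and attribute availability may differ in a version as old as 0.9.18:
# Pick the first flavor small enough for the compute node.
small_flavor = None
for flavor in user_conn.conn.compute.flavors():
    if flavor.vcpus <= 2 and flavor.ram <= 2048:  # ram is in MB
        small_flavor = flavor
        break
if small_flavor is None:
    raise RuntimeError("no sufficiently small flavor found")
server_args["flavor_id"] = small_flavor.id
server = user_conn.conn.compute.create_server(**server_args)
server = user_conn.conn.compute.wait_for_server(server)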

How do I declare a DBMS output variable in PL/SQL?

I keep getting this error when trying to compile:
[Error] PLS-00201 (159: 25): PLS-00201: identifier 'DMBS_UTILITY.FORMAT_ERROR_BACKTRACE' must be declared
Here's the faulty code:
EXCEPTION
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('STUDENT:'||V_STUDENT||
            'Error Occurred: '||SQLERRM||CHR(10)||'['||
            DMBS_UTILITY.FORMAT_ERROR_BACKTRACE||']');
...
As Gurwinder stated, you just have a typo in your code: DMBS_UTILITY instead of DBMS_UTILITY. Try this:
EXCEPTION
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('STUDENT:'||V_STUDENT||
            'Error Occurred: '||SQLERRM||CHR(10)||'['||
            DBMS_UTILITY.FORMAT_ERROR_BACKTRACE||']');
...

SQLITE_ERROR: Connection is closed when connecting from Spark via JDBC to SQLite database

I am using Apache Spark 1.5.1 and trying to connect to a local SQLite database named clinton.db. Creating a data frame from a table of the database works fine but when I do some operations on the created object, I get the error below which says "SQL error or missing database (Connection is closed)". Funny thing is that I get the result of the operation nevertheless. Any idea what I can do to solve the problem, i.e., avoid the error?
Start command for spark-shell:
../spark/bin/spark-shell --master local[8] --jars ../libraries/sqlite-jdbc-3.8.11.1.jar --classpath ../libraries/sqlite-jdbc-3.8.11.1.jar
Reading from the database:
val emails = sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:sqlite:../data/clinton.sqlite", "dbtable" -> "Emails")).load()
Simple count (fails):
emails.count
Error:
15/09/30 09:06:39 WARN JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
at org.sqlite.core.DB.newSQLException(DB.java:890)
at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
at org.apache.spark.TaskContextImpl$$anon$1.onTaskCompletion(TaskContextImpl.scala:60)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:79)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:77)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:77)
at org.apache.spark.scheduler.Task.run(Task.scala:90)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
res1: Long = 7945
I got the same error today, and the important line is just before the exception:
15/11/30 12:13:02 INFO jdbc.JDBCRDD: closed connection
15/11/30 12:13:02 WARN jdbc.JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
at org.sqlite.core.DB.newSQLException(DB.java:890)
at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)
So Spark succeeded in closing the JDBC connection, and then failed to close the JDBC statement.
Looking at the source, close() is called twice:
Line 358 (org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD, Spark 1.5.1)
context.addTaskCompletionListener{ context => close() }
Line 469
override def hasNext: Boolean = {
  if (!finished) {
    if (!gotNext) {
      nextValue = getNext()
      if (finished) {
        close()
      }
      gotNext = true
    }
  }
  !finished
}
If you look at the close() method (line 443)
def close() {
  if (closed) return
you can see that it checks the variable closed, but that value is never set to true.
If I see it correctly, this bug is still present in master; I have filed a bug report.
Source: JDBCRDD.scala (line numbers differ slightly)
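To illustrate the fix, here is a minimal Python sketch of the guarded (idempotent) close that the Scala code apparently intends; the class and names are illustrative, not Spark's API. The missing piece in JDBCRDD is simply that closed is never assigned inside close():
# Idempotent close: the flag is set inside close(), so a second call
# (iterator exhaustion plus the task-completion listener) is a no-op.
class Cursor:
    def __init__(self):
        self.closed = False

    def close(self):
        if self.closed:
            return
        try:
            pass  # release the statement and the connection here
        finally:
            self.closed = True  # the assignment JDBCRDD.close() is missing

cur = Cursor()
cur.close()
cur.close()  # safe: the flag guards the second call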

Abort statement

I'm trying to abort a task in an Ada program, but I get this error during compilation:
expect task name or task interface class-wide object for "abort"
The code looks like this:
task type Sending_Message;
type Send_Message is access Sending_Message;

declare
   send : Send_Message;
begin
   send := new Sending_Message;
   ...
   abort send;  -- this line produces the error
end;
And when I instead try a line like this:
abort Sending_Message;
I get the error:
invalid use of subtype mark in expression or call
Any idea what is wrong?
You have to explicitly dereference the access value: abort expects a task object, but send is an access value pointing to one, so you name the task itself with .all:
abort send.all;
