Exception in thread "main" quickfix.ConfigError: FIX44.xml: Could not parse data dictionary file

I am able to run QuickFIX with UseDataDictionary=N, but when I change this to UseDataDictionary=Y I get:
FIX44.xml: Could not parse data dictionary file
Can someone please help me figure out what is wrong?
Caused by: quickfix.ConfigError: Could not parse data dictionary file
at quickfix.DataDictionary.load(DataDictionary.java:857)
at quickfix.DataDictionary.read(DataDictionary.java:838)
... 14 more
Caused by: java.lang.NoClassDefFoundError: org/w3c/dom/ls/DocumentLS
My configuration file:
[DEFAULT]
ConnectionType=initiator
HeartBtInt=60
ReconnectInterval=1
FileStorePath=.\fixfiles\initiator
FileLogPath=.\log
StartTime=00:00:00
EndTime=00:00:00
UseDataDictionary=Y
DataDictionary=FIX44.xml
SocketReuseAddress=Y
SocketKeepAlive=Y
SocketTcpNoDelay=Y
ResetOnLogon=Y

Well, the QF library is saying that it can load the DD but not parse it, so your DD is corrupt. I would recommend comparing it against the default DD that comes with the install pack.

How to convert RDD to spark dataframe using sparklyr?

I have a lot of files with text data pushed by Azure IoT to blob storage, spread across many folders, and I want to read them into a Delta Lake table with one row per line of each file. I used to read them file by file, but that takes too much time, so I want to use Spark to speed up this processing. It needs to integrate with a Databricks workflow written in R.
I've found the spark_read_text function to read text files, but it cannot read a directory recursively; it only works if all the files are in one directory.
Here is an example of a file path (appid/partition/year/month/day/hour/minute/file):
app_id/10/2023/02/06/08/42/gdedir22hccjq
Partition is a random folder (there are around 30 of them right now) that Azure IoT seems to create to process data in parallel, so data for the same date can be split across several folders, which does not make reading any easier.
So the only function I found to do that is the Spark context's textFile, which works with wildcards and handles directories recursively. The only problem is that it returns an RDD, and I can't find a way to transform it into a Spark dataframe that could ultimately be accessed through a tbl_spark R object.
Here is what I did so far:
You need to set the configuration to read folders recursively (here I do this on Databricks in a dedicated Python cell):
%py
sc._jsc.hadoopConfiguration().set("mapreduce.input.fileinputformat.input.dir.recursive", "true")
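For reference, the same flag can probably also be set directly from R through sparklyr's invoke API instead of a Python cell. This is only a minimal, untested sketch; it assumes sc is the sparklyr connection and that the JVM SparkContext exposes hadoopConfiguration() as usual:
library(sparklyr)
# Equivalent of the Python cell above: get the JVM SparkContext,
# fetch its Hadoop configuration, and enable recursive directory reads.
spark_context(sc) %>%
  invoke("hadoopConfiguration") %>%
  invoke("set", "mapreduce.input.fileinputformat.input.dir.recursive", "true")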
Then I can create an RDD:
j_rdd <- spark_context(sc) %>%
  invoke("textFile", "/mnt/my_cont/app_id/*/2022/11/17/*", 10L)
This works to create the RDD, and as you can see I can match all the partitions (before the year) with a "*", as well as the folders for hours and minutes recursively with the "*" at the end.
I can collect it and create an R dataframe:
lst <- invoke(j_rdd, "collect")
data.frame(row = unlist(lst))
This correctly gets my data: one column of text and one row for each line of each file (I can't show an example for privacy reasons, but it's not important).
The problem is that I don't want to collect; I want to update a Delta table with this data, and I can't find a way to get a sparklyr object that I can use. The j_rdd object I got looks like this:
>j_obj
<jobj[2666]>
org.apache.spark.rdd.MapPartitionsRDD
/mnt/my_cont/app_id/*/2022/11/17/* MapPartitionsRDD[80] at textFile at NativeMethodAccessorImpl.java:0
The closest I got so far: I tried to adapt code from here to convert the data to a dataframe using invoke, but I don't seem to be doing it correctly:
contents_field <- invoke_static(sc, "sparklyr.SQLUtils", "createStructField", "contents", "character", TRUE)
schema <- invoke_static(sc, "sparklyr.SQLUtils", "createStructType", list(contents_field))
j_df <- invoke(hive_context(sc), "createDataFrame", j_rdd, schema)
invoke(j_df, "createOrReplaceTempView", "tmp_test")
dfs <- tbl(sc, "tmp_test")
dfs %>% sdf_nrow()
I only have one column with character in it so I thought it would work, but I get this error:
Error : org.apache.spark.SparkException: Job aborted due to stage failure: Task 14 in stage 25.0 failed 4 times, most recent failure: Lost task 14.3 in stage 25.0 (TID 15158) (10.221.193.133 executor 2): java.lang.RuntimeException: Error while encoding: java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.spark.sql.Row
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, contents), StringType, false), true, false, true) AS contents#366
at org.apache.spark.sql.errors.QueryExecutionErrors$.expressionEncodingError(QueryExecutionErrors.scala:1192)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$Serializer.apply(ExpressionEncoder.scala:236)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$Serializer.apply(ExpressionEncoder.scala:208)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.hashAgg_doAggregateWithoutKey_0$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.$anonfun$runTask$3(ShuffleMapTask.scala:81)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.ShuffleMapTask.$anonfun$runTask$1(ShuffleMapTask.scala:81)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.doRunTask(Task.scala:156)
at org.apache.spark.scheduler.Task.$anonfun$run$1(Task.scala:125)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.Task.run(Task.scala:95)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$13(Executor.scala:832)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1681)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:835)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:690)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.spark.sql.Row
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$Serializer.apply(ExpressionEncoder.scala:233)
... 28 more
Does anyone have an idea how to convert this RDD object (using R/sparklyr) that I got back from the invoke function into something usable without collecting the data?
Finally, I found that spark_read_text can also read multiple files with wildcards, but you have to put a wildcard for each directory level and for the files; it cannot discover folders recursively.
For example:
dfs <- spark_read_text(sc, "/mnt/container/app_id/10/2023/02/06/*")
...doesn't work. But:
dfs <- spark_read_text(sc, "/mnt/container/app_id/10/2023/02/06/*/*/*")
...works. Also:
dfs <- spark_read_text(sc, "/mnt/container/app_id/*/2023/02/06/*/*/*")
...with a wildcard above the date also works.
As the directory depth doesn't change in my case, that's enough for me.
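Since the original goal was a Delta table rather than a collected R dataframe, the result of spark_read_text can then be written out without ever collecting. This is a minimal sketch, assuming sparklyr 1.4+ with spark_write_delta available on the Databricks cluster; the target path is hypothetical:
library(sparklyr)
# Read every file for the day across all partition folders (one wildcard per level).
dfs <- spark_read_text(sc, "/mnt/container/app_id/*/2023/02/06/*/*/*")
# Append the new rows to a Delta table at a hypothetical location.
spark_write_delta(dfs, path = "/mnt/container/delta/iot_lines", mode = "append")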

itk::ImageFileReaderException (000000B4DD17F060)

I'm new here, learning ITK through Qt Creator; this is the demo I'm following: https://itk.org/Doxygen/html/Examples_2IO_2ImageReadWrite_8cxx-example.html
I want to display medical images in .jpg format. I didn't get any error (marked in red) in the Issues pane, but in the Application Output pane I found the following messages.
Has anyone encountered such an error before? Thanks indeed.
itk::ImageFileReaderException (000000B4DD17F060)
Location: "unknown"
File: E:\ITK\ITK\include\ITK-5.2\itkImageFileReader.hxx
Line: 133
Description: Could not create IO object for reading file C://Users//siat//Documents//ImageReadExportVTK//Chest-CT.jpg
There are no registered IO factories.
Please visit https://www.itk.org/Wiki/ITK/FAQ#NoFactoryException to diagnose the problem.

R Package IBrokers placeOrder() function fails

I'm using the package: IBrokers. It works well for me when I request historical data. Also the call to reqAccountUpdates() works well.
I am having problems with this script:
# myscript.r
.libPaths("rpackages")
library(IBrokers)
tws2 = twsConnect(2)                        # connect to TWS as client id 2
print('Attempting BUY')
mytkr = twsFuture("ES","GLOBEX","201412")   # E-mini S&P 500 future, Dec 2014
myorderid = sample(1001:3001, 1)            # pick a random order id
IBrokers:::.placeOrder(tws2, mytkr, twsOrder(myorderid, "BUY", "1", "MKT"))
twsDisconnect(tws2)
Sometimes the above script works okay, but usually it fails. Even when it fails, it seems to connect okay.
Then I see this in my TWS console:
03:47:45:581 JTS-EServerSocket-290: [2:47:71:1:0:0:0:ERR] Message type -1. Socket I/O error -
03:47:45:581 JTS-EServerSocket-290: Anticipated error
jextend.d: Socket I/O error -
at jextend.sc.b(sc.java:364)
at jextend.ch.sb(ch.java:1534)
at jextend.ch.run(ch.java:1390)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:196)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at jextend.xh.d(xh.java:45)
at jextend.sc.c(sc.java:579)
at jextend.sc.r(sc.java:227)
at jextend.af.a(af.java:232)
at jextend.sc.f(sc.java:650)
at jextend.pd.a(pd.java:822)
at jextend.sc.b(sc.java:358)
... 3 more
03:47:45:583 JTS-EServerSocket-290: [2:47:71:1:0:0:0:ERR] Socket connection for client{2} has closed.
03:47:45:583 JTS-EWriter14-291: [2:47:71:1:0:0:0:ERR] Unable write to socket client{2} -
03:47:45:584 JTS-EServerSocketNotifier-288: Terminating
Can you offer any ideas on how you would wrestle with this issue?
One other piece of info:
I think the call to reqIds() may be necessary. Sometimes reqIds() would return an id that was not high enough; I'd use it and placeOrder() would fail. So I call reqIds(), but then use Sys.time() to give me an id that is larger than the last ID I used.
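A minimal sketch of that clock-based order-id idea (untested; it simply reuses the script above with the id derived from Sys.time() instead of sample()):
library(IBrokers)
tws2 <- twsConnect(2)
mytkr <- twsFuture("ES", "GLOBEX", "201412")
myorderid <- as.integer(Sys.time())   # seconds since the epoch, so larger on every run
IBrokers:::.placeOrder(tws2, mytkr, twsOrder(myorderid, "BUY", "1", "MKT"))
twsDisconnect(tws2)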
Another problem may have been some code-text I copied out of a PowerPoint. Some of the code-characters may have been corrupt.
The main problem was the orderid; I need to be careful how I generate it.
Also, I may have moused in bad characters from a PowerPoint presentation that had an example.
I posted some code which works in a comment in this thread.
Dan

Plone: TypeError: Can't pickle objects in acquisition wrappers

I am using / fixing collective.logbook to save errors on the site. Currently logbook fails on my site on some exceptions:
File "/srv/plone/xxx/src/collective.logbook/collective/logbook/events.py", line 101, in hand
transaction.commit()
File "/srv/plone/buildout-cache/eggs/transaction-1.1.1-py2.6.egg/transaction/_manager.py", line 8
return self.get().commit()
File "/srv/plone/buildout-cache/eggs/transaction-1.1.1-py2.6.egg/transaction/_transaction.py", li
self._commitResources()
File "/srv/plone/buildout-cache/eggs/transaction-1.1.1-py2.6.egg/transaction/_transaction.py", li
rm.commit(self)
File "/srv/plone/buildout-cache/eggs/ZODB3-3.10.5-py2.6-linux-x86_64.egg/ZODB/Connection.py", lin
self._commit(transaction)
File "/srv/plone/buildout-cache/eggs/ZODB3-3.10.5-py2.6-linux-x86_64.egg/ZODB/Connection.py", lin
self._store_objects(ObjectWriter(obj), transaction)
File "/srv/plone/buildout-cache/eggs/ZODB3-3.10.5-py2.6-linux-x86_64.egg/ZODB/Connection.py", lin
p = writer.serialize(obj) # This calls __getstate__ of obj
File "/srv/plone/buildout-cache/eggs/ZODB3-3.10.5-py2.6-linux-x86_64.egg/ZODB/serialize.py", line
return self._dump(meta, obj.__getstate__())
File "/srv/plone/buildout-cache/eggs/ZODB3-3.10.5-py2.6-linux-x86_64.egg/ZODB/serialize.py", line
self._p.dump(state)
TypeError: Can't pickle objects in acquisition wrappers.
This is obviously because logbook tries to write a record of the error which refers to an acquired object. I assume the solution is to strip these kinds of objects out of the error record.
However, how can I figure out which object is the bad one, how it ends up in the transaction manager, and which Python object references are causing this issue? Or anything else that could help me debug it?
If you can reproduce this reliably, you can put a print statement or pdb.set_trace() in the ZODB connection's _register method (in ZODB/Connection.py inside the ZODB egg):
def _register(self, obj=None):
    # ... skipped lines ...
    if obj is not None:
        self._registered_objects.append(obj)
        print obj  # the print statement: logs each object registered with the connection
Now whenever any object has been marked as changed or is added to the connection as a new object, it'll be printed to the console. That should help you with the debugging process. Good luck!

XCode4 can not Watch value of variables

It's a bit annoying that when I hit a breakpoint in Xcode 4, the values of Watch Expressions are always grayed out. I have to create dummy variables pointing to the things I want to watch in order to get around it.
The log shows the following warnings when I run the app:
warning: Unable to read symbols for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.3.3 (8J2)/Symbols/System/Library/Frameworks/IOKit.framework/IOKit (file not found).
warning: Tried to remove a non-existent library: /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.3.3 (8J2)/Symbols/System/Library/Frameworks/IOKit.framework/IOKit
Current language: auto; currently objective-c++
warning: Unable to read symbols for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.3.3 (8J2)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib (file not found).
How can I fix this?
As for myself, I debug variables using two handy GDB console commands. You can enter them in the debug console, after the GDB prompt, while paused at a breakpoint. I use the "p" command for printing basic C-type values:
p [[[self pointerToMyClass] anotherPointerToOtherClass] someIntValue]
And I use "po" command for printing content of arrays, for checking objects:
po [[[self pointerToMyClass] anotherPointerToOtherClass] someNSArray]
po [[[self pointerToMyClass] anotherPointerToOtherClass] myUIImageView]
