Set Program to use LIBXML_PARSEHUGE - libxml2

I am using the command-line app xmlsec to encrypt and decrypt files. I have an XML file with a node about 40 MB in size.
I already found out that I need to set
LIBXML_PARSEHUGE
to parse nodes bigger than 10 MB.
Does anyone know how to enable this?
I searched the source code of xmlsec for the parser init, but couldn't find a way to integrate the option.
Do I have to set this in the source and recompile it? If so, do I have to recompile libxml or xmlsec?

OK, so I found the solution and am posting it here in case anyone needs it sometime.
In src/Parser.c, the function
xmlDocPtr xmlSecParseFile(const char *filename)
contains this:
/* enable parsing of XML documents with large text nodes */
xmlCtxtUseOptions (ctxt, XML_PARSE_HUGE);
Originally, the second line is commented out. You have to uncomment it and recompile the tool.
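For reference, here is a minimal sketch of how the option is enabled through the public libxml2 API (the wrapper name parse_huge_file is just for illustration, not part of xmlsec):
#include <libxml/parser.h>

/* XML_PARSE_HUGE lifts libxml2's default 10 MB limit on text nodes. */
xmlDocPtr parse_huge_file(const char *filename) {
    return xmlReadFile(filename, NULL, XML_PARSE_HUGE);
}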

You can also activate the parameter via the simplexml_load_string function itself:
simplexml_load_string($xmlString,'SimpleXMLElement', LIBXML_PARSEHUGE);
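The same flag also works with DOMDocument's load methods, e.g. (a sketch):
$doc = new DOMDocument();
$doc->load($xmlFile, LIBXML_PARSEHUGE);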


How to view decompiled R code in order to debug it?

I'm working with ASSIGN SESSION:DEBUG-ALERT = TRUE. and, as a result, while testing a program I get an error message with the following call stack details (only the first lines):
--> USER-INTERFACE-TRIGGER my_own_window.w at line 587 (\\<official_build_server_directory>\my_own_window.r)
my_own_window.w at line 709 (\\<official_build_server>\my_own_window.r)
...
As you can see, something's wrong with my window at lines 587 and 709, but:
While compiling the window files, some things happen which mess with the line numbers: the mentioned line numbers are the ones from the compiled *.r files, which are different from the ones in the original *.w files.
To be sure about the line numbers, I would need a de-compiler, or at least a *.r viewer (based on an internal de-compiler).
It's not the r-code you need to look into, it's the DEBUG-LISTING files. If you have the source code, execute:
COMPILE my_own_window.w DEBUG-LIST c:\temp\my_own_window.debuglist.
That file shows you the actual line numbers.
For future reference: so far Progress has not provided a decompiler. Any decompilers available at the time of writing are third-party and possibly not legal under Progress OpenEdge licenses.
You can also click on the 'Debug' button in that alert box, which will invoke the debugger which steps through an 'on the fly' debug-listing.
For the debug-listing on the fly to work, you will need to have the source files in your propath. The debugger will detect and complain if source files have changed after your code was executed.
And you will also need to ensure the debugger is enabled, by starting proenv and then running prodebugenable -enable-all.
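For example (the proenv> prompt shown is illustrative):
proenv> prodebugenable -enable-all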

Dynamic output issue when rowset is empty

I'm running a U-SQL script similar to this:
@output =
    SELECT Tag,
           Json
    FROM @table;

OUTPUT @output
TO @"/Dir/{Tag}/filename.txt"
USING Outputters.Text(quoting : false);
The problem is that @output is empty and the execution crashes. I already checked that if I don't use {Tag} in the output path, the script works well (it writes an empty file, but that is to be expected).
Is there a way to avoid the crash and simply not output anything?
Thank you
The form you are using is not yet publicly supported. Output to Files (U-SQL) documents the only supported version right now.
That said, depending on the runtime that you are using, and the flags that you have set in the script, you might be running the private preview feature of outputting to a set of files. In that case, I would expect that it would work properly.
Are you able to share a job link?

Use SQLite with biicode

So far I have been able to successfully use boost, cereal and gtest with biicode, but I am having trouble with sqlite. I am trying to use it as follows:
#include <sqlite3.h>
So I edited my biicode.conf to include these lines, including the aliasing for the header:
[requirements]
sqlite/sqlite:9
[includes]
sqlite.h: sqlite/sqlite/sqlite3/sqlite3.h
But when I run bii cpp:build, it does the following:
WARN: Removing unused reference to "sqlite/sqlite: 9" from myuser/test "requirements"
Then I ended up with the expected:
database_impl.cpp:(.text+0x516): undefined reference to `sqlite3_exec'
Surprisingly, the compilation succeeds even though sqlite3.h is obviously not included, but that may be because the call to sqlite is made from a template function.
I have looked at the example, but the CMakeLists.txt does not seem to add any additional include directories. For example, for boost I had to add:
SET(Boost_USE_STATIC_LIBS OFF)
bii_find_boost(COMPONENTS chrono system filesystem log thread REQUIRED)
target_include_directories(${BII_BLOCK_TARGET} INTERFACE ${Boost_INCLUDE_DIRS})
target_link_libraries(${BII_BLOCK_TARGET} INTERFACE ${Boost_LIBRARIES})
But the two examples I found here and here don't seem to add anything to the include directories, not even a link folder. I suppose sqlite has to be compiled with your sources, so how do I make biicode add those files to my project automatically?
There are several problems. In the [includes] section you wrote sqlite.h instead of sqlite3.h, and you should only write the prefix, sqlite/sqlite/sqlite3, rather than the full path to the header.
You can solve it like this:
[requirements]
sqlite/sqlite: 9
[includes]
sqlite3.h: sqlite/sqlite/sqlite3
Or you could try the SQLite version uploaded by the fenix user:
[requirements]
fenix/sqlite: 0
[includes]
sqlite3.h: fenix/sqlite
Don't worry about a "WARN" message saying biicode is ignoring the sqlite3.c file; it's hardcoded into the block's CMakeLists.txt to catch this file ;)
Note: you should write your external #includes with double quotes instead of <>, because angle brackets refer to system headers, and biicode could end up using some system dependency without you realizing it.
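For instance, a minimal usage sketch once the [includes] mapping above is in place (the in-memory database and table name are arbitrary; note the double quotes on the include):
#include "sqlite3.h"
#include <stdio.h>

int main(void) {
    sqlite3 *db = NULL;
    /* Open an in-memory database just to check that linking works. */
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    char *err = NULL;
    /* An undefined reference to sqlite3_exec here would mean the
       block's sqlite3.c is still not being compiled in. */
    if (sqlite3_exec(db, "CREATE TABLE t (x INTEGER);", NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "exec failed: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}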

I'm having problems with configuring a filter that replicates specific tables only

I am trying to use filters to select specific tables to replicate.
I tried running this with the installer
./tools/tungsten-installer --master-slave -a \
...
--svc-extractor-filters=replicate \
--property=replicator.filter.replicate.do=test,*.foo
and got this exception in trepctl status after the master had failed to install properly:
Plugin class name property is missing or null: key=replicator.filter.replicate
Which file is this properties file? How do I find it? Moreover, when specifying the settings for the filter, how do I know exactly what to put?
I discovered that I am supposed to modify the configuration template file prior to configuration according to Issue 219, but what changes am I supposed to make in tungsten-replicator-2.0.5-diff that will later on be patched into the extraction?
Issue 254 suggests that if you want to apply a filter out of the box, you can use these options with tungsten-installer:
-a --property=replicator.filter.Replicate.ignoreFilter=schema_x.tablex,schema_x.tabley,schema_y.tablez
--svc-thl-filter=Replicate
However, when I try using this with --property=replicator.filter.replicate.do,
the problem is still the same:
pendingExceptionMessage: Plugin class name property is missing or null: key=replicator.filter.replicate
Your assistance will be greatly appreciated.
Rumbi
Update:
Hi
I had a look at this file: /root/tungsten/tungsten-replicator/samples/conf/filters/default/tableignore.tpl. According to this sample, a static-SERVICE_NAME.properties file is supposed to have something like this configured; please confirm whether this is the correct syntax:
replicator.filter.tabledo=com.continuent.tungsten.replicator.filter.JavaScriptFilter
replicator.filter.tabledo.script=${replicator.home.dir}/samples/scripts/javascript-advanced/tabledo.js
replicator.filter.tabledo.tables=foo(database).bar(table)
replicator.stage.thl-to-dbms.filters=tabledo
However, I did not find tabledo.js (or something similar) in the directory where tableignore.js exists. Could I please have the location of this file? If there is an alternative way of specifying --property=replicator.filter.replicate.do=test without the use of this .js file, your suggestions are most welcome.
Download the latest version of Tungsten Replicator; the missing .tpl file was added about a month ago. After installation, the filtered tables should be added to static-SERVICE_NAME.properties under the FILTERS section.
Locate your replicator configuration file in static-YOUR_SERVICE_NAME.properties, e.g.
/opt/continuent/tungsten/tungsten-replicator/conf/static-mysql2vertica.properties
Make sure the individual dbms properties are set, in particular the setting replicator.applier.dbms:
# Batch applier basic configuration information.
replicator.applier.dbms=com.continuent.tungsten.replicator.applier.batch.SimpleBatchApplier
replicator.applier.dbms.url=jdbc:mysql:thin://${replicator.global.db.host}:${replicator.global.db.port}/tungsten_${service.name}?createDB=true
replicator.applier.dbms.driver=org.drizzle.jdbc.DrizzleDriver
replicator.applier.dbms.user=${replicator.global.db.user}
replicator.applier.dbms.password=${replicator.global.db.password}
replicator.applier.dbms.startupScript=${replicator.home.dir}/samples/scripts/batch/mysql-connect.sql
# Timezone and character set.
replicator.applier.dbms.timezone=GMT+0:00
replicator.applier.dbms.charset=UTF-8
# Parameters for loading and merging via stage tables.
replicator.applier.dbms.stageTablePrefix=stage_xxx_
replicator.applier.dbms.stageDirectory=/tmp/staging
replicator.applier.dbms.stageLoadScript=${replicator.home.dir}/samples/scripts/batch/mysql-load.sql
replicator.applier.dbms.stageMergeScript=${replicator.home.dir}/samples/scripts/batch/mysql-merge.sql
replicator.applier.dbms.cleanUpFiles=false
Depending on the database you are replicating to, you may have to omit or modify some of the lines.
For more information see:
https://code.google.com/p/tungsten-replicator/wiki/Replicator_Batch_Loading
I don't know if this problem is still open or not.
I am using version 2.0.6-xxx, and installing the service using the parameters works for me.
I would like to point out that, as the name says, "--svc-extractor-filters" defines an extractor filter, meaning that the parameters will guide the extraction of data on the master server.
If you intend to use it on the slave service, you should use "--svc-applier-filters".
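For example, the slave-side installation would look something like this (a sketch adapted from the installer command quoted above; the elided options are unchanged):
./tools/tungsten-installer --master-slave -a \
...
--svc-applier-filters=replicate \
--property=replicator.filter.replicate.do=test,*.foo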
The parameters
--svc-extractor-filters=replicate \
--property=replicator.filter.replicate.do=test,*.foo
are supposed to create the following filter setup in the properties file:
replicator.filter.replicate=com.continuent.tungsten.replicator.filter.ReplicateFilter
replicator.filter.replicate.ignore=
replicator.filter.replicate.do=test,*.foo
And you should also be able to find the
replicator.stage.binlog-to-q.filters=replicate
parameter set.
If you intend to use this filter in the slave, please find the line with:
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave
and change it to:
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave,replicate
Hope this brief description helps!

How to work with hook_nodeapi after image thumbnail creation with ImageCache

A bit of a followup from a previous question.
As I mentioned in that question, my overall goal is to call a Ruby script after ImageCache does its magic with generating thumbnails and whatnot.
Sebi's suggestion from this question involved using hook_nodeapi.
Sadly, my Drupal knowledge of creating modules and/or hacking into existing modules is pretty limited.
So, for this question:
Should I create my own module or attempt to modify the ImageCache module?
How do I go about getting the generated thumbnail path (from ImageCache) to pass into my Ruby script?
Edit:
I found this question searching through SO...
Is it possible to do something similar in the _imagecache_cache function that would do what I want? I.e.:
function _imagecache_cache($presetname, $path) {
  ...
  // Check if the derivative exists (the file was created between Apache's
  // request handler and reaching this code); otherwise try to create it.
  if (file_exists($dst) || imagecache_build_derivative($preset['actions'], $src, $dst)) {
    imagecache_transfer($dst);
    // call ruby script here
    call('MY RUBY SCRIPT');
  }
}
Don't hack into ImageCache; remember, every time you hack core/contrib modules, god kills a kitten ;)
You should create a module that implements hook_nodeapi. Look at the API documentation to find the correct entry point for your script; nodeapi fires at various stages of the node process, so you have to pick the correct one for you (it should become clear when you check out the link): http://api.drupal.org/api/function/hook_nodeapi
You won't be able to call the function you've shown because it is private, so you'll have to find another route.
You could try to build the path up manually: you should be able to pull out the filename of the uploaded file and then append it to the directory structure. Ugly, but it should work. E.g.
if the uploaded file is called test123.jpg, then the thumbnail should be in /files/imagecache/thumbnails/test123.jpg (or something similar).
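A rough sketch of that idea inside hook_nodeapi (Drupal 6 signature; the module name mymodule, the field name field_image, the preset name thumbnails, and the script path are all assumptions to adjust for your site):
function mymodule_nodeapi(&$node, $op, $a3 = NULL, $a4 = NULL) {
  if ($op == 'insert' || $op == 'update') {
    // Assumes a filefield/imagefield attached as $node->field_image.
    if (!empty($node->field_image[0]['filepath'])) {
      // Build the derivative path manually, as suggested above.
      $thumb = file_directory_path() . '/imagecache/thumbnails/'
             . basename($node->field_image[0]['filepath']);
      // Hand the path to the Ruby script.
      exec('ruby /path/to/script.rb ' . escapeshellarg($thumb));
    }
  }
}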
Hope it helps.
