BaseX GUI: WRITEBACK not being set to true

I am using BaseX 7.9 and want to set the WRITEBACK option to true. So, I execute db:writeback[true] in the Editor window.
The Query Info shows:
Compiling:
- removing unknown element/attribute true
- db:writeback[()]: removing ()
Query:
db:writeback[true]
Optimized Query:
()
Result:
- Hit(s): 0 Items
- Updated: 0 Items
- Printed: 0 Bytes
- Read Locking: local [prueba_08242014_01]
- Write Locking: none
Timing:
- Parsing: 0.93 ms
- Compiling: 0.27 ms
- Evaluating: 0.42 ms
- Printing: 1.24 ms
- Total Time: 2.86 ms
Query plan:
<QueryPlan>
<Empty size="0"/>
</QueryPlan>
Yet when I then execute db:system(), WRITEBACK still appears as false in the result window:
<system>
<localoptions>
...
<writeback>false</writeback>
...
</localoptions>
</system>
(output abbreviated)

What Went Wrong
BaseX automatically registers the db prefix for the http://basex.org/modules/db namespace. Your input is therefore evaluated as an XQuery path expression: it selects all root elements in the db namespace with the local name writeback and then filters them with a predicate, keeping those that have a child element named true. An input document that would match this query is
<writeback xmlns="http://basex.org/modules/db"><true/></writeback>
Modifying Options
To modify options in BaseX, use the SET [option] [value] command in the Command input.
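For example, to enable write-back from the Command input (a sketch using the standard SET command; option names are case-insensitive):
SET WRITEBACK true
Running db:system() again afterwards should now show <writeback>true</writeback> among the local options.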

Related

Why can't I open a *.w file in the AppBuilder?

I have a *.w file referring to two include files ({incl\include_file.i}, {incl\do_something_file.i}). The first include file contains the definition of a RECID variable "recordid":
DEF INPUT-OUTPUT PARAMETER recordid AS RECID.
I am able to compile the *.w file; the listing file looks as follows (just a fragment):
Prompt>findstr "recordid do_something" listing.txt
...
1 x DEF INPUT-OUTPUT PARAMETER recordid AS RECID.
...
1 x 1 {incl\do_something_file.i
2 x 1 INPUT-OUTPUT recordid
So, the compilation works. On top of that, I've checked the pairs of "&ANALYZE-SUSPEND" and "&ANALYZE-RESUME" clauses and everything is fine.
Nevertheless, I can't open the *.w file, as the mentioned RECID variable seems not to be known (errors 201 and 196).
Edit after first comments
This is the exact error message I get while opening the *.w file with the AppBuilder (I'm working with a Dutch version of the tool, hence the Dutch words in between):
---------------------------
Fout [Error]
---------------------------
This file cannot be analyzed by the AppBuilder.
Please check these problems in your file or environment:
** Onbekende veld- of variabelenaam - recordid. (201) [Unknown field or variable name - recordid. (201)]
** .\incl\<do_something_file>.i Compilatiefout op regel 7. (196) [Compilation error on line 7. (196)]
---------------------------
OK
---------------------------
Edit with more information on ANALYZE- clauses
I've launched the following findstr command on my code, with the following results:
Prompt>findstr /I "ANALYZE-RESUME ANALYZE-SUSPEND" <filename>.w
&ANALYZE-SUSPEND _VERSION-NUMBER ... GUI
&ANALYZE-RESUME
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _DEFINITIONS ...
&ANALYZE-RESUME
...
I can confirm that the number of &ANALYZE-SUSPEND clauses equals the number of &ANALYZE-RESUME clauses, that they are in the right sequence (first a SUSPEND, then a RESUME), and that none of them is commented out.
Does anybody have an idea what's going wrong?
The problem was caused by an include sitting outside of a suspend/resume block. To detect such a situation, the following command might be useful:
findstr /I "ANALYZE {incl" <source_file>.w
The result should look like the following:
...
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CONTROL C-Win
&ANALYZE-RESUME
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _MAIN-BLOCK C-Win
{incl\something.i}
{incl\something_else.i}
&ANALYZE-RESUME
...
This illustrates the following rules:
The number of suspends and resumes must be equal.
Every suspend must be closed by a resume.
None of them can be commented out.
It is advised to place includes between a suspend and its resume (see the sketch below).
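As a hypothetical before/after sketch (the block header and include name are placeholders, not taken from the file above), the fix amounts to moving the include inside the pair:
/* problematic: the include sits outside the suspend/resume block */
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _MAIN-BLOCK C-Win
&ANALYZE-RESUME
{incl\something.i}
/* fixed: the include is wrapped by the suspend/resume pair */
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _MAIN-BLOCK C-Win
{incl\something.i}
&ANALYZE-RESUME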

How do you configure a swap partition using cloud-init?

We have an instance that uses cloud-init for initial instantiation, and this instance and cloud-init work great.
We want to add swap to this instance and have correctly configured a suitable disk; however, we cannot figure out how to get cloud-init to initialise the swap disk the way it does with all the other disks on the machine.
Our configuration of our disks, including swap, is as follows:
fs_setup:
  - label: vidi
    device: /dev/xvde
    filesystem: ext4
  - label: swap
    device: /dev/xvdg
    filesystem: swap
mounts:
  - [ /dev/xvde, /var/lib/vidispine, ext4, defaults, 0, 0 ]
  - [ /dev/xvdg, none, swap, sw, 0, 0 ]
This results in an /etc/fstab as follows:
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/xvde /var/lib/vidispine ext4 defaults,comment=cloudconfig 0 0
/dev/xvdg none swap sw,comment=cloudconfig 0 0
The disk /dev/xvde is formatted correctly on startup. The disk /dev/xvdg is ignored.
What additional steps are required for cloud-init to "mkswap" and "swapon" the /dev/xvdg disk?
In response to "What additional steps are required for cloud-init to 'mkswap' and 'swapon' the /dev/xvdg disk?", the short answer is "nothing".
The longer answer is that you need to be running a version of cloud-init with the following bugfix applied:
https://github.com/canonical/cloud-init/pull/143
Which fixes the following error when running mkswap:
mkswap: invalid block count argument: ''
Specifically, Ubuntu Bionic images 20200131 and newer work properly.
Older versions of cloud-init require the following to be added to the runcmd section to work around the bug above:
- mkswap /dev/xvdg
- swapon -a
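Put together, a minimal cloud-config fragment for that workaround might look like this (device name as in the question; an untested sketch):
#cloud-config
mounts:
  - [ /dev/xvdg, none, swap, sw, 0, 0 ]
runcmd:
  - mkswap /dev/xvdg
  - swapon -a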
Solution: swap is space on disk that is used when physical RAM is full, so it can save your system from crashing due to an out-of-memory condition. To set up a swap partition with cloud-init on Ubuntu, you need to mount a dedicated disk partition at boot time via /etc/fstab (the configuration table that controls how file systems are mounted and unmounted), create the swap area (using mkswap) and enable it (using swapon).
First, create and attach an additional disk to your machine; here is an example using Terraform:
resource "aws_instance" "example" {
ami ="<some-ami>"
instance_type = "t3.micro"
tags = {
Name = "example"
}
// root
root_block_device {
volume_size = 50
volume_type = "gp2"
delete_on_termination = true
}
// swap partition
ebs_block_device {
device_name = "/dev/xvdb"
volume_size = 20
volume_type = "gp2"
delete_on_termination = true
}
}
Second, mount the additional disk and enable swap in the cloud-init template file:
mounts:
  - [ /dev/nvme1n1, none, swap, sw, 0, 0 ]
bootcmd:
  - mkswap /dev/nvme1n1
  - swapon /dev/nvme1n1
For verification, type in the terminal:
swapon --show
#output:
NAME TYPE SIZE USED PRIO
/dev/nvme1n1 partition 20G 0B -2

Reading MarkLogic logs from Query Console using XQuery

I want to read MarkLogic logs (e.g. ErrorLog.txt) from Query Console using XQuery. I have the code below, but the problem is that the output is not properly formatted.
xquery version "1.0-ml";
for $hid in xdmp:hosts()
let $h := xdmp:host-name($hid)
return
xdmp:filesystem-file("file://" || $h || "/" ||xdmp:data-directory($hid) ||"/Logs/ErrorLog.txt")
The problem is that the result comes back on a per-host basis: first all log entries of one host, then the entries of host 2 starting again at 00:00:01, and then those of host 3. After running the XQuery the output looks like this:
2019-07-02 00:00:35.668 Info: Merging 2 MB from /cams/q06data02/testQA2/Forests/testQA2-2.2/0002b4cd to /cams/q06data02/testQA2/Forests/testQA2-2.2/0002b4ce, timestamp=15620394303480170
2019-07-02 00:00:36.007 Info: Merged 3 MB at 9 MB/sec to /cams/q06data02/testQA2/Forests/test2-2.2/0002b4ce
2019-07-02 00:00:38.161 Info: Deleted 3 MB at 399 MB/sec /cams/q06data02/test2/Forests/test2-2.2/0002b4cd
Is it possible to include the host name with each log entry and sort the combined output by timestamp, something like:
host 1 : 2019-07-02 00:00:01 : Info Merging ....
host 2 : 2019-07-02 00:00:02 : Info Deleted 3 MB at 399 MB/sec ...
Log files are text files. You can parse and sort them like any other text file.
They can get very large (GB+), though, so simple methods may not be performant.
You also need to be able to parse the text into fields in order to sort by a field.
Since the first 20 bytes of every line are the timestamp, and that timestamp is in ISO format, which sorts lexically in the same order as the date, you can split the files into lines and sort them with basic collation to get time-ordered output across multiple files.
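A minimal sketch along those lines (assuming every line starts with the ISO timestamp, as in the sample above; very large logs may need a more memory-friendly approach):
xquery version "1.0-ml";
for $hid in xdmp:hosts()
let $host := xdmp:host-name($hid)
let $log  := xdmp:filesystem-file("file://" || $host || "/" ||
             xdmp:data-directory($hid) || "/Logs/ErrorLog.txt")
for $line in fn:tokenize($log, "\n")
where fn:string-length($line) gt 0
(: each line starts with an ISO timestamp, so a plain string sort orders by time :)
order by fn:substring($line, 1, 23), $host
return $host || " : " || $line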
In V9 one can use the pair of xdmp:logfile-scan and xdmp:logmessage-parse to efficiently search over log files (remote as well as local) and then transform the results into text, XML (attribute or element format) or JSON.
One can also use the REST API for the same.
see: https://docs.marklogic.com/REST/GET/manage/v2/logs
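As a rough illustration (an untested sketch; endpoint and parameters per the documentation linked above, with host, port and credentials adjusted to your environment):
curl --anyauth --user admin:password "http://localhost:8002/manage/v2/logs?filename=ErrorLog.txt&format=json"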
Once the log files (ideally a selected subset of log messages that is small enough to manage) are converted to a structured format (XML, JSON or text lines), sorting, searching, enriching etc. are easily performed.
For something much better take a look at Ops Director https://docs.marklogic.com/guide/opsdir/intro

Teradata export query using Windows cmd not working

New post:
I already read the tutorial and found this script:
.LOGMECH LDAP;
.LOGON xx.xx.xx.xx/username,password;
.LOGTABLE dbname.LOG_tablename;
DATABASE dbname;
.BEGIN EXPORT SESSIONS 2;
.EXPORT OUTFILE D:\test.txt
MODE RECORD format text;
select a.my_date,b.name2,a.value from dbsource.tablesource a
inner join dbname.ANG_tablename b
on a.name1=b.name2
where value=59000
and a.my_date >= 01/12/2015
;
.END EXPORT;
.LOGOFF;
but it is not working:
D:\>bteq < dodol.txt
BTEQ 15.00.00.00 Tue Jan 05 14:40:52 2016 PID: 4452
+---------+---------+---------+---------+---------+---------+---------+----
.LOGMECH LDAP;
+---------+---------+---------+---------+---------+---------+---------+----
.LOGON xx.xx.xx.xx/username,
*** Logon successfully completed.
*** Teradata Database Release is 13.10.07.12
*** Teradata Database Version is 13.10.07.12
*** Transaction Semantics are BTET.
*** Session Character Set Name is 'ASCII'.
*** Total elapsed time was 4 seconds.
+---------+---------+---------+---------+---------+---------+---------+----
.LOGTABLE dbname.LOG_tablename;
*** Error: Unrecognized command 'LOGTABLE'.
+---------+---------+---------+---------+---------+---------+---------+----
DATABASE dbname;
*** New default database accepted.
*** Total elapsed time was 2 seconds.
+---------+---------+---------+---------+---------+---------+---------+----
.BEGIN EXPORT SESSIONS 2;
*** Error: Unrecognized command 'BEGIN'.
+---------+---------+---------+---------+---------+---------+---------+----
.EXPORT OUTFILE D:\test.txt
*** Warning: No data format given. Assuming REPORT carries over.
*** Error: Expected FILE or DDNAME keyword, not 'OUTFILE'.
+---------+---------+---------+---------+---------+---------+---------+----
MODE RECORD format text;
MODE RECORD format text;
$
*** Failure 3706 Syntax error: expected something between the beginning of
the request and the 'MODE' keyword.
Statement# 2, Info =6
*** Total elapsed time was 1 second.
+---------+---------+---------+---------+---------+---------+---------+----
select a.my_date,b.name2,a.value from dbsource.tablesource a
inner join dbname.ANG_tablename b
on a.name1=b.name2
where value=59000
and a.my_date >= 01/12/2015
;
Old post:
I am new to Teradata. I have found MLOAD to upload big data. Now I have a question: is there an option to use cmd (Win7) to export data from Teradata to xxx.txt?
--- sample
select a.data1,b.data2,a.data3 from room1.REPORT_DAILY a
inner join room1.andaikan_saja b
on a.likeme=b.data2
where revenue=30000
and content_id like '%super%'
and a.trx_date >= 01/12/2015
;
This is my mload script, up.txt:
.LOGMECH LDAP;
.LOGON xx.xx.xx.xx/username,mypassword;
.LOGTABLE mydatabase.LOG_my_table;
SET QUERY_BAND = 'ApplicationName=TD-Subscriber-RechargeLoad; Version=01.00.00.00;' FOR SESSION;
.BEGIN IMPORT MLOAD
TABLES mydatabase.my_table
WORKTABLES mydatabase.WT_my_table
ERRORTABLES mydatabase.ET_my_table mydatabase.UV_my_table;
.LAYOUT LAYOUT_DATA INDICATORS;
.FIELD number * VARCHAR(20);
.DML LABEL DML_INSERT;
INSERT INTO mydatabase.my_table
(
number =:number
);
.IMPORT INFILE "D:\folderdata\data.txt"
LAYOUT LAYOUT_DATA
FORMAT VARTEXT
APPLY DML_INSERT;
.END MLOAD;
.LOGOFF &SYSRC;
I need a solution to export a file to my laptop, just like the script I put under the ---sample title above.
I use that script from teradasql, and I am searching for a cmd script.
If it's just a few MB and an ad-hoc export, you can use SQL Assistant: set the delimiter in Tools-Options-Export/Import, maybe modify the settings in Tools-Options-Export, and then click File-Export Results before submitting your SELECT. (Similar in TD Studio.)
Otherwise the easiest way to extract data in a readable delimited format is TPT, either Export for large amounts of data (GBs) or SQL Selector (MBs). TPT is available for most Operating Systems including Windows.
There's a nice User Guide with lots of example scripts:
Job Example 12: Extracting Rows and Sending Them in Delimited Format
In your case you'll define a generic template file like this:
DEFINE JOB EXPORT_DELIMITED_FILE
DESCRIPTION 'Export rows from a Teradata table to a delimited file'
(
APPLY TO OPERATOR ($FILE_WRITER() ATTR (Format = 'DELIMITED'))
SELECT * FROM OPERATOR ($SELECTOR ATTR (SelectStmt = #ExportSelectStmt));
);
Change $SELECTOR to $EXPORT for larger exports.
Then you just need a job variable file like this:
SourceTdpId = 'your system'
,SourceUserName = 'your user'
,SourceUserPassword = 'your password'
,FileWriterFileName = 'xxx.txt'
,ExportSelectStmt = 'select a.data1,b.data2,a.data3 from room1.REPORT_DAILY a
inner join room1.andaikan_saja b
on a.likeme=b.data2
where revenue=30000
and content_id like ''%super%''
and a.trx_date >= DATE ''2015-12-01'' -- modified this to a valid date literal
;'
The only bad part is that you have to double any single quotes within your select, e.g. '%super%' -> ''%super%''.
Finally you run a cmd:
tbuild -f your_template_file -v your_job_var_file
Depending on the volume of data you wish to extract from Teradata you can use Teradata BTEQ or the Teradata Parallel Transport (TPT) utility with the EXPORT operator from the command line to extract the data.
The TPT utility is the eventual replacement for the legacy Teradata Load and Unload utilities (FastLoad, MultiLoad, FastExport, and TPump) and provides an easier mechanism to produce delimited flat files over FastExport. TPT is fairly flexible and effective for exporting large volumes of data to channel or network attached clients.
Teradata BTEQ can perform lightweight load and unload functions. The BTEQ manual is pretty good at providing you an overview of how to use the various commands to produce a semi-structured report or data extract. It doesn't have a simple command to produce a delimited flat file. If you review the manual's overview of the EXPORT command you should get a good feel for how BTEQ behaves when working with channel or network attached clients.
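Note that .LOGTABLE, .BEGIN EXPORT and .EXPORT OUTFILE in the script above are FastExport (fexp) commands, which is why BTEQ reports them as unrecognized. A BTEQ-only export of the same query might look roughly like this (an untested sketch; REPORT mode writes a formatted report rather than a delimited file):
.LOGMECH LDAP;
.LOGON xx.xx.xx.xx/username,password;
.EXPORT REPORT FILE = D:\test.txt;
SELECT a.my_date, b.name2, a.value
FROM dbsource.tablesource a
INNER JOIN dbname.ANG_tablename b
  ON a.name1 = b.name2
WHERE a.value = 59000
  AND a.my_date >= DATE '2015-12-01';
.EXPORT RESET;
.LOGOFF;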

BaseX: Slow XQuery

I've got a BaseX XML database with ~20 XML files. These files differ in size and structure. The biggest file is 524 MB; it consists of a parent ARTICLE tag with 267685 ART subtags.
This is my XQuery: "/ARTICLE/ART[PRDNO=12345]" (pretty straightforward; proper namespaces omitted for clarity). PRDNO is a foreign key to the PRODUCT/PRD XML structure; there are multiple (on average ~10) products per article.
Everything works as it is supposed to, but this query is quite slow - it takes approximately 1s to execute. Similar queries for other objects in the database (where the data amount is smaller) are much faster.
What can I do to optimize this query?
I ran "optimize" (which took some minutes), I ensured the TEXT index is in place.
This is the output of "info database":
> info database
Database Properties
Name: hospindex
Size: 1740 MB
Nodes: 69360063
Documents: 22
Binaries: 0
Timestamp: 2014-09-03-09-34-07
Resource Properties
Timestamp: 2014-09-03-09-21-14
Encoding: UTF-8
CHOP: true
Indexes
Up-to-date: true
TEXTINDEX: true
ATTRINDEX: true
FTINDEX: false
LANGUAGE: English
STEMMING: false
CASESENS: false
DIACRITICS: false
STOPWORDS:
UPDINDEX: false
MAXCATS: 100
MAXLEN: 96
EDIT: This is the query execution plan:
Compiling:
- adding text() step
Query:
/*:ARTICLE/*:ART[*:PRDNO=1005935]
Optimized Query:
(db:open-pre("hospindex",0), db:open-pre("hospindex",32884731), ...)/*:ARTICLE/*:ART[(*:PRDNO/text() = 1005935)]
Result:
- Hit(s): 1 Item
- Updated: 0 Items
- Printed: 2078 Bytes
- Read Locking: local [hospindex]
- Write Locking: none
Timing:
- Parsing: 1.12 ms
- Compiling: 0.46 ms
- Evaluating: 1684.35 ms
- Printing: 0.35 ms
- Total Time: 1686.3 ms
Query plan:
<QueryPlan>
<IterPath>
<DBNodeSeq size="22">
<DBNode name="hospindex" pre="0"/>
<DBNode name="hospindex" pre="32884731"/>
<DBNode name="hospindex" pre="33685448"/>
<DBNode name="hospindex" pre="38260847"/>
<DBNode name="hospindex" pre="38358876"/>
</DBNodeSeq>
<IterStep axis="child" test="*:ARTICLE"/>
<IterStep axis="child" test="*:ART">
<CmpG op="=">
<CachedPath>
<IterStep axis="child" test="*:PRDNO"/>
<IterStep axis="child" test="text()"/>
</CachedPath>
<Int value="1005935" type="xs:integer"/>
</CmpG>
</IterStep>
</IterPath>
</QueryPlan>
Your query will be evaluated much faster when using quotes around your search value:
/ARTICLE/ART[PRDNO = "12345"]
The reason is that the current version of BaseX does not provide a numeric value index (it may be included in BaseX 8.0), so an integer comparison cannot be rewritten to use an index, while a string comparison can be answered from the existing text index.
You get more insight into the query compilation steps by turning on the QUERYINFO option.
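For example (a sketch using standard BaseX commands), from the Command input:
SET QUERYINFO true
XQUERY /ARTICLE/ART[PRDNO = "12345"]
With the quoted value, the compilation info should indicate that the text index is being applied, and the query plan should show an index access instead of the full path scan seen above.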
