This is my XML:
<?xml version="1.0" encoding="UTF-8"?>
<ns1:Message xmlns:ns1="http://API.JOEJANE.Envelope">
<ns1:MessageHeader>
<ns1:MessageId>ce82fe4f-c843-57a6-14a1-15d79773b638</ns1:MessageId>
<ns1:From>ABC</ns1:From>
<ns1:To>JOEJANE</ns1:To>
<ns1:PlantId>7301</ns1:PlantId>
</ns1:MessageHeader>
</ns1:Message>
This is my code that tries to read it into a dataset:
def temp-table ttMsgHdr no-undo serialize-name "ns1:MessageHeader"
field MsgId as char serialize-name "ns1:MessageId"
field MsgFrom as char serialize-name "ns1:From"
field MsgTo as char serialize-name "ns1:To"
field PlantId as char serialize-name "ns1:PlantId".
def dataset dsONE xml-node-name "ns1:Message" for
ttMsgHdr.
def var dXml as longchar.
dataset dsONE:read-xml("longchar",dXml,"empty",?,?,?,?).
find first ttMsgHdr no-error.
And this is the error I get:
DATASET name 'ns1:Message' in namespace '' not found in XML Document.
I also tried different namespace variations, like this:
def dataset dsONE xml-node-name "" for
ttMsgHdr.
or
def dataset dsONE xml-node-name "ns1:Message xmlns:ns1=""http://API.JOEJANE.Envelope""" for
ttMsgHdr.
or
def dataset dsONE xml-node-name "ns1:Message xmlns:ns1" for
ttMsgHdr.
But I am still getting the same error.
Please help, thank you.
Try setting the namespace URI and prefix in the dataset definition, and then drop the ns1 prefix from the serialize-name values:
def temp-table ttMsgHdr no-undo serialize-name "MessageHeader"
field MsgId as char serialize-name "MessageId"
field MsgFrom as char serialize-name "From"
field MsgTo as char serialize-name "To"
field PlantId as char serialize-name "PlantId".
def dataset dsONE namespace-uri "http://API.JOEJANE.Envelope"
namespace-prefix "ns1" xml-node-name "Message" for ttMsgHdr.
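With the prefix and URI on the dataset definition, the READ-XML call from the question should then match the Message element. A minimal sketch (assuming dXml already holds the XML document shown above):

```abl
def var dXml as longchar no-undo.
/* ... dXml is loaded with the XML document from the question ... */
dataset dsONE:read-xml("longchar", dXml, "empty", ?, ?, ?, ?).
find first ttMsgHdr no-error.
if available ttMsgHdr then
    message ttMsgHdr.MsgFrom "->" ttMsgHdr.MsgTo view-as alert-box.
```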
ns1 is just a namespace prefix; since everything is in the same namespace and there are no namespace conflicts, you can simply omit it:
def temp-table tt no-undo serialize-name 'MessageHeader'
field cId as char serialize-name 'MessageId'
field cFrom as char serialize-name 'From'
field cTo as char serialize-name 'To'
field cPlantId as char serialize-name 'PlantId'
.
def dataset ds serialize-name 'Message'
for tt
.
dataset ds:read-xml( 'file', 'my.xml', ?, ?, ? ).
find first tt.
message tt.cID.
Watch it run in AblDojo.
Related
I am using the following dynamic query to fetch data from a table, but I am getting the compilation error "Phrase or option conflicts with previous phrase or option. (277)". I am not sure where I am making a mistake or how to fix it. Please help me modify the example query below.
define variable hbuffer as handle no-undo.
define variable hQuery as handle no-undo.
define variable cQuery as character no-undo.
define temp-table tt_table no-undo
field tt_week1 as character label "Week1"
.
create buffer hbuffer for table "<table>".
cQuery = "for each <table> no-lock ".
create query hQuery.
hQuery:set-buffers(hbuffer).
cQuery = cQuery + ":".
hQuery:query-prepare(cQuery).
hQuery:query-open().
if hQuery:query-open() then
do:
do while hQuery:get-next():
create tt_table.
assign tt_week1 = hbuffer::qty[1] /*field name qty data type is deci-10[52].*/
.
end.
end.
for each tt_table :
disp tt_week1.
end.
As pointed out by Mike, your attempt to reference the extent is what throws the error; a dynamic extent reference uses round parentheses (and works fine with the shorthand syntax):
hbuffer::qty(1)
Additionally:
you do not need to terminate your query with a :
get-next() defaults to no-lock
you are opening your query twice
// some demo data
define temp-table db no-undo
field qty as decimal extent 7
.
create db. db.qty[1] = 1.
create db. db.qty[1] = 2.
// the question
define variable hb as handle no-undo.
define variable hq as handle no-undo.
define variable cquery as character no-undo.
define temp-table tt no-undo
field week1 as character label 'Week1'
.
create buffer hb for table 'db'.
cquery = substitute( 'for each &1', hb:name ).
create query hq.
hq:set-buffers( hb ).
if hq:query-prepare( cquery ) and hq:query-open() then do:
do while hq:get-next():
create tt.
tt.week1 = hb::qty(1). // <-- round parentheses
end.
end.
for each tt:
display tt.week1.
end.
https://abldojo.services.progress.com/?shareId=626aff353fb02369b2545434
The compile error should tell you the line the error is coming from.
In the code snippet, you haven't defined a tt_week field anywhere.
In general, if you want to assign a (temp-)table field, you should use the table.field notation; the AVM can often figure out your intent, but not being specific is error-prone.
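For example (a tiny sketch with hypothetical names), being explicit leaves the AVM nothing to guess:

```abl
define temp-table tt_table no-undo
    field tt_week1 as character.

create tt_table.
assign tt_table.tt_week1 = "w1". /* explicit table.field, unambiguous */
```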
The problem is with the shorthand syntax here:
hbuffer::qty[1]
If you replace this with:
hbuffer:buffer-field ("qty"):BUFFER-VALUE (1)
it'll work (up to the point that Peter made about the undefined field tt_week1). I did not find any reference saying whether or not the shorthand syntax should work with EXTENT fields; it may be worth checking that with Progress tech support.
So this will get you further:
assign tt_data = hbuffer:buffer-field ("qty"):BUFFER-VALUE (1) /*field name qty data type is deci-10[52].*/
I am new to Progress 4GL. I have a CSV file with data in its first two rows: the first row is a list of users, and the second row is the users to be deactivated.
In my program, if the flag is set to yes, the program should read the second row of the CSV file and store it in a temp-table. Please take a look at what I have tried; it is not restricting itself to the second row of the CSV, it is taking in all the data, including the first row.
I would really appreciate it if you could tell me how I can move to the second row in the CSV file and parse its data using Progress 4GL.
DEFINE TEMP-TABLE tt_sec7Role
FIELD ttsec_role AS CHARACTER.
DEFINE VARIABLE v_dataline AS CHARACTER NO-UNDO.
DEFINE VARIABLE v_count AS INTEGER NO-UNDO.
EMPTY TEMP-TABLE tt_sec7Role.
input from "C:\Users\ast\Desktop\New folder\cit.csv".
repeat:
import unformatted v_dataline.
if v_dataline <> '' then
do:
do v_count = 1 to NUM-ENTRIES(v_dataline,','):
create tt_sec7Role.
ttsec_role = entry(v_count,v_dataline,',').
end.
end. /* if v_dataline <> '' then */
end. /*repeat*/
input close.
v_count = 0.
FOR EACH tt_sec7Role:
v_count = v_count + 1.
END.
MESSAGE v_count.
If you simply need to count rows, just add an integer counter and increment it after each IMPORT statement:
define variable counter as integer no-undo.
input from "C:\Users\ast\Desktop\New folder\cit.csv".
repeat:
import unformatted v_dataline.
counter = counter + 1.
if v_dataline <> '' then
do:
//If you only want to do this on line 2
if counter = 2 then do v_count = 1 to NUM-ENTRIES(v_dataline,','):
create tt_sec7Role.
ttsec_role = entry(v_count,v_dataline,',').
end.
end. /* if v_dataline <> '' then */
end. /*repeat*/
input close.
Once you determine that you should read that second row, create a record in your temp-table and do another IMPORT. Copy that part of the data to your temp-table, and at the end just cycle through the temp-table and export the fields with a comma as the delimiter.
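That final export pass could be sketched like this (assuming the temp-table from the question and a hypothetical output file name):

```abl
output to "C:\Users\ast\Desktop\New folder\out.csv".
for each tt_sec7Role:
    export delimiter "," ttsec_role.
end.
output close.
```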
How do I code this properly to work in Oracle SQL:
update table_name
set field_name =
replace(field_name, x'BF', x'00')
where condition expression ;
I am not sure how to code the replacement of all occurrences of hex 'BF' with the null character hex '00' in the data field field_name.
You can use the unistr() function to provide a Unicode character, e.g.:
update table_name
set field_name = replace(field_name, unistr('\00bf'))
where condition expression ;
which would remove the ¿ character completely; or to replace it with a null character:
set field_name = replace(field_name, unistr('\00bf'), unistr('\0000'))
though I suspect sticking a null in there will confuse things even more later, when some other system tries to read that text and stops at the null.
Quick demo:
with t (str) as (
select 'A ¿ char' from dual
)
select str,
replace(str, unistr('\00bf')) as removed,
replace(str, unistr('\00bf'), unistr('\0000')) as replaced,
dump(replace(str, unistr('\00bf')), 16) as removed_hex,
dump(replace(str, unistr('\00bf'), unistr('\0000')), 16) as replaced_hex
from t;
STR REMOVED REPLACED REMOVED_HEX REPLACED_HEX
--------- --------- --------- ----------------------------------- -----------------------------------
A ¿ char A char A char Typ=1 Len=7: 41,20,20,63,68,61,72 Typ=1 Len=8: 41,20,0,20,63,68,61,72
(Just as an example of the problems you'll have - because of the null I couldn't copy and paste that from SQL Developer, and had to switch to SQL*Plus...)
The first dump shows the two spaces (hex 20) next to each other; the second shows a null character between them.
I can create a table like this:
CREATE TABLE mytable
(
name text,
surname varchar
)
I can also create a table like this:
CREATE TABLE mytable2
(
name BLABLA,
surname mygrandpaType
)
I know there is no difference between text and varchar in SQLite. But even with the second table I can run insert and select queries against it and everything works fine.
So what is the point of the datatype approach in SQLite at all?
Any column, with the exception of an INTEGER PRIMARY KEY, can hold values of any type. The specified datatype is just a hint, and this behaviour is called type affinity.
Determining type affinity:
If the declared type contains the string "INT" then it is assigned INTEGER affinity.
If the declared type of the column contains any of the strings "CHAR", "CLOB", or "TEXT" then that column has TEXT affinity. Notice that the type VARCHAR contains the string "CHAR" and is thus assigned TEXT affinity.
If the declared type for a column contains the string "BLOB" or if no type is specified then the column has affinity NONE.
If the declared type for a column contains any of the strings "REAL", "FLOA", or "DOUB" then the column has REAL affinity.
Otherwise, the affinity is NUMERIC.
Therefore your BLABLA column gets the NUMERIC affinity.
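To see the affinity rules in action, here is a small demonstration (a sketch using Python's built-in sqlite3 module rather than the SQLite shell): under NUMERIC affinity, inserted text that looks like an integer is stored as an integer, while non-numeric text is stored as text.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Neither declared type contains INT/CHAR/CLOB/TEXT/BLOB/REAL/FLOA/DOUB,
# so both columns get NUMERIC affinity.
con.execute("CREATE TABLE mytable2 (name BLABLA, surname mygrandpaType)")
con.execute("INSERT INTO mytable2 VALUES ('123', 'abc')")

row = con.execute(
    "SELECT name, typeof(name), surname, typeof(surname) FROM mytable2"
).fetchone()
print(row)  # '123' was coerced to the integer 123; 'abc' stays text
```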
Further reading: https://www.sqlite.org/datatype3.html