I have a pipe-delimited .txt flat file that I'm using to bulk insert into SQL. Everything works well for a straight one-to-one mapping. However, the flat file now contains 2 new fields that can repeat an unknown number of times.
Is there a way to create a single flat file schema where I can have an unbounded child within the main unbounded child? I think where I'm getting tripped up is how to make the ChildRoot listed below just a "group heading" the way Root is, where ChildRoot doesn't correspond to a location in the flat file. How do I insert something like that?
Schema:
-Roots
--Root (unbounded)
---ChildID
---ChildName
Roots gets a direct link to my SQL stored procedure, which bulk inserts as many "Root" rows as come in.
Now I have:
Schema:
-Roots
--Root (unbounded)
---Child
---ChildName
---ChildRoot (unbounded)
----ChildRootID
----ChildRootName
EDIT: I should also add that ChildRootID and ChildRootName can repeat an indefinite number of times, until the row delimiter (a carriage return) is found.
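For illustration (values are hypothetical), a single row might then look like this, with the ChildRootID/ChildRootName pair repeating until the carriage return:

Child1|FirstChild|CR1|RootOne|CR2|RootTwo|CR3|RootThree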
I want to use READ_NOS to read a file from S3 and return all of its rows, but it only returns some of them.
I created a foreign table for a Parquet file, but this is the result: https://imgur.com/a/E0KLNJT
Using Studio, I still get the same result: https://imgur.com/a/d8UP9uH
How do I get all the rows returned?
The first SQL (COUNT(*)) shows the number of records, the second one the number of Parquet files. So on average each file holds 6,470 records.
There is a Teradata Orange Book dedicated to the use of NOS, with some background as well as example SQL. Chapter 5 of it is focused on Parquet files.
It looks like RETURNTYPE ('NOSREAD_PARQUET_SCHEMA') is important in the combination of READ_NOS and Parquet.
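As a rough sketch (the bucket path is a placeholder, and the exact syntax can vary between Teradata releases), schema discovery for a Parquet file might look like:

SELECT *
FROM READ_NOS (
  USING
    LOCATION('/s3/your-bucket.s3.amazonaws.com/your/path/')
    RETURNTYPE('NOSREAD_PARQUET_SCHEMA')
) AS d;

The returned schema can then be used to define the columns of the foreign table that actually reads the data.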
I use Neo4J Community Edition version 3.2.1.
Consider this CSV-file with edges:
node1,relation,node2,type
1,RELATED_TO,2,Married
2,RELATED_TO,1,Married
1,RELATED_TO,3,Child
2,RELATED_TO,3,Child
3,RELATED_TO,4,Sibling
3,RELATED_TO,5,Sibling
4,RELATED_TO,5,Sibling
I have already created the nodes for this. I then run the following CSV load command:
load csv with headers from
"file:///test_dataset/edges.csv" as line
match (person1:Person {pid:line.node1}),
(person2:Person {pid:line.node2})
create (person1)-[:line.relation {type:line.type}]->(person2)
But this returns the following error:
Invalid input '.': expected an identifier character, whitespace, '|', a length specification, a property map or ']' (line 5, column 24 (offset: 167))
"create (person1)-[:line.relation {type:line.type}]->(person2)"
It seems that I cannot use "line.relation" like this. How can I use the relation from the CSV file (second column) with LOAD CSV?
I have seen this answer, but I would like to do this using the native query language.
To verify that the rest of the query is correct I have managed to create the edges correctly by hardcoding the relation like this:
load csv with headers from
"file:///test_dataset/edges.csv" as line
match (person1:Person {pid:line.node1}),
(person2:Person {pid:line.node2})
create (person1)-[:RELATED_TO {type:line.type}]->(person2)
Natively, it's not possible to create a node with a dynamic label or a relationship with a dynamic type.
That's why there is a procedure for it.
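As a sketch, assuming the APOC plugin is installed: apoc.create.relationship takes the start node, the relationship type as a string, a property map, and the end node, so the dynamic version of your query would look roughly like this:

LOAD CSV WITH HEADERS FROM "file:///test_dataset/edges.csv" AS line
MATCH (person1:Person {pid:line.node1})
MATCH (person2:Person {pid:line.node2})
CALL apoc.create.relationship(person1, line.relation, {type:line.type}, person2) YIELD rel
RETURN count(rel)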
If you want to do it natively and you know all the distinct values of your relation column, you can create many Cypher scripts like this (one per value):
LOAD CSV WITH HEADERS FROM "file:///test_dataset/edges.csv" AS line
WITH line WHERE line.relation ='RELATED_TO'
MATCH (person1:Person {pid:line.node1})
MATCH (person2:Person {pid:line.node2})
CREATE (person1)-[:RELATED_TO {type:line.type}]->(person2)
I'm using the .import command to insert a large CSV into an SQLite db. Unfortunately, some of the lines contain the delimiter inside a field's value. For example, each line is in the form:
id, title, first name, last name, location
but some lines have values like:
1, Mr, Bob, Saget, Sydney, Australia
and the comma in the location field causes an "expected 5 columns but found 6" error. Luckily, it's not important to me to insert every line, so I'd like to just skip any lines that raise this error. Is this possible?
Parsing the csv with regex is a last resort as the file is very large and could take minutes every time I need to import it.
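For what it's worth, a minimal pre-filter sketch in Python (file names are hypothetical, and it assumes plain comma-separated lines with no quoting) that drops the bad lines before .import ever sees them:

# Keep only lines with exactly 5 fields (i.e. 4 commas), since values aren't quoted.
with open("contacts.csv") as src, open("contacts_clean.csv", "w") as dst:
    for line in src:
        if line.count(",") == 4:
            dst.write(line)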
I am trying to convert data from Act 2000 to a MySQL database. I have successfully imported the DBF files into individual MySQL tables. However, I am having issues with the *.BLB file, which seems to be a non-standard memo file.
The DBF files identify themselves as dBase III Plus, no-memo format. There is a single *.BLB file, which is a memo file shared by multiple DBFs for BLOB data.
If you read this document (http://cicorp.com/act/sdk/ACT6-SDK-ChapterA.htm#_Toc483994053), you can see that the REGARDING column is a 6-character one. Its description: "This 6-byte field is supplied by the system and contains a reference to a field in the Binary Large Object (BLOB) Database."
Now, upon opening the *.BLB, I can see that the block size is 64 bytes. All the blocks of text are NULL-padded out to that size.
Where I am stumbling is converting the values stored in the REGARDING column to block locations in the BLB file. My assumption is that the 6-character field is an offset.
For example, one value for REGARDING is (ignore the square brackets): [ ",J$]
In my Googling, I found this: http://ulisse.elettra.trieste.it/services/doc/dbase/DBFstruct.htm#C1.5
It explains that in memo fields (in normal DBF files, at least) the space value is ignored (i.e. it pads out the column).
Therefore, if I'm correct (again, square brackets): [",J$] should be the offset into my BLB file. Luckily, I've still got access to the original ACT2000 software, so I can compare the full text between the program, MySQL, and the BLB file.
Using my example value, I know that the DB row with a REGARDING value of [ ",J$] corresponds to a 1024-byte offset (or 16 blocks, assuming my guess of a 64-byte block size is right).
I've tried reading some Python code from open-source projects that read DBF files, but I'm in over my head.
I think what I need to do is unpack the characters to binary, but I am not sure.
How can I find the 64-byte-block-based spot to read from, based on what's found in the DBF files?
EDIT by Jerry Dodge
I've attempted to reverse-engineer the strings in this field to hexadecimal values, and then to an integer value using StrToInt64, but the result still does not match up with the blob file. I've also tried multiplying this integer value by 64 and not multiplying, but the result keeps winding up outside of the size of the blob file, not actually finding any data.
For example, a value of ___/BD (_ = space) translates to $2F4244 hexadecimal, which in turn is the integer value 3097156, but does not correspond to any relevant portion of data in the blob file, even when multiplied or divided by 64.
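For what it's worth, here is a small exploratory Python sketch (the block size guess, file name, and sample value are all assumptions) that tries a few integer interpretations of the 6-byte field and dumps the bytes at each candidate offset, so they can be compared against what ACT2000 shows:

BLOCK_SIZE = 64  # guessed from the NULL padding observed in the .BLB

def candidate_offsets(regarding):
    """Yield (label, byte_offset) pairs for a few plausible decodings."""
    variants = {
        "raw": regarding,
        "trimmed": regarding.replace(b" ", b""),  # spaces may just be padding
    }
    for label, raw in variants.items():
        for endian in ("little", "big"):
            n = int.from_bytes(raw, endian)
            yield label + "/" + endian, n                       # value as a byte offset
            yield label + "/" + endian + "*64", n * BLOCK_SIZE  # value as a block index

def dump_candidates(blb_path, regarding, length=128):
    with open(blb_path, "rb") as f:
        size = f.seek(0, 2)  # seek to end to get the file size
        for label, offset in candidate_offsets(regarding):
            if 0 <= offset < size:
                f.seek(offset)
                print(label, offset, f.read(length))

dump_candidates("act.blb", b' ",J$ ')  # hypothetical path and sample value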
According to the SDK you linked, here is what happens, as I understand it:
There is a TYPE field (right behind REGARDING) that encodes what REGARDING is used for (see the second table of the linked chapter). So I'd assume that if type = 6 (meeting not held), REGARDING is either irrelevant or only contains a meeting ID reference from some other table. On that line of thought, I would only expect REGARDING to be a BLB offset if type = 101 (or possibly 100). I'd also not abandon the thought that, in those relevant cases, REGARDING might be a concatenation of a BLB file index and an offset (because there is a mention that each file must not be longer than 30K chars, and I really expect to be able to store much more data even in one table).
I have a flat file with some repeating sections in it, and I'm confused about how to create the schema via the BizTalk flat file mapping wizard. The file looks like this:
001,bunch of data
002,bunch of data
006,bunch of data
006A,bunch of data
006B,bunch of data
006B,bunch of data
006,bunch of data
006A,bunch of data
006B,bunch of data
As you can see, the 006* records can repeat. I'm going to want to wind up with XML that looks like this:
<001Stuff>...</001Stuff>
<002Stuff>...</002Stuff>
<006Loop>
<006Stuff>...</006Stuff>
<006AStuff>...</006AStuff>
<006BStuff>...</006BStuff>
<006BStuff>...</006BStuff>
</006Loop>
<006Loop>
<006Stuff>...</006Stuff>
<006AStuff>...</006AStuff>
<006BStuff>...</006BStuff>
</006Loop>
Obviously I can't just set the first group of 006* records to "Repeating record" and Ignore the second set. I'm used to dealing with single repeating rows via the wizard (i.e. another 006 row right after the first one) and not nested things like this - any suggestions on how to proceed? Thanks!
Working with the Flat File Schema Wizard is quite hard and there is only so much it can help you with. I always seem to have to tweak its output a little bit.
In order to make things a little easier, I suggest you restrict your sample document to a single occurrence of the whole <006> structure. That way, you will not have to set many lines to Ignored in the Flat File Schema Wizard:
001,bunch of data
002,bunch of data
006,bunch of data
006A,bunch of data
006B,bunch of data
006B,bunch of data
Next, each repeating structure should be wrapped inside a corresponding Repeating Record in the definition of your Xml Schema.
Please note that you can always run the Flat File Schema Wizard recursively on nested structures for more fine-grained control. So I would suggest, first, running the wizard with an all-encompassing repeating <006> structure, like so:
Then, you can right-click on the structure and provide a more detailed definition of the nested child structures, highlighting only a subset of the sample contents, like so:
Then, the most important part: you need to tweak the Child Order property to Conditional Default for both repeating structures, because there is only one empty line at the end of your document file and the Wizard cannot help you out with this situation.
For reference, your resulting structure should look like so:
With the following settings:
BunchOfStuff (Root) : Delimited, 0x0D 0x0A, Suffix.
_001Stuff : Delimited, ,, Prefix, Tag Identifier 001.
_002Stuff : Delimited, ,, Prefix, Tag Identifier 002.
_006Loop : Delimited, 0x0D 0x0A, Conditional Default.
_006Stuff : Delimited, ,, Prefix, Tag Identifier 006.
_006AStuff : Delimited, ,, Prefix, Tag Identifier 006A.
_006BLoop : Delimited, 0x0D 0x0A, Conditional Default.
_006BStuff : Delimited, ,, Prefix, Tag Identifier 006B.
Hope this helps.
Treat everything from the start of the first 006 record to the start of the second 006 record as one record. When you define the 006 record, set it up as a repeating record as well. This should create a node for each 006 group and nodes for each 006A/006B record under it.
That is what I would try.
Here is my output after 2 minutes of work. Except for the node/element names, I think it is what you want. You would still have to create separate elements for each of the fields in your data.
<_x0030_01 xmlns="">001,bunch of data</_x0030_01>
<_x0030_02 xmlns="">002,bunch of data</_x0030_02>
<_x0030_06 xmlns="">
  <_x0030_06_Child1>bunch of data</_x0030_06_Child1>
  <_x0030_06_Child2>
    <_x0030_06_Child2_Child1>A,bunch of data</_x0030_06_Child2_Child1>
  </_x0030_06_Child2>
  <_x0030_06_Child2>
    <_x0030_06_Child2_Child1>B,bunch of data</_x0030_06_Child2_Child1>
  </_x0030_06_Child2>
  <_x0030_06_Child2>
    <_x0030_06_Child2_Child1>B,bunch of data</_x0030_06_Child2_Child1>
  </_x0030_06_Child2>
</_x0030_06>
<_x0030_06 xmlns="">
  <_x0030_06_Child1>bunch of data</_x0030_06_Child1>
  <_x0030_06_Child2>
    <_x0030_06_Child2_Child1>A,bunch of data</_x0030_06_Child2_Child1>
  </_x0030_06_Child2>
  <_x0030_06_Child2>
    <_x0030_06_Child2_Child1>B,bunch of data</_x0030_06_Child2_Child1>
  </_x0030_06_Child2>
</_x0030_06>