SqlTextInfo gets truncated while copying to Excel - Teradata

I need to get the column 'sqltextinfo' from a log table, but the text is not placed correctly in a cell when I copy it to Excel.
I tried the following expression to get this, but it throws error -9134: result exceeded maximum length.
oreplace(oreplace(otranslate(sqltextinfo, ', ', ''), chr(10), ''), chr(13), '') sqltextinfo
Is there any way to make the above query work without using substr?
Please help.

I use this to replace any whitespace with a single ' ':
Cast( RegExp_Replace(Cast(SqlTextInfo AS CLOB(31000)), '\s+', ' ',1,0,'c') AS VARCHAR(31000))
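For context, here is a minimal sketch of how that expression might be used end to end; the DBQL table name dbc.QryLogSQLV is an assumption, so point it at wherever your logging stores SqlTextInfo:
-- collapse every run of whitespace (including CR/LF) into a single space
-- so each statement lands in one Excel cell
SELECT QueryID,
       Cast(RegExp_Replace(Cast(SqlTextInfo AS CLOB(31000)), '\s+', ' ', 1, 0, 'c') AS VARCHAR(31000)) AS SqlTextInfo
FROM dbc.QryLogSQLV;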

Related

Line feed leaving extra space at end of line in XQuery

I have an application where I create a .csv file and then create a .xlsx file from the .csv file. The problem I am having now is that the rows in the .csv have a trailing space at the end. My database is in MarkLogic and I am using a custom REST endpoint to create this. The code where I create the .csv file is:
document{($header-row, for $each in $data-rows return fn:concat('&#13;&#10;', $each))}
I pass in the header row then pass each data row with a carriage return and a line feed at the beginning. I do this to put the cr/lf at the end of the header and then each of the other lines except for the last one. I know my data rows do not have the space at the end. I have tried to normalize the space around $each and the extra space is still present. It seems to be tied to the line feed. Any suggestions on getting rid of that space?
Example of current output:
Name,Email,Phone
Bill,bill@company.com,999-999-9999
Steve,steve@company.com,999-999-9999
Bob,bob@company.com,999-999-9999
. . .
You are creating a document whose content is a text() node built from a sequence of strings. When those strings are combined into a single text() node, they are separated by a space. For instance, this:
text{("a","b","c")}
will produce this:
"a b c"
Change your code to use string-join() to join your $header-row and the sequence of $data-rows string values with the &#13;&#10; separator:
document{ string-join(($header-row, $data-rows), "&#13;&#10;") }

Error loading data into Neo4j with a CASE statement in a Cypher query

I am trying to load CSV data with a CASE statement in the query, but the following error appears.
Cypher:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///test.csv' as line
MATCH(a:test_t{tid:line.pid})
CASE
WHEN line.key !='NA' THEN
WITH split(line.key,",") as name
UNWIND name as x
MERGE(k:test_key{key_term:toLower(x)})
MERGE(a)-[:contains]->(k)
END
Error
Neo.ClientError.Statement.SyntaxError: Invalid input 'S': expected 'l/L' (line 5, column 3 (offset: 137))
"CASE"
Can anyone help me?
The CASE clause does not support embedding other Cypher clauses (but it can invoke functions). In fact, a CASE clause is not actually needed for your use case.
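For illustration, a legal CASE can only choose a value (possibly computed by functions), as in this runnable fragment with a made-up key:
// CASE selects a value; it cannot execute clauses like MERGE or UNWIND
WITH 'Foo,Bar' AS key
RETURN CASE WHEN key <> 'NA' THEN split(key, ',') ELSE [] END AS names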
This query should work (the :auto at the beginning is needed in neo4j 4.0+):
:auto USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///test.csv' as line FIELDTERMINATOR ';'
WITH line
WHERE line.key <> 'NA'
MATCH (a:test_t {tid: line.pid})
UNWIND split(line.key, ',') as x
MERGE (k:test_key {key_term: toLower(x)})
MERGE (a)-[:contains]->(k)
This query filters out all unwanted lines as soon as they are obtained from the file. Reducing the number of rows of data being worked on as early as possible is good practice.
Also, you have a second issue. Your data file cannot use the comma as both the (default) field terminator AND as the delimiter between your x values.
To resolve this ambiguity, the above query chose to use the FIELDTERMINATOR ';' option to specify that the ";" character will be used as the field terminator. A sample data file would look like this:
pid;key
123;NA
234;Foo,Bar
345;Bar,Baz
456;NA
567;Baz
You are using the CASE incorrectly. You cannot have update clauses inside of a CASE statement. Instead, you can use a WHERE clause to filter the rows of the file. For instance, adding WHERE line.key <> 'NA' while processing the file, before you move on to the update, will work. Something like this should fit the bill.
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///test.csv' as line
MATCH (a:test_t {tid: line.pid})
WITH line
WHERE line.key <> 'NA'
WITH split(line.key, ",") as name
UNWIND name as x
MERGE (k:test_key {key_term: toLower(x)})
MERGE (a)-[:contains]->(k)
It looks like, from your logic, you could even move the test above the MATCH. So this might be better (fewer unnecessary matches).
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///test.csv' as line
WITH line
WHERE line.key <> 'NA'
MATCH (a:test_t {tid: line.pid})
WITH split(line.key, ",") as name
UNWIND name as x
MERGE (k:test_key {key_term: toLower(x)})
MERGE (a)-[:contains]->(k)

Want a command line to export the data exactly as it looks when exported from TOAD

I have data with some thousands of records, and each record has multiple columns. One of the columns contains data with a punctuation mark "," in it.
When I tried to spool that data into a csv file and split the text into columns using the comma as the delimiter, the data came out wrong because the data itself has a comma in it.
I am looking for a solution where I can export the data from a command line so it looks the same as when I export the data via TOAD.
Any help is much appreciated.
Note: I have been looking for a solution for many days but only now got a chance to post it here.
When exporting the dataset in Toad, select a delimiter other than a comma, or drop down the "string quoting" dropdown box and select "double quote strings including NULLs".
Oh wait: if you are spooling output, you'll need to add the double quotes in your select statement like this, in order to surround the columns containing the comma with double quotes:
select '"' || column || '"' as column from table;
This format is pretty standard, but you could use pipes as delimiters instead and save space by not having to wrap strings in double quotes. It depends on what the consumer of the data requires, really.
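For the spooling route, a minimal SQL*Plus sketch (table and column names are placeholders):
SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SPOOL mydata.csv
-- quote the comma-bearing column so it stays in one cell after text-to-columns
SELECT '"' || cust_name || '",' || cust_id FROM customers;
SPOOL OFF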

SQLite: trim usage

My goal is to extract the domain out of a given URL.
To that end I use the following:
select distinct ltrim(rtrim('https://www.youtube.com/watch?v=...', '/'), 'https://')
The result I get is:
www.youtube.com/watch?v=...
While the following is expected:
www.youtube.com
How can the above be achieved?
Note:
I noticed that the trim function works differently than I expected.
select distinct ltrim('https://www.youtube.com/watch?v...', 'youtu') returns the same string without any change.
Trying to trim only the slash by select ltrim('https://www.youtube.com/watch?v...', '/') returns the same string as well.
Any explanations are welcome.
Trim only removes the given characters at the beginning and/or end of the string, and it treats its second argument as a set of characters to strip, not as a substring. That is why ltrim(..., 'youtu') and ltrim(..., '/') leave the URL unchanged: the string starts with 'h', which is in neither set.
You'll need substr and instr. (https://www.sqlite.org/lang_corefunc.html)
But the best option is probably to fix this in your code before saving it to the database.
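If you do want to solve it in SQL, here is a minimal sketch with substr and instr; it assumes every URL contains '://', and appends a '/' so host-only URLs work too:
-- strip the scheme, then cut at the first '/' to keep only the host
SELECT substr(host, 1, instr(host, '/') - 1) AS domain
FROM (SELECT substr(url, instr(url, '://') + 3) || '/' AS host
      FROM (SELECT 'https://www.youtube.com/watch?v=abc' AS url));
-- returns 'www.youtube.com'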
In the end I didn't use trim but substr, as suggested.
The following worked:
select replace(substr(substr(<url>, instr(<url>, '//')+2),0,instr(substr(<url>, instr(<url>, '//')+2),'/')),'.','')
select replace(substr(substr(<url>, instr(<url>, '//www.')+6),0,instr(substr(<url>, instr(<url>, '//www.')+6),'/')),'.','')

Shiny: when I make a query using reactive input variables, an extra blank is added around the string I'm comparing to, so my query doesn't get anything

I have this query:
query<-paste("select disease_status,expvalue from taylor_21036,taylor_exp_21036 where geo_accession=id_geoacc and id_gpl like '",input$gen,"' order by'",input$gen,"'")
For some strange reason when I view the query I get:
select disease_status,expvalue from taylor_21036,taylor_exp_21036 where geo_accession=id_geoacc and id_gpl like ' hsa-let-7a ' order by ' hsa-let-7a '
An extra blank is added to the left and to the right of my string.
How can I fix this? Any idea?
I was going mad because I didn't know why I was getting the error "need finite 'ylim' values" when I tried to boxplot the data frame I got after running dbGetQuery with the query mentioned above. I finally found the problem: the data frame was empty, because my query returns no rows thanks to the annoying extra blanks.
I will appreciate any advice. Thanks in advance!
The default separator between terms in paste is " " (a space). If you don't want a separator, use paste0 or add the argument sep="" to your paste call.
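A minimal sketch of the fix with paste0, reusing the names from the question:
# paste() inserts " " between arguments by default; paste0() inserts nothing
query <- paste0("select disease_status,expvalue from taylor_21036,taylor_exp_21036 ",
                "where geo_accession=id_geoacc and id_gpl like '", input$gen,
                "' order by '", input$gen, "'")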
