I have read the docs for MariaDB's REGEX_REPLACE but cannot get my query to work. I am storing links in a column, link, and want to change the end of the link:
From www.example.com/<code> to www.example.com/#/results/<code>, where <code> is some hexadecimal hash, e.g. 55770abb384c06ee00e0c579. What I am trying is:
SELECT REGEX_REPLACE("link", "www\\.example\\.com\\/(.*)", "www\\.example\\.com\\/#\\/results\\/\\1");
The result is:
Showing rows 0 - 0.
I wasn't able to figure out what the first argument was; the documentation calls it "subject". It turns out it's just the column name, passed without quotes (a double-quoted "link" is treated as a string literal, not a column reference). Also note that the function is spelled REGEXP_REPLACE. So this works:
UPDATE my_table
SET my_link = REGEXP_REPLACE(
my_link,
"http:\\/\\/www\\.example\\.com\\/(.*)",
"http:\\/\\/www\\.example\\.com\\/#\\/results\\/\\1")
WHERE my_link IS NOT NULL
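If it helps, you can sanity-check the pattern before running the UPDATE by previewing the old and new values with a SELECT; a minimal sketch using the same table and column names as above:
SELECT my_link,
       REGEXP_REPLACE(
           my_link,
           "http:\\/\\/www\\.example\\.com\\/(.*)",
           "http:\\/\\/www\\.example\\.com\\/#\\/results\\/\\1") AS new_link
FROM my_table
WHERE my_link IS NOT NULL;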
My data is a string like:
'湯姆 is a boy.'
or '梅isagirl.'
or '約翰,is,a,boy.'.
And I want to split the string and keep only the Chinese name.
In R, I can use the commands
tmp = strsplit(string, "[A-z% ]")
unlist(lapply(tmp, function(x) x[1]))
and then I get the Chinese name I want.
But in PostgreSQL
select regexp_split_to_array(string,'[A-z% ]') from db.table
I get an array like {'湯姆','','',''},{'梅','','',''},...
and I don't know how to choose an item from the array.
I tried to use the command
select regexp_split_to_array(string,'[A-z% ]')[1] from db.table
and I got an error.
I don't think that regexp_split_to_array is the appropriate function for what you are trying to do here. Instead, use regexp_replace to selectively remove all ASCII characters:
SELECT string, regexp_replace(string, '[[:ascii:]~:;,"]+', '', 'g') AS name
FROM yourTable;
Note that you might have to adjust the set of characters to be removed, depending on what other non-Chinese characters you expect to have in the string column. This answer gives you a general suggestion for how you might proceed here.
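As an aside on the error from the question's last attempt: PostgreSQL only lets you subscript the result of a function call if the call is wrapped in parentheses, so that variant (keeping the original pattern) would be written as:
SELECT (regexp_split_to_array(string, '[A-z% ]'))[1] AS name
FROM db.table;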
What is wrong with my code:
ExecSql('DELETE FROM STLac WHERE RegN=99 AND BegDate>= 2016-12-14');
This runs, but deletes ALL the rows in STLac for RegN = 99, not just the rows with BegDate on or after 2016-12-14.
Originally I had:
ExecSql('DELETE FROM STLac WHERE RegN=99 AND BegDate >= :myDate', [myDate]);
which had the advantage, I hoped, of not being particular to the date format. So I tried the literal date, which should be in the format SQLite likes. Either way I get all rows deleted, not just those on or after the specified date.
Scott S.
Try double quotes when putting in the date. Any value other than a number must be provided in quotes unless the column is an int:
ExecSql('DELETE FROM STLac WHERE RegN=99 AND BegDate>= "2016-12-14"');
SQLite does not have a dedicated datetime type as such, so you have to figure out how the date is actually represented in the table and change your query to use the same format. First execute the SELECT statement in some kind of management tool,
select * from STLac where RegN = 99 and BegDate >= '2016-12-14' --(or '2016.12.14' or some other format)
which displays the result in a grid; when you see the expected rows, change it to a DELETE query and copy it into your Delphi program.
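For what it's worth, the unquoted literal is almost certainly the culprit: SQLite parses 2016-12-14 as integer arithmetic (1990), and in its comparison rules any TEXT value sorts after any number, so BegDate >= 1990 is true for every text date and the DELETE hits all rows. A quick way to see this:
SELECT 2016-12-14;            -- integer arithmetic: returns 1990
SELECT '2016-12-14' >= 1990;  -- TEXT compares greater than any number: returns 1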
I need to use the cast function with the length of a column in Teradata.
Say I have a table with the following data:
id | name
1  | dhawal
2  | bhaskar
I need to use a cast operation, something like
select cast(name as CHAR(<length of column>)) from table
How can I do that?
Thanks,
Dhawal
You have to find the length by looking at the table definition - either manually (show table) or by writing dynamic SQL that queries dbc.ColumnsV.
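A minimal sketch of the lookup behind that dynamic-SQL route, with your_db and my_table as placeholder names (ColumnLength is reported in bytes, so it may need adjusting for multi-byte character sets):
SELECT ColumnLength
FROM dbc.ColumnsV
WHERE DatabaseName = 'your_db'
  AND TableName = 'my_table'
  AND ColumnName = 'name';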
Update:
You can find the maximum length of the actual data using
select max(length(cast(... as varchar(<large enough value>)))) from TABLE
But if this is for FastExport I think casting as varchar(large-enough-value) and postprocessing to remove the 2-byte length info FastExport includes is a better solution (since exporting a CHAR() will result in a fixed-length output file with lots of spaces in it).
You may know this already, but just in case: Teradata usually recommends switching to TPT instead of the legacy fexp.
I have multiple records in a table like below. Each record holds multiple entries separated by #.
record1 - 123.45.56:ABCD:789:E # 1011.1213.1415:FGHI:1617:J #
record2 - 123.45.56:ABCD:1617:E # 1011.1213.1415:FGHI:12345:J #
I need to pass an argument to a different project/service which builds an SQL query and sends the output to me.
Now if I send an argument like the one below, it gives me the wrong output:
123.45.56:*:1617
This matches both record1 and record2 because of the wildcard character, but as per my requirement only record2 is correct, since record1 has 123.45.56 in one entry and 1617 in a different entry.
Is there a way to construct an expression that tells the LIKE condition to ignore such invalid matches?
Please note that I can't change the query, as I am not constructing it. The only thing I can do is tweak the expression that I send as the argument.
You need to restrict the pattern you match to be specific enough that it only matches the first record and not the second one.
You can try:
SELECT *
FROM yourTable
WHERE col LIKE '%123.45.56:%' AND col LIKE '%1617:J #%'
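A quick way to check those two conditions against the sample records from the question, assuming a dialect such as MySQL/MariaDB that allows boolean expressions in the select list:
SELECT ('123.45.56:ABCD:789:E # 1011.1213.1415:FGHI:1617:J #' LIKE '%123.45.56:%'
        AND '123.45.56:ABCD:789:E # 1011.1213.1415:FGHI:1617:J #' LIKE '%1617:J #%') AS record1_matches,  -- 1
       ('123.45.56:ABCD:1617:E # 1011.1213.1415:FGHI:12345:J #' LIKE '%123.45.56:%'
        AND '123.45.56:ABCD:1617:E # 1011.1213.1415:FGHI:12345:J #' LIKE '%1617:J #%') AS record2_matches; -- 0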
According to the SQLite documentation I can write:
SELECT * FROM docs WHERE docs MATCH 'title:linux problems';
Where title is a column name. Is it possible to create something like:
SELECT * FROM docs WHERE docs MATCH 'ignore:linux problems';
To search the whole table while excluding one column?
You can search only in one column or in all columns.
You could try to list all columns except the one you want to ignore:
SELECT * FROM docs WHERE docs MATCH 'col1:linux OR col2:linux OR ...'
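A minimal sketch of that approach, using a hypothetical FTS4 table with columns col1, col2 and col3 (col3 being the one to leave out). Note that in FTS3/FTS4 a column prefix applies only to the term immediately following it, so every term needs its own prefix:
CREATE VIRTUAL TABLE docs_example USING fts4(col1, col2, col3);
-- search for "linux" in every column except col3
SELECT * FROM docs_example WHERE docs_example MATCH 'col1:linux OR col2:linux';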
You can use a NOT to exclude a column, for example:
SELECT * FROM docs WHERE docs MATCH 'linux problems' NOT ignorecolumnname:'linux problems';
This will match everything but the ignored (NOT) column.
EDIT: apparently this is driver-dependent; it works on some drivers but not all.