Is there a reason that the following formula would not return TRUE even if the search range contains a match, but with a trailing space?
=IF(COUNTIF(Main!C:C, "*"&L4&"*")=0, FALSE, TRUE)
I'm having trouble comparing two columns, as the data entered has trailing spaces or other errors, which keeps the formula returning FALSE.
Once I manually add a space behind the name in the search-range match, it returns TRUE.
=IF(COUNTIF(Main!C:C, "*L4*")=0, FALSE, TRUE)
This works for me.
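If the stray spaces are in the lookup value itself rather than in the range, wrapping the cell reference in TRIM may help as well; this is just a sketch along the same lines, not a confirmed fix:
=IF(COUNTIF(Main!C:C, "*"&TRIM(L4)&"*")=0, FALSE, TRUE)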
When I try to read values from a single cell in a table using Cypress, it returns {specwindow: , chainerid: ch-https://app.trahop.com-218} instead of the values inside the cell.
This is the line of code that returns the message above:
cy.log(cy.get("tr:nth-child(1) td:nth-child(1)"))
I'm sorry if this is a stupid question, but I can't find any way to read the values inside the cell.
cy.get() does not yield a text variable, but instead yields a Cypress-wrapped DOM element. In order to see the value, you'll need to include it in a chained Cypress command, or yield it into a .then() (or similar) Cypress statement.
cy.get('tr:nth-child(1) td:nth-child(1)')
  .should('have.text', 'foo');

cy.get('tr:nth-child(1) td:nth-child(1)')
  .then(($el) => {
    cy.log($el.text());
  });
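If you need the text itself as a value rather than an assertion target, .invoke('text') yields the element's text content into the next command; a small sketch in the same vein:
cy.get('tr:nth-child(1) td:nth-child(1)')
  .invoke('text')
  .then((text) => {
    cy.log(text); // logs the cell's text content
  });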
I have some very large CSV files (~183 million rows by 8 columns) that I want to load into a database using R. I use duckdb for this and its built-in function duckdb_read_csv, which is supposed to auto-detect data types for each column. If I run the following code:
con = dbConnect(duckdb::duckdb(), dbdir="testdata.duckdb", read_only = FALSE)
duckdb_read_csv(con, "d15072021","mydata.csv",
header = TRUE)
It produces this error:
Error: rapi_execute: Failed to run query
Error: Invalid Input Error: Could not convert string '2' to BOOL between line 12492801 and 12493825 in column 9. Parser options: DELIMITER=',', QUOTE='"', ESCAPE='"' (default), HEADER=1, SAMPLE_SIZE=10240, IGNORE_ERRORS=0, ALL_VARCHAR=0
I've looked at the rows in question and I can't find any irregularities in column 9. Unfortunately, I cannot post the dataset because it's confidential. But the entire column is filled with either FALSE or TRUE.
If I set the parameter nrow.check to something larger than 12493825 it doesn't produce the same error but takes very long and simply converts the column to VARCHAR instead of a logical. Setting nrow.check to -1 (meaning it checks every row for a pattern) crashes R and my PC completely.
The weird thing: This isn't consistent. Earlier I imported the dataset whilst keeping the default value for nrow.check at 500 and it read the file with no issue (though still converting column 9 to VARCHAR). I have to read a lot of files that are the same pattern so I need to have a reliable way of reading them. Anyone know how duckdb_read_csv actually works and why I might get this error?
Note that reading the files into memory and then into a database isn't an option because I run out of memory instantly.
The way the sniffer works is by sampling nrow.check rows to figure out each column's data type, so the result can differ between runs if you get unlucky. Increasing it reduces the chance of failure, mainly because the sniffer looks at more rows.
If increasing the number of rows is not possible due to performance issues, you can of course define the schema of the CSV file up front. But then you must know the schema beforehand.
As an example of how you can define the schema and turn off the sniffer:
SELECT * FROM read_csv('test.csv', COLUMNS=STRUCT_PACK(a := 'INTEGER', b := 'INTEGER'), auto_detect=FALSE);
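From R, the same schema definition can be driven through DBI with dbExecute; the table name, file name, and the column names/types below are placeholders to adapt to your actual data (declare the problematic column explicitly as BOOLEAN):
library(DBI)
con <- dbConnect(duckdb::duckdb(), dbdir = "testdata.duckdb", read_only = FALSE)
# Placeholder schema: replace the column names/types with your real ones.
dbExecute(con, "
  CREATE TABLE d15072021 AS
  SELECT * FROM read_csv('mydata.csv',
    COLUMNS = STRUCT_PACK(a := 'VARCHAR', b := 'INTEGER', flag := 'BOOLEAN'),
    header = TRUE, auto_detect = FALSE)
")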
I am working on an R script that checks whether a data.frame is correctly built and contains the right information in the right place.
I need to make sure a row contains the right information, so I want to use a regular expression to compare against each cell of said row.
I thought it might be failing because I compared the regex to the value by pulling the value directly from the table, but that was not it.
I used regex101.com to make sure my regular expression was correct, and it matched when the test string was put between quotes.
Then I added as.character() to the value, but it came out FALSE.
To sum up: the regex works on regex101.com, but it never did in my R script.
test = c("b40", "b40")
".[ab][0-8]{2}." == test[1]
#[1] FALSE
I expect the output to be TRUE, but it is always FALSE
The == is for a fixed, full-string match, not a substring/pattern match. For that, we can use grepl:
grepl("^[ab][0-8]{2}", test[1])
#[1] TRUE
Here, we match either 'a' or 'b' at the start (^) of the string, followed by two digits ranging from 0 to 8 (if the match should also be anchored at the end, use $).
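To see the difference between the two approaches on the question's own vector (a minimal sketch):
test <- c("b40", "b40")
test[1] == "b40"                  # TRUE: == compares the whole string literally
test[1] == "^[ab][0-8]{2}$"       # FALSE: the pattern is treated as plain text
grepl("^[ab][0-8]{2}$", test[1])  # TRUE: grepl interprets the pattern as a regex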
I am setting XRRichText.Visible = false when there is no data, but it still leaves blank space in the report. I do not want that space when there is no data.
I just want it to display nothing, with no blank space. How can I do this?
The blank space at the top comes from the XRRichText itself.
Set the ProcessNullValues property of the controls with the issue to 'Suppress and Shrink'.
The purpose of this value is: if a control receives a null value, it is not printed, and no blank space is added in its place.
The property has two more values:
Leave - A control is always printed.
Suppress - If a control receives a null value, a blank space is printed in its place.
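In code, assuming the control is named xrRichText1 (adjust to your report), the same setting looks like this:
// Suppress the control and reclaim its space when the bound value is null.
xrRichText1.ProcessNullValues = DevExpress.XtraReports.UI.ValueSuppressType.SuppressAndShrink;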
I am loading a table in which the first column is a URL and reading it into R using read.table().
It seems that R is dropping about 1/3 of the rows and does not return any errors.
The URLs do not contain any # characters or tabs (my separator field), which I understand could be an issue. If I convert the URLs to integer IDs first, the problem goes away.
Is there something about the field that might be causing R to drop the rows?
Without a sample of the data, it's hard to say. But one small "gotcha" is that # is the default comment.char in read.table(). Try to set comment.char = "" and see if that fixes it.
Thanks for all your help. Yes, initially there were some hashes, and I was able to handle them using comment.char = ''. The problem turned out to be that some of my URLs contained ' and " characters. The strangest part is that it didn't return any errors. After I removed these characters using tr, I had no issues loading the data.
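For anyone hitting the same thing, a minimal sketch that disables both comment parsing and quote handling (the file name and separator here are assumptions):
# '#' is read.table's default comment.char, and single/double quotes are its
# default quote characters; disabling both lets URLs containing them load intact.
df <- read.table("urls.tsv", sep = "\t", header = TRUE,
                 comment.char = "", quote = "", stringsAsFactors = FALSE)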