I am attempting to use Tableau's "DATEPARSE" function to get a standard date/time format... so far my attempts have failed.
The format of the raw unicode date is: 2022-08-01T08:00:00-04:00
I was successfully able to do the first part, but does anyone have an idea on the whole thing?
The first part: DATEPARSE("yyyy-MM-dd",[Date])
I figured it out:
DATEPARSE("yyyy-MM-dd'T'HH:mm:ssZ",[Date])
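For a quick sanity check outside Tableau, the same ISO 8601 timestamp can be parsed with Python's strptime; a minimal sketch, assuming Python 3.7+ (where %z accepts a colon in the UTC offset):

```python
from datetime import datetime

raw = "2022-08-01T08:00:00-04:00"
# %z consumes the "-04:00" UTC offset (colons accepted since Python 3.7)
dt = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S%z")
print(dt.isoformat())  # 2022-08-01T08:00:00-04:00
```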
I am new to VAEX, and I couldn't find a solution to my specific question on Google, so I am asking here in the hope that someone can help :).
I am using VAEX to import data from CSV file in my DASH Plotly app and then want to convert Date column to datetime format within VAEX. It successfully imports the data from csv file. Here is how I imported data from csv into VAEX:
vaex_df=vaex.from_csv(link,convert=True,chunk_size=5_000)
Below it shows the type of the Date column after importing into VAEX. As you can see, it takes the Date column as string type.
Then when I try to change data type of Date columns with below code, it gives error:
vaex_df['Date']=vaex_df['Date'].astype('datetime64[ns]')
I don't know how to handle this issue, so I need your help. What am I doing wrong here?
Thanks in advance
vaex.from_csv is basically an alias for pandas.read_csv. pandas.read_csv has an argument, parse_dates, that you can use to specify which columns should be parsed as datetimes. Just pass that same argument to vaex.from_csv and you should be good to go!
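For example, sketched with pandas directly, since vaex.from_csv forwards its keyword arguments to pandas.read_csv (the parse_dates keyword and the sample column names here are the assumptions):

```python
import io
import pandas as pd

# Stand-in for the real CSV file from the question
csv = io.StringIO("Date,Value\n2022-08-01,10\n2022-08-02,20\n")

# parse_dates tells pandas.read_csv to parse these columns as datetimes;
# the same keyword can be passed straight through vaex.from_csv, e.g.
#   vaex_df = vaex.from_csv(link, convert=True, chunk_size=5_000,
#                           parse_dates=["Date"])
df = pd.read_csv(csv, parse_dates=["Date"])
print(df["Date"].dtype)  # datetime64[ns]
```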
I've been trying to make a timeseries plot in Grafana, but I keep getting messages like "Data does not have a time field" or "No numeric fields found." How could I format my data to fix these issues? Thank you!
First, see the README of the plugin you are using:
https://grafana.com/grafana/plugins/frser-sqlite-datasource/ :
Your ts column is not formatted in accordance with RFC3339.
Your value_string column is a string, not a numeric (REAL) type.
Reformat your time variable in SQLite to Unix time format, i.e. strftime('%s', my_time_var).
Then in the grafana query dialog define my_time_var as a time formatted column of a Time Series.
I don't know if it also works in Windows.
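The strftime conversion can be tried out with Python's built-in sqlite3 module before touching Grafana (the table and column names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (ts TEXT, value REAL)")
conn.execute("INSERT INTO metrics VALUES ('2022-08-01 08:00:00', 42.5)")

# strftime('%s', ...) renders the timestamp as Unix epoch seconds (UTC);
# SQLite returns it as text, so cast it to get a numeric time column.
row = conn.execute(
    "SELECT CAST(strftime('%s', ts) AS INTEGER) AS time, value FROM metrics"
).fetchone()
print(row)  # (1659340800, 42.5)
```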
This question already has answers here:
How to convert Excel date format to proper date in R
I am reading an Excel file using the function readxl::read_excel(), but it appears that dates are not being read properly.
In the original file, one such date is 2020-JUL-13, but it is being read as 44025.
Is there any way to get back the original date variable as in the original file?
Any pointer is very appreciated.
Thanks,
Basically, you could try to use:
as.Date(44025)
However, you will get an error: Error in as.Date.numeric(44025) : 'origin' must be supplied. That means all you need is the origin, i.e. the starting date from which to count. When you check the help page for the convertToDate function mentioned by Bappa Das, you will see that it is essentially a wrapper around as.Date() whose default origin argument is "1900-01-01" (internally it also adjusts for Excel's 1900 leap-year bug).
Next, you can check why this is by looking up Excel's date systems, documented here:
Date systems in Excel
There you will find that on Windows (the Mac has some exceptions) the starting date is indeed 1900-01-01.
And now, finally, if you want to use base R, you can do:
as.Date(44025, origin = "1899-12-30")
which returns "2020-07-13". Note that the origin is "1899-12-30" rather than "1900-01-01": Excel counts 1900-01-01 as day 1 (not day 0) and wrongly treats 1900 as a leap year, so plain as.Date() needs the origin shifted back two days.
as.Date() is vectorized, so you can pass a whole column as well.
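The same arithmetic can be checked outside R; here is a minimal Python sketch, assuming the 1899-12-30 origin that compensates for Excel counting 1900-01-01 as day 1 and wrongly treating 1900 as a leap year:

```python
from datetime import date, timedelta

# Effective origin of Excel's 1900 date system
excel_origin = date(1899, 12, 30)
print(excel_origin + timedelta(days=44025))  # 2020-07-13
```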
You can use openxlsx package to convert number to date like
library(openxlsx)
convertToDate("44025")
Or to convert the whole column you can use
convertToDate(df$date)
This is more of a tip question that can save a lot of time in many cases. I have a script.R file which I try to save, and I get this error:
Not all of the characters in ~/folder/script.R could be encoded using ASCII. To save using a different encoding, choose "File | Save with Encoding..." from the main menu.
I have been working on this file for months, and today I was editing my code like crazy and got this error for the first time, so obviously I inserted a character today that cannot be encoded.
My question is, can I track down this specific character and find where exactly in the document it is?
There are about 1000 lines in my code and it's almost impossible to manually search it.
Use tools::showNonASCIIfile() to spot the non-ASCII characters.
Let me suggest two slight improvements to this.
Process:
Save your file using a different encoding (e.g. UTF-8)
Set a variable f to the name of that file, something like f <- "yourpath\\yourfile.R"
Then use tools::showNonASCIIfile(f) to display the faulty characters.
Something to check:
I have a Markdown file which I run to output to Word document (not important).
Some of the packages I load on initialisation mask previously loaded functions. I have found that the resulting warning messages sometimes contain non-ASCII characters, and this seems to have caused the message for me - some fault put all that output at the end of the file, and I had to delete it anyway!
Check where characters are coming back from Warnings!
Cheers
Expanding the accepted answer with this answer to another question, to check for offending characters in the script currently open in RStudio, you can use this:
tools::showNonASCIIfile(rstudioapi::getSourceEditorContext()$path)
I am trying to load some data using the tpump utility from a Unix console.
The data has various datatypes, viz. text, number, decimal, and date.
Now I am stuck on what FORMAT type I need to specify in the tpump script.
I went through the tpump manual but could not decipher which FORMAT type to use.
The data/columns are delimited by "|" symbol.
Any info/hint in using the appropriate FORMAT type would be of great help.
If this is a duplicate question, please help me with the actual question link.
Thanks a lot in advance.