How can I convert Revit data to 3D Tiles, including materials and textures? Is there a good way to do it?
I used Revit 2016 to export .nwc data, then used Navisworks to convert the .nwc data to .fbx.
I want to export fake.bc.Rdata from the package "qtl" to a CSV, but running summary shows it is an object of class "cross", which is why my conversion fails. I also tried resave, but got the warning: cannot coerce class ‘c("bc", "cross")’ to a data.frame.
Thank you all in advance for your help!
CSV stands for comma-separated values, and it is not suitable for every kind of data. As indicated in the comments, it requires clear columns and rows.
Take this JSON as an example:
{
  "name": "John",
  "age": 30,
  "likes": ["Walking", "Running"]
}
If you were to represent this in CSV format, how would you deal with the difference in length? One way would be to repeat the data:
name,age,likes
John,30,Walking
John,30,Running
But that doesn't really look right. And even if you merged the two rows into one, you would still have trouble reading the data back, e.g.
name,age,likes
John,30,Walking/Running
Thus, CSV is best suited for tidy data.
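The same limitation shows up directly in R. A minimal sketch (the data frame and the file name john.csv are made up for illustration): base write.csv() stops on a nested list column, while the flattened, repeated-rows form writes fine.
# a data frame whose likes column holds a list, mirroring the JSON above
df <- data.frame(name = "John", age = 30)
df$likes <- list(c("Walking", "Running"))
try(write.csv(df, "john.csv"))   # errors: unimplemented type 'list'
# flattening to repeated rows, as above, works
flat <- data.frame(name = "John", age = 30, likes = c("Walking", "Running"))
write.csv(flat, "john.csv", row.names = FALSE)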
TL;DR
Can your data be represented tidily as comma-separated values, or should you be looking at alternative forms of exporting your data?
EDIT:
It appears you do have some options:
If you look at the reference, you can export your data with write.cross().
For your data, you could use write.cross(fake.bc, "csv", "myCrossData", c(1,5,13)). It then does the following:
Comma-delimited formats: a single csv file is created in the formats "csv" or "csvr". Two files are created (one for the genotype data and one for the phenotype data) for the formats "csvs" and "csvsr"; if filestem="file", the two files will be named "file_gen.csv" and "file_phe.csv".
I've got a data structure as shown below:
It seems to be a data frame with metadata. I was able to manually rebuild the data frame for this example with:
# rebuild the data frame by hand from the nested structure
d <- data.frame(a1 = x$value$value[1], a2 = x$value$value[2], a3 = x$value$value[3])
a <- x$attributes
colnames(d) <- a$names$value   # restore the original column names
However, I wonder if this is some sort of standard exchange format and if there is a more general solution to read the embedded data into a variable?
EDIT
The data structure came from an RDX2 file whose contents are JSON:
data_json <- readLines("data.json")              # read the file as plain text
x <- fromJSON(paste(data_json, collapse = ""))   # rjson::fromJSON
The JSON structure contains the same data:
To answer my own question: the structure above is the result of serializing a data frame with
rlist::serialize(data, "data.json")
to a JSON file. Afterwards, that file was read back as plain text, the text was converted with rjson::fromJSON, and the resulting R data structure was written as-is to another file. Instead of this,
data <- rlist::unserialize("data.json")
should have been used to read the file back.
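For completeness, a minimal sketch of the intended round trip, using the calls as named in this answer (depending on your rlist version they may be exported as list.serialize() and list.unserialize()):
library(rlist)
df <- data.frame(a1 = 1, a2 = 2, a3 = 3)
rlist::serialize(df, "data.json")        # write the data frame as JSON
df2 <- rlist::unserialize("data.json")   # read it back; no rjson needed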
Using AZ ML Workbench for a class project (a required tool), I coded the desired logic below in an exploration notebook, but I cannot find a way to include it in a Data Prep Transform Data flow.
# every column in the all-drugs data frame
all_columns = df.columns
# drop the identifier/demographic columns, keeping only drug columns
sum_columns = [col_name for col_name in all_columns if col_name not in ['NPI', 'Gender', 'State', 'Credentials', 'Specialty']]
# keep only the drug columns that also appear in the opioid data source
sum_op_columns = list(set(sum_columns) & set(df_op['Drug Name'].values))
The logic uses the column names from one data source, df_op (opioid drugs), to choose which subset of columns to keep from another data source, df (all drugs). When adding a Python script/expression transform in the Data Prep flow, I only see the ability to reference the single df. Alternatives?
I may have a way for you to access both data frames.
In Workbench, once you have the data sources that you need loaded, right click on one and select "Generate Data Access Code File".
Once there you're automatically given code to access that specific file. However, you can use the same code to access the other files.
In the example below, I have two data sources. I can use the code underneath to access both as pandas data frames and manipulate them as I need.
df_salary = datasource.load_datasource('SalaryData.dsource')
df_startup = datasource.load_datasource('50-Startups.dsource')
I believe from there you can save your updated data frame to a CSV and then use that in the train script.
Hope that helps or at least points you to another solution.
I'm having an issue trying to format dates in R. I tried the following code:
rdate <- as.Date(dusted$time2, "%d/%m/%y")
and also the recommendations in this Stack Overflow question, Changing date format in R, but still couldn't get it to work.
geov <- dusted
geov$newdate <- strptime(as.character(geov$time2), "%d/%m/%Y")
All I'm getting is NA for the whole date column. These are daily values; I would love it if R could read them. Data available here: https://www.dropbox.com/s/awstha04muoz66y/dusted.txt?dl=0
To convert to Date: as long as you have already imported the data into a data frame such as dusted or geov, with time2 holding dates as strings resembling 10-27-06, try:
geov$time2 = as.Date(geov$time2, "%m-%d-%y")
The equal sign = is used just to save typing; it is equivalent to <-, so you can still use <- if you prefer.
This stores the converted dates right back into geov$time2, overwriting it, instead of creating a new variable geov$newdate as in your original question; a new variable is not required for the conversion. But if for some reason you really need one, feel free to use geov$newdate.
Similarly, you also didn't need to copy dusted to a new geov data frame just to convert. It does save time for testing, though: if the conversion doesn't work, you can restart by copying dusted to geov again instead of re-importing the data from the file.
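A minimal reproducible sketch of why the original attempts returned NA (the two strings are made-up examples in the 10-27-06 style): the format string must match the stored text exactly, separators included.
x <- c("10-27-06", "11-03-06")
as.Date(x, "%d/%m/%y")   # wrong order and wrong separators: NA NA
as.Date(x, "%m-%d-%y")   # matches the text: "2006-10-27" "2006-11-03"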
Additional resources
help(strptime) for looking up date format codes such as %y; on Linux, man date also lists the codes
I wish to read data into R from SAS data sets on Windows. The read.ssd function allows me to do so; however, it seems to have an issue when I try to import a SAS data set that has any non-alphabetic symbols in its name. For example, I can import table.sas7bdat using the following:
directory <- "C:/sas data sets"
sashome <- "/Program Files/SAS/SAS 9.1"
table.df <- read.ssd(directory, "table", sascmd = file.path(sashome, "sas.exe"))
but I can't do the same for a SAS data set named table1.sas7bdat. It returns an error:
Error in file.symlink(oldPath, linkPath) :
symbolic links are not supported on this version of Windows
Given that I do not have the option to rename these data sets, is there a way to read a SAS data set that has non-alphabetic symbols in its name in to R?
Looking around, it seems others have hit your problem as well. Perhaps it's just a bug.
Anyway, try the suggestion from this (old) R-help post by the venerable Dan Nordlund, who's pretty good at this stuff and is also active on SAS-L (sasl@listserv.uga.edu) if you want to try cross-posting your question there.
https://stat.ethz.ch/pipermail/r-help/2008-December/181616.html
Also, you might consider the transport-file method if you don't mind 8-character variable names.
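For the transport route, a minimal sketch, assuming you have already exported the data set from SAS to a transport file named table1.xpt:
library(foreign)
# reads a SAS transport (XPORT) file; no SAS installation is needed for this
table1.df <- read.xport("table1.xpt")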
Otherwise, use sas.get from the Hmisc package (the library/mem/formats/sasprog arguments belong to it, not to read.ssd):
library(Hmisc)
directory <- "C:/sas data sets"
sashome <- "/Program Files/SAS/SAS 9.1"
table.df <- sas.get(directory, "table1", formats = FALSE,
                    sasprog = file.path(sashome, "sas.exe"))