I am a newbie in Tableau, using Tableau Desktop version 9.3. I want to know whether we can create multiple Tableau data extracts from a single data source. I have data for the last 5 years and I want to create a TDE for each year (e.g. a TDE for 2011, a TDE for 2012, and so on).
Is it possible to create different Tableau data extracts from a single data source in Tableau Desktop?
Yes, you can create as many extracts as you like. Do it from the sheet rather than from the data window: right-click the data source name, choose Extract Data, and name and save the extract as you wish. The extract dialog lets you add a filter, so you can limit each extract to a single year. You'll then have 5 distinct data sources.
One caveat: you'll have a better experience if you save the data source as well, not just the extract. The data source holds all the metadata you see in the data pane (aliases, formulas, default formats, groups, etc.); the extract holds only a copy of the data.
The command to save a data source is "Add to Saved Data Sources". You will have the option of saving just the data source as a TDS file, or packaging the extract with the data source as a TDSX file.
Tableau will also let you save an extract and then open that extract directly as a data source, known as a naked extract, but you will lose most of the metadata and the connection to the original data.
If you have Tableau Server, you can publish your data sources instead of saving them as files.
I am new to Informatica data integration. We are building a common data layer to ingest data from multiple sources (RDBMS and file storage) into a target DB.
We intend to ingest only the common entities and their respective columns/attributes. For example, I have a product table which is available in all the sources, but under a different name and with a different column count. Please refer to the table below.
In that table, I have 2 data sources holding product information, but their structures are not the same: both the column names and the column count differ from source to source.
Now I am trying to standardize this, and I would like to build a generic data pipeline in Informatica Data Integration Hub.
I have explored the Schema Drift option, but that is only available in Mass Ingestion.
There is also a Java Activity block for doing manual column mapping, but the problem is that my data sources will multiply in the future, and that approach will keep demanding manual intervention.
Any suggestions?
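Not an Informatica-specific answer, but to make the "generic pipeline" idea concrete: one common pattern is to drive the column mapping from a small metadata table, so onboarding a new source means adding rows to that table rather than editing code. A minimal sketch in R, with entirely hypothetical source and column names:

    # Mapping table: one row per (source, source column) -> standard column.
    column_map <- data.frame(
      source     = c("erp",        "erp",          "shop",       "shop"),
      src_column = c("prod_id",    "prod_name",    "item_code",  "item_desc"),
      std_column = c("product_id", "product_name", "product_id", "product_name"),
      stringsAsFactors = FALSE
    )

    # Rename and subset a source table according to the mapping.
    standardize <- function(df, source_name, map) {
      m <- map[map$source == source_name, ]
      df <- df[, m$src_column, drop = FALSE]  # keep only the mapped columns
      names(df) <- m$std_column               # rename to the standard schema
      df
    }

    # Two differently shaped product tables collapse into one standard shape.
    erp  <- data.frame(prod_id = 1:2, prod_name = c("A", "B"), price = c(9, 8))
    shop <- data.frame(item_code = 3, item_desc = "C")
    products <- rbind(standardize(erp, "erp", column_map),
                      standardize(shop, "shop", column_map))

The same idea carries over to Informatica: keep the source-to-standard mapping as data (a control table or parameter file) and have one parameterized mapping read it, instead of one hand-built mapping per source.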
Is there a way to connect external databases to R to extract the data? Say I have a database called Aziz (name changed). In Aziz, I have lots of databases (around 100) in the form of Excel or CSV files. Can I extract any one of them into R?
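If those "databases" are really Excel and CSV files, reading one into R is straightforward. A minimal sketch, with hypothetical file paths:

    # Read a CSV file into a data frame.
    products <- read.csv("C:/Aziz/products.csv", stringsAsFactors = FALSE)

    # For Excel files, the readxl package is one common choice.
    # install.packages("readxl")
    library(readxl)
    sales <- read_excel("C:/Aziz/sales.xlsx", sheet = 1)

    # For an actual database server, DBI plus an ODBC driver is the usual route:
    # con <- DBI::dbConnect(odbc::odbc(), dsn = "Aziz")
    # df  <- DBI::dbGetQuery(con, "SELECT * FROM some_table")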
I have imported data from the Google Analytics API and now have multiple data frames in my script. I would like to store them so that I can later point Hive external tables at them as their data location. (Hive tables are just like SQL tables with a schema, except that they can source their data from flat files.)
Let's say I have df1, df2, ..., df30.
What should be the next step to store this data in HDFS? Should I create a file per data frame, or maybe something else?
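A file per data frame is the usual pattern, because a Hive external table points at a directory, so giving each table its own directory with one file in it keeps things simple. A minimal sketch, assuming the hadoop CLI is on the path and using hypothetical HDFS paths:

    # Collect the data frames in a named list.
    frames <- list(df1 = df1, df2 = df2, df30 = df30)

    for (name in names(frames)) {
      local_file <- paste0(name, ".csv")
      # Headerless delimited files are easiest to map to Hive external tables.
      write.table(frames[[name]], local_file, sep = ",",
                  row.names = FALSE, col.names = FALSE, quote = FALSE)
      # One HDFS directory per future Hive table.
      system(paste("hadoop fs -mkdir -p", paste0("/data/ga/", name)))
      system(paste("hadoop fs -put -f", local_file, paste0("/data/ga/", name, "/")))
    }

Each CREATE EXTERNAL TABLE statement then sets its LOCATION to the matching /data/ga/<name> directory.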
I would like to read a data file saved in Tableau's data format (*.tdsx) in R. I'm wondering whether that is possible in R, or whether any other tool can do it.
My scenario is that I have data saved on Tableau Server and connect to it via Tableau Desktop, where I build my visualizations. But I also need to validate the measures/dimensions I create in Tableau using another tool such as R (my favorite). So I have saved a local copy of the data and want to open it with another tool. How can this be done?
Tableau Data Extracts are primarily write-only (or append-only) data sources meant to be read by Tableau; there is currently no public API for read or update operations. Extracts are great for speeding up access to read-only subsets of data, but they aren't meant to replace the original data source.
If you want a good data exchange format, why not pick something standard like CSV? Then both R and Tableau can read the CSV file, and Tableau can convert it to a native format like TDSX for performance if desired.
Or keep your data in a database and point both R and Tableau at it.
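For the R side of that exchange, a minimal sketch (the object and file names are hypothetical); Tableau Desktop can then connect to the same CSV as a text file data source:

    # Write the data out as CSV for Tableau to pick up.
    write.csv(my_measures, "shared/measures.csv", row.names = FALSE)

    # Read the same file back in R to cross-check the figures against Tableau.
    check <- read.csv("shared/measures.csv", stringsAsFactors = FALSE)
    summary(check)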
I have a database that is used to store transactional records; records are created there, and another process picks them up and then removes them. Occasionally this process breaks down and the number of records builds up. I want to set up a (semi-)automated way to monitor things, and as my tool set is limited and I have an R-shaped hammer, this looks like an R-shaped nail.
My plan is to write a short R script that will query the database via ODBC and then write a single record containing the current datetime, the number of records returned by the query, and the datetime of the oldest record. A separate script will then process the data file and produce some reports.
What's the best way to create my data file? At the moment my options are:
Load a data frame, add the record, and then re-save it
Append a row to a text file (i.e. a csv file)
Any alternatives, or a recommendation?
I would be tempted by the second option, because from a semantic point of view you don't need the old entries in order to write the new one, so there is no reason to reload all the data each time; doing so would only cost extra time and resources.
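A minimal sketch of that append approach, assuming the DBI and odbc packages and using a hypothetical DSN, table, and column names:

    library(DBI)

    # Query the current state of the queue.
    con   <- dbConnect(odbc::odbc(), dsn = "transactions_dsn")  # hypothetical DSN
    stats <- dbGetQuery(con,
      "SELECT COUNT(*) AS n, MIN(created_at) AS oldest FROM pending_records")
    dbDisconnect(con)

    row <- data.frame(checked_at = Sys.time(),
                      n_records  = stats$n,
                      oldest     = stats$oldest)

    # Append one row per run; write the header only on the first run.
    log_file <- "queue_monitor.csv"
    new_file <- !file.exists(log_file)
    write.table(row, log_file, sep = ",", row.names = FALSE,
                col.names = new_file, append = !new_file)

This never rereads the accumulated history, so each run stays cheap no matter how long the log grows.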