How can I create an Excel sheet directly from a DataTable - ASP.NET

I need to get all the data from my DataTable into an Excel sheet, but I don't want to use a for loop to write it line by line, because with 200 or more rows that takes too long.
Is there a faster way to do it?

The fastest way I know of is to go to the Data tab in Excel: Get External Data, create a connection to the database using ODBC or whatever, and then select the table you want per sheet. It will run the query and get all the data based on the query or table specified.
You can then right-click on the data section and refresh the data anytime you want, OR you can set the data to refresh on open, on sheet activate, etc.
This is a "PULL" approach as opposed to the "PUSH" approach others are discussing.

How to move data between datasets in different regions?

I'm using BigQuery integrated with Firebase, and all the datasets are in the same project. My analytics dataset is in us-east4, but for some reason my firebase_imported_segments dataset's region is just marked as US.
I'd like to move data from the analytics dataset into a table in firebase_imported_segments.
At first I tried a simple INSERT query, but I get the error "firebase_imported_segments was not found in location us-east4".
So then I tried building a SELECT statement and exporting the rows using "Save Results > BigQuery Table", but that gives a similar error that the destination dataset is not found. Oddly enough, if I create a table in firebase_imported_segments and try to save the results using that table name, I get a "Table already exists" error. So it's not that it can't find the firebase_imported_segments dataset; it just won't create a new table in that dataset.
How can I get around this? I saw some BQ documentation saying that moving data between regions is possible, but I didn't find a simple walkthrough of how it's accomplished. I'm also confused about why Firebase would put some data in one specific region (us-east4) and other data in a multi-region (US) if they aren't compatible.
You can move datasets using "Copy" in the BigQuery UI and then delete the old dataset. See the Copy dataset documentation.
Option 1: Use the Copy button.
1. Go to the BigQuery page in the Cloud console.
2. In the Explorer panel, expand your project and select a dataset.
3. Expand the More Actions option (the triple-dot button) and click Open.
4. Click Copy. In the Copy dataset dialog that appears, do the following:
a. In the Dataset field, either create a new dataset or select an existing dataset ID from the list. Dataset names within a project must be unique. The project and dataset can be in different regions, but not all regions are supported for cross-region dataset copying.
b. In the Location field, the location of the source dataset is displayed.
c. Optional: To overwrite both data and schema of the destination tables with the source tables, select the Overwrite destination tables checkbox.
d. To copy the dataset, click Copy.
To avoid additional storage costs, consider deleting the old dataset.
Option 2: Use the BigQuery Data Transfer Service.
1. Enable the BigQuery Data Transfer Service.
2. Create a transfer for your data source.
I tested this and can confirm that it works. I created a dataset in us-east4 named analytics_us_regional with a table named east_4_table, and copied it to a dataset located in the US multi-region. When the copy is initiated, a data transfer job is created, and once it finishes the tables appear in the US dataset.
With regard to the Firebase data located in us-east4: when the export to BQ is enabled for the first time, the user defines the location of the tables. It is possible that the us-east4 region was selected initially.
I don't know if it will work in your case, but I had a dataset in europe-west1 that I wanted to copy to the EU multi-region, and both of the following ways worked for me:
First way:
1 - Click on the dataset you want to copy, then click on "COPY".
2 - In the copy menu, under the dataset destination, click on "CREATE NEW DATA SET" and select the destination region you want that dataset to be in. Click on CREATE DATA SET.
3 - In the "Copy data set" menu, click on COPY.
4 - You will get the error "Cannot create a transfer in REGION_EUROPE_WEST_1 when destination dataset is located in JURISDICTION_EU", but a dataset with no tables will be created in your destination region.
5 - Now if you copy the source dataset again by clicking on COPY and selecting the dataset created in step 4, it will work.
Second way (best way):
1 - Open a new query sheet, click on MORE -> Query settings -> Advanced options, uncheck "Automatic location selection", and select the destination region or multi-region you want (in my case EU).
2 - In this query sheet, run "CREATE SCHEMA your_new_dataset_name". This creates the dataset "your_new_dataset_name" in the destination region selected in point 1 (see the SQL sketch after this answer).
3 - Click on the dataset you want to copy and click on "COPY".
4 - In the copy menu, under the dataset destination, select the dataset created in point 2, and click on COPY.
Both ways under the hood use the BigQuery Data Transfer Service, but you don't need to access the service directly.
In fact, both ways do exactly the same thing: they create an empty destination dataset in the region you want to copy yours into. Once you have that, the Copy function works correctly.
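For reference, the DDL from step 2 looks like this. This is a sketch: the dataset name is the placeholder from the answer above, and the OPTIONS clause is an alternative to unchecking "Automatic location selection", assuming your project accepts setting the location in DDL:

CREATE SCHEMA your_new_dataset_name
OPTIONS (location = 'EU');  -- destination region or multi-region

Once the empty dataset exists in the target region, the Copy function has somewhere to land.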

Spotfire: How to filter 'old' data from data functions running on a data link that appends 'new' data

My situation is this: I have a data link that continually appends new snapshot data to a table, and upon the arrival of a new snapshot it runs an R data function (script) which does some calculations whose results are appended to an output table. The R calculations are quite expensive and the input data is large; more importantly, the snapshots are independent of each other, so there is no need to re-process previously received snapshots every time a new snapshot arrives.
I can't make a data function that takes its own results as input (i.e. to filter by previously processed dates), and my other idea also throws up cyclic dependencies (creating a second data function to generate a second table of previously processed dates).
Has anyone experienced this issue, and could you give me some ideas on safe ways to address it? I'm new to Spotfire (and dashboarding generally).
I had the same situation. I got around it with a few steps:
1. Added a new filtering scheme just for data processing.
2. Added a calculated column to my output table which was just "X", so that the entire output table could be filtered out.
3. Added a relationship between my input table and output table on a key column, which excluded anything filtered out.
4. In my data function input parameters, selected to filter the input data by the data-processing filtering scheme.
This means that every time my function runs, it only runs on the unprocessed data.
I am currently appending the data so that it adds on update rather than replacing. This works fine when I open the file every day and save it; however, it is not working with the scheduled updates in the Web Player - it doesn't add a data function "add rows" action each time. Still looking for a way around this... possibly an IronPython script will do the trick.

Deleting duplicate rows in MS Access

I am making a table by combining columns of two tables, but there are a few duplicate rows. How do you delete duplicate rows in MS Access? I tried using the duplicate-record query and also tried an append query, but neither of them worked.
There are a couple of ways you can do this without using the Access wizards:
1. Run a make-table query, and in the SELECT statement group by every column (see the sketch after this list). If the tables have a unique ID, leave the ID out, because it will make every record unique.
2. Export a select query with the columns you want from the two tables to an Excel workbook. Once in Excel, go to the Data tab, select the columns, and click the Remove Duplicates button (make sure you expand the selection if it asks you). Then save, link the Excel workbook to Access again, and do whatever you need to do from there.
There are more ways, but you'd have to provide more details as to what you're trying to accomplish.
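A sketch of option 1 as an Access make-table query. The table and column names are placeholders, since the original post doesn't give a schema, and the explanation stays up here because Access SQL has no comment syntax: grouping on every selected column collapses exact duplicates into one row, and the autonumber ID is deliberately left out.

SELECT FirstName, LastName, City
INTO CleanedTable
FROM CombinedTable
GROUP BY FirstName, LastName, City;

SELECT DISTINCT ... INTO works just as well if you prefer it over GROUP BY.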

Binding to MS Access Local Database

I am using the Shield UI ASP.NET chart and am trying to add an MS Access data source for a pie chart. I have created a database and a table and added it to the solution. The table contains some data, so that couldn’t be the problem.
After this I configured the data source and specified the following select statement:
SELECT * FROM [Sales]
but the chart showed no data.
I then changed the query to
SELECT [ID], [ProductName], [SaleAmount] FROM [Sales]
in case a column name was missing, but there was no success either. In both cases I ran the query and it returned rows.
What could I be doing wrong?
Since no data is visualized on your chart while your database contains rows, the problem could be that you are not specifying the exact column needed for the chart. At design time, when specifying the data source, the chart doesn’t acquire any specific field from any specific column. This means that you need to add some extra markup:
<DataSeries>
<shield:ChartBarSeries DataFieldY="SaleAmount">
</shield:ChartBarSeries>
</DataSeries>
Furthermore, if you need to specify more than one series, i.e. to visualize more than one field, you need to repeat that for each field, adding the appropriate data series, as in the sketch below.
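For example, with a second numeric field the markup might look like this ("Quantity" is a hypothetical column name, not one from the original question):

<!-- "Quantity" stands in for a second numeric column in [Sales] -->
<DataSeries>
<shield:ChartBarSeries DataFieldY="SaleAmount">
</shield:ChartBarSeries>
<shield:ChartBarSeries DataFieldY="Quantity">
</shield:ChartBarSeries>
</DataSeries>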
You may find more information here:
https://www.shieldui.com/documentation/asp.net.chart/databinding/data.source.controls

Unable to view uploaded timeseries data in UI chart

Running the Databus server from the command line, I have successfully uploaded timeseries data via curl, and I am able to query the same data with the API. However, I'm unable to view any of the data in the table in the UI. After selecting "My Databus" -> Tables, it says "You do not belong to any groups that have tables yet. Add some groups, then tables!!!". Navigating to the database and selecting the table -> chart, no data comes back there either.
I have noticed that the query it issues is from a recent time range, while the data I loaded is for an earlier time period. Is there a default way to show the most recent data available in a table?
Is your table type relational or stream? If relational, what is the primary key?
If time series, this url will give you the last 10 values because of the parameter 10 and reverse=true.
http://[yourhost]/api/firstvaluesV1/10/rawdataV1/[yourtablename]?reverse=true
If it is a relational table, you can retrieve all values like so:
http://[yourhost]/api/getdataV1/select+c+from+[yourtablename]+as+c
In either URL, replace the [yourhost] and [yourtablename] values.
We do not use the tables page much. It is better to click into the specific database, as in My Databus -> Databases, and then click on the database that has your table. We are about to add a view-data link in there showing the most recent 1000 values or something like that. There is already a view chart which shows the most recent 2 hours (again, we want to change that to the most recent 1000 data points as well).
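For example, from the command line with curl (keeping the placeholders from above, since the actual host and table name aren't given):

curl "http://[yourhost]/api/firstvaluesV1/10/rawdataV1/[yourtablename]?reverse=true"

The quotes matter; without them the shell may swallow the ?reverse=true part.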
