Unable to view uploaded timeseries data in UI chart - databus

Running the Databus server from the command line, I have successfully uploaded timeseries data via curl, and I am able to query the same data with the API. However, I am unable to view any of the data in the table in the UI. After selecting "My Databus" -> Tables, it says "You do not belong to any groups that have tables yet. Add some groups, then tables!!!". Navigating to the database and selecting the table -> chart, no data comes back there either.
I have noticed that the query it issues is for a recent time range, while the data I loaded is for an earlier time period. Is there a default way to show the most recent data available in a table?

Is your table type relational or stream? If relational, what is the primary key?
If it is a time series table, this URL will give you the last 10 values, because of the 10 parameter and reverse=true.
http://[yourhost]/api/firstvaluesV1/10/rawdataV1/[yourtablename]?reverse=true
If it is a relational table, you can retrieve all values like so:
http://[yourhost]/api/getdataV1/select+c+from+[yourtablename]+as+c
Replace the [yourhost] and [yourtablename] placeholders in either URL.
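For reference, the query embedded in the getdataV1 URL is just a URL-encoded SQL-style select (the + signs stand for spaces). Decoded, it reads:

-- Decoded form of the getdataV1 query above;
-- [yourtablename] is the same placeholder as in the URL.
select c from [yourtablename] as c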
We do not use the tables page much. It is better to click into the specific database, as in My Databus -> Databases, and then click on the database that has your table. We are about to add a view data link in there showing the most recent 1000 values or something like that. There is already a view chart which shows the most recent 2 hours (again, we want to change that to the most recent 1000 data points instead as well).

Related

How to move data between datasets in different regions?

I'm using BigQuery integrated with Firebase and all the datasets are in the same project. My analytics dataset is in us-east4, but for some reason my firebase_imported_segments dataset's region is just marked as US.
I'd like to move data from the analytics dataset into a table in the firebase_imported_segments dataset.
At first, I tried a simple INSERT query, but I get the error "firebase_imported_segments was not found in location us-east4".
So then I tried building a SELECT statement and exporting the rows using "Save Results > BigQuery Table", but that gives a similar error that the destination dataset is not found. Oddly enough, if I create a table in firebase_imported_segments and try to save the results using that table name, I get a "Table already exists" error. So it's not that it can't find the firebase_imported_segments dataset; it just won't create a new table in that dataset.
How can I get around this? I saw some BigQuery documentation saying that moving data between regions is possible, but I didn't find a simple walkthrough of how it's accomplished. I'm also confused about why Firebase would put some data in one specific region (us-east4) and other data in a multi-region (US) if they aren't compatible.
You can move datasets using "Copy" in the BigQuery UI and then delete the old dataset. See the Copy dataset documentation.
Option 1: Use the Copy button.
Go to the BigQuery page in the Cloud console.
In the Explorer panel, expand your project and select a dataset.
Expand the More Actions option (triple dot button) and click Open.
Click Copy. In the Copy dataset dialog that appears, do the following:
a. In the Dataset field, either create a new dataset or select an existing dataset ID from the list. Dataset names within a project must be unique. The project and dataset can be in different regions, but not all regions are supported for cross-region dataset copying.
b. In the Location field, the location of the source dataset is displayed.
c. Optional: To overwrite both data and schema of the destination tables with the source tables, select the Overwrite destination tables checkbox.
d. To copy the dataset, click Copy.
To avoid additional storage costs, consider deleting the old dataset.
Option 2: Use the BigQuery Data Transfer Service.
Enable the BigQuery Data Transfer Service.
Create a transfer for your data source.
I tested this and can confirm that it works. I created a dataset in us-east4 named analytics_us_regional with a table named east_4_table, and copied it to a dataset located in US.
(Screenshots omitted: the copy dialog from us-east4 to the US dataset, the data transfer job created when the copy is initiated, and the tables copied to US.)
Regarding the Firebase data located in us-east4: when the Firebase export to BigQuery is enabled for the first time, the user defines the location of the tables. It is possible that the us-east4 region was selected initially.
I don't know if it will work in your case, but I had a dataset in europe-west1 that I wanted to copy to the EU region. I did it these two ways and both worked:
First way:
1 - Click on the dataset you want to copy and click on "COPY".
2 - In the copy menu, under the dataset destination, click on "CREATE NEW DATA SET" and select the destination region you want that dataset to be in. Click on CREATE DATA SET.
3 - In the "Copy data set" menu, click on COPY.
4 - You will get an error "Cannot create a transfer in REGION_EUROPE_WEST_1 when destination dataset is located in JURISDICTION_EU", but a dataset with no tables will be created in your destination region.
5 - Now if you try to copy the source dataset by clicking on COPY and selecting the dataset created in step 4, it will work.
Second way: (best way)
1 - Open a new query sheet, click on MORE -> Query settings -> Advanced options, uncheck "Automatic location selection", and select the destination region or multi-region you want (in my case EU).
2 - In this query sheet, run "CREATE SCHEMA your_new_dataset_name". This will create the dataset "your_new_dataset_name" in the destination region selected in step 1 (a DDL alternative is sketched after these steps).
3 - Click on the dataset you want to copy and click on "COPY".
4 - In the copy menu, under the dataset destination, select the dataset created in step 2, and click on COPY.
Both ways under the hood use the BigQuery Data Transfer Service, but you don't need to access the service directly.
In fact, both ways do exactly the same thing: they create an empty destination dataset in the region you want to copy to. Once you have that, the Copy function works correctly.
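As a DDL sketch of step 2 of the second way (so you can skip unchecking "Automatic location selection"), BigQuery's CREATE SCHEMA statement accepts an explicit location option; the dataset name below is the same placeholder used in the steps:

-- Create the empty destination dataset directly in the target
-- region or multi-region ('EU' matches the example above).
CREATE SCHEMA your_new_dataset_name
OPTIONS (location = 'EU');

Once this empty dataset exists in the right location, the Copy function works just as described above.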

How to list unique values of a particular field in Kibana

I have a field named rpc in my Elasticsearch database and I am displaying it using Kibana. When I search in the Kibana search bar like:
rpc:*
It displays all the values of the rpc field, but I want only the unique values to be displayed.
I have been playing around with Kibana 4 for a couple of weeks now. I find it intuitive and simple, and the experience has been great so far. Following your question, I tried getting unique results via a Data Table visualization. Why? Because I personally find it easier to understand. Following are the steps:
1. Get unique count
Create the visualization (Visualize -> Data Table). First let's get the count of how many unique entries we have for a particular field (we will use this later for verification). I'm using clientip.raw, but as far as I can see it will work just fine with any friendly field name too.
2. Set the aggregation right
Set your aggregation back to Count and add a Split Rows bucket as follows. Not doing this will give you a count of 1 for each field value (since it is looking for unique counts) when you populate the table. The noteworthy part is setting the Top field to 0, because Kibana won't let you enter anything other than a digit (obviously!). This was the tricky part. Hit Apply and you'll get the results: unique field values and the count of each of them.
3. Verification:
Going to the last page of the table, we see there are exactly 543 results. This is how I know it works.
What Next?
Save this visualization and add it to a dashboard. There you can always check the request, query, response, and other stats.
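Under the hood this visualization runs a terms aggregation. As a side note, if your stack is recent enough (Elasticsearch 6.3+ ships a SQL interface), the same unique-values-with-counts question can be asked directly; a minimal sketch, assuming a hypothetical index named my-index and the rpc field from the question:

-- Unique values of rpc and how often each occurs;
-- "my-index" is a hypothetical index name.
SELECT rpc, COUNT(*) AS occurrences
FROM "my-index"
GROUP BY rpc
ORDER BY occurrences DESC;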
Just an addition to mathakoot's answer above.
For users of a newer version (which no longer allows a bucket size of 0), just set a value greater than the maximum number of results,
and set the same value in the Options > Per Page field.
I am using Kibana 6 so the UI looks a bit different than the older answers here.
Here is what worked for me
Create a visualization from your query, I used a line graph type (don't think it matters)
Under Data, set metrics aggregation = "Unique Count" and set field to your field.
Set x-axis aggregation = "Terms" and set field to your field.
Set Size to a number larger than your number of records
Under Metrics and Axes, disable drawing of the graph, circles, and labels (this really helps the UI not lag)
Run query and then click "Inspect" and download CSV
I wanted to achieve something similar but I'm stuck with Kibana 3.1.
I simply added a panel of type "TERMS" and configured its Field = User-agent and left everything else on default values. This gave me a nice bar chart with one bar for each User-agent.

Use the same SQL Server table to do different updates, is there a way to do that?

I'm using ASP.NET (VB.NET). In my database:
I have one table called Trade. The same rows of this table are used by 3 different users. These users can make different updates on this table, and they should all see the basic information of the table (by "basic" I mean the state before the Trade table was updated).
The problem is that when the first user modifies the table's rows, the second and third users cannot see the basic information any more, and if they decide to change or update some data, the first user will lose his updated rows.
The data is overwritten every time the users make updates on the table.
What I want to know is whether there is a way to make something like a copy, or an image, of the table for the 3 users, so that every user can update normally, without creating the same table with the same rows 3 times.
Update
My table structure is: Trade(trName, Carrier, POl, POD, Vgp, Qgp). There is no primary key.
Thank you.
A solution to your problem could be two copies of the original table: always show the original table to the user as the initial data, and keep the updated data in the second table. The trick is maintaining the log. For that, you have to maintain a log table, which has all the fields of the original table along with one additional column, "UserId", holding the ID of the user who changed the value. Each time before updating the data, copy it into the log table. If this suits your need, post the fields of your table and we can work out the table structures.
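A minimal sketch of that log table in T-SQL, based on the Trade structure given in the question (the column types are assumptions, since only column names were posted):

-- Log table: all Trade columns plus who changed the row.
CREATE TABLE TradeLog (
    trName  VARCHAR(100),
    Carrier VARCHAR(100),
    POl     VARCHAR(100),
    POD     VARCHAR(100),
    Vgp     VARCHAR(100),
    Qgp     VARCHAR(100),
    UserId  INT  -- ID of the user who changed the value
);

-- Before each update, copy the current row into the log.
-- @UserId and the WHERE filter are placeholders; with no primary
-- key, filter on whatever identifies the row being changed.
INSERT INTO TradeLog (trName, Carrier, POl, POD, Vgp, Qgp, UserId)
SELECT trName, Carrier, POl, POD, Vgp, Qgp, @UserId
FROM Trade
WHERE trName = @trName;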

Binding to MS Access Local Database

I am using the Shield UI ASP.NET chart and am trying to add an MS Access data source for a pie chart. I have created a database and a table and added it to the solution. The table contains some data, so that couldn't be the problem.
After this I configured the data source and specified the following select statement:
SELECT * FROM [Sales]
but the chart showed no data.
I then changed the query to
SELECT [ID], [ProductName], [SaleAmount] FROM [Sales]
because perhaps a column name was missing, but there was no success either. In both cases I ran the query and it returned rows.
What could I be doing wrong?
Since there is no data visualized on your chart while your database contains rows, the problem could be that you are not specifying the exact column needed for the chart. At design time, when specifying the data source, the chart doesn't acquire any specific field from any specific column. This means that you need to add some extra markup:
<DataSeries>
    <shield:ChartBarSeries DataFieldY="SaleAmount">
    </shield:ChartBarSeries>
</DataSeries>
Furthermore, if you need to specify more than one series (e.g. to visualize more than one field), you need to repeat that for each field, adding the appropriate data series.
You may find more information here:
https://www.shieldui.com/documentation/asp.net.chart/databinding/data.source.controls

How can I create an Excel sheet directly from a DataTable

I need to get all the data from my DataTable into an Excel sheet, but I don't want to use a for loop to write it line by line, because if there are 200 or more rows it takes time.
Is there a faster way to do it?
The fastest way I know of is to go to the Data tab: Get External Data, create a connection to the database using ODBC or whatever, and then select the table you want per sheet. It will run a query and get all the data based on the query or table specified.
You can then right-click on the data section and refresh the data anytime you want, OR you can set the data to refresh on open, on sheet activate, etc.
This is a "PULL" approach as opposed to a PUSH approach others are discussing.
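For what it's worth, the connection behind this approach just runs an ordinary query against the table; a minimal sketch of what Excel executes on each refresh (the table name is a placeholder):

-- Excel re-runs this on every refresh, so the sheet stays current
-- without any row-by-row writing from application code.
SELECT * FROM YourDataTable;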
