Can anyone share their experience working with the Elixir reporting tool?
I need to create a table with dynamic columns from the database. How do I build a table with dynamic columns using the Elixir reporting tool?
SQL: select id, name from customer where dept_id = '1442';
The above query returns a set of records. I need to pass those records dynamically into a table using the Elixir reporting tool.
The tool description can be found below:
http://www.elixirtech.com/
I'm new to DynamoDB and scratching my head over how to query my table; I think I might be overthinking things. I have a simple table of partners with their corresponding developers. I would like to query the partner data based on a given developer id. I've created a global secondary index with the base table's SK as its PK (which lets me query with developer#123), but how would I get the partner item (partner#123) by developer id (developer#123) in this scenario in one single query? I'm only querying through the AWS console right now, so no code examples are available.
Any pointers would be greatly appreciated!
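Not an authoritative answer, but for illustration here is a minimal boto3 sketch of what that single query against the GSI could look like; the table name, index name, and attribute names are placeholders, since only the console setup is described above.

import boto3
from boto3.dynamodb.conditions import Key

# Placeholder table and index names -- adjust to your actual schema.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("PartnersTable")

# Querying the GSI (keyed on the base table's SK) with the developer id returns
# the item(s) attached to it; the projected base-table PK on each returned item
# is the partner id (e.g. partner#123), so one query is enough.
response = table.query(
    IndexName="GSI1",
    KeyConditionExpression=Key("SK").eq("developer#123"),
)

for item in response["Items"]:
    print(item.get("PK"), item)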
I want a query that returns the column relations (references between columns) for a table, or for all databases.
In MySQL, for example, we have a query like:
SELECT * FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE WHERE
TABLE_SCHEMA = 'database_name';
So, what is the equivalent query in Progress OpenEdge to get the column relations?
Also, where are procedures, functions, and views stored in a Progress database?
To find relationships, views, or stored procedures you must query the meta-schema. Stefan's link documents SYSTABLES, SYSCOLUMNS, SYSINDEXES, SYSPROCEDURES, and SYSVIEWS. These are the tables that define what you have asked for.
https://docs.progress.com/bundle/openedge-sql-reference-117/page/OpenEdge-SQL-System-Catalog-Tables.html
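For illustration only, a rough sketch of reading those catalog tables over ODBC from Python is below. The DSN, credentials, and the assumption that the catalog tables are owned by the sysprogress schema are placeholders; adjust them to your environment.

import pyodbc

# Placeholder DSN/credentials pointing at the OpenEdge SQL engine.
conn = pyodbc.connect("DSN=myOpenEdgeDsn;UID=sqluser;PWD=secret")
cur = conn.cursor()

# Dump the catalog tables that describe tables, columns, indexes, procedures
# and views (assuming the usual sysprogress schema owner).
for catalog_table in ("SYSTABLES", "SYSCOLUMNS", "SYSINDEXES", "SYSPROCEDURES", "SYSVIEWS"):
    cur.execute("SELECT * FROM sysprogress." + catalog_table)
    print(catalog_table, [col[0] for col in cur.description])
    for row in cur.fetchmany(5):
        print(row)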
The Progress database does not explicitly store relationships. They are implied, by convention, when there are common field names between tables but this does not create any special relationship in the engine. You can parse the tables above and make some guesses but, ultimately, you probably need to refer to the documentation for the application that you are working with.
Most Progress databases were created to be used by Progress 4GL applications. SQL came later and is mostly used to support third-party reporting tools. As a result there are two personas - the 4GL and the SQL. They have many common capabilities but there are some things that they do not share. Stored procedures are one such feature. You can create them on the SQL side, but the 4GL side of things does not know about them and will not use them to enforce constraints or for any other purpose. Since, as I mentioned, most Progress databases are created to support a 4GL application, it is very unusual to have any SQL stored procedures.
(To make matters even more complicated, there is some old SQL-89 syntax embedded within the 4GL engine. But this very old syntax is really just token SQL support and is not available to non-4GL programs.)
I have a clustered, partitioned table exported from GA 360 (see the attached image). I would like to create an exact replica of it. Using the Web UI it's not possible. I created a backup table using the bq command-line tool, but still no luck.
Also, whenever we check the preview it has a day filter. It looks like this:
Whenever data is appended to the backup table, I don't find this filter there, even though the option was set to true while creating the table.
If you can give more context about handling this kind of table, it would be appreciated.
Those are indeed sharded tables. As explained by #N. L, they follow a time-based naming approach: [PREFIX]_YYYYMMDD. They then get grouped together. The procedure explained for backing them up seems correct. Anyhow, I would recommend using partitioned tables instead, as they are easier to back up and generally perform better.
This is not a clustered/partitioned table. It is a sharded, non-partitioned table with a common prefix. Once you start creating multiple tables with the same prefix, they are shown grouped under that prefix.
Ex:
ga_session_20190101
ga_session_20190102
Both of these tables will be grouped together.
To take a backup of these tables you need to create a script that copies each source table to a destination table with the same name, and execute that script with the bq command-line tool under the same project.
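A minimal sketch of such a copy script with the Python BigQuery client is below; the project, dataset, and prefix names are placeholders, and the same loop can just as well be written around bq cp in a shell script.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")                          # placeholder project
source_dataset = bigquery.DatasetReference("my-project", "ga_source")   # placeholder datasets
backup_dataset = bigquery.DatasetReference("my-project", "ga_backup")
prefix = "ga_session_"                                                   # the shard prefix

for table in client.list_tables(source_dataset):
    if not table.table_id.startswith(prefix):
        continue
    # Copy each shard to a table with the same name in the backup dataset.
    job = client.copy_table(
        source_dataset.table(table.table_id),
        backup_dataset.table(table.table_id),
    )
    job.result()  # wait for the copy job to finish
    print("Copied " + table.table_id)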
I'm pretty new to NoSQL in general, but I'm trying Azure's Cosmos DB to store my logs, and I've been really struggling with its UI.
In the image below, I just created a random collection and added a simple entity with 3 fields. All I want to do at this point is reduce the columns so that I only see the ones I want. Granted, I can't remove the first 3 columns, but whenever I try to remove any column at all, I get this error and instead of removing the column it just leaves it blank. I can't find any documentation for this simple UI (which shouldn't really be the case), so here are some things maybe someone can help me out with:
1. How do I remove columns? Like in SQL, only display the columns I want?
2. How do I create a custom query? The query builder here is really limited and, to be honest, unusable.
3. Is there any other DB client, similar to MongoDB's Compass, that might make the UI easier?
1. How do I remove columns? Like in SQL, only display the columns I want?
You can choose your desired columns in the advanced options, but PartitionKey, RowKey, and Timestamp are default columns in the Cosmos DB Table API; you can't remove them in the portal Data Explorer.
2. How do I create a custom query?
Same as above: this is a limitation of the portal UI. (A sketch of doing both column selection and custom filtering through the SDK follows this answer.)
3. Is there any other DB client similar to MongoDB's Compass that might make the UI easier?
You could download Azure Storage Explorer to manage the contents of your account more easily.
It's more flexible than the portal UI; for example, you can filter the columns, including the default ones.
However, the general design of the tool's UI is similar to the portal UI.
BTW, as #David mentioned in the comment, if you want to query the data more flexibly with SQL, you should consider using the Cosmos DB SQL API instead of the Cosmos DB Table API.
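As mentioned under point 2, here is a small sketch of doing both column projection and a custom filter programmatically with the azure-data-tables Python SDK; the connection string, table name, and property names are placeholders.

from azure.data.tables import TableClient

# Placeholder connection string and table name.
table_client = TableClient.from_connection_string(
    conn_str="<your-cosmos-table-connection-string>",
    table_name="logs",
)

# OData filter plus a column projection; unlike the portal Data Explorer, the
# default PartitionKey/RowKey/Timestamp columns can simply be left out of `select`.
entities = table_client.query_entities(
    query_filter="Level eq 'Error'",      # hypothetical property on the entity
    select=["Message", "Level"],
)

for entity in entities:
    print(dict(entity))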
I agree that the experience can be quite daunting. As a suggestion, you could try Cerebrata (https://cerebrata.com/) as a cost-effective solution.
The tool lets you remove the desired columns in just a few clicks (as shown in the GIF below). It also allows you to edit multiple entities in bulk, and more.
CosmosDB Delete entities Table API - Cerebrata
We have a set of tables with data about users' online interactions, and we want to create a table with a schema similar to the GA BigQuery Export schema (this feature is not yet available in Russia).
I couldn't find any information on how to create a RECORD field in BigQuery by querying existing tables.
On the contrary, the documentation says that "This type is only available when using JSON source files."
Is there any workaround, or is this feature expected in the near future? Can I submit a feature request?
Currently the only way to get nested and repeated records into BigQuery is loading JSON files. Once a query is run, all structure is flattened.
Feature request noted - hopefully BigQuery will support emitting nested record results!
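For reference, a minimal sketch of that JSON-load route with the Python BigQuery client is below; the bucket path, destination table, and field names are placeholders that only mimic the GA export style.

from google.cloud import bigquery

client = bigquery.Client()

# Nested and repeated fields are declared as RECORD / REPEATED in the schema.
schema = [
    bigquery.SchemaField("fullVisitorId", "STRING"),
    bigquery.SchemaField(
        "hits", "RECORD", mode="REPEATED",
        fields=[
            bigquery.SchemaField("hitNumber", "INTEGER"),
            bigquery.SchemaField("page", "RECORD", fields=[
                bigquery.SchemaField("pagePath", "STRING"),
            ]),
        ],
    ),
]

job_config = bigquery.LoadJobConfig(
    schema=schema,
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
)

# Each line of the source file is one JSON object, e.g.:
# {"fullVisitorId": "123", "hits": [{"hitNumber": 1, "page": {"pagePath": "/"}}]}
load_job = client.load_table_from_uri(
    "gs://my-bucket/sessions.json",        # placeholder GCS path
    "my-project.my_dataset.sessions",      # placeholder destination table
    job_config=job_config,
)
load_job.result()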