Compare identical tables in FileMaker 19 - ODBC

I'm writing a simple app that needs to update some data from an ODBC source.
I have the same table in a MySQL database and the FileMaker app. I want to write a script that will run through all records in one table and compare them to the records in the other table. If there is a difference between them, show a dialog and let the user choose to copy the new information into the FileMaker record.
I've got the ODBC and external data source part working and I have the tables visible in FileMaker (ESS). Looping over all records is easy, but I can't figure out how to find the identical record in the ODBC source via its primary key.
The tables are absolutely identical except for where they are stored - same data fields, etc. - and in most cases 99% of the data will be identical.

If you are able to add the MySQL table as an external data source in FileMaker, you should be able to define a relationship between the two tables, matching on the primary key field.
Once you have done that, you can access the corresponding record's data directly through the relationship.
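If you would rather fetch the matching row from within the script instead of going through a relationship, one alternative is FileMaker's ExecuteSQL function, which can also query the ESS shadow table. A minimal sketch, where the table occurrence and field names are placeholders:
ExecuteSQL ( "SELECT name, email, phone FROM customers_mysql WHERE id = ?" ; "" ; "" ; Customers::id )
The returned values can then be compared field by field against the local record before showing the dialog; the relationship-based approach above remains the more idiomatic way to copy the data across.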

Related

Create backup of BigQuery clustered table

I have a clustered, partitioned table exported from GA 360 (a screenshot is attached to the original post). I would like to create an exact replica of it. Using the Web UI it's not possible, and I created a backup table using the bq command line tool with no luck either.
Also, whenever we check the preview it has a day filter, shown in the attached screenshot.
Whenever data is appended to the backup table, I don't find this filter there, even though this option is set to true while creating the table.
If you can give more context about handling this kind of table it would be beneficial.
Those are indeed sharded tables. As explained by @N. L, they follow a time-based naming approach: [PREFIX]_YYYYMMDD, and they then get grouped together. The procedure explained for backing them up seems correct. Anyhow, I would recommend using partitioned tables, as they are easier to back up and perform better in general.
This is not a clustered / partitioned table. This is a sharded, non-partitioned table with one common prefix. Once you start creating multiple tables with the same prefix, they are shown grouped under that prefix.
Ex:
ga_session_20190101
ga_session_20190102
Both of these tables will be grouped together.
To back up these tables you need to create a script that copies each source table to a destination table with the same name, and execute that script using the bq command line tool under the same project.
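A minimal sketch of such a script, using hypothetical dataset names (my_dataset holds the GA export, my_backup receives the copies); each bq cp call copies one daily shard to a destination table with the same name:
bq cp my_dataset.ga_session_20190101 my_backup.ga_session_20190101
bq cp my_dataset.ga_session_20190102 my_backup.ga_session_20190102
One such line per shard (generated per date) gives you a full backup of the group within the same project.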

How to connect RStudio to an Oracle database and join an external CSV file to the queried data for a ShinyDashboard in R?

Scenario:
I am building a ShinyDashboard that will allow end users to track system usability at a specific location.
For this I need two tables that are in an Oracle database (described below). I also have one CSV file that I would like to join in after the data is queried. This CSV file contains data about system locations and consists of System and Location columns; the purpose of joining it is to retrieve all systems located at a specific location. From the end-user perspective, they would like to see where all systems are located and how frequently those systems are being used.
Table1 contains user login information. It consists of UserName and UserLogin Date columns.
Table2 contains data about which user is using which system. (It does not contain data for all systems, only for systems currently in use.) It consists of UserName and System columns.
Question #1: How should I approach building the app? I read this post where the author suggests it's possible to connect to an Oracle database from RStudio using the RODBC package. However, I am not sure where the code for connecting to the Oracle database should go. Does the connection code go in the global environment, or in the server part of the Shiny app?
Question #2: How do I join the external data? I know I can join Table1 and Table2 on the "UserName" column, and then join on the "System" column from the CSV file. Where should the code for the join go?
Question #3: Regarding the username and password part of the RODBC package: since different end users will be using the app, what should I put for the username and password when deploying the application?
I know this question covers various areas; if possible, please provide the basic structure of a ShinyDashboard app that can connect to an Oracle database, query the data from the two tables, join it with the external CSV file, and render the data on the dashboard as a visualization.
If it is Oracle you will need the RJDBC package.
https://cran.r-project.org/web/packages/RJDBC/index.html
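As for the join between the two Oracle tables (Question #2), one option is to push it into the query sent over that connection; a rough sketch, where the table and column names are taken from the description above and the login-date column name is a guess:
SELECT t1.UserName,
       t1.UserLoginDate,
       t2.System
FROM Table1 t1
JOIN Table2 t2
  ON t1.UserName = t2.UserName
The CSV of system locations would then be read into R separately and joined to this result on the System column before rendering the visualization.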

SQLite without main database?

We have several pairs of databases that relate to each other, i.e.
db1a and db1b
db2a and db2b
db3a and db3b
etc.
where there are cross-database joins between db1a and db1b, db2a and db2b, etc.
Given an open SQLite database connection, they can be attached, e.g.
ATTACH 'db1a' as a; ATTACH 'db1b' as b;
and later detached and replaced with other pairs when needed.
How should the database "connection" be created, though, when there is originally no real database to open? Using the first database of a pair as the main database gives it much more significance than is meaningful - and since the main database cannot be detached, it's a hindrance later on.
The only way I can see is opening a :memory: or a temporary ('') database connection. Is there some better option available?
In the absence of any better alternative, :memory: is the best choice. (:temp: is a normal file name, and invalid in many OSes; a temp DB would have an empty file name.)
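A minimal sketch of that flow, with the connection opened on ':memory:' so the main database remains an empty, insignificant placeholder (file names follow the example above):
ATTACH DATABASE 'db1a' AS a;
ATTACH DATABASE 'db1b' AS b;
-- ... cross-database joins between a.* and b.* ...
DETACH DATABASE a;
DETACH DATABASE b;
ATTACH DATABASE 'db2a' AS a;   -- next pair, same connection
ATTACH DATABASE 'db2b' AS b;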
If you have some meta database that lists all other databases, you could use that.
Please note that when you have multiple attached databases, any change to one of them will involve a multi-database transaction.
So if the various database pairs do not have any relationship with other databases with different numbers, consider using a new connection for each pair.

Moving data in Access 2010

I'm a bit inexperienced but have managed to learn how to use my database (Access 2010), but now I need to remove old files. In the database I have a primary table plus multiple related tables which store additional information, such as my notes.
I can't seem to figure out how to remove old files based on an input date.
I want to completely remove all files from 2011 and earlier, along with the data stored in the dependent tables, after backing up the database.
I've tried a delete query, and I've tried to simply copy and paste inside the tables. I know there has to be a way to do this without deleting individual files.
When I run a delete query I get invalid key errors, and when I delete files from my primary table I get errors indicating that there is associated data stored in the other tables.
Since I can't seem to delete all data across all tables for a certain date range, can anyone point out what I might be doing wrong?
You will need to JOIN the additional tables to your "primary" table and include a WHERE clause so that only the records matching your date range are deleted.
For an example, see this answer https://dba.stackexchange.com/a/102738
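A hedged sketch of what one such delete query could look like in Access SQL, with placeholder table, field, and date names (run one query per dependent table, deleting from the child tables before the primary table, and only after your backup):
DELETE NotesTable.*
FROM NotesTable
INNER JOIN PrimaryTable ON NotesTable.PrimaryID = PrimaryTable.ID
WHERE PrimaryTable.FileDate <= #12/31/2011#;
If Access reports that it cannot delete from the joined query, the usual workarounds are DELETE DISTINCTROW NotesTable.* or a subquery such as WHERE NotesTable.PrimaryID IN (SELECT ID FROM PrimaryTable WHERE FileDate <= #12/31/2011#).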

Is there a way to find the SQL that updated a particular field at a particular time?

Let's assume that I know when a particular database record was updated. I know that somewhere exists a history of all SQL that's executed, perhaps only accessible by a DBA. If I could access this history, I could SELECT from it where the query text is LIKE '%fieldname%'. While this would pretty much pull up any transactional query containing the field name I am looking for, it's a great start, especially if I can filter the recordset down to a particular date/time range.
I've discovered the dbc.DBQLogTbl view, but it doesn't appear to work as I expect. Is there another view that contains the information I am looking for?
It depends on the level of database query logging (DBQL) that has been enabled by the DBA.
Some DBAs may elect not to log detailed information for tactical queries, so it is best to consult with your DBA team to understand what is being captured. You can also query DBC.DBQLRules to determine what level of logging has been enabled.
The following data dictionary objects will be of particular interest to your question:
DBC.QryLog contains the details about the query with respect to the user, session, application, type of statement, CPU, IO, and other fields associated with a particular query.
DBC.QryLogSQL contains the SQL statements. If a SQL statement exceeds a certain length it is split across multiple rows, which is denoted by a column in this table. If you join this to the main Query Log table, care must be taken if you are aggregating any metrics from the Query Log table - although, more often than not, if you are joining the Query Log table to the SQL table you are not doing any aggregation.
DBC.QryLogObjects contains the objects used by a particular query and how they were used. This includes tables, columns, and indexes referenced by a particular query.
These tables can be joined together in DBC via QueryID and ProcID. There are a few other tables that capture information about the queries but are beyond the scope of this particular question. You can find out about those in the Teradata Manuals.
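As a rough sketch, with the date range and field name as placeholders to adapt, the join between the query log and its SQL text might look like this:
SELECT l.UserName,
       l.StartTime,
       s.SqlRowNo,
       s.SqlTextInfo
FROM DBC.QryLog l
JOIN DBC.QryLogSQL s
  ON l.ProcID = s.ProcID
 AND l.QueryID = s.QueryID
WHERE l.StartTime BETWEEN TIMESTAMP '2019-06-01 00:00:00' AND TIMESTAMP '2019-06-02 00:00:00'
  AND s.SqlTextInfo LIKE '%fieldname%'
ORDER BY l.QueryID, s.SqlRowNo;
Ordering by SqlRowNo keeps long statements that were split across multiple rows in their original order.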
Check with your DBA team to determine the level of logging being done and where the historical DBQL data is retained. Often DBQL data is moved nightly to a historical database, and there is often a ten-minute delay before data is flushed from cache to the DBC tables. Your DBA team can tell you where to find historical DBQL data.
