Use MySQL InnoDB Cluster sandbox after system restart / not working? - innodb

During installation I selected the InnoDB Cluster sandbox configuration, which left me with a sandbox running at 3310 and the MySQL server at 3306. After restarting, nothing was working. How do I restart and reinitialize the InnoDB Cluster sandbox? I only selected this option to check out what InnoDB Cluster is.

I had to go through several pieces of documentation on InnoDB Cluster to solve this problem.
You can use these commands in mysqlsh:
- startSandboxInstance: starts an existing MySQL Server sandbox instance on localhost.
- rebootClusterFromCompleteOutage: brings a cluster back ONLINE after a complete outage. If you don't run it, the instances stay in super-read-only mode. It can take around 10 minutes, so don't close the shell thinking it is stuck.
For example:
dba.startSandboxInstance(3310)
dba.startSandboxInstance(3320)
dba.startSandboxInstance(3330)
Then run dba.rebootClusterFromCompleteOutage() to bring Group Replication back up and clear super-read-only mode.
If it shows "Dba.rebootClusterFromCompleteOutage: An open session is required to perform this operation. (RuntimeError)", connect first with \connect root@localhost:3310 and then run dba.rebootClusterFromCompleteOutage() again.
I am attaching a screenshot of how I used these commands in mysqlsh.
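For reference, the whole sequence in a single mysqlsh session (JavaScript mode) looks roughly like this, assuming the default sandbox ports 3310, 3320 and 3330:
// start the sandbox instances that existed before the restart
dba.startSandboxInstance(3310)
dba.startSandboxInstance(3320)
dba.startSandboxInstance(3330)
// connect to one of them, then rebuild the group and clear super-read-only mode
\connect root@localhost:3310
var cluster = dba.rebootClusterFromCompleteOutage()
// verify that all members are back ONLINE
cluster.status()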

Related

DB is still writable after changing to read only

I'm trying to make a MariaDB database read-only, but it does not work correctly until we restart the service (a Spring Boot application); the service keeps writing data until the restart.
How can we make the DB read-only so that writes stop immediately, without needing a service restart?
I tried these commands:
FLUSH TABLES WITH READ LOCK; 
SET GLOBAL read_only = 1; 
FLUSH TABLES WITH READ LOCK; 
None of the above fulfills the requirement. Please suggest a different approach if there is one.
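For what it's worth, a common reason writes keep happening after SET GLOBAL read_only = 1 is that the application's account has the SUPER privilege (read_only does not apply to SUPER users), and FLUSH TABLES WITH READ LOCK only holds while the session that issued it stays connected. A minimal sketch, assuming the application connects as 'app_user' (a placeholder name):
-- check whether the application account holds SUPER; read_only does not block SUPER users
SHOW GRANTS FOR 'app_user'@'%';
-- make the server read-only for ordinary accounts
SET GLOBAL read_only = 1;
-- connections opened before the change (e.g. from the app's pool) may still hold open
-- transactions, so find and kill them instead of restarting the service
SHOW PROCESSLIST;
KILL 12345;  -- 12345 is just an example id taken from SHOW PROCESSLIST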

Various difficulties creating ASP.NET Session tables via aspnet_regsql.exe

We're trying to move ASP.NET session state for one of our Azure web apps into a database, and it seems like the aspnet_regsql.exe tool is the way to go. Unfortunately, I'm getting stuck on a few issues below. It's an Azure SQL database, and I'm connecting using the server's admin account.
I initially wanted to add the session tables to our existing database, so I ran .\aspnet_regsql.exe -U adminusername -P adminpassword -S servername.database.windows.net -d databasename -ssadd -sstype c, which throws the exception "Database 'databasename' already exists. Choose a different database name".
Omitting the database name and running it again throws the exception "Execution Timeout Expired" after about 30 seconds, which is just the default for SqlCommand.CommandTimeout. This occurs while executing the CREATE DATABASE command. I tried creating a database manually, and it takes about 50 seconds for some reason. The database is S0 tier and is not under any load.
Running aspnet_regsql again on the already-created database (because it's idempotent, right?) leads to the "Database already exists" error, as does pre-creating an empty database for it to start from.
There's no flag that lets me increase the timeout, and I can't set command timeout using the -C (connection string) flag
Adding the -sqlexportonly flag to generate a script and just running that directly doesn't work either (yes, I know I'm not supposed to run InstallSqlState.sql directly). It throws a whole load of error messages saying things like:
Reference to database and/or server name in 'msdb.dbo.sp_add_job' is not supported in this version of SQL Server.
USE statement is not supported to switch between databases.
Which makes me think this script might have some issues with an Azure SQL database...
Does anyone have any ideas?
Update:
It looks like all the errors involving 'msdb' are related to removing and re-adding a SQL Server Agent job called 'Job_DeleteExpiredSessions'. Azure SQL doesn't support Agent jobs, so the only options I can see are:
- Run SQL Server on a VM instead (vastly more expensive, and I'd rather stick with the platform services than have to manage VMs)
- Implement one of those "Elastic Job Agents"
- Perhaps move the same functionality elsewhere (e.g. a stored proc)? (See the sketch at the end of this answer.)
Turns out Microsoft has an article about how to do exactly what I need, which I somehow missed during my searching yesterday. Hopefully this answer saves someone else a few hours of frustration. All the info you need is at https://azure.microsoft.com/en-au/blog/using-sql-azure-for-session-state/.
Note that YMMV since it's from 2010 and also says in scary red letters
"Microsoft does not support SQL Session State Management using SQL Azure databases for ASP.net applications"
Nevertheless, they provide a working script that seems to do exactly what I need.
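If anyone instead goes the stored-procedure route from the update above, a minimal sketch is below. It assumes the standard custom session-state schema, where expired rows live in ASPStateTempSessions with an Expires column, so check the names against whatever the script actually created; you would also still need something (an Elastic Job, a timer-triggered Azure Function, etc.) to call it periodically, since that is what the agent job was doing.
-- rough stand-in for the Job_DeleteExpiredSessions agent job
CREATE PROCEDURE dbo.DeleteExpiredSessions
AS
BEGIN
    SET NOCOUNT ON;
    -- delete any session rows whose expiry time has passed
    DELETE FROM dbo.ASPStateTempSessions
    WHERE Expires < GETUTCDATE();
END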

Create cluster for existing MariaDB database

I have an existing database for which I was looking to create a new clustered environment. I tried the following steps:
1. Create a new database instance (OS & DB server).
2. Take a backup/snapshot of all the databases from the existing database server.
3. Import the snapshot into the new server.
4. Configure the cluster - I referred to various sites, but they all give the same solution. Example reference site: https://vexxhost.com/resources/tutorials/how-to-configure-a-galera-cluster-with-mariadb-on-ubuntu-12-04/ (see the sample settings below).
5. Run the command (sudo galera_new_cluster) on the primary server. The primary server had no issue starting up, but when we tried starting the secondary server, it actually crashed for some reason.
Unfortunately, at this point I don't have the logs from the failure stored or backed up. But it looked like it tried to sync with the primary server and failed somewhere in that process.
Some additional details on the setup: both servers use the same username/password, and I created a passwordless SSH connection between the two machines. Also, the syncing method is set to rsync.
Am I missing something or doing it wrong? Is there a better way to do this?
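For comparison, the Galera-related part of the MariaDB configuration (e.g. /etc/mysql/conf.d/galera.cnf) on each node usually looks something like the sketch below; the IP addresses, node names and provider path are placeholders for your environment. Note also that galera_new_cluster is only for bootstrapping the first node; the remaining nodes should be started with a normal service start so they join the cluster and sync (via rsync SST in this case) from the first one.
[galera]
wsrep_on                 = ON
wsrep_provider           = /usr/lib/galera/libgalera_smm.so   # path varies by distribution
wsrep_cluster_name       = "my_cluster"
wsrep_cluster_address    = "gcomm://192.0.2.10,192.0.2.11"    # all node IPs (placeholders)
wsrep_node_address       = "192.0.2.11"                       # this node's own IP
wsrep_node_name          = "node2"
wsrep_sst_method         = rsync
binlog_format            = row
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2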

How to access on premise Teradata from Azure Databricks

We need to connect to on-premises Teradata from Azure Databricks.
Is that possible at all? If yes, please let me know how.
I was looking for this information as well and I recently was able to access our Teradata instance from Databricks. Here is how I was able to do it.
Step 1. Check your cloud connectivity.
%sh nc -vz 'jdbcHostname' 'jdbcPort'
- 'jdbcHostname' is your Teradata server hostname.
- 'jdbcPort' is the port your Teradata server listens on. By default, Teradata listens on TCP port 1025.
Also check out Databricks' best practices on connecting to other infrastructure.
Step 2. Install Teradata JDBC driver.
Teradata Downloads page provides JDBC drivers by version and archive type. You can also check the Teradata JDBC Driver Supported Platforms page to make sure you pick the right version of the driver.
Databricks offers multiple ways to install a JDBC library JAR for databases whose drivers are not available in Databricks. Please refer to the Databricks Libraries to learn more and pick the one that is right for you.
Once installed, you should see it listed in the Cluster details page under the Libraries tab.
Terajdbc4.jar dbfs:/workspace/libs/terajdbc4.jar
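If you go the DBFS route shown above, one way to stage the JAR at that path is the sketch below; the /tmp location is just an assumption about where terajdbc4.jar was downloaded on the driver node.
# copy the downloaded driver into DBFS so it can be installed as a cluster library
dbutils.fs.cp("file:/tmp/terajdbc4.jar", "dbfs:/workspace/libs/terajdbc4.jar")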
Step 3. Connect to Teradata from Databricks.
You can define some variables so the connection can be created programmatically. Since my instance requires LDAP, I added LOGMECH=LDAP to the URL; without LOGMECH=LDAP it returns a "username or password invalid" error message.
(Replace the italicized text with the values from your environment.)
driver = "com.teradata.jdbc.TeraDriver"
url = "jdbc:teradata://Teradata_database_server/Database=Teradata_database_name,LOGMECH=LDAP"
table = "Teradata_schema.Teradata_tablename_or_viewname"
user = "your_username"
password = "your_password"
Now that the connection variables are specified, you can create a DataFrame. You can also explicitly set this to a particular schema if you have one already. Please refer to Spark SQL Guide for more information.
Now, let’s create a DataFrame in Python.
My_remote_table = spark.read.format("jdbc") \
    .option("driver", driver) \
    .option("url", url) \
    .option("dbtable", table) \
    .option("user", user) \
    .option("password", password) \
    .load()
Now that the DataFrame is created, it can be queried. For instance, you can select some particular columns and display them within Databricks.
display(My_remote_table.select("EXAMPLE_COLUMN"))
Step 4. Create a temporary view or a permanent table.
My_remote_table.createOrReplaceTempView("YOUR_TEMP_VIEW_NAME")
or
My_remote_table.write.format("parquet").saveAsTable("MY_PERMANENT_TABLE_NAME")
Steps 3 and 4 can also be combined if the intention is simply to create a table in Databricks from Teradata. Check out the Databricks documentation SQL Databases Using JDBC for other options.
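Once the temporary view exists, you can also query it with Spark SQL; a small example using the placeholder view name from step 4:
# count the rows in the temporary view registered above
spark.sql("SELECT COUNT(*) FROM YOUR_TEMP_VIEW_NAME").show()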
Here is a link to the write-up I published on this topic.
Accessing Teradata from Databricks for Rapid Experimentation in Data Science and Analytics Projects
If you create a virtual network that can connect to on-prem, then you can deploy your Databricks instance into that VNet. See https://docs.azuredatabricks.net/administration-guide/cloud-configurations/azure/vnet-inject.html.
I assume there is a Spark connector for Teradata. I haven't used it myself, but I'm sure one exists.
You can't. If you run Azure Databricks, all the data needs to be stored in Azure. But you can call the data from Teradata using a REST API and then save it in Azure.

How to set up ODBC 10.1b for Progress DB

I'm trying to set up an ODBC client driver for Progress 10.1b. I was able to install the client software that is required, but there is apparently also an ODBC.reg script file that needs to be run to correctly set up the registry in order for me to use the ODBC drivers.
Can anyone point me to where I would find this script? Or tell me the set of registry entries that would have to be made?
That's not a standard part of the install process.
Are these the steps that you followed to get the client installed?
If, as you say, the client was properly installed, you just need to set up the DSN. The following should work (stolen and lightly edited from the Progress Knowledge Center):
Start up the ODBC Data Source Administrator (found in Control Panel, within the Administrative Tools folder).
Example:
1. Select the System DSN tab
2. Select the Add button to the right
3. Select the MERANT 32-BIT Progress SQL-92 driver for your version of Progress.
4. Select Finish
That brings up the configuration screen for a new DSN.
Fill in the following information:
1. Data Source Name: whatever you choose.
2. Description: optional, whatever you think is appropriate.
3. Host Name: the server where the database is located.
4. Port Number: the port your database broker was started with (if there are multiple brokers, the SQL broker port).
5. Database Name: the database name you wish to connect to.
6. User ID: the user you logged in with, or, if security is turned on, a user that can connect to the database.
7. Leave all other tab settings at the defaults for the initial configuration.
8. Select the Apply button.
9. Select the Test Connect button.
10. A screen requesting a password pops up; enter one only if the database normally requires a user name and password from the 4GL side to enter the database.
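To tie this back to the original question about ODBC.reg: the GUI steps above simply write registry values, and a System DSN created this way ends up under HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI (under Wow6432Node for a 32-bit driver on 64-bit Windows). The sketch below shows only the general shape; the driver DLL path and the driver-specific value names are assumptions that vary by Progress version, so the safest way to get the exact entries is to create one DSN through the GUI and export that key.
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources]
"MyProgressDSN"="MERANT 32-BIT Progress SQL-92"

[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\MyProgressDSN]
; the value names and DLL path below are placeholders - export a GUI-created DSN to confirm them
"Driver"="C:\\path\\to\\the\\Progress\\ODBC\\driver.dll"
"Description"="Progress 10.1B database"
"HostName"="dbserver"
"PortNumber"="20931"
"DatabaseName"="mydb"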
