Issues after upgrading DB to MariaDB - mariadb

I have recently built my Rundeck server, created a DB using MariaDB, and pointed Rundeck to it. I followed the official documentation for this on the Rundeck site. Since I changed from the system DB to MariaDB, the service no longer starts.
My rundeck-config.properties file looks like this:
#loglevel.default is the default log level for jobs: ERROR,WARN,INFO,VERBOSE,DEBUG
loglevel.default=INFO
rdeck.base=/var/lib/rundeck
#rss.enabled if set to true enables RSS feeds that are public (non-authenticated)
rss.enabled=false
#change hostname here
grails.serverURL=http://IP OF SERVER:4440
dataSource.driverClassName=
dataSource.url = jdbc:mysql://IP OF SERVER/rundeck?autoReconnect=true&useSSL=false
dataSource.username = DB User
dataSource.password = Password
grails.plugin.databasemigration.updateOnStart=true
autoReconnect=true
#to store projects on backend
rundeck.projectsStorageType=db
#Encryption for key storage
rundeck.storage.provider.1.type=
rundeck.storage.provider.1.path=keys
rundeck.storage.converter.1.type=jasypt-encryption
rundeck.storage.converter.1.path=keys
rundeck.storage.converter.1.config.encryptorType=custom
rundeck.storage.converter.1.config.password=7ee99cf09ffc59e7
rundeck.storage.converter.1.config.algorithm=PBEWITHSHA256AND128BITAES-CBC-BC
rundeck.storage.converter.1.config.provider=BC
#Encryption for project config storage
rundeck.projectsStorageType=db
rundeck.config.storage.converter.1.type=jasypt-encryption
rundeck.config.storage.converter.1.path=projects
rundeck.config.storage.converter.1.config.password=7ee99cf09ffc59e7
rundeck.config.storage.converter.1.config.encryptorType=custom
rundeck.config.storage.converter.1.config.algorithm=PBEWITHSHA256AND128BITAES-CBC-BC
rundeck.config.storage.converter.1.config.provider=BC
rundeck.feature.repository.enabled=true
Can anyone help with this?

A couple of things here:
Your dataSource.driverClassName is empty; set it to org.mariadb.jdbc.Driver (check the full example here).
Your rundeck.storage.provider.1.type is also empty; set it to db, i.e. rundeck.storage.provider.1.type=db.
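With those two lines filled in, the datasource section would look roughly like this (a sketch that keeps the placeholder host and credentials from your file and assumes the MariaDB JDBC driver bundled with Rundeck):
dataSource.driverClassName = org.mariadb.jdbc.Driver
dataSource.url = jdbc:mysql://IP OF SERVER/rundeck?autoReconnect=true&useSSL=false
dataSource.username = DB User
dataSource.password = Password
rundeck.storage.provider.1.type = db
After changing these values, restart the rundeckd service and check service.log for any remaining datasource errors.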

Related

db.create_all() doesn't create a database in a desired directory

I am trying to create a database for my Flask application in the main directory of my project. This is my code for initializing a database:
app.config["SQLALCHEMY_DATABASE_URI"] = 'sqlite:///users.db'
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
db = SQLAlchemy(app)
Flask requires an application context, so this is how I create the database:
$ flask shell
>>> db.create_all()
I also tried doing it with:
$ python
>>> from app import app, db
>>> app.app_context().push()
>>> db.create_all()
Both of these options create the database in the /instance directory. Is there any way to get around this and create it in the main directory of the project?
The instance path is the preferred and default location for the database, and I recommend sticking with it for security reasons. However, you can also specify the full path in the configuration.
The following configuration corresponds to an outdated variant where the database is created in the current working directory. Please don't use this anymore.
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(os.getcwd(), 'users.db')
This corresponds to the current solution:
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(app.instance_path, 'users.db')
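Putting it together, a minimal sketch assuming the Flask and Flask-SQLAlchemy setup from the question; the only step added here is making sure the instance folder exists before SQLite creates the file:
import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)

# Flask does not create the instance folder automatically, so ensure it exists
# before SQLite tries to create users.db inside it.
os.makedirs(app.instance_path, exist_ok=True)

app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///" + os.path.join(app.instance_path, "users.db")
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
db = SQLAlchemy(app)

with app.app_context():
    db.create_all()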

Mariadb master-slave replication - can binlog-ignore-db be added to an existing setup?

I am using MariaDB 5.5.65 with a single-master, multiple-slave replication setup.
Here's a snippet from my.cnf:
binlog_format=ROW
log-bin=/data/mysql/pccodb22-binlog
expire_logs_days=10
max_binlog_size=1024M
server-id = 2
binlog-ignore-db=mysql
Thing is, I forgot to add
binlog-ignore-db=information_schema
binlog-ignore-db=performance_schema
to the config. Is this safe to do with a live replication setup, or do I have to mysqldump --master-data --all-databases all over again?
You should be fine; there is no need to reinitialize. information_schema and performance_schema are virtual schemas rather than physical databases, so they are not written to the binary log anyway.
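If you still want the filters for tidiness, the my.cnf section on the master would end up looking like this; note that binlog-ignore-db is not a dynamic variable, so the new lines only take effect after the master's mysqld is restarted:
binlog_format=ROW
log-bin=/data/mysql/pccodb22-binlog
expire_logs_days=10
max_binlog_size=1024M
server-id = 2
binlog-ignore-db=mysql
binlog-ignore-db=information_schema
binlog-ignore-db=performance_schema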
You should really upgrade from MariaDB 5.5.x to at least 10.2.x. 5.5.x is EOL and no longer supported.

Cannot create wordpress web app in azure with Terraform

I am trying to create a WordPress web application on Azure with Terraform. Each web app has its own database. I managed to create the resource groups, database server, and databases, but I cannot create the WordPress web app. I can create a plain web app and everything works fine, but not WordPress. When I create a WordPress web app manually and import it to see what is different, I see that it has repo_url and branch pointing to the wordpress-azure repo on GitHub. When I try to incorporate this in my code I get an error message.
resource "azurerm_mysql_database" "testtt" {
name = "testtt"
resource_group_name = azurerm_resource_group.RG_mok_2024.name
server_name = azurerm_mysql_server.wp-db-mok-2024.name
charset = "utf8"
coll`enter code here`ation = "utf8_unicode_ci"
}
resource "azurerm_app_service" "testtt" {
name = "testtt"
location = azurerm_resource_group.RG_mok_2024.location
resource_group_name = azurerm_resource_group.RG_mok_2024.name
app_service_plan_id = azurerm_app_service_plan.appserviceplan-wordpress-mok-6.id
site_config {
dotnet_framework_version = "v4.0"
scm_type = "GitHub"
default_documents = ["Default.htm","Default.html","Default.asp","index.htm","index.html","iistart.htm","default.aspx","index.php","hostingstart.html"]
}
source_control {
repo_url = "https://github.com/azureappserviceoss/wordpress-azure"
branch = "master"
}
connection_string {
name = "defaultConnection"
type = "MySQL"
value = "Database=testtt;Data Source=wp-db-mok-2024.mysql.database.azure.com;User Id=mysqladminun#wp-db-mok-2024;Password=password"
}
}
The error message I get when I use the source_control part of the code is:
Error: "source_control": this field cannot be set
The source_control field is only exported. It cannot be used to connect a deployment source.
To execute an automated deployment directly, there is currently only the workaround via a local-exec null_resource.
The source control integration can be created with an Azure CLI / PowerShell script, which is then executed by the local-exec provisioner.
This works as follows:
resource "null_resource" "scm_integration" {
provisioner "local-exec" {
command = "${path.module}/enable_scm.ps1 -webAppName ${azurerm_app_service.testtt.name} -appResourceGroupName ${azurerm_resource_group.RG_mok_2024.name} -scmBranch master -repoUrl https://github.com/azureappserviceoss/wordpress-azure"
interpreter = ["pwsh", "-Command"]
}
}
In addition, you need the PowerShell script enable_scm.ps1.
The workaround, including the script, is described completely in this GitHub issue.
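For orientation, a rough sketch of what such a script could look like, assuming the Azure CLI is installed and logged in; the parameter names simply mirror the local-exec call above and this is illustrative rather than the exact script from the issue:
param(
    [string]$webAppName,
    [string]$appResourceGroupName,
    [string]$scmBranch,
    [string]$repoUrl
)

# Point the web app at the external Git repo (manual integration, no GitHub token required)
az webapp deployment source config `
    --name $webAppName `
    --resource-group $appResourceGroupName `
    --repo-url $repoUrl `
    --branch $scmBranch `
    --manual-integration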
According to the Terraform documentation for azurerm_app_service, the source_control field is only exported, and it is ONLY exported when scm_type is set to LocalGit. You have set it to GitHub, and since source_control is an output value, according to the documentation you don't need it.
Furthermore, line 6 of your first resource block contains enter code here, but I guess that was pasted there by accident.
Finally, I hope that the database password in your connection string value is not actually "password".
Can you try setting the source_control section before the site_config? There is an open issue for the Terraform azurerm_app_service provider that suggests this might be a workaround.
https://github.com/terraform-providers/terraform-provider-azurerm/issues/3696
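For reference, a sketch of that ordering on the resource from the question; whether block order actually changes anything is exactly what the linked issue is probing, so treat this as an experiment rather than a confirmed fix:
resource "azurerm_app_service" "testtt" {
  name                = "testtt"
  location            = azurerm_resource_group.RG_mok_2024.location
  resource_group_name = azurerm_resource_group.RG_mok_2024.name
  app_service_plan_id = azurerm_app_service_plan.appserviceplan-wordpress-mok-6.id

  # source_control moved ahead of site_config
  source_control {
    repo_url = "https://github.com/azureappserviceoss/wordpress-azure"
    branch   = "master"
  }

  # site_config and connection_string blocks unchanged from the question
  site_config {
    dotnet_framework_version = "v4.0"
    scm_type                 = "GitHub"
  }
}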

Launch RancherOs with Openstack and Terraform

Hi, I am running the latest OpenStack, Terraform, and RancherOS.
From the OpenStack UI I can get RancherOS to work and pass in my own SSH keys, for instance, but you need to explicitly tick the configuration drive checkbox, otherwise it will not accept the user data.
I don't think this is possible with Terraform, is it?
resource "openstack_compute_instance_v2" "terraform-rancher" {
name = "terraform-rancher"
image_name = "RancherOs"
flavor_name = "t2.xlarge"
security_groups = ["default"]
#This is on the same path as my terraform file.
user_data = "${file("test.txt")}"
network {
name = "provider"
}
}
The instance launches and gets created, but when I look at the logs, RancherOS cannot seem to find the config:
cloud-init: Datasource unavailable, skipping: cloud-drive: /media/config-2 (lastError: no such file or directory)
From the OpenStack UI it works fine, but as stated, you have to tick the config drive checkbox.
cloud-init: Datasource available: cloud-drive: /media/config-2
To get it to work like in the UI, the config_drive parameter in the instance configuration needs to be set to true.
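A sketch of the adjusted resource, keeping the values from the question; config_drive is a documented argument of openstack_compute_instance_v2:
resource "openstack_compute_instance_v2" "terraform-rancher" {
  name            = "terraform-rancher"
  image_name      = "RancherOs"
  flavor_name     = "t2.xlarge"
  security_groups = ["default"]

  # Force a config drive so cloud-init finds its datasource at /media/config-2
  config_drive = true

  user_data = "${file("test.txt")}"

  network {
    name = "provider"
  }
}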

Should instance/config.py be uploaded to the production server when deploying a Flask app? [duplicate]

This question already has answers here:
Where should I place the secret key in Flask?
(2 answers)
Closed 4 years ago.
I'm preparing to deploy a small Flask app that I've developed for internal use. I have an old laptop with Ubuntu Server 16.04, uWSGI and Nginx which I'll use for deployment.
OPTION 1
My current app setup has an instance/config.py file that I've kept out of version control. This file contains the following:
SECRET_KEY = ...
SQLALCHEMY_DATABASE_URI = ...
# Google 'client_id' and 'client_secret' for social authentication functionality.
The instance/config.py file is loaded as follows in app/__init__.py:
def create_app(config_name):
    app = Flask(__name__, instance_relative_config=True)
    app.config.from_object(app_config[config_name])
    app.config.from_pyfile('config.py')
Is it safe to keep this same setup for production and thus have the instance/config.py file in the production server?
OPTION 2
Alternatively, should I be using environment variables? If so, should I do something like this in wsgi.py:
os.environ['FLASK_CONFIG'] = 'production'
os.environ['SECRET_KEY'] = ...
os.environ['SQL_ALCHEMY_DATABASE_URI'] = ...
and then have the following in app/__init__.py:
def create_app(config_name):
    if os.getenv('FLASK_CONFIG') == 'production':
        app = Flask(__name__)
        app.config.update(
            SECRET_KEY=os.getenv('SECRET_KEY'),
            SQLALCHEMY_DATABASE_URI=os.getenv('SQLALCHEMY_DATABASE_URI')
        )
    else:
        app = Flask(__name__, instance_relative_config=True)
        app.config.from_object(app_config[config_name])
        app.config.from_pyfile('config.py')
To answer the question: yes, it's safe as long as your server is secure. Hopefully access is only allowed using a private key; if you're using a password to log in, that may be a problem.
It's a good idea to keep the actual file used to load configuration out of version control. I actually made a mistake with one of my servers where I did put config.py in version control, and now I have to be careful not to overwrite the file each time I pull.
One thing that you could do is to have a config file for each environment, say prod.py and dev.py, that are both checked in. Then create a pointer.py that is not checked into version control.
prod.py
SECRET_KEY = ...
SQLALCHEMY_DATABASE_URI = ...
...
pointer.py
from prod import SECRET_KEY, SQLALCHEMY_DATABASE_URI, ...
server.py
app.config.from_pyfile('pointer.py')
In dev, simply change the import statement to point to dev.py. You could also do from prod import *, but that isn't very good practice.
