Back up data from Azure Cosmos DB to Blob Storage

I need to back up the data in an Azure Cosmos database to Azure Blob Storage (managed by a storage account). I did that with a Copy activity in Data Factory and scheduled a daily backup trigger. Currently the backup simply overwrites the previous one.
Question: how can I configure the storage account and the backup pipeline in Data Factory so that the storage keeps the backups in a staged form, letting me download different versions of the data I backed up?

how can I configure the storage account and the backup pipeline in
Data Factory so that the storage keeps the backups in a staged form,
letting me download different versions of the data I backed up?
There is no ready-made way to do this in ADF. You can work with blob snapshots by integrating an Azure Function and manipulating the blob content inside the function.
Alternatively, you can use an Azure Function on its own for the regular inspection:
import logging
import os
import azure.functions as func
from azure.identity import DefaultAzureCredential, ClientSecretCredential
from azure.storage.blob import BlobServiceClient
from azure.storage.queue import QueueService, QueueMessageFormat


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    # DefaultAzureCredential supports managed identity or environment configuration (see docs)
    credential = DefaultAzureCredential()

    # parse parameters
    storage_account_source = os.environ["par_storage_account_name_source"]
    storage_account_source_url = "https://" + storage_account_source + ".blob.core.windows.net"
    storage_account_backup = os.environ["par_storage_account_name_backup"]
    storage_account_backup_url = "https://" + storage_account_backup + ".blob.core.windows.net"

    # create blob clients for source and backup
    client_source = BlobServiceClient(account_url=storage_account_source_url, credential=credential)
    client_backup = BlobServiceClient(account_url=storage_account_backup_url, credential=credential)

    # create queue client
    queue_service = QueueService(account_name=os.environ['par_storage_account_name_queue'],
                                 account_key=os.environ['par_storage_account_key_queue'])
    queue_service.encode_function = QueueMessageFormat.text_base64encode

    # get all containers in the source account
    container_source_list = client_source.list_containers()
    for container in container_source_list:
        # log container name
        logging.info(container.name)
        container_source = client_source.get_container_client(container.name)

        # get all blobs (including snapshots) in the container
        prev_blob_name = ""
        prev_blob_etag = ""
        blob_source_list = container_source.list_blobs(include=['snapshots'])
        for blob in blob_source_list:
            if blob.snapshot is None:
                # blob that is not a snapshot
                # 1. check if a snapshot needs to be created
                if prev_blob_name != blob.name:
                    # new blob without snapshot, create snapshot/backup
                    logging.info("new blob " + blob.name + ", create snapshot/backup")
                    create_snapshot(client_source, queue_service, container.name, blob.name, blob.etag)
                elif prev_blob_etag != blob.etag:
                    # existing blob that has changed, create snapshot/backup
                    logging.info(blob.name + " has changed, create snapshot/backup")
                    create_snapshot(client_source, queue_service, container.name, blob.name, blob.etag)

                # 2. check if an incremental backup needs to be created
                # get source blob properties
                blob_source = client_source.get_blob_client(container=container.name, blob=blob.name)
                source_last_modified = blob_source.get_blob_properties()['last_modified']
                source_etag = str(blob_source.get_blob_properties()['etag']).replace("\"", "")
                blob_name_backup = append_timestamp_etag(blob.name, source_last_modified, source_etag)
                blob_backup = client_backup.get_blob_client(container=container.name + "bak", blob=blob_name_backup)
                blob_exists = check_blob_exists(blob_backup)

                # check if the backup blob already exists
                if not blob_exists:
                    # latest blob does not yet exist in backup, create message on queue to update
                    queue_json = "{" + "\"container\":\"{}\", \"blob_name\":\"{}\", \"etag\":\"{}\"".format(container.name, blob.name, source_etag) + "}"
                    logging.info("backup needed for: " + queue_json)
                    queue_service.put_message(os.environ['par_queue_name'], queue_json)
                    #asyncio.run(copy_adf_blob_source_backup(blob_source, blob_backup))

            prev_blob_name = blob.name
            prev_blob_etag = blob.etag

    result = {"status": "ok"}
    return func.HttpResponse(str(result))


def check_blob_exists(bc_blob):
    # check if the backup blob already exists
    # todo: see if this can be done without try/except
    try:
        bc_blob.get_blob_properties()
        return True
    except Exception:
        return False


def create_snapshot(client_source, queue_service, container_name, blob_name, blob_etag):
    # create a snapshot of the source blob
    blob_client = client_source.get_blob_client(container=container_name, blob=blob_name)
    blob_client.create_snapshot()


def append_timestamp_etag(filename, source_modified, etag):
    # append the last-modified timestamp and etag to the backup blob name
    name, ext = os.path.splitext(filename)
    return "{name}_{modified}_{etag}{ext}".format(name=name, modified=source_modified, etag=etag, ext=ext)
The above is based on an HTTP trigger; you can change it to a timer trigger. Here is the Azure Functions timer trigger documentation:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer?tabs=python
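For reference, a minimal sketch of the same entry point rewritten as a timer trigger; the CRON schedule lives in function.json, and run_backup() is a placeholder for the backup logic shown above, not an existing function:

import logging
import azure.functions as func


def main(mytimer: func.TimerRequest) -> None:
    # runs on the schedule configured in function.json, e.g. "0 0 1 * * *" for 01:00 every day
    if mytimer.past_due:
        logging.info('The timer is past due!')
    logging.info('Starting scheduled blob backup run.')
    # call the same backup logic as in the HTTP-triggered version above
    # run_backup()  # placeholder for a helper extracted from main() above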


Import and parse a file to fill the form

Currently, I'm developing a custom app. So far I have the DocType ready to be filled in manually. We have files (SQLite3) that I'd like to upload, parse, extract the necessary fields from, and use to fill in the form, basically like the Data Import tool. In my case no bulk operation is needed, and if possible the extraction should happen server-side.
What I tried so far
I added a Server Action to call a whitelisted method of my app. I can get the current doc with:
import json
import frappe

@frappe.whitelist()
def upload_data_and_extract(doc: str):
    """
    Uploads and processes an existing file and extracts data from it
    """
    doc_dict = json.loads(doc)
    custom_dt = frappe.get_doc('CustomDT', doc_dict['name'])
    # parse data here
    custom_dt.custom_field = "new value from parsed data"
    custom_dt.save()
    return doc  # How do I return a JSON back to the website from the updated doc?
With this approach, I can only do the parsing after the document has been saved. I'd rather update the form fields when the Attach field gets modified. Thus, I tried the Server Script approach:
frappe.ui.form.on('CustomDT', {
    original_data: function(frm, cdt, cdn) {
        if (original_data) {
            frappe.call({
                method: "customapp.customapp.doctype.customdt.customdt.parse_file",
                args: {
                    "doc": frm.doc
                },
                callback: function(r) {
                    // code snippet
                }
            });
        }
    }
});
Here are my questions:
What's the best approach to upload a file that needs to be parsed to fill the form?
What is the easiest way to access the uploaded file (attachment)? (Is there something like frappe.get_attachment()?)
How to refresh the form fields in the callback easily?
I appreciate any help on these topics.
Simon
I have developed a similar tool, but for CSV upload. I'm sharing it here so it can help you achieve your result.
JS file:
// Copyright (c) 2020, Bhavesh and contributors
// For license information, please see license.txt
frappe.ui.form.on('Car Upload Tool', {
    upload: function(frm) {
        frm.call({
            doc: frm.doc,
            method: "upload_data",
            freeze: true,
            freeze_message: "Data Uploading ...",
            callback: function(r) {
                console.log(r)
            }
        })
    }
});
Python Code
# -*- coding: utf-8 -*-
# Copyright (c) 2020, Bhavesh and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
from frappe.model.document import Document
from carrental.carrental.doctype.car_upload_tool.csvtojson import csvtojson
import csv
import json


class CarUploadTool(Document):
    def upload_data(self):
        _file = frappe.get_doc("File", {"file_url": self.attach_file})
        filename = _file.get_full_path()
        csv_json = csv_to_json(filename)
        make_car(csv_json)


def csv_to_json(csvFilePath):
    jsonArray = []
    # read csv file
    with open(csvFilePath, encoding='latin-1') as csvf:
        # load csv file data using csv library's dictionary reader
        csvReader = csv.DictReader(csvf, delimiter=";")
        # convert each csv row into a python dict
        for row in csvReader:
            frappe.errprint(row)
            # add this python dict to the json array
            jsonArray.append(row)
    # return the python jsonArray
    return jsonArray


def make_car(car_details):
    for row in car_details:
        create_brand(row.get('Marke'))
        create_car_type(row.get('Fahrzeugkategorie'))
        if not frappe.db.exists("Car", row.get('Fahrgestellnr.')):
            car_doc = frappe.get_doc(dict(
                doctype = "Car",
                brand = row.get('Marke'),
                model_and_description = row.get('Bezeichnung'),
                type_of_fuel = row.get('Motorart'),
                color = row.get('Farbe'),
                transmission = row.get('Getriebeart'),
                horsepower = row.get('Leistung (PS)'),
                car_type = row.get('Fahrzeugkategorie'),
                car_vin_id = row.get('Fahrgestellnr.'),
                licence_plate = row.get('Kennzeichen'),
                location_code = row.get('Standort')
            ))
            car_doc.model = car_doc.model_and_description.split(' ')[0] or ''
            car_doc.insert(ignore_permissions = True)
        else:
            car_doc = frappe.get_doc("Car", row.get('Fahrgestellnr.'))
            car_doc.brand = row.get('Marke')
            car_doc.model_and_description = row.get('Bezeichnung')
            car_doc.model = car_doc.model_and_description.split(' ')[0] or ''
            car_doc.type_of_fuel = row.get('Motorart')
            car_doc.color = row.get('Farbe')
            car_doc.transmission = row.get('Getriebeart')
            car_doc.horsepower = row.get('Leistung (PS)')
            car_doc.car_type = row.get('Fahrzeugkategorie')
            car_doc.car_vin_id = row.get('Fahrgestellnr.')
            car_doc.licence_plate = row.get('Kennzeichen')
            car_doc.location_code = row.get('Standort')
            car_doc.save(ignore_permissions = True)
    frappe.msgprint("Car Uploaded Successfully")


def create_brand(brand):
    if not frappe.db.exists("Brand", brand):
        frappe.get_doc(dict(
            doctype = "Brand",
            brand = brand
        )).insert(ignore_permissions = True)


def create_car_type(car_type):
    if not frappe.db.exists("Vehicle Type", car_type):
        frappe.get_doc(dict(
            doctype = "Vehicle Type",
            vehicle_type = car_type
        )).insert(ignore_permissions = True)
So for this upload tool I created a single DocType with the fields below:
Attach File (Field Type = Attach)
Button (Field Type = Button)
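To tie this back to the original questions (accessing the attached file server-side and pushing parsed values back to the form), a minimal sketch could look like the following; the DocType, field, and method names are placeholders taken from the question, and since there is no built-in frappe.get_attachment(), the File document is loaded explicitly:

import json
import frappe


@frappe.whitelist()
def parse_file(doc):
    """Parse the file attached to a CustomDT document and return the extracted values."""
    doc_dict = json.loads(doc) if isinstance(doc, str) else doc
    # load the File document that points at the attachment, then resolve its path on disk
    _file = frappe.get_doc("File", {"file_url": doc_dict.get("original_data")})
    file_path = _file.get_full_path()
    # open the SQLite file at file_path here and extract whatever you need (placeholder below)
    extracted = {"custom_field": "value parsed from " + file_path}
    # a returned dict arrives client-side as r.message in the frappe.call callback,
    # where frm.set_value(...) and frm.refresh_field(...) can apply it to the form
    return extracted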

Terraform and KMS key aliases

I am using the AWS provider and trying to create an aws_workspaces_workspace with encrypted volumes.
I created an aws_kms_key with an associated alias (aws_kms_alias).
I specified the key alias (as a string) for volume_encryption_key.
The resource is created as expected, and I can verify in the console that the volumes are encrypted with the specified key.
My issue is that every time I re-run terraform apply, Terraform reports that the aws_workspaces_workspace needs to be replaced because of an update in the key value (from a key id to the alias).
How can I prevent this from happening? Is this a bug? Am I doing something incorrectly? Some of the relevant code is below.
resource "aws_workspaces_workspace" "workspace" {
directory_id = aws_workspaces_directory.ws-ad.id
bundle_id = var.bundle_id
user_name = var.username
root_volume_encryption_enabled = true
user_volume_encryption_enabled = true
volume_encryption_key = "alias/workspace-volume"
workspace_properties {
compute_type_name = "POWER"
user_volume_size_gib = 80
root_volume_size_gib = 50
running_mode = "AUTO_STOP"
running_mode_auto_stop_timeout_in_minutes = 60
}
}
resource "aws_kms_key" "kms-ws-volume" {
description = "Workspace Volume Encryption Key"
key_usage = "ENCRYPT_DECRYPT"
deletion_window_in_days = 30
is_enabled = true
}
resource "aws_kms_alias" "kms-ws-volume-alias" {
name = "alias/workspace-volume"
target_key_id = aws_kms_key.kms-ws-volume.key_id
}
Here's what terraform apply reports:
  # aws_workspaces_workspace.workspace["1"] must be replaced
-/+ resource "aws_workspaces_workspace" "workspace" {
      ~ computer_name         = "WSAMZN-T34E23BK" -> (known after apply)
      ~ id                    = "ws-v98b0y17z" -> (known after apply)
      ~ ip_address            = "10.0.0.45" -> (known after apply)
      ~ state                 = "STOPPED" -> (known after apply)
        tags                  = {
            "Name"    = "workspace-user1-env1"
            "Owner"   = "mario"
            "Profile" = "dev"
            "Stack"   = "env1"
        }
      ~ volume_encryption_key = "arn:aws:kms:us-west-2:927743275319:key/09de3db9-ecdd-4be1-a781-705fdd0294f9" -> "alias/workspace-volume" # forces replacement
        # (6 unchanged attributes hidden)
        # (1 unchanged block hidden)
    }
Use the ARN of the key instead of the alias, i.e. volume_encryption_key = aws_kms_key.kms-ws-volume.arn.
The provider stores the key's ARN in volume_encryption_key, so when the configuration specifies the alias, the plan always detects a change.
The example at https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/workspaces_workspace may be misleading in this regard, even though an alias also works.
Something similar happens with kms_key_id of aws_instance: it stores the ARN rather than the key_id, so the plan always forces a volume replacement when you use the key_id instead of the ARN. See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#kms_key_id

DocumentDB Change Feed and saving Checkpoint

After reading the documentation, I'm having a hard time conceptualizing the change feed. Let's take the code from the documentation below. The second change feed call picks up the changes since the last time it was run, via the checkpoints. Say it is used to create summary data, there was an issue, and it needs to be re-run from a prior time. I don't understand the following:
How do I specify a particular time the checkpoint should start from? I understand I can save the checkpoint dictionary and use that for each run, but how do you get the changes from time X to rerun some of the summary data?
Secondly, say we are rerunning some summary data and we save the last checkpoint used for each piece of summarized data so we know where it left off. How does one know whether a record is in or before that checkpoint?
Code that runs from the beginning of the collection and then from the last checkpoint:
Dictionary<string, string> checkpoints = await GetChanges(client, collection, new Dictionary<string, string>());

await client.CreateDocumentAsync(collection, new DeviceReading
{
    DeviceId = "xsensr-201", MetricType = "Temperature", Unit = "Celsius", MetricValue = 1000
});
await client.CreateDocumentAsync(collection, new DeviceReading
{
    DeviceId = "xsensr-212", MetricType = "Pressure", Unit = "psi", MetricValue = 1000
});

// Returns only the two documents created above.
checkpoints = await GetChanges(client, collection, checkpoints);

//
private async Task<Dictionary<string, string>> GetChanges(
    DocumentClient client,
    string collection,
    Dictionary<string, string> checkpoints)
{
    List<PartitionKeyRange> partitionKeyRanges = new List<PartitionKeyRange>();
    FeedResponse<PartitionKeyRange> pkRangesResponse;

    do
    {
        pkRangesResponse = await client.ReadPartitionKeyRangeFeedAsync(collection);
        partitionKeyRanges.AddRange(pkRangesResponse);
    }
    while (pkRangesResponse.ResponseContinuation != null);

    foreach (PartitionKeyRange pkRange in partitionKeyRanges)
    {
        string continuation = null;
        checkpoints.TryGetValue(pkRange.Id, out continuation);

        IDocumentQuery<Document> query = client.CreateDocumentChangeFeedQuery(
            collection,
            new ChangeFeedOptions
            {
                PartitionKeyRangeId = pkRange.Id,
                StartFromBeginning = true,
                RequestContinuation = continuation,
                MaxItemCount = 1
            });

        while (query.HasMoreResults)
        {
            FeedResponse<DeviceReading> readChangesResponse = query.ExecuteNextAsync<DeviceReading>().Result;

            foreach (DeviceReading changedDocument in readChangesResponse)
            {
                Console.WriteLine(changedDocument.Id);
            }

            checkpoints[pkRange.Id] = readChangesResponse.ResponseContinuation;
        }
    }

    return checkpoints;
}
DocumentDB supports checkpointing only by the logical timestamp returned by the server. If you would like to retrieve all changes from X minutes ago, you would have to "remember" the logical timestamp corresponding to that clock time (the ETag returned for the collection in the REST API, the ResponseContinuation in the SDK), then use that to retrieve changes.
The change feed uses logical time in place of clock time because clock time can differ across servers/partitions. If you would like to see change feed support based on clock time (with some caveats on skew), please propose/upvote it at https://feedback.azure.com/forums/263030-documentdb/.
To save the last checkpoint per partition key/document, you can save the corresponding version of the batch in which it was last seen (the ETag returned for the collection in the REST API, the ResponseContinuation in the SDK), as Fred suggested in his answer.
How to specify a particular time the checkpoint should start.
You could try providing a logical version/ETag (such as 95488) instead of a null value as the RequestContinuation property of ChangeFeedOptions.

Python - BaseHTTPServer, issue with POST and GET

I am making a very simple application with two webpages at the moment, under the URLs localhost:8080/restaurants/ and localhost:8080/restaurants/new.
I have a sqlite database which I manipulate with SQLAlchemy in my Python code.
My first page, localhost:8080/restaurants/, just contains the list of restaurants available in my database.
My second page, localhost:8080/restaurants/new, has a form for adding a new restaurant so that it shows up on localhost:8080/restaurants.
However, whenever I enter a new restaurant name in the form at localhost:8080/restaurants/new, it fails to redirect me back to localhost:8080/restaurants/ to show me the new restaurant; instead it just stays on the same URL, localhost:8080/restaurants/new, with the message "No data received".
Below is my code:
import cgi
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
#import libraries and modules
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from database_setup import Base, Restaurant, MenuItem

#create and connect to database
engine = create_engine('sqlite:///restaurantmenu.db')
Base.metadata.bind = engine
DBSession = sessionmaker(bind=engine)
session = DBSession()


class webServerHandler(BaseHTTPRequestHandler):
    """ class defined in the main method"""

    def do_GET(self):
        try:
            #look for url that ends with '/restaurants'
            if self.path.endswith("/restaurants"):
                self.send_response(200)
                #indicate reply in form of html to the client
                self.send_header('Content-type', 'text/html')
                #indicates end of http headers in the response
                self.end_headers()
                #obtain all restaurant names from database
                restaurants = session.query(Restaurant).all()
                output = ""
                output += "<html><body><a href='/restaurants/new'>Add A New Restaurant</a>"
                output += "</br></br>"
                for restaurant in restaurants:
                    output += restaurant.name
                    output += """<div>
                        <a href='#'>Edit</a>
                        <a href='#'>Delete</a>
                        </div>"""
                    output += "</br></br>"
                output += "</body></html>"
                self.wfile.write(output)
                print output
                return
            if self.path.endswith("/restaurants/new"):
                self.send_response(200)
                self.send_header('Content-type', 'text/html')
                self.end_headers()
                output = ""
                output += "<html><body>"
                output += "<h1>Add New Restaurant</h1>"
                output += "<form method='POST' enctype='multipart/form-data action='/restaurants/new'>"
                output += "<input name='newRestaurant' type='text' placeholder='New Restaurant Name'>"
                output += "<input name='Create' type='submit' label='Create'>"
                output += "</form></body></html>"
                self.wfile.write(output)
                return
        except IOError:
            self.send_error(404, "File %s not found" % self.path)

    def do_POST(self):
        try:
            if self.path.endswith("/restaurants/new"):
                ctype, pdict = cgi.parse_header(self.headers.getheader('content-type'))
                #check if content-type is form
                if ctype == 'mulitpart/form-data':
                    #collect all fields from form, fields is a dictionary
                    fields = cgi.parse_multipart(self.rfile, pdict)
                    #extract the name of the restaurant from the form
                    messagecontent = fields.get('newRestaurant')
                    #create the new object
                    newRestaurantName = Restaurant(name = messagecontent[0])
                    session.add(newRestaurantName)
                    session.commit()
                    self.send_response(301)
                    self.send_header('Content-type', 'text/html')
                    self.send_header('Location', '/restaurants')
                    self.end_headers()
        except:
            pass


def main():
    """An instance of HTTPServer is created in the main method
    HTTPServer is built off of a TCP server indicating the
    transmission protocol
    """
    try:
        port = 8080
        #server address is a tuple & contains host and port number
        #host is an empty string in this case
        server = HTTPServer(('', port), webServerHandler)
        print "Web server running on port %s" % port
        #keep server continually listening until interrupt occurs
        server.serve_forever()
    except KeyboardInterrupt:
        print "^C entered, stopping web server...."
        #shut down server
        server.socket.close()


#run main method
if __name__ == '__main__':
    main()
For reference, here is my database_setup file where I create the database:
import sys
#importing classes from sqlalchemy module
from sqlalchemy import Column, ForeignKey, Integer, String
#declarative_base, used in the configuration
#and class code, used when writing the mapper
from sqlalchemy.ext.declarative import declarative_base
#relationship in order to create foreign key relationships,
#used when writing the mapper
from sqlalchemy.orm import relationship
#create_engine to be used in the configuration code at the
#end of the file
from sqlalchemy import create_engine

#this object will help set up when writing the class code
Base = declarative_base()


class Restaurant(Base):
    """
    class Restaurant corresponds to the restaurant table
    in the database to be created.
    table representation for restaurant which
    is in the database
    """
    __tablename__ = 'restaurant'
    #column definitions for the restaurant table
    id = Column(Integer, primary_key=True)
    name = Column(String(250), nullable=False)


class MenuItem(Base):
    """
    class MenuItem corresponds to the menu_item table.
    table representation for menu_item which
    is in the database
    """
    __tablename__ = 'menu_item'
    #column definitions for the menu_item table
    name = Column(String(80), nullable=False)
    id = Column(Integer, primary_key=True)
    course = Column(String(250))
    description = Column(String(250))
    price = Column(String(8))
    restaurant_id = Column(Integer, ForeignKey('restaurant.id'))
    restaurant = relationship(Restaurant)


#create an instance of the create_engine class
#and point to the database to be used
engine = create_engine('sqlite:///restaurantmenu.db')

#create all tables defined above in the database
Base.metadata.create_all(engine)
I can't figure out why I cannot add new restaurants.
I know this was a long time ago, but I figured out your problem.
First, the enctype='multipart/form-data' in your do_GET function under the if self.path.endswith("/restaurants/new"): portion is missing a final single quote. Second, you misspelt 'multipart' in if ctype == 'multipart/form-data':. Hope that can help you or others.
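In code, the two corrected lines look like this (the rest of the handler stays the same):

# in do_GET: close the quote after multipart/form-data
output += "<form method='POST' enctype='multipart/form-data' action='/restaurants/new'>"

# in do_POST: spell 'multipart' correctly
if ctype == 'multipart/form-data':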
As shteeven said, the problem was with the enctype in the form.
Because the closing quote was missing, the Content-Type became 'application/x-www-form-urlencoded', and in that case you have to parse the body differently, since it arrives as a URL-encoded string.
To handle both enctypes you can modify your do_POST as follows:
def do_POST(self):
    try:
        if self.path.endswith("/restaurants/new"):
            ctype, pdict = cgi.parse_header(self.headers.getheader('content-type'))
            print ctype
            #check if content-type is form
            if (ctype == 'multipart/form-data') or (ctype == 'application/x-www-form-urlencoded'):
                #collect all fields from form, fields is a dictionary
                if ctype == 'multipart/form-data':
                    fields = cgi.parse_multipart(self.rfile, pdict)
                else:
                    #urlencoded body: read it and parse the query string
                    #(requires `import urlparse` at the top of the file)
                    content_length = self.headers.getheaders('Content-length')
                    length = int(content_length[0])
                    body = self.rfile.read(length)
                    fields = urlparse.parse_qs(body)
                #extract the name of the restaurant from the form
                messagecontent = fields.get('newRestaurant')
                #create the new object
                newRestaurantName = Restaurant(name = messagecontent[0])
                session.add(newRestaurantName)
                session.commit()
                self.send_response(301)
                self.send_header('Location', '/restaurants')
                self.end_headers()
                return
    except:
        pass
Hope this extra information is useful for you!
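For context, both cgi.parse_multipart and urlparse.parse_qs return a dictionary mapping each field name to a list of values, which is why messagecontent[0] works in either branch. A quick Python 2 illustration with a made-up form value:

import urlparse

body = "newRestaurant=Pizza+Palace"
print urlparse.parse_qs(body)  # {'newRestaurant': ['Pizza Palace']}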

ScalikeJDBC + SQLite: Cannot change read-only flag after establishing a connection

Trying to get ScalikeJDBC and SQLite working. I have simple code based on the provided examples:
import scalikejdbc._, SQLInterpolation._

object Test extends App {
  Class.forName("org.sqlite.JDBC")
  ConnectionPool.singleton("jdbc:sqlite:test.db", null, null)
  implicit val session = AutoSession

  println(sql"""SELECT * FROM kv WHERE key == 'seq' LIMIT 1""".map(identity).single().apply())
}
It fails with an exception:
Exception in thread "main" java.sql.SQLException: Cannot change read-only flag after establishing a connection. Use SQLiteConfig#setReadOnly and QLiteConfig.createConnection().
at org.sqlite.SQLiteConnection.setReadOnly(SQLiteConnection.java:447)
at org.apache.commons.dbcp.DelegatingConnection.setReadOnly(DelegatingConnection.java:377)
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.setReadOnly(PoolingDataSource.java:338)
at scalikejdbc.DBConnection$class.readOnlySession(DB.scala:138)
at scalikejdbc.DB.readOnlySession(DB.scala:498)
...
I've tried both scalikejdbc 1.7 and 2.0; the error remains. As the SQLite driver I use "org.xerial" % "sqlite-jdbc" % "3.7.+".
What can I do to fix the error?
The following will create two separate connection pools, one for read-only operations and the other for writes.
ConnectionPool.add("mydb", s"jdbc:sqlite:${db.getAbsolutePath}", "", "")
ConnectionPool.add(
"mydb_ro", {
val conf = new SQLiteConfig()
conf.setReadOnly(true)
val source = new SQLiteDataSource(conf)
source.setUrl(s"jdbc:sqlite:${db.getAbsolutePath}")
new DataSourceConnectionPool(source)
}
)
I found that the reason is that you're using "org.xerial" % "sqlite-jdbc" % "3.7.15-M1". This version still looks unstable.
Use "3.7.2", the same as @kawty.
Building on @Synesso's answer, I expanded it slightly to read config values from config files and to set connection settings:
import scalikejdbc._
import scalikejdbc.config.TypesafeConfigReader

case class SqlLiteDataSourceConnectionPool(source: DataSource,
                                           override val settings: ConnectionPoolSettings)
  extends DataSourceConnectionPool(source)

// read settings for the 'default' database
val cpSettings = TypesafeConfigReader.readConnectionPoolSettings()
val JDBCSettings(url, user, password, driver) = TypesafeConfigReader.readJDBCSettings()

// use those to create two connection pools
ConnectionPool.add("db", url, user, password, cpSettings)
ConnectionPool.add(
  "db_ro", {
    val conf = new SQLiteConfig()
    conf.setReadOnly(true)
    val source = new SQLiteDataSource(conf)
    source.setUrl(url)
    SqlLiteDataSourceConnectionPool(source, cpSettings)
  }
)

// example using 'NamedDB'
val name: Option[String] = NamedDB("db_ro") readOnly { implicit session =>
  sql"select name from users where id = $id".map(rs => rs.string("name")).single.apply()
}
This worked for me with org.xerial/sqlite-jdbc 3.28.0:
String path = ...
SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(true);
return DriverManager.getConnection("jdbc:sqlite:" + path, config.toProperties());
Interestingly, I wrote a different solution on the issue on the xerial repo:
PoolProperties props = new PoolProperties();
props.setDriverClassName("org.sqlite.JDBC");
props.setUrl("jdbc:sqlite:...");
Properties extraProps = new Properties();
extraProps.setProperty("open_mode", SQLiteOpenMode.READONLY.flag + "");
props.setDbProperties(extraProps);
// This line can be left in or removed; it no longer causes a problem
// as long as the open_mode code is present.
props.setDefaultReadOnly(true);
return new DataSource(props);
I don't recall why I needed the second, and was then able to simplify it back to the first one. But if the first doesn't work, you might try the second. It uses a SQLite-specific open_mode flag that then makes it safe (but unnecessary) to use the setDefaultReadOnly call.