List all raster layers in a GeoPackage using PyQGIS

With the following code it is super easy to list all vector layers in a geopackage:
from osgeo import ogr

my_gpkg = r'PATH_TO_GEOPACKAGE'
gpkg_layers = [l.GetName() for l in ogr.Open(my_gpkg)]
Is there also a way to list all raster layers in a geopackage?

I could solve my problem with the help of this post: https://gis.stackexchange.com/questions/287997/counting-the-number-of-layers-in-a-geopackage
And here is my solution:
import sqlite3

my_gpkg = r'PATH_TO_GEOPACKAGE'
sqliteConnection = sqlite3.connect(my_gpkg)
cursor = sqliteConnection.cursor()

# the table gpkg_contents is mandatory in every geopackage
sqlite_select_query = """SELECT table_name FROM gpkg_contents WHERE data_type = 'tiles'"""
cursor.execute(sqlite_select_query)
records = cursor.fetchall()

raster_layers = []
for row in records:
    layer_name = row[0]
    raster_layers.append(layer_name)
print('These are the raster layers in your geopackage: {}'.format(raster_layers))

cursor.close()
sqliteConnection.close()
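For completeness, GDAL can list them too, without touching the GeoPackage's internal tables: when a GeoPackage holds several raster layers, GDAL exposes them as subdatasets. A minimal sketch (note that a GeoPackage with exactly one raster layer may open as that raster directly, with an empty subdataset list):
from osgeo import gdal

my_gpkg = r'PATH_TO_GEOPACKAGE'
ds = gdal.Open(my_gpkg)
# each raster layer shows up as SUBDATASET_<n>_NAME, e.g. 'GPKG:path:layername'
subdatasets = ds.GetMetadata('SUBDATASETS')
raster_layers = [v for k, v in subdatasets.items() if k.endswith('_NAME')]
print(raster_layers)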

Related

Can I search two SQLite FTS5 indexes across two tables?

I have a tasks application with two tables. One table has the task name, date, owner, etc., and the other has the comments for the task, linked to the task number, so there can be multiple comments attached to a single task.
Both tables have FTS5 indexes. Within my app I want to search both tables for a word and present the rows to the user. I have the below working for each table individually, but how do I construct a query that returns data from both FTS5 tables?
(python3.6)
c.execute("select * from task_list where task_list = ? ", [new_search])
c.execute("select * from comments where comments = ? ", [new_search])
Thanks @tomalak, I never thought of doing that; I was focused on the query. Here's what I came up with, and it works for my purposes. There are probably better ways to achieve the same result, but I'm a beginner. This is a Tkinter app.
def db_search():
    conn = sqlite3.connect('task_list_database.db')
    c = conn.cursor()
    d = conn.cursor()
    new_search = entry7.get()  # entry7 is the search box widget
    c.execute("select * from task_list where task_list = ? ", [new_search])
    d.execute("select * from comments where comments = ? ", [new_search])
    rows1 = c.fetchall()
    rows2 = d.fetchall()
    rows = rows1 + rows2
    clear_tree(tree)  # empty the Treeview before showing the new results
    for row in rows:
        tree.insert("", END, values=row)
    conn.close()
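If you want a single query instead of two cursors, a UNION ALL across both FTS5 tables also works, as long as both arms select the same number of columns. A sketch with made-up column names (task_name and comment_text stand in for the real schema):
single_query = """
    SELECT 'task' AS source, task_name AS hit
    FROM task_list WHERE task_list MATCH ?
    UNION ALL
    SELECT 'comment' AS source, comment_text AS hit
    FROM comments WHERE comments MATCH ?
"""
# one round trip; each row is tagged with the table it came from
rows = c.execute(single_query, [new_search, new_search]).fetchall()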

Insert R list into RPostgreSQL query

I'm running a PostgreSQL query based on an automated list of IDs stored in an R vector. I'm trying to determine how to include that vector in my query so I don't have to hard-code the IDs each time I run the query.
For example, I have a script that produces the vector
id <- c("001","002","003")
and my query looks something like this:
SELECT *
FROM my_query
WHERE my_query.id_col IN ('001', '002', '003')
which I run using RPostgres:
library(RPostgres)
snappConnection <- DBI::dbConnect(RPostgres::Postgres(),
                                  host = "host",
                                  dbname = "dbname",
                                  user = "user",
                                  password = "pword",
                                  port = 0000)
core.data <- dbGetQuery(conn = snappConnection,
                        statement = "SELECT * FROM my_query WHERE my_query.id_col IN ('001', '002', '003')")
Is there a way to reference my "id" vector from R in my query, so that when "id" gets new values the query uses them automatically?
glue_sql from the glue package should work:
query <- glue::glue_sql("
SELECT *
FROM my_query
WHERE my_query.id_col IN ({id*})
", .con = snappConnection)
core.data <- dbGetQuery(conn = snappConnection, statement = query)
@dave-edison's answer solved my problem. In parallel with trying his solution, I got this to work.
I saved the query below as "my_query.sql":
SELECT *
FROM my_query
WHERE my_query.id_col IN ('string_to_replace')
Then I created a string and used gsub to substitute the IDs into the query string.
library(tidyverse)
temp.script <- read_file("my_query.sql")
core.data.script <- gsub('string_to_replace', paste0(id, collapse = "', '"), temp.script)
From there I just ran my RPostgres script like above.
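A side note on the gsub approach: pasting the IDs into the SQL string breaks if a value ever contains a quote. The driver-level alternative is one bound placeholder per ID; here is the idea as a small Python DB-API sketch ('cursor' stands for any DB-API cursor, and the placeholder style varies: '?' for sqlite3, '%s' for psycopg2):
ids = ['001', '002', '003']
placeholders = ', '.join(['?'] * len(ids))  # one placeholder per ID
query = 'SELECT * FROM my_query WHERE id_col IN ({})'.format(placeholders)
cursor.execute(query, ids)  # the driver quotes each value safely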

Trying to extract column reference from ME23N

session.findById("cntlMEALV_GRID_CONTROL_MMHIPO/shellcont/shell").selectColumn "XBLNR"
I'm trying to extract this column into Excel.
I was only able to extract the row count using a watch, but I'm not sure how to extract the column values into Excel.
Any code or sample or help will be appreciated, thanks.
Use GetCellValue to copy the values into Excel:
Set SapGuiAuto = GetObject("SAPGUI")
Set SAPApp = SapGuiAuto.GetScriptingEngine
Set SAPCon = SAPApp.Children(0)
Set session = SAPCon.Children(0)

MyGrid = "cntlMEALV_GRID_CONTROL_MMHIPO/shellcont/shell"
iRows = session.findById(MyGrid).RowCount

'Fill the Excel sheet, one grid row per worksheet row
i = 0
Do While i < iRows
    Cells(i + 1, 1).Value = session.findById(MyGrid).GetCellValue(i, "XBLNR")
    i = i + 1
Loop
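For what it's worth, the same grid calls can be driven from Python over COM instead of VBA. A hedged sketch using pywin32, assuming SAP GUI scripting is enabled and a session is already open (the grid ID is the one from the question):
import win32com.client

# attach to the running SAP GUI: first connection, first session
sap_gui = win32com.client.GetObject('SAPGUI')
application = sap_gui.GetScriptingEngine
session = application.Children(0).Children(0)

grid = session.findById('cntlMEALV_GRID_CONTROL_MMHIPO/shellcont/shell')
# same calls as in the VBA version: RowCount and GetCellValue(row, column_id)
values = [grid.GetCellValue(i, 'XBLNR') for i in range(grid.RowCount)]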

How to read fixed width files using Spark in R

I need to read a 10GB fixed width file to a dataframe. How can I do it using Spark in R?
Suppose my text data is the following:
text <- c("0001BRAjonh ",
"0002USAmarina ",
"0003GBPcharles")
I want the first 4 characters to be associated with the column "ID" of a data frame, characters 5-7 with a column "Country", and characters 8-14 with a column "Name".
I would use the function read.fwf if the dataset were small, but that is not the case.
I can read the file as a text file using the sparklyr::spark_read_text function, but I don't know how to assign the values of the file to a data frame properly.
EDIT: I forgot to say that substring starts at 1 and arrays start at 0, because reasons.
Going through and adding the code I talked about in the comment above.
The process is dynamic and is based on a Hive table called Input_Table. The table has 5 columns: Table_Name, Column_Name, Column_Ordinal_Position, Column_Start, and Column_Length. It is external, so any user can change it and add or remove files in the folder location. I quickly rebuilt this from scratch so as not to take actual code; does everything make sense?
// Load the input file and the Hive table. For the Hive table we make sure to take
// only the correct columns, and in the correct order.
val inputDF = spark.read.format(recordFormat).option("header", "false").load(folderLocation + "/" + tableName + "." + tableFormat).rdd.toDF("Odd_Long_Name")
val inputSchemaDF = spark.sql("select * from Input_Table where Table_Name = '" + tableName + "'").sort($"Column_Ordinal_Position")
// Build arrays from the columns; rdd.map(...).collect turns a DataFrame column into
// an array of strings, so the column can be iterated through.
val columnNameArray = inputSchemaDF.selectExpr("Column_Name").rdd.map(x => x.mkString).collect
val columnStartArray = inputSchemaDF.selectExpr("Column_Start_Position").rdd.map(x => x.mkString).collect
val columnLengthArray = inputSchemaDF.selectExpr("Column_Length").rdd.map(x => x.mkString).collect
// Set up the iterator as well as the variables that are meant to be overwritten
var columnAllocationIterator = 1
var localCommand = ""
var commandArray = Array("")
// Loop once per column in the input table
while (columnAllocationIterator <= columnNameArray.length) {
  // Overwrite the command string with the new command ("Odd_Long_Name" was too
  // accurate a name not to place into the code)
  localCommand = "substring(Odd_Long_Name, " + columnStartArray(columnAllocationIterator - 1) + ", " + columnLengthArray(columnAllocationIterator - 1) + ") as " + columnNameArray(columnAllocationIterator - 1)
  // The first time through, overwrite the command array; after that, append to it
  if (columnAllocationIterator == 1) {
    commandArray = Array(localCommand)
  } else {
    commandArray = commandArray ++ Array(localCommand)
  }
  // I really like iterating my iterators like this
  columnAllocationIterator = columnAllocationIterator + 1
}
// Run all elements of the string array independently against the table
val finalDF = inputDF.selectExpr(commandArray:_*)
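The question asks for R and the answer above is Scala; for reference, here is the core substring idea in a compact PySpark sketch, with the column positions hardcoded from the question (the file path is made up):
from pyspark.sql import SparkSession
from pyspark.sql.functions import substring

spark = SparkSession.builder.getOrCreate()
lines = spark.read.text('fixed_width.txt')  # one row per line, in a 'value' column
parsed = lines.select(
    substring('value', 1, 4).alias('ID'),       # characters 1-4
    substring('value', 5, 3).alias('Country'),  # characters 5-7
    substring('value', 8, 7).alias('Name'),     # characters 8-14
)
parsed.show()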

Select Top 1 From a Table For Each row in another Table

I am just starting to work with OpenEdge and I need to join information from two tables, but I only need the first row from the second one.
Basically I need to do a typical SQL CROSS APPLY, but in Progress. I looked in the documentation, and the FETCH FIRST 10 ROWS ONLY statement is only available in OpenEdge 11.
My query is:
SELECT *
FROM la_of
JOIN PUB.la_ofart
  ON la_of.empr_cod = la_ofart.empr_cod
 AND la_of.Cod_Ordf = la_ofart.Cod_Ordf
 AND la_of.Num_ordex = la_ofart.Num_ordex
 AND la_of.Num_partida = la_ofart.Num_partida
CROSS APPLY (
    SELECT TOP 1 ofart.Cod_Ordf AS Cod_Ordf_ofart,
                 ofart.Num_ordex AS Num_ordex_ofart
    FROM la_ofart AS ofart
    WHERE ofart.empr_cod = la_ofart.empr_cod
      AND ofart.Num_partida = la_ofart.Num_partida
      AND la_ofart.doc1_num = ofart.doc1_num
      AND la_ofart.doc2_linha = ofart.doc2_linha
    ORDER BY ofart.Cod_Ordf DESC) ofart
I am using SSMS to extract data from OE10 through an ODBC connector, querying OE with OPENQUERY.
Thanks for any help.
If I understood your question correctly, maybe you can use something like this. It may not be the best solution for your problem, but it may suit your needs.
DEF BUFFER ofart FOR la_ofart.

DEF TEMP-TABLE tt-ofart NO-UNDO LIKE ofart
    FIELD seq AS INT
    INDEX ch-seq seq.

DEF VAR i-count AS INT NO-UNDO.

EMPTY TEMP-TABLE tt-ofart.

blk:
FOR EACH la_ofart NO-LOCK,
    EACH la_of NO-LOCK
    WHERE la_of.empr_cod = la_ofart.empr_cod
      AND la_of.Cod_Ordf = la_ofart.Cod_Ordf
      AND la_of.Num_ordex = la_ofart.Num_ordex
      AND la_of.Num_partida = la_ofart.Num_partida,
    EACH ofart NO-LOCK
    WHERE ofart.empr_cod = la_ofart.empr_cod
      AND ofart.Num_partida = la_ofart.Num_partida
      AND ofart.doc1_num = la_ofart.doc1_num
      AND ofart.doc2_linha = la_ofart.doc2_linha
    BREAK BY ofart.Cod_Ordf DESCENDING:

    ASSIGN i-count = i-count + 1.

    /* copy the row into the temp-table, numbered in arrival order */
    CREATE tt-ofart.
    BUFFER-COPY ofart TO tt-ofart
        ASSIGN tt-ofart.seq = i-count.

    IF i-count >= 10 THEN
        LEAVE blk.
END.

FOR EACH tt-ofart USE-INDEX ch-seq:
    DISP tt-ofart WITH SCROLLABLE 1 COL 1 DOWN NO-ERROR.
END.
