DynamodbMapper python run locally - amazon-dynamodb

I have some Python code like this that uses DynamoDBMapper:
from dynamodb_mapper.model import DynamoDBModel

class Employee(DynamoDBModel):
    __table__ = u"employee"
    __hash_key__ = u"emp_id"
    __schema__ = {
        u"emp_id": int,
        u"name": unicode,
        u"address": set,
    }
    __defaults__ = {
        u"address": set([u"Konami"]),
    }
I have all the credentials set for AWS. I just wrote a small Python client to create a table in DynamoDB using DynamoDBMapper:
from dynamodb_mapper.model import ConnectionBorg
from influencers.data_access.Employee import Employee
conn = ConnectionBorg()
conn.create_table(Employee, 10, 10, wait_for_active=True)
I am trying to run this locally, against a local DynamoDB instance. My question is: how do I point this client at the endpoint http://localhost:8000? I looked at the Java code, and there is a clean way of setting the endpoint on the DynamoDB client, but in Python I don't see one. Any help would be greatly appreciated.

Related

What is the best way to check if a file exists on an Azure Datalake using Apache Airflow?

I have a DAG that shall check if a file has been uploaded to Azure DataLake in a specific directory. If so, it allows other DAGs to run.
I thought about using a FileSensor, but I assume an fs_conn_id parameter is not enough to authenticate against a DataLake.
There is no AzureDataLakeSensor in the Azure provider, but you can easily implement one, since AzureDataLakeHook has a check_for_file function; all that's needed is to wrap this function in a Sensor class implementing the poke() function of BaseSensorOperator. By doing so you can use the Microsoft Azure Data Lake connection directly.
I didn't test it but this should work:
from typing import TYPE_CHECKING, Sequence

from airflow.providers.microsoft.azure.hooks.data_lake import AzureDataLakeHook
from airflow.sensors.base import BaseSensorOperator

if TYPE_CHECKING:
    from airflow.utils.context import Context


class MyAzureDataLakeSensor(BaseSensorOperator):
    """
    Sense for files in Azure Data Lake.

    :param path: The Azure Data Lake path to find the objects. Supports glob
        strings (templated)
    :param azure_data_lake_conn_id: The Azure Data Lake connection.
    """

    template_fields: Sequence[str] = ('path',)
    ui_color = '#901dd2'

    def __init__(
        self, *, path: str, azure_data_lake_conn_id: str = 'azure_data_lake_default', **kwargs
    ) -> None:
        super().__init__(**kwargs)
        self.path = path
        self.azure_data_lake_conn_id = azure_data_lake_conn_id

    def poke(self, context: "Context") -> bool:
        hook = AzureDataLakeHook(azure_data_lake_conn_id=self.azure_data_lake_conn_id)
        self.log.info('Poking for file in path: %s', self.path)
        try:
            hook.check_for_file(file_path=self.path)
            return True
        except FileNotFoundError:
            pass
        return False
Usage example:
MyAzureDataLakeSensor(
    task_id='adls_sense',
    path='folder/file.csv',
    azure_data_lake_conn_id='azure_data_lake_default',
    mode='reschedule'
)
First of all, have a look at official Microsoft Operators for Airflow.
We can see that there are dedicated operators for Azure DataLake Storage; unfortunately, only the ADLSDeleteOperator seems available at the moment.
This ADLSDeleteOperator uses an AzureDataLakeHook, which you should reuse in your own custom operator to check for file presence.
My advice is to create a child class of CheckOperator that uses the ADLS hook to check whether the file provided as input exists, via the hook's check_for_file function.
UPDATE: as pointed out in the comments, CheckOperator seems to be tied to SQL queries and is deprecated. Using your own custom Sensor or custom Operator is the way to go.
I had severe issues using the proposed API, so I wrapped the Microsoft API in an Airflow sensor myself, which worked fine. All you need to do then is to use this operator and pass account_url and access_token.
from azure.storage.filedatalake import DataLakeServiceClient
from airflow.sensors.base import BaseSensorOperator


class AzureDataLakeSensor(BaseSensorOperator):
    def __init__(self, path, filename, account_url, access_token, **kwargs):
        super().__init__(**kwargs)
        self._client = DataLakeServiceClient(
            account_url=account_url,
            credential=access_token
        )
        self.path = path
        self.filename = filename

    def poke(self, context):
        container = self._client.get_file_system_client(file_system="raw")
        dir_client = container.get_directory_client(self.path)
        file = dir_client.get_file_client(self.filename)
        return file.exists()

Calling a REST API using Azure function App and store data in Azure container

I have a requirement to call a REST API and store the resulting JSON in an Azure storage container. I have tried standalone Python code to extract the data from the REST API, and I am able to successfully receive the data from an API that uses pagination. Now I need to integrate this Python code into an Azure Function, which will ultimately store the resulting JSON in an Azure storage container. I am fairly new to Azure, so I need your guidance on how to tweak this code to fit in an Azure Function that will push the JSON to an Azure container.
response = requests.post(
    base_url,
    auth=(client_id, client_secret),
    data={'grant_type': grant_type, 'client_id': client_id,
          'client_secret': client_secret, 'resource': resource})
access_token = response.json()
token = access_token['access_token']

# call the API once to learn the total number of pages
API_Key = 'xxxxx'
api_url = 'https://api.example.com?pageSize=10&page=1&sortBy=orderid&sortDirection=asc'
headers = {
    'Authorization': token,
    'API-Key': API_Key,
}
r = requests.get(url=api_url, headers=headers).json()
total_record = int(r['pagination']['total'])
total_page = round(total_record / 500) + 1

# loop through all pages
all_items = []
for page in range(0, total_page):
    url = "https://api.example.com?pageSize=500&sortBy=orderid&sortDirection=asc&page=" + str(page)
    response = requests.get(url=url, headers=headers).json()
    response_data = response['data']
    all_items.append(response_data)
Your input/guidance is very much appreciated.
You can put your logic in the body of the function (the trigger just sets the condition under which the function runs).
For example, if you are using an HttpTrigger:
import logging
import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    '''
    # Put your logic code here.
    '''
    return func.HttpResponse(
        "This is a test.",
        status_code=200
    )
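The pagination loop from the question can be factored into a plain helper that drops straight into the body of main(). A sketch, where fetch_page is a hypothetical callable standing in for the requests.get(...).json() call; note that math.ceil avoids the off-by-one that round(total / 500) + 1 can produce, and that extend, unlike append, yields a flat list of items:

```python
import math

def fetch_all_pages(fetch_page, page_size=500):
    """Collect the 'data' lists from every page of a paginated API.

    fetch_page(page) is a hypothetical callable that returns the decoded
    JSON for one page (in the question's code, requests.get(...).json()).
    """
    first = fetch_page(1)
    total = int(first["pagination"]["total"])
    last_page = math.ceil(total / page_size)  # ceil, not round(...) + 1
    items = list(first["data"])
    for page in range(2, last_page + 1):
        items.extend(fetch_page(page)["data"])  # extend flattens the pages
    return items
```

main() would then call this helper, serialize the result with json.dumps, and hand it to whatever storage mechanism you choose.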
You can also use a blob output binding to achieve your requirement; it is easier. Have a look at this official doc:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-output?tabs=python#example
Let me know if you have any problems.
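For the storage side, the blob output binding only needs an extra entry in the function's function.json. A minimal sketch (the container, blob name, and binding name are placeholders; connection refers to a storage-account app setting):

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    { "authLevel": "function", "type": "httpTrigger", "direction": "in", "name": "req", "methods": ["get", "post"] },
    { "type": "http", "direction": "out", "name": "$return" },
    { "type": "blob", "direction": "out", "name": "outputblob", "path": "mycontainer/orders.json", "connection": "AzureWebJobsStorage" }
  ]
}
```

With that in place, main gains an extra parameter outputblob: func.Out[str], and calling outputblob.set(json.dumps(all_items)) writes the JSON to the container.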

How to setup SQLite in Kotlin?

I'm currently writing a chat application in Kotlin and want to implement authentication, by storing hashed passwords on my server in a database.
I don't have any experience with databases, so I chose the simplest-looking one I found after about 30 minutes of Google searching: SQLite.
Unfortunately, there isn't any real setup guide for SQLite in Kotlin.
Could someone please write a small step-by-step guide on how to:
install SQLite
connect to it
use it in source code (e.g. create a table with one or two values)
all in Kotlin, if possible.
I'm grateful for any help!
Here's an MWE using the JDBC API on Ubuntu 20.04:
sudo apt install sqlite3
SQLITE_VERSION=`sqlite3 --version | cut -d ' ' -f 1` # 3.31.1 on Ubuntu 20.04
curl -s https://get.sdkman.io | bash
sdk i java   # for JDBC
sdk i maven  # for the JDBC interface to SQLite (see later)
sdk i kotlin 1.4.10 # later versions are currently affected by:
                    # https://youtrack.jetbrains.com/issue/KT-43520
cat > demo.main.kts <<EOF
#!/usr/bin/env kotlin

// uses Maven to install the dependency from Maven Central:
// reference: https://github.com/Kotlin/KEEP/blob/master/proposals/scripting-support.md#kotlin-main-kts
@file:DependsOn("org.xerial:sqlite-jdbc:$SQLITE_VERSION")

import java.sql.DriverManager

// creates or connects to the database.sqlite3 file in the current directory:
var connection = DriverManager.getConnection("jdbc:sqlite:database.sqlite3")
var statement = connection.createStatement()
statement.executeUpdate("drop table if exists people")
statement.executeUpdate("create table people (id integer, name string)")
statement.executeUpdate("insert into people values(1, 'leo')")
statement.executeUpdate("insert into people values(2, 'yui')")
var resultSet = statement.executeQuery("select * from people")
while (resultSet.next()) {
    println("id = " + resultSet.getInt("id"))
    println("name = " + resultSet.getString("name"))
    println()
}
connection.close()
EOF
chmod +x demo.main.kts
./demo.main.kts
SQLite doesn't use a client-server model. The data is stored in a file of your choice, so there is no installation to do.
Maybe you can look at Exposed. It is a Kotlin library for SQL databases (SQLite included).
There is documentation here.
You just need to add the 'org.jetbrains.exposed:exposed' dependency to Gradle or Maven (plus the JDBC driver dependency; for SQLite it is 'org.xerial:sqlite-jdbc').
import com.sun.net.httpserver.HttpServer
import java.io.PrintWriter
import java.net.InetSocketAddress
import java.sql.* // Connection, DriverManager, SQLException

/**
https://www.tutorialkart.com/kotlin/connect-to-mysql-database-from-kotlin-using-jdbc/
$ wget https://repo1.maven.org/maven2/org/xerial/sqlite-jdbc/3.27.2.1/sqlite-jdbc-3.27.2.1.jar
$ kotlinc sqws.kt; kotlin -cp ".:./sqlite-jdbc-3.27.2.1.jar" SqwsKt
Minimal embedded HTTP server in Kotlin using the Java built-in HttpServer
**/
fun main(args: Array<String>) {
    val conn = DriverManager.getConnection("jdbc:sqlite:./sampledb.db")
    var stmt: Statement? = null
    var resultset: ResultSet? = null
    try {
        stmt = conn.createStatement()
        resultset = stmt.executeQuery("SELECT * FROM items;")
    } catch (ex: SQLException) {
        // handle any errors
        ex.printStackTrace()
    }
    HttpServer.create(InetSocketAddress(8080), 0).apply {
        println("browse http://localhost:8080/hello")
        createContext("/hello") { http ->
            http.responseHeaders.add("Content-type", "text/plain")
            http.sendResponseHeaders(200, 0)
            PrintWriter(http.responseBody).use { out ->
                out.println("ok")
                while (resultset!!.next()) {
                    out.println(resultset.getString("name"))
                }
            }
        }
        start()
    }
}
Please check the full documentation on Github.
As soon as you mention a server, you are perhaps looking in the wrong direction. SQLite is intended as an embedded database, with each device having its own database. Synchronisation between server and clients would have to be written by hand and can be problematic, whereas many RDBMSs cater better for client-server solutions.
Have a look at Appropriate Uses For SQLite.

Automating component deletion in Nexus 3

I am attempting to delete some components in a repository via the Nexus 3 API.
I have followed the instructions in the following question
Using the Nexus3 API how do I get a list of artifacts in a repository
and have modified it as follows to delete an artifact
import groovy.json.JsonOutput
import org.sonatype.nexus.repository.storage.Component
import org.sonatype.nexus.repository.storage.Query
import org.sonatype.nexus.repository.storage.StorageFacet

def repoName = "eddie-test"
def startDate = "2016/01/01"
def artifactName = "you-artifact-name"
def artifactVersion = "1.0.6"

log.info("Attempting to delete for repository: ${repoName} as of startDate: ${startDate}")

def repo = repository.repositoryManager.get(repoName)
StorageFacet storageFacet = repo.facet(StorageFacet)
def tx = storageFacet.txSupplier().get()
tx.begin()

// build a query to return a list of components scoped by name and version
Iterable<Component> foundComponents = tx.findComponents(
    Query.builder()
        .where('name = ').param(artifactName)
        .and('version = ').param(artifactVersion)
        .build(),
    [repo])

// extra logic for validation goes here
if (foundComponents.size() == 1) {
    tx.deleteComponent(foundComponents[0])
}
tx.commit()

log.info("done")
however, when I interrogate the maven-metadata.xml at
http://localhost:32769/repository/eddie-test/com/company/you-artifact-name/maven-metadata.xml
the version is still listed, i.e.:
<metadata>
  <groupId>com.company</groupId>
  <artifactId>you-artifact-name</artifactId>
  <versioning>
    <release>1.0.7</release>
    <versions>
      <version>1.0.6</version>
      <version>1.0.7</version>
    </versions>
    <lastUpdated>20161213115754</lastUpdated>
  </versioning>
</metadata>
(deleting the component via the delete component button in the ui, updates the maven-metadata.xml as expected)
So is there a way to make sure that the file is updated when deleting via the API?
After running this, you can run the "Rebuild Maven repository metadata" scheduled task, and that will accomplish what you aim to achieve.
There is currently no public API for calling that task. If you want, pop on over to https://issues.sonatype.org/browse/NEXUS and file an issue for that :)

Red5: Server application skeleton and helloworld

Can anyone provide an updated application skeleton for a Red5 application? From what I have found, the logging system changed from Log4j. I've been looking for some tutorials just to set everything up, but I can't really find something that simply works.
In addition, can anyone provide a simple tutorial with a server application and a Flex client?
Thanks in advance!
I struggled a lot with that. This reference worked for me:
http://fossies.org/unix/privat/red5-1.0.0-RC2.tar.gz:a/red5-1.0.0/doc/reference/html/logging-setup.html
The trick was to remove any log4j.properties or log4j.xml files and remove any "log4j" listeners from web.xml, then create a logback-myApp.xml, where myApp is the name of your webapp, and place it on your webapp classpath (WEB-INF/classes, or in your application jar within WEB-INF/lib).
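For reference, a minimal sketch of what such a logback-myApp.xml could look like (the appender, log path, and pattern here are placeholder choices, not Red5 requirements):

```xml
<configuration>
  <contextName>myApp</contextName>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>log/myApp.log</file>
    <encoder>
      <pattern>%d{ISO8601} [%thread] %-5level %logger{35} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="FILE" />
  </root>
</configuration>
```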
and in my app I did:
import org.slf4j.Logger;
import org.red5.logging.Red5LoggerFactory;
and then:
private static Logger log = Red5LoggerFactory.getLogger(MyClassName.class, "myApp");
The client's ActionScript looks like this:
// Initializing connection
private function initConnection():void {
    nc = new NetConnection();
    nc.client = new NetConnectionClient();
    nc.objectEncoding = flash.net.ObjectEncoding.AMF0;
    nc.connect(rtmpPath.text, true); // Path to FMS server, e.g. rtmp://<hostname>/<application name>
    nc.addEventListener("netStatus", publishStream); // Listener to see if connection is successful
}

private function publishStream(event:NetStatusEvent):void {
    if (nc.connected) {
        nsPublish = new NetStream(nc); // Initializing NetStream
        nsPublish.attachCamera(Camera.getCamera());
        nsPublish.attachAudio(Microphone.getMicrophone()); // Attaching camera & microphone
        nsPublish.publish(streamName.text, 'live'); // Publish stream
        mx.controls.Alert.show("Published");
    } else {
        mx.controls.Alert.show("Connection Error");
    }
}
