I am attempting to delete some components in a repository via the Nexus 3 API. I have followed the instructions in the following question:
Using the Nexus3 API how do I get a list of artifacts in a repository
and have modified it as follows to delete an artifact:
import groovy.json.JsonOutput
import org.sonatype.nexus.repository.storage.Component
import org.sonatype.nexus.repository.storage.Query
import org.sonatype.nexus.repository.storage.StorageFacet
def repoName = "eddie-test"
def startDate = "2016/01/01"
def artifactName = "you-artifact-name"
def artifactVersion = "1.0.6"
log.info(" Attempting to delete for repository: ${repoName} as of startDate: ${startDate}")
def repo = repository.repositoryManager.get(repoName)
StorageFacet storageFacet = repo.facet(StorageFacet)
def tx = storageFacet.txSupplier().get()
tx.begin()
// build a query to return a list of components scoped by name and version
Iterable<Component> foundComponents = tx.findComponents(Query.builder().where('name = ').param(artifactName).and('version = ').param(artifactVersion).build(), [repo])
// extra logic for validation goes here
if (foundComponents.size() == 1) {
    tx.deleteComponent(foundComponents[0])
}
tx.commit()
log.info("done")
However, when I interrogate the maven-metadata.xml at
http://localhost:32769/repository/eddie-test/com/company/you-artifact-name/maven-metadata.xml
the version is still listed, i.e.:
<metadata>
  <groupId>com.company</groupId>
  <artifactId>you-artifact-name</artifactId>
  <versioning>
    <release>1.0.7</release>
    <versions>
      <version>1.0.6</version>
      <version>1.0.7</version>
    </versions>
    <lastUpdated>20161213115754</lastUpdated>
  </versioning>
</metadata>
(Deleting the component via the delete component button in the UI updates the maven-metadata.xml as expected.)
So is there a way to make sure that the file is updated when deleting via the API?
After running this, you can run the "Rebuild Maven repository metadata" scheduled task, and that will accomplish what you aim to achieve.
There is currently no Public API for calling that task. If you want, pop on over to https://issues.sonatype.org/browse/NEXUS and make an issue for that :)
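That said, later Nexus 3.x releases did add a tasks REST endpoint. A hedged sketch, assuming such a version, admin credentials, and that the rebuild task already exists in the UI — list the tasks to find its id, then trigger it:

# list tasks (find the id of the rebuild-metadata task in the JSON response)
curl -u admin:admin123 http://localhost:8081/service/rest/v1/tasks
# run it
curl -u admin:admin123 -X POST http://localhost:8081/service/rest/v1/tasks/<task-id>/run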
I have a DAG that should check whether a file has been uploaded to Azure Data Lake in a specific directory. If so, it allows other DAGs to run.
I thought about using a FileSensor, but I assume an fs_conn_id parameter is not enough to authenticate against a Data Lake.
There is no AzureDataLakeSensor in the Azure provider, but you can easily implement one: the AzureDataLakeHook has a check_for_file function, so all that's needed is to wrap this function in a Sensor class implementing the poke() function of BaseSensorOperator. By doing so you can use the Microsoft Azure Data Lake Connection directly.
I didn't test it but this should work:
from typing import TYPE_CHECKING, Sequence

from airflow.providers.microsoft.azure.hooks.data_lake import AzureDataLakeHook
from airflow.sensors.base import BaseSensorOperator

if TYPE_CHECKING:
    from airflow.utils.context import Context


class MyAzureDataLakeSensor(BaseSensorOperator):
    """
    Sense for files in Azure Data Lake.

    :param path: The Azure Data Lake path to find the objects. Supports glob
        strings (templated)
    :param azure_data_lake_conn_id: The Azure Data Lake connection id
    """

    template_fields: Sequence[str] = ('path',)
    ui_color = '#901dd2'

    def __init__(
        self, *, path: str, azure_data_lake_conn_id: str = 'azure_data_lake_default', **kwargs
    ) -> None:
        super().__init__(**kwargs)
        self.path = path
        self.azure_data_lake_conn_id = azure_data_lake_conn_id

    def poke(self, context: "Context") -> bool:
        hook = AzureDataLakeHook(azure_data_lake_conn_id=self.azure_data_lake_conn_id)
        self.log.info('Poking for file in path: %s', self.path)
        try:
            # check_for_file returns a bool; older hook versions may raise instead
            return hook.check_for_file(file_path=self.path)
        except FileNotFoundError:
            pass
        return False
Usage example:
MyAzureDataLakeSensor(
    task_id='adls_sense',
    path='folder/file.csv',
    azure_data_lake_conn_id='azure_data_lake_default',
    mode='reschedule'
)
First of all, have a look at the official Microsoft operators for Airflow.
We can see that there are dedicated operators for Azure Data Lake Storage; unfortunately, only the ADLSDeleteOperator seems available at the moment.
This ADLSDeleteOperator uses an AzureDataLakeHook, which you should reuse in your own custom operator to check for file presence.
My advice is to create a child class of CheckOperator that uses the ADLS hook to check whether the file provided as input exists, via the hook's check_for_file function.
UPDATE: as pointed out in the comments, CheckOperator seems to be tied to SQL queries and is deprecated. Using your own custom Sensor or custom Operator is the way to go.
I had severe issues using the proposed API, so I embedded the Microsoft API into Airflow instead. This worked fine. All you need to do then is to use this operator and pass account_url and access_token.
from azure.storage.filedatalake import DataLakeServiceClient
from airflow.sensors.base import BaseSensorOperator


class AzureDataLakeSensor(BaseSensorOperator):
    def __init__(self, path, filename, account_url, access_token, **kwargs):
        super().__init__(**kwargs)
        # the ADLS Gen2 client from the Microsoft SDK, authenticated up front
        self._client = DataLakeServiceClient(
            account_url=account_url,
            credential=access_token
        )
        self.path = path
        self.filename = filename

    def poke(self, context):
        # note: the file system (container) name "raw" is hard-coded here
        container = self._client.get_file_system_client(file_system="raw")
        dir_client = container.get_directory_client(self.path)
        file = dir_client.get_file_client(self.filename)
        return file.exists()
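Usage would then mirror the first answer's example. A minimal sketch, assuming valid credentials are at hand (the account URL and token below are placeholders, and remember the container name is fixed to "raw" inside poke()):

AzureDataLakeSensor(
    task_id='adls_sense_raw',
    path='folder',
    filename='file.csv',
    account_url='https://<my_account>.dfs.core.windows.net',
    access_token='<my_token>',
    mode='reschedule'
)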
I'm currently writing a chat application in Kotlin and want to implement authentication by storing hashed passwords in a database on my server.
I don't have any experience with databases, so I chose the simplest-looking one I found after about 30 minutes of Google searching: SQLite.
Unfortunately there isn't any real setup guide for SQLite in Kotlin.
Could someone please write a small step-by-step guide on how to:
install SQLite
connect to it
use it in source code (e.g. create a table with one or two values)
all in Kotlin if possible?
I'm grateful for any help!
Here's an MWE using the JDBC API on Ubuntu 20.04:
sudo apt install sqlite3
SQLITE_VERSION=`sqlite3 --version | cut -d ' ' -f 1` # 3.31.1 on Ubuntu 20.04
curl -s https://get.sdkman.io | bash
sdk i java # for JDBC
sdk i maven # for JDBC interface to SQLite (see later)
sdk i kotlin 1.4.10 # later versions are currently affected by:
# https://youtrack.jetbrains.com/issue/KT-43520
cat > demo.main.kts <<EOF
#!/usr/bin/env kotlin
// uses maven to fetch the dependency from Maven Central:
// reference: https://github.com/Kotlin/KEEP/blob/master/proposals/scripting-support.md#kotlin-main-kts
@file:DependsOn("org.xerial:sqlite-jdbc:$SQLITE_VERSION")
import java.sql.DriverManager
import java.sql.Connection
import java.sql.Statement
import java.sql.ResultSet
// creates or connects to the database.sqlite3 file in the current directory:
var connection = DriverManager.getConnection("jdbc:sqlite:database.sqlite3")
var statement = connection.createStatement()
statement.executeUpdate("drop table if exists people")
statement.executeUpdate("create table people (id integer, name string)")
statement.executeUpdate("insert into people values(1, 'leo')")
statement.executeUpdate("insert into people values(2, 'yui')")
var resultSet = statement.executeQuery("select * from people")
while (resultSet.next()) {
    println("id = " + resultSet.getInt("id"))
    println("name = " + resultSet.getString("name"))
    println()
}
connection.close()
EOF
chmod +x demo.main.kts
./demo.main.kts
SQLite doesn't work with a client-server model. The data is stored in a file of your choice, so there is no installation to do.
Maybe you can look at Exposed. It is a Kotlin library for SQL databases (SQLite included).
There is documentation here.
You just need to add the 'org.jetbrains.exposed:exposed' dependency to Gradle or Maven (plus the JDBC driver dependency; for SQLite it is 'org.xerial:sqlite-jdbc'). A small sketch follows.
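For a feel of the DSL, a minimal sketch, assuming a recent Exposed version and the xerial driver on the classpath (the table name, columns and values are illustrative):

import org.jetbrains.exposed.sql.*
import org.jetbrains.exposed.sql.transactions.transaction

object People : Table() {
    val id = integer("id")
    val name = varchar("name", length = 50)
}

fun main() {
    // creates or opens database.sqlite3 in the current directory
    Database.connect("jdbc:sqlite:database.sqlite3", driver = "org.sqlite.JDBC")
    transaction {
        SchemaUtils.create(People)
        People.insert {
            it[id] = 1
            it[name] = "leo"
        }
        People.selectAll().forEach { row -> println(row[People.name]) }
    }
}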
import com.sun.net.httpserver.HttpServer
import java.io.PrintWriter
import java.net.InetSocketAddress
import java.sql.* // Connection, DriverManager, SQLException

/**
https://www.tutorialkart.com/kotlin/connect-to-mysql-database-from-kotlin-using-jdbc/
$ wget https://repo1.maven.org/maven2/org/xerial/sqlite-jdbc/3.27.2.1/sqlite-jdbc-3.27.2.1.jar
$ kotlinc sqws.kt; kotlin -cp ".:./sqlite-jdbc-3.27.2.1.jar" SqwsKt
Minimal embedded HTTP server in Kotlin using Java's built-in HttpServer
**/
fun main(args: Array<String>) {
    val conn = DriverManager.getConnection("jdbc:sqlite:./sampledb.db")
    var resultset: ResultSet? = null
    try {
        // run the query once and keep the result set for the handler below
        val stmt = conn.createStatement()
        resultset = stmt.executeQuery("SELECT * FROM items;")
    } catch (ex: SQLException) {
        // handle any errors
        ex.printStackTrace()
    }
    HttpServer.create(InetSocketAddress(8080), 0).apply {
        println("browse http://localhost:8080/hello")
        createContext("/hello") { http ->
            http.responseHeaders.add("Content-type", "text/plain")
            http.sendResponseHeaders(200, 0)
            PrintWriter(http.responseBody).use { out ->
                out.println("ok")
                while (resultset!!.next()) {
                    out.println(resultset.getString("name"))
                }
            }
        }
        start()
    }
}
Please check the full documentation on GitHub.
As soon as you mention a server, you are perhaps looking in the wrong direction. SQLite is intended as an embedded database, with each device having its own unique database. Synchronisation between server and clients would have to be written yourself and can be problematic, whilst there are many RDBMSs that cater better for client-server solutions.
Have a look at Appropriate Uses For SQLite.
In my Plone 4 site, I have installed quintagroup.transmogrifier (I tried both release 0.5 and the bleeding-edge GitHub version) and collective.transmogrifier 1.5.
I found an example for an export based on a portal_catalog search here.
I have the following export configuration, registered as catalogsearch:
[transmogrifier]
pipeline =
catalog
fileexporter
marshaller
datacorrector
portletsexporter
writer
EXPORTING
[catalog]
blueprint = quintagroup.transmogrifier.catalogsource
path = query= /Plone/some/existing/folder/
[fileexporter]
blueprint = quintagroup.transmogrifier.fileexporter
[marshaller]
blueprint = quintagroup.transmogrifier.marshaller
[datacorrector]
blueprint = quintagroup.transmogrifier.datacorrector
sources =
marshall
[portletsexporter]
blueprint = quintagroup.transmogrifier.portletsexporter
[writer]
blueprint = quintagroup.transmogrifier.writer
prefix = structure
[EXPORTING]
blueprint = quintagroup.transmogrifier.logger
keys =
_type
_path
The idea is to specify the search expression when calling the transmogrifier:
$ bin/instance debug
>>> portal = app.Plone
>>> from collective.transmogrifier.transmogrifier import Transmogrifier
>>> tm = Transmogrifier(portal)
>>> tm('catalogsearch')
>>> tm('catalogsearch', catalog={'path': '/Plone/some/existing/folder/'})
However, neither call to the Transmogrifier object ever returns; I need to terminate them with Ctrl+C.
Shouldn't this work, regardless of the debug session?
What is wrong?
The following blog post proposes how to fetch an artifact directly from Java using Ivy: http://developers-blog.org/blog/default/2010/11/08/Embed-Ivy-How-to-use-Ivy-with-Java
import java.io.File;

import org.apache.ivy.Ivy;
import org.apache.ivy.core.module.descriptor.DefaultDependencyDescriptor;
import org.apache.ivy.core.module.descriptor.DefaultModuleDescriptor;
import org.apache.ivy.core.module.id.ModuleRevisionId;
import org.apache.ivy.core.report.ResolveReport;
import org.apache.ivy.core.resolve.ResolveOptions;
import org.apache.ivy.core.settings.IvySettings;
import org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter;
import org.apache.ivy.plugins.resolver.URLResolver;

public class IvyArtifactResolver {

    public File resolveArtifact(String groupId, String artifactId, String version) throws Exception {
        // creates clean ivy settings
        IvySettings ivySettings = new IvySettings();
        // url resolver for configuration of maven repo
        URLResolver resolver = new URLResolver();
        resolver.setM2compatible(true);
        resolver.setName("central");
        // you can specify the url resolution pattern strategy
        resolver.addArtifactPattern(
            "http://repo1.maven.org/maven2/"
            + "[organisation]/[module]/[revision]/[artifact](-[revision]).[ext]");
        // adding maven repo resolver
        ivySettings.addResolver(resolver);
        // set to the default resolver
        ivySettings.setDefaultResolver(resolver.getName());
        // creates an Ivy instance with settings
        Ivy ivy = Ivy.newInstance(ivySettings);

        File ivyfile = File.createTempFile("ivy", ".xml");
        ivyfile.deleteOnExit();

        String[] dep = new String[]{groupId, artifactId, version};

        DefaultModuleDescriptor md =
            DefaultModuleDescriptor.newDefaultInstance(ModuleRevisionId.newInstance(dep[0],
                dep[1] + "-caller", "working"));
        DefaultDependencyDescriptor dd = new DefaultDependencyDescriptor(md,
            ModuleRevisionId.newInstance(dep[0], dep[1], dep[2]), false, false, true);
        md.addDependency(dd);

        // creates an ivy configuration file
        XmlModuleDescriptorWriter.write(md, ivyfile);

        String[] confs = new String[]{"default"};
        ResolveOptions resolveOptions = new ResolveOptions().setConfs(confs);

        // init resolve report
        ResolveReport report = ivy.resolve(ivyfile.toURI().toURL(), resolveOptions);

        // so you can get the jar library
        return report.getAllArtifactsReports()[0].getLocalFile();
    }
}
I'm wondering if sbt exposes this kind of interface, since it uses Ivy:
resolve :: ModuleId -> File
Scripts, REPL, and Dependencies
There's a document called Scripts, REPL, and Dependencies you might be interested in. The script runner, for example, lets you write something like this:
#!/usr/bin/env scalas
!#
/***
scalaVersion := "2.9.0-1"
libraryDependencies ++= Seq(
"net.databinder" %% "dispatch-twitter" % "0.8.3",
"net.databinder" %% "dispatch-http" % "0.8.3"
)
*/
import dispatch.{ json, Http, Request }
import dispatch.twitter.Search
driving sbt programmatically
You can also use any subpart of sbt as a library and drive it yourself. Because of the plugin ecosystem, it's pretty good about maintaining binary compatibility among point releases. The key task that grabs jars would be update, so def updateTask (Defaults.scala#L1113) could be a good place to start. If you are driving sbt from client code, however, wouldn't you end up re-implementing the sbt shell, or including all of sbt's dependencies? You might as well have a separate sbt shell window or an sbt script section. A rough sketch of the library route is below.
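If you do want that route with modern sbt, here's a hedged sketch (not from the original answer) assuming sbt 1.x's library-management module ("org.scala-sbt" %% "librarymanagement-ivy") is on the classpath; the module and output directory are illustrative:

import java.io.File

import sbt.librarymanagement._
import sbt.librarymanagement.syntax._
import sbt.librarymanagement.ivy._
import sbt.util.Logger

object ResolveDemo {
  def main(args: Array[String]): Unit = {
    val log = Logger.Null
    // an Ivy-backed DependencyResolution with the default (Maven Central) resolver chain
    val lm = IvyDependencyResolution(InlineIvyConfiguration().withLog(log))
    // roughly the `resolve :: ModuleId -> File` the question asks for
    val module = "commons-io" % "commons-io" % "2.5"
    val result = lm.retrieve(lm.wrapDependencyInModule(module), new File("target/deps"), log)
    result.fold(warning => sys.error(warning.toString), files => files.foreach(println))
  }
}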
Custom Resolvers
sbt ships with a variety of customizable resolvers, so the first place to check out should be the Resolvers page:
sbt provides an interface to the repository types available in Ivy: file, URL, SSH, and SFTP. A key feature of repositories in Ivy is using patterns to configure repositories.
Construct a repository definition using the factory in sbt.Resolver for the desired type. This factory creates a Repository object that can be further configured. The following table contains links to the Ivy documentation for the repository type and the API documentation for the factory and repository class. The SSH and SFTP repositories are configured identically except for the name of the factory. Use Resolver.ssh for SSH and Resolver.sftp for SFTP.
For example you can do:
resolvers += Resolver.file("my-test-repo", file("test")) transactional()
RawRepository
But if you truly want a programmable resolver, there is RawRepository:
final class RawRepository(val resolver: DependencyResolver) extends Resolver {
  def name = resolver.getName
  override def toString = "Raw(" + resolver.toString + ")"
}
This is a thin wrapper around org.apache.ivy.plugins.resolver.DependencyResolver, which you should be able to write by extending one of the resolvers they have. (I haven't tried this myself.)
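Untested (matching the caveat above), but wiring a custom resolver in would look roughly like this; myResolver here is a hypothetical subclass of one of Ivy's resolver implementations:

// hypothetical: myResolver is your own subclass of an Ivy resolver such as
// org.apache.ivy.plugins.resolver.URLResolver; RawRepository lets sbt use it
// like any other entry in resolvers
resolvers += new RawRepository(myResolver)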
I have some jar file (custom) which I need to publish to a Sonatype Nexus repository from a Groovy script.
I have the jar located in some path on the machine where the Groovy script runs (for instance: c:\temp\module.jar).
My Nexus repo URL is http://:/nexus/content/repositories/
On this repo I have a folder structure like: folder1->folder2->folder3
During publishing my jar I need to, in folder3:
create a new directory with the module's revision (my Groovy script knows this revision)
upload the jar to this directory
create pom, md5 and sha1 files for the uploaded jar
After several days of investigation I still have no idea how to create such a script, but this way looks much cleaner than direct uploading.
I found http://groovy.codehaus.org/Using+Ant+Libraries+with+AntBuilder and some other stuff (a stackoverflow non-script solution).
I understand how to create ivy.xml in my Groovy script, but I don't understand how to create build.xml and ivysettings.xml on the fly and set up the whole system to work.
Could you please help me understand the Groovy way?
UPDATE:
I found that the following command works fine for me:
curl -v -F r=thirdparty -F hasPom=false -F e=jar -F g=<my_groupId> -F a=<my_artifactId> -F v=<my_artifactVersion> -F p=jar -F file=@module.jar -u admin:admin123 http://<my_nexusServer>:8081/nexus/service/local/repositories
As I understand it, curl performs a POST request to the Nexus services. Am I correct?
And now I'm trying to build the HTTP POST request using Groovy HTTPBuilder.
How should I transform the curl command parameters into Groovy's HTTPBuilder request?
Found a way to do this with the Groovy HTTPBuilder, based on info from Sonatype and a few other sources.
This works with http-builder version 0.7.2 (not with earlier versions) and also needs an extra dependency: 'org.apache.httpcomponents:httpmime:4.2.1'.
The example also uses basic auth against Nexus.
import groovyx.net.http.HTTPBuilder
import groovyx.net.http.Method
import groovyx.net.http.ContentType
import groovyx.net.http.HttpResponseException
import org.apache.http.HttpRequest
import org.apache.http.HttpRequestInterceptor
import org.apache.http.entity.mime.MultipartEntity
import org.apache.http.entity.mime.content.FileBody
import org.apache.http.entity.mime.content.StringBody
import org.apache.http.protocol.HttpContext

class NexusUpload {
    def uploadArtifact(Map artifact, File fileToUpload, String user, String password) {
        def path = "/service/local/artifact/maven/content"
        HTTPBuilder http = new HTTPBuilder("http://my-nexus.org/")
        String basicAuthString = "Basic " + "$user:$password".bytes.encodeBase64().toString()

        // add the basic-auth header to every outgoing request
        http.client.addRequestInterceptor(new HttpRequestInterceptor() {
            void process(HttpRequest httpRequest, HttpContext httpContext) {
                httpRequest.addHeader('Authorization', basicAuthString)
            }
        })

        try {
            http.request(Method.POST, ContentType.ANY) { req ->
                uri.path = path
                // multipart form fields mirroring the curl -F parameters
                MultipartEntity entity = new MultipartEntity()
                entity.addPart("hasPom", new StringBody("false"))
                entity.addPart("file", new FileBody(fileToUpload))
                entity.addPart("a", new StringBody("my-artifact-id"))
                entity.addPart("g", new StringBody("my-group-id"))
                entity.addPart("r", new StringBody("my-repository"))
                entity.addPart("v", new StringBody("my-version"))
                req.entity = entity
                response.success = { resp, reader ->
                    if (resp.status == 201) {
                        println "success!"
                    }
                }
            }
        } catch (HttpResponseException e) {
            e.printStackTrace()
        }
    }
}
Ivy is an open source library, so one approach would be to call its classes directly. The problem with that approach is that there are few examples of how to invoke Ivy programmatically.
Since Groovy has excellent support for generating XML, I favour the slightly dumber approach of creating the files I understand as an Ivy user.
The following example is designed to publish files into Nexus, generating both the ivy and ivysettings files:
import groovy.xml.NamespaceBuilder
import groovy.xml.MarkupBuilder

// Methods
// =======

def generateIvyFile(String fileName) {
    def file = new File(fileName)
    file.withWriter { writer ->
        def xml = new MarkupBuilder(writer)
        xml."ivy-module"(version: "2.0") {
            info(organisation: "org.dummy", module: "dummy")
            publications() {
                artifact(name: "dummy", type: "pom")
                artifact(name: "dummy", type: "jar")
            }
        }
    }
    return file
}

def generateSettingsFile(String fileName) {
    def file = new File(fileName)
    file.withWriter { writer ->
        def xml = new MarkupBuilder(writer)
        xml.ivysettings() {
            settings(defaultResolver: "central")
            credentials(host: "myrepo.com", realm: "Sonatype Nexus Repository Manager", username: "deployment", passwd: "deployment123")
            resolvers() {
                ibiblio(name: "central", m2compatible: true)
                ibiblio(name: "myrepo", root: "http://myrepo.com/nexus", m2compatible: true)
            }
        }
    }
    return file
}

// Main program
// ============

def ant = new AntBuilder()
def ivy = NamespaceBuilder.newInstance(ant, 'antlib:org.apache.ivy.ant')

generateSettingsFile("ivysettings.xml").deleteOnExit()
generateIvyFile("ivy.xml").deleteOnExit()

ivy.resolve()
ivy.publish(resolver: "myrepo", pubrevision: "1.0", publishivy: false) {
    artifacts(pattern: "build/poms/[artifact].[ext]")
    artifacts(pattern: "build/jars/[artifact].[ext]")
}
Notes:
More complex? Perhaps... however, if you're not generating the ivy file (using it to manage your dependencies) you can easily call the makepom task to generate the Maven POM files prior to uploading into Nexus (see the sketch below).
The REST APIs for Nexus work fine. I find them a little cryptic, and of course a solution that uses them cannot support more than one repository manager (Nexus is not the only repository manager technology available).
The "deleteOnExit" File method call ensures the working files are cleaned up properly.