Distributing APK splits with Firebase App Distribution - firebase

"Is it possible to use Firebase App Distribution with APK splits? The plugin doesn't declare a dependency on the assemble task; are there any workarounds for this?"

The problem with the Gradle plugin is that it:
- doesn't declare a dependency on the assemble task (in general, regardless of APK splits, by Gradle convention you shouldn't just "expect" the APKs to be there)
- doesn't generate tasks per APK split, although it does generate them per flavor
Try the following workaround:
// Generate firebase app distribution task variants for all abis
applicationVariants.all { variant ->
    variant.outputs.all { output ->
        def abi = output.getFilter(com.android.build.OutputFile.ABI)
        if (abi == null) return
        def abiName = abi.replace("_", "").replace("-", "")
        task("appDistributionUpload${abiName.capitalize()}${variant.name.capitalize()}", type: com.google.firebase.appdistribution.gradle.UploadDistributionTask_Decorated) {
            appDistributionProperties = new com.google.firebase.appdistribution.gradle.AppDistributionProperties(
                    new com.google.firebase.appdistribution.gradle.AppDistributionExtension(),
                    project,
                    variant
            )
            appDistributionProperties.apkPath = output.outputFile.absolutePath
            appDistributionProperties.serviceCredentialsFile = project.file("secrets/ci-firebase-account.json")
            appDistributionProperties.releaseNotes = abi
            appDistributionProperties.groups = "ra-testers"
            // Add dependsOn respective assemble task, so it actually
            // builds apk it wants to upload, not just expect it to be there
            dependsOn "assemble${variant.name.capitalize()}"
        }
    }
}
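With this workaround, each ABI output of each variant gets its own upload task, and the dependsOn line makes it build the APK first. The exact task name depends on your ABI filters and variant names; for an arm64-v8a split of the debug variant, for example, the invocation would look roughly like this (name shown for illustration only):
./gradlew appDistributionUploadArm64v8aDebug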

Related

What version of rusqlite should I use?

I'm learning the Rust language. I'm trying to build a simple web app using sqlite3, but I get a "multiple packages link" error.
I saw some solutions for this error (e.g. this one), but they didn't work.
The cause seems to be that the version specification of rusqlite is wrong, but I don't know the correct version specification.
How should I configure the Cargo.toml?
The source code is below.
Cargo.toml
[package]
name = "todo"
version = "0.1.0"
edition = "2018"
[dependencies]
actix-web = "4.0.0-beta.3"
actix-rt = "2.2.0"
thiserror = "1.0.29"
askama = "0.10.5"
rusqlite = { version = "0.23", features = ["bundled"] }
r2d2 = "0.8.9"
r2d2-sqlite3 = "0.1.1"
main.rs
use actix_web::{get, App, HttpResponse, HttpServer, ResponseError};
use thiserror::Error;
use askama::Template;
use r2d2::Pool;
use r2d2_sqlite3::SqliteConnectionManager;
use rusqlite::params;

struct TodoEntry {
    id: u32,
    text: String,
}

#[derive(Template)]
#[template(path = "index.html")]
struct IndexTemplate {
    entries: Vec<TodoEntry>,
}

#[derive(Error, Debug)]
enum MyError {
    #[error("Failed to render HTML")]
    AskamaError(#[from] askama::Error),
}

impl ResponseError for MyError {}

#[get("/")]
async fn index() -> Result<HttpResponse, MyError> {
    let mut entries = Vec::new();
    entries.push(TodoEntry {
        id: 1,
        text: "First entry".to_string(),
    });
    entries.push(TodoEntry {
        id: 2,
        text: "Second entry".to_string(),
    });
    let html = IndexTemplate { entries };
    let response_body = html.render()?;
    Ok(HttpResponse::Ok()
        .content_type("text/html")
        .body(response_body))
}

#[actix_rt::main]
async fn main() -> Result<(), actix_web::Error> {
    let manager = SqliteConnectionManager::file("todo.db");
    let pool = Pool::new(manager).expect("Failed to initialize the connection pool.");
    let conn = pool
        .get()
        .expect("Failed to get the connection from the pool.");
    conn.execute(
        "CREATE TABLE IF NOT EXISTS todo (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            text TEXT NOT NULL
        )",
        params![],
    )?
    .expect("Failed to create a table `todo`");
    HttpServer::new(move || App::new().service(index))
        .bind("127.0.0.1:8080")?
        .run()
        .await?;
    Ok(())
}
And the error messages are here.
error: multiple packages link to native library `sqlite3`, but a native library can be linked only once

package `libsqlite3-sys v0.18.0`
    ... which is depended on by `rusqlite v0.23.1`
    ... which is depended on by `todo v0.1.0 (/Users/******/Documents/IntelliJ project/Rust-project/todo)`
links to native library `sqlite3`

package `sqlite3-src v0.2.9`
    ... which is depended on by `sqlite3-sys v0.12.0`
    ... which is depended on by `sqlite3 v0.24.0`
    ... which is depended on by `r2d2-sqlite3 v0.1.1`
    ... which is depended on by `todo v0.1.0 (/Users/*****/Documents/IntelliJ project/Rust-project/todo)`
also links to native library `sqlite3`
You're directly depending on rusqlite while also using r2d2-sqlite3, which itself depends on rusqlite.
Since rusqlite binds to a native library, as the message indicates, you can't have two versions of rusqlite linking to different versions of sqlite3(-sys), so you need to ensure you use the same version of rusqlite that r2d2-sqlite3 uses.
If you're not going to publish the crate, the easiest option by far is to leave rusqlite's version as a wildcard ("*"); that way the dependency resolver will give you whatever works for r2d2-sqlite3. Otherwise you need to check the version of r2d2-sqlite3 you're using and match it.
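For illustration, that wildcard specification would look like this in Cargo.toml (keeping the bundled feature from the original file is just an assumption on my part; drop it if you don't need it):
rusqlite = { version = "*", features = ["bundled"] }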
Incidentally... r2d2-sqlite3 0.1.1? That seems to be over four years old; the current version seems to be 0.18. I'm slightly surprised r2d2 works, though I guess it changes relatively little (0.8.0 was four years ago, current is 0.8.9). Then again, I'm not sure what the utility of r2d2 is for sqlite3, especially for "a simple web app".

How do you restore private NuGet packages from private VSTS feeds with Cake

I have a task which restores our NuGet package for our dotnet core application:
Task("Restore-Packages")
.Does(() =>
{
DotNetCoreRestore(sln, new DotNetCoreRestoreSettings {
Sources = new[] {"https://my-team.pkgs.visualstudio.com/_packaging/my-feed/nuget/v3/index.json"},
Verbosity = DotNetCoreVerbosity.Detailed
});
});
However when run on VSTS it errors with the following:
2018-06-14T15:10:53.3857512Z C:\Program Files\dotnet\sdk\2.1.300\NuGet.targets(114,5): error : Unable to load the service index for source https://my-team.pkgs.visualstudio.com/_packaging/my-feed/nuget/v3/index.json. [D:\a\1\s\BitCoinMiner.sln]
2018-06-14T15:10:53.3857956Z C:\Program Files\dotnet\sdk\2.1.300\NuGet.targets(114,5): error : Response status code does not indicate success: 401 (Unauthorized). [D:\a\1\s\BitCoinMiner.sln]
How do I authorize the build agent to access our private VSTS feed?
I literally just had this same problem. Apparently the build agents in VSTS can't get to your private VSTS feed without an access token, so you are going to have to create a Personal Access Token (PAT) in VSTS and provide it to the built-in Cake method that adds an authenticated VSTS NuGet feed as one of the sources. Here I have wrapped it in my own convenience Cake method, which checks whether the package feed is already present and, if not, adds it:
void SetUpNuget()
{
    var feed = new
    {
        Name = "<feedname>",
        Source = "https://<your-vsts-account>.pkgs.visualstudio.com/_packaging/<yournugetfeed>/nuget/v3/index.json"
    };

    if (!NuGetHasSource(source:feed.Source))
    {
        var nugetSourceSettings = new NuGetSourcesSettings
        {
            UserName = "<any-odd-string>",
            Password = EnvironmentVariable("NUGET_PAT"),
            Verbosity = NuGetVerbosity.Detailed
        };

        NuGetAddSource(
            name:feed.Name,
            source:feed.Source,
            settings:nugetSourceSettings);
    }
}
and then I call it from the "Restore" task:
Task("Restore")
.Does(() => {
SetUpNuget();
DotNetCoreRestore("./<solution-name>.sln");
});
Personally, I prefer to keep PATs out of source control, so here I am reading the token from an environment variable. In VSTS you can create an environment variable under the Variables tab of your CI build configuration.
Hope this helps! Here is a link to Cake's documentation.
As pointed out by both @KevinSmith and @NickTurner, a better approach to accessing the VSTS feed is to use the predefined system variable System.AccessToken instead of manually created PATs, which have limited validity and are cumbersome to manage. This variable is available on the build agent for the current build to use. More info here.
One way of using this token in the Cake script is as follows:
First, expose the system variable as an environment variable for the Cake task in azure-pipelines.yml
steps:
- task: cake-build.cake.cake-build-task.Cake@0
  displayName: 'Cake '
  inputs:
    target: Pack
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
Then in Cake you can access it like you would any environment variable, so in my case:
if (!NuGetHasSource(source:feed.Source))
{
    Information($"Nuget feed {feed.Source} not found, adding...");
    var nugetSourceSettings = new NuGetSourcesSettings
    {
        UserName = "whoosywhatsit",
        Password = EnvironmentVariable("SYSTEM_ACCESSTOKEN"),
        Verbosity = NuGetVerbosity.Detailed
    };

    NuGetAddSource(
        name:feed.Name,
        source:feed.Source,
        settings:nugetSourceSettings);
}
This seems to work! If there are better approaches to accessing this variable in Cake, please let me know. Please also note that in my case I am only using this to restore packages from my VSTS feed, not for pushing to it. That I do via a DotNetCoreCLI@2 task in the YML, like so:
- task: DotNetCoreCLI@2
  displayName: 'dotnet nuget push'
  inputs:
    command: push
    packagesToPush: 'artifacts/package.nupkg'
    publishVstsFeed: '<id of my VSTS feed>'
And Azure Pipelines handles the rest.

How can I make a task depend on another task?

I'm new to sbt and I'm trying to create a script that either deploys my application, or deploys and runs it.
What already works for me is
sbt deploy
which will successfully deploy the final .jar file to the remote location.
However, I don't know how to make deployAndRunTask dependent on deployTask. I've tried several things but none of them worked so far.
My last hope was
deployAndRunTask := {
  val d = deployTask.value
}
However, this does not seem to work.
This is the script that I currently have, but sbt deploy-run will only execute the deployAndRunTask task and not the deployTask.
// DEPLOYMENT
val deployTask = TaskKey[Unit]("deploy", "Copies assembly jar to remote location")

deployTask <<= assembly map { (asm) =>
  val account = "user@example.com"
  val local = asm.getPath
  val remote = account + ":" + "/home/user/" + asm.getName
  println(s"Copying: $local -> $account:$remote")
  Seq("scp", local, remote) !!
}

val deployAndRunTask = TaskKey[Unit]("deploy-run", "Deploy and run application.")

deployAndRunTask := {
  val d = deployTask.value
}

deployAndRunTask <<= assembly map { (asm) =>
  println(s"Running the script ..")
}
What is the problem here?
The problem is that you define your task and then redefine it, so only the latter definition is taken into account. You cannot separate a task's definition from its dependency on another task. You're also using a couple of outdated sbt constructs:
Use the taskKey macro, and you don't need to think about the task name, because it's the same as the key name:
val deploy = taskKey[Unit]("Copies assembly jar to remote location")
val deployAndRun = taskKey[Unit]("Deploy and run application.")
Then you can refer to them as deploy and deployAndRun both in build.sbt and in the sbt shell.
Replace <<= with := and keyname map { (keyvalue) => ... } with just keyname.value. Things become more concise and easier to write.
You can read more about Migrating from sbt 0.13.x.
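Applied to the deploy task from the question, the migrated definition would look roughly like this (a sketch; the scp invocation is taken unchanged from the question, and under sbt 1.x the !! operator needs the scala.sys.process import):

import scala.sys.process._

deploy := {
  // assembly.value both declares the dependency and gives us the built jar
  val asm = assembly.value
  val account = "user@example.com"
  val local = asm.getPath
  val remote = account + ":" + "/home/user/" + asm.getName
  println(s"Copying: $local -> $account:$remote")
  Seq("scp", local, remote).!!
}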
So here's your deployAndRun task definition with these changes:
deployAndRun := {
  val d = deploy.value
  val asm = assembly.value
  println(s"Running the script ..")
}
It depends on both the deploy and assembly tasks and will run them before doing anything else. You can also use dependsOn, as in the sketch below, but I think it's unnecessary here.
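For completeness, here is a minimal sketch of the dependsOn variant (assuming sbt 1.x and the deploy key defined as above); it re-wires the task so that deploy runs before the body:

deployAndRun := {
  println(s"Running the script ..")
}
// Redefine the task so it also depends on deploy (which itself depends on assembly)
deployAndRun := deployAndRun.dependsOn(deploy).value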
You may also be interested in looking into Defining a sequential task with Def.sequential and Defining a dynamic task with Def.taskDyn.

Kotlin Realm demo cannot be run

Kotlin version 1.0.0, Realm version 0.88.0-SNAPSHOT.
I downloaded the Realm Kotlin demo and ran it.
If the code looks like this:
var person = Person()
person.id = 1
person.name = "Young Person"
person.age = 14
realm.beginTransaction()
realm.copyToRealm(person)
realm.commitTransaction()
it throws an exception: Caused by: java.lang.ClassCastException: io.realm.examples.kotlin.model.Person cannot be cast to io.realm.PersonRealmProxyInterface
But if I change the code to this:
realm.beginTransaction()
// Add a person
var person = realm.createObject(Person::class.java)
person.id = 1
person.name = "Young Person"
person.age = 14
// When the transaction is committed, all changes a synced to disk.
realm.commitTransaction()
then Realm inserts a row, but person.name, id, and age are empty or 0.
How do I solve this?
With Realm 0.88.0-SNAPSHOT you have to use their Gradle plugin as well. If you Google this exception (realm java.lang.ClassCastException ProxyInterface) you will find GitHub issue 2353, which says:
We just merged our byte code weaver into master, and it sounds like it isn't being triggered in your case. Note that from 0.88.0-SNAPSHOT you have to use our Gradle plugin: https://realm.io/news/android-installation-change/
Previously you would install Realm like:
repositories {
    jcenter()
}

dependencies {
    compile 'io.realm:realm-android:<version>'
}
Now you must install it to also include the Gradle plugin:
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath "io.realm:realm-gradle-plugin:<version>"
    }
}

apply plugin: 'realm-android'
This turns on bytecode weaving. Without it, you receive an error exactly like the one in your first snippet (that code was correct, but for the snapshot release you chose to use, you also need the extra step of applying the Gradle plugin). There are other important notes in the link above, along with the release notes covering changes in recent versions of Realm.

Cleanest way in Gradle to get the path to a jar file in the gradle dependency cache

I'm using Gradle to help automate Hadoop tasks. When calling Hadoop, I need to be able to pass it the path to some jars that my code depends on so that Hadoop can send that dependency on during the map/reduce phase.
I've figured out something that works, but it feels messy and I'm wondering if there's a feature I'm missing somewhere.
This is a simplified version of my gradle script that has a dependency on the solr 3.5.0 jar, and a findSolrJar task that iterates through all of the jar files in the configuration to find the right one:
apply plugin: 'groovy'

repositories {
    mavenCentral()
}

dependencies {
    compile 'org.apache.solr:solr-solrj:3.5.0'
}

task findSolrJar() {
    println project.configurations.compile*.toURI().find { URI uri -> new File(uri).name == 'solr-solrj-3.5.0.jar' }
}
Running this gives me output like this:
gradle findSolrJar
file:/Users/tnaleid/.gradle/caches/artifacts-8/filestore/org.apache.solr/solr-solrj/3.5.0/jar/74cd28347239b64fcfc8c67c540d7a7179c926de/solr-solrj-3.5.0.jar
:findSolrJar UP-TO-DATE
BUILD SUCCESSFUL
Total time: 2.248 secs
Is there a better way to do this?
Your code can be simplified a bit, for example project.configurations.compile.find { it.name.startsWith("solr-solrj-") }, as in the sketch below.
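For instance, a minimal sketch of that simplification inside a task (sticking with the question's compile configuration and jar name, and resolving lazily in doLast rather than at configuration time):

task findSolrJar {
    doLast {
        // Resolves the compile configuration and picks the solr-solrj jar by file name
        println configurations.compile.find { it.name.startsWith('solr-solrj-') }
    }
}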
You can also create a dedicated configuration for the artifact, to keep things clean, and use asPath if the fact that it can potentially return several locations works for your use case (this happens if the same jar is resolved in several locations):
configurations {
    solr
}

dependencies {
    solr 'org.apache.solr:solr-solrj:3.5.0'
}

task findSolrJars() {
    println configurations.solr.asPath
}
To avoid copy-pasting, in case you also need that jar in the compile configuration, you can add the dedicated configuration's dependencies to the compile one, like:
dependencies {
    solr 'org.apache.solr:solr-solrj:3.5.0'
    compile configurations.solr.dependencies
}
I needed lombok.jar as a Java build flag for GWT builds, and this worked great!
configurations {
    lombok
}

dependencies {
    lombok 'org.projectlombok:lombok:+'
}

ext {
    lombok = configurations.lombok.asPath
}

compileGwt {
    jvmArgs "-javaagent:${lombok}=ECJ"
}
I was surprised that the resolution worked early enough, in the configuration phase, but it does.
Here is how I did it:
// Iterates the jars on the buildscript classpath (i.e. the Gradle plugins), printing each file name
project.buildscript.configurations.classpath.each {
    String jarName = it.getName();
    print jarName + ":"
}
I recently had this problem as well. If you are building a Java app, what you normally want is a mapping from group:module (groupId:artifactId) to the path of the jar (i.e. the version is not a search criterion, since in one app there is normally only one version of each specific jar).
In my Gradle 5.1.1 build (Kotlin DSL) I solved this problem with:
var spec2File: Map<String, File> = emptyMap()

configurations.compileClasspath {
    val s2f: MutableMap<ResolvedModuleVersion, File> = mutableMapOf()
    // https://discuss.gradle.org/t/map-dependency-instances-to-file-s-when-iterating-through-a-configuration/7158
    resolvedConfiguration.resolvedArtifacts.forEach({ ra: ResolvedArtifact ->
        s2f.put(ra.moduleVersion, ra.file)
    })
    spec2File = s2f.mapKeys({ "${it.key.id.group}:${it.key.id.name}" })
    spec2File.keys.sorted().forEach({ it -> println(it.toString() + " -> " + spec2File.get(it)) })
}
The output would be something like:
:jing -> /home/tpasch/scm/db-toolchain/submodules/jing-trang/build/jing.jar
:prince -> /home/tpasch/scm/db-toolchain/lib/prince-java/lib/prince.jar
com.github.jnr:jffi -> /home/tpasch/.gradle/caches/modules-2/files-2.1/com.github.jnr/jffi/1.2.18/fb54851e631ff91651762587bc3c61a407d328df/jffi-1.2.18-native.jar
com.github.jnr:jnr-constants -> /home/tpasch/.gradle/caches/modules-2/files-2.1/com.github.jnr/jnr-constants/0.9.12/cb3bcb39040951bc78a540a019573eaedfc8fb81/jnr-constants-0.9.12.jar
com.github.jnr:jnr-enxio -> /home/tpasch/.gradle/caches/modules-2/files-2.1/com.github.jnr/jnr-enxio/0.19/c7664aa74f424748b513619d71141a249fb74e3e/jnr-enxio-0.19.jar
After that, it is up to you to do something useful with this map. In my case I add some --patch-module options to my Java 11 build, like this:
val patchModule = listOf(
    "--patch-module", "commons.logging=" +
        spec2File["org.slf4j:jcl-over-slf4j"].toString(),

    "--patch-module", "org.apache.commons.logging=" +
        spec2File["org.slf4j:jcl-over-slf4j"].toString()
)

patchModule.forEach({ it -> println(it) })

tasks {
    withType<JavaCompile> {
        doFirst {
            options.compilerArgs.addAll(listOf(
                "--release", "11",
                "--module-path", classpath.asPath
            ) + patchModule)
            // println("Args for for ${name} are ${options.allCompilerArgs}")
        }
    }
}
