I have a very simple project and I am trying to publish the first version to Hex. However, I cannot run the hex.publish task.
I get the error: ** (Mix) The task "hex.publish" could not be found.
I am following these hex instructions.
My mix.exs file looks like the following.
defmodule Ace.Mixfile do
  use Mix.Project

  def project do
    [app: :ace,
     version: "0.2.0",
     elixir: "~> 1.0",
     build_embedded: Mix.env == :prod,
     start_permanent: Mix.env == :prod,
     deps: deps]
  end

  def application do
    [
      applications: [:logger],
      mod: {Ace, []}
    ]
  end

  defp deps do
    []
  end
end
You might not have Hex installed. According to the Hex usage docs, run
mix local.hex
in your terminal or CMD console.
Then mix hex.publish should work.
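For reference, the full sequence from the project root is just the following (a minimal sketch; the interactive publish prompts are omitted):

mix local.hex      # installs the Hex archive so the hex.* Mix tasks are available
mix hex.publish    # now the task is found, and the package is built and pushed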
I have the following code:
defmodule MyApp.Http do
  use Application
  require Logger

  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    MyApp.PlugPipelineInstrumenter.setup()
    MyApp.MetricsExporter.setup()

    opts = [strategy: :one_for_one, name: MyApp.Http.Supervisor]
    Supervisor.start_link([], opts)
  end
end

defmodule MyApp.MetricsExporter do
  use Prometheus.PlugExporter
end

defmodule MyApp.PlugPipelineInstrumenter do
  use Prometheus.PlugPipelineInstrumenter
end
But it does nothing. When I add:
plug MyApp.MetricsExporter
plug MyApp.PlugPipelineInstrumenter
I get a compilation error: undefined function plug/1. I am using Elixir 1.10.3. What am I doing wrong here?
I made a hex package called prometheus_sidecar that creates a Ranch server dedicated to Prometheus (so you can run it in any Elixir application) and wires up prometheus_plugs for you.
It leverages Erlang's application start lifecycle, so there is no required configuration beyond adding it as a library to your application.
Let me know if you find it useful.
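If you go that route, the only setup is a dependency entry in mix.exs. A minimal sketch (the version requirement is an example; check the package's page on hex.pm for the current release):

defp deps do
  [
    # example constraint only - see prometheus_sidecar on hex.pm for the latest version
    {:prometheus_sidecar, "~> 0.2"}
  ]
end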
Running the .NET Core Pack task, how do I get the outputted NuGet package version to auto-increment itself?
So, for example, if my current version is 1.0.0, then the next time I call the Pack task, I would like to see 1.0.1.
I'm using environment build variables with Build.BuildNumber and at the moment getting outputs of e.g. 20180913-.2.0, etc. I would like to establish a more traditional versioning system.
From the docs, the variable $(Rev:.r) is the daily build revision count. The accepted "solution" would mean finishing one day with a version of 1.0.12, and then the next day it would be back to 1.0.1.
If you want a simple incremental and unique semver, use 1.0.$(BuildID).
$(BuildID) is an internal immutable counter for your builds, and thus far cleaner than $(BuildNumber).
BuildID will always be incrementing - no reset.
Thus after a minor bump, you'd end up having say 1.2.123 becoming 1.3.124.
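As a rough sketch of wiring that into a YAML pack step (this assumes the standard DotNetCoreCLI@2 task and its byEnvVar versioning scheme; double-check the input names against the task docs):

- task: DotNetCoreCLI@2
  displayName: 'Pack with a BuildId-based version'
  inputs:
    command: pack
    packagesToPack: '**/*.csproj'
    versioningScheme: byEnvVar
    versionEnvVar: PackageVersion
  env:
    PackageVersion: 1.0.$(Build.BuildId)  # Build.BuildId never resets, so the last number only goes up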
If you want to perform this task well, this can be done using npm version or similar, such as pubspec_version for Dart or Flutter builds.
- script: npm version $RELEASE_TYPE
where $RELEASE_TYPE is a variable you can set based on the build (i.e. CI, PR, etc.), with a value of major, minor, patch, prerelease, etc. For example:
- script: npm version $RELEASE_TYPE
  condition: startsWith(variables['build.sourceBranch'], 'refs/heads/release/')
  env:
    RELEASE_TYPE: minor
Update: Bump Repo Version and Use In Build (using npm)
To have the repo version update, I ended up including npm version as a DevDependency, with its precommit hook to bump the project version on any commit.
This technique can be applied to other project types by placing them in a subfolder, although it can lead to complications with server OS requirements.
To use this version in your build, add this bash script task, which gets and exports the version as a task variable:
v=`node -p "const p = require('./package.json'); p.version;"`
echo "##vso[task.setvariable variable=packageVersion]$v"
.NET Core task-only version:
Unfortunately, no repo bump.
Workaround 1:
jobs:
- job: versionJob # reads the version number from the source file
  steps:
  - powershell: |
      $fv = Get-Content versionFile
      Write-Host ("##vso[task.setvariable variable=versionFromFile;isOutput=true]$fv")
    displayName: 'version from file'
    name: setVersionStep

- job: buildJob # consumes the version number, calculates the incremental number and sets the version using assemblyinfo.cs
  dependsOn: versionJob
  variables:
    versionFromFile: $[ dependencies.versionJob.outputs['setVersionStep.versionFromFile'] ] # please note that the spaces between $[ and dependencies are required
    buildIncrementalNumber: $[ counter(dependencies.versionJob.outputs['setVersionStep.versionFromFile'], 1) ] # can't use $versionFromFile here
  steps:
  - powershell: |
      Write-Host ($env:versionFromFile)
      Write-Host ($env:versionFromFile + '.' + $env:buildIncrementalNumber)
    displayName: 'version from file output'
Workaround 2:
This post describes a couple of others, using version-prefix and automatically applying the BuildNumber as a version-suffix.
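A minimal sketch of that prefix/suffix idea (VersionPrefix and VersionSuffix are standard .NET SDK MSBuild properties; the prefix value and resulting version shown are examples):

- script: dotnet pack --configuration Release /p:VersionPrefix=1.0.0 /p:VersionSuffix=$(Build.BuildNumber)
  displayName: 'Pack with version prefix + build-number suffix'  # produces e.g. 1.0.0-20180913.2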
I may have figured it out. For anyone tearing their hair out, try this:
Pack Task:
Automatic Package Versioning: Use an environment variable
Environment variable: Build.BuildNumber
Then, up in the top menu where you have Tasks, Variables, Triggers, Options, click Options and set:
Build number format: 1.0$(Rev:.r)
Save and queue. This will produce e.g. 1.0.1.
(Please Correct me if I am wrong, or if this does not work long-term.)
If you're just looking to bump the major, minor or revision version number, using the counter function in a variable is a simple and elegant approach. It will automatically add one to the current value.
Here's what I use:
variables:
  major: '1'
  minor: '0'
  revision: $[counter(variables['minor'], 1)] # this will get reset when minor gets bumped; the second argument to counter is the seed (in my case, I started at 1)
  app_version: '$(major).$(minor).$(revision)'
If you would like to see a real-world 4-job pipeline that uses this, I have one here https://github.com/LanceMcCarthy/DevReachCompanion/blob/master/azure-pipelines.yml
For me it's enough to set the Build number format on the Options tab to
$(date:yyyy).$(date:MMdd)$(rev:.r)
and add the following build arguments:
/p:Version=1.$(Build.BuildNumber) /p:AssemblyVersion=1.$(Build.BuildNumber)
In this case we manage the major version manually, but the minor version and build number are set automatically. It's easy to tell which version you have deployed.
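In a YAML pipeline the same idea would look roughly like this (a sketch assuming the DotNetCoreCLI@2 build task; the build number format and arguments are the ones from this answer):

name: $(date:yyyy).$(date:MMdd)$(rev:.r)  # sets Build.BuildNumber

steps:
- task: DotNetCoreCLI@2
  displayName: 'Build with version derived from the build number'
  inputs:
    command: build
    arguments: '--configuration Release /p:Version=1.$(Build.BuildNumber) /p:AssemblyVersion=1.$(Build.BuildNumber)'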
I am using the Azure DevOps pipeline with a YAML build. What I've done is use pipeline variables, a counter function, and an inline PowerShell script to create the version number. It auto-increments and has made the entire build process nice.
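Roughly, that combination looks like this (a sketch only; the variable names and the counter key are illustrative, not the exact pipeline):

variables:
  major: 1
  minor: 0
  patch: $[counter(format('{0}.{1}', variables['major'], variables['minor']), 0)]  # resets whenever major or minor changes

steps:
- powershell: |
    $version = "$(major).$(minor).$(patch)"
    Write-Host "##vso[build.updatebuildnumber]$version"  # use the computed value as the build number
  displayName: 'Compute version number'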
Another SO Post about something similar
There are multiple ways to install Elixir dependencies. I wonder what happens in the following cases:
1.
mix deps.get --only prod
Which dependencies exactly are fetched then?
2.
defp deps do
  [
    {:credo, "~> 0.8", only: ~w(dev)a, runtime: false},
  ]
end
How does the only option affect a particular dependency?
3.
def project do
  [
    # ...
    deps: deps(Mix.env()),
  ]
end
What is the difference if we specify dependencies like that?
I'm a little confused about when to use which approach for defining dependencies.
When you write this:
mix deps.get --only prod
it will fetch all dependencies for the prod environment, namely the dependencies that have no only option, and the dependencies whose only option includes :prod (e.g. {:some_dep, "~> 0.8", only: [:prod]}).
When you write this:
defp deps do
  [
    {:some_dep, "~> 0.8"}
  ]
end
that tells mix to install some_dep in whatever environment it is run in.
When you write this:
defp deps do
  [
    {:another_dep, "~> 0.8", only: [:dev]}
  ]
end
it tells mix to install another_dep only when your environment is dev (MIX_ENV=dev).
If you are in any other environment (e.g. prod), mix deps.get will simply ignore another_dep and won't install it.
Writing this:
def project do
  [
    # ...
    deps: deps(Mix.env()),
  ]
end
will result in ** (CompileError) mix.exs:13: undefined function deps/1, because in your mix.exs only deps/0 is defined. Now you might ask: why not implement deps(:dev), deps(:prod) and so on? Well, if you read what I explained before, you'll see that it is pointless, since the separation of deps per environment is already taken care of :)
I'm going to address these in reverse order.
Using deps(Mix.env) would force you to specify each of your dependencies multiple times if they are used across multiple environments. Something along the lines of:
def deps(:dev) do
  [
    {:ecto, "~> 2.1"},
    {:credo, "~> 0.8", runtime: false}
  ]
end

def deps(:test) do
  [
    {:ecto, "~> 2.1"}
  ]
end
I will admit that I do not even know if this would work, but this is adding too much code for something that is already handled for you if you just specify the :only option.
Using :only allows you to specify which environments a dependency should be used in. In your example, {:credo, "~> 0.8", only: [:dev], runtime: false} you are telling mix that the credo package should only be used in the dev environment. If you do not include the :only option, the package will be used in all environments.
$ mix deps.get --only prod will only retrieve the packages relevant to the production environment. From the previous example, the credo package will not be retrieved because you told mix that credo should only be used in the dev environment.
When I try to publish a new version of my package on hex, it prints the following warning:
WARNING! Excluded dependencies (not part of the Hex package):
ex_doc
Full text of me running the command:
$ mix hex.publish
Publishing usefulness 0.0.5
Dependencies:
earmark >= 0.0.0
Files:
lib/usefulness.ex
lib/usefulness/stream.ex
lib/usefulness/string.ex
config/config.exs
test/test_helper.exs
test/usefulness_test.exs
mix.exs
README.md
LICENSE
App: usefulness
Name: usefulness
Description: Useful things
Version: 0.0.5
Build tools: mix
Licenses: Apache 2.0
Maintainers: afasdasd
Links:
Github: https://github.com/b-filip/usefulness
Elixir: ~> 1.2
WARNING! Excluded dependencies (not part of the Hex package):
ex_doc
Before publishing, please read Hex Code of Conduct: https://hex.pm/docs/codeofconduct
Proceed? [Yn]
I have no idea what this warning means.
Here is what the deps in my mix.exs consist of:
defp deps do
  [
    {:ex_doc, "~> 0.11", only: :dev},
    {:earmark, ">= 0.0.0"}
  ]
end
It means you have a dependency in your project that will not be a dependency of the package you publish to Hex. This is normal; projects often have development dependencies for testing, static analysis, generating documentation, etc.
Hex lists them so you can have a quick look and make sure you didn't leave out an actual dependency of your code, which would result in a broken package.
ExDoc should most likely not be a dependency of your package. You're good to go. Good work creating your hex package!
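Concretely, with the deps from the question, the split is as follows (comments added for illustration):

defp deps do
  [
    {:ex_doc, "~> 0.11", only: :dev},  # dev-only tool, excluded from the published Hex package
    {:earmark, ">= 0.0.0"}             # regular dependency, listed in the publish output above
  ]
end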
I am trying to use phantomjs as installed via npm to perform my unit tests for ScalaJS.
When I run the tests I am getting the following error:
/usr/bin/env: node: No such file or directory
I believe that is because of how phantomjs, when installed with npm, loads node.
Here is the first line from phantomjs:
#!/usr/bin/env node
If I change that first line to hardcode the path to the node executable (this involves modifying a file installed by npm, so it's only a temporary solution at best):
#!/home/bjackman/cgta/opt/node/default/bin/node
Everything works.
I am using phantom.js btw because moment.js doesn't work in the NodeJSEnv.
Work Around:
After looking through the plugin source, here is the workaround:
I am forwarding the environment from sbt to the PhantomJSEnv:
import scala.scalajs.sbtplugin.ScalaJSPlugin._
import scala.scalajs.sbtplugin.env.nodejs.NodeJSEnv
import scala.scalajs.sbtplugin.env.phantomjs.PhantomJSEnv
import scala.collection.JavaConverters._

val env = System.getenv().asScala.toList.map { case (k, v) => s"$k=$v" }

olibCross.sjs.settings(
  ScalaJSKeys.requiresDOM := true,
  libraryDependencies += "org.webjars" % "momentjs" % "2.7.0",
  ScalaJSKeys.jsDependencies += "org.webjars" % "momentjs" % "2.7.0" / "moment.js",
  ScalaJSKeys.postLinkJSEnv := {
    if (ScalaJSKeys.requiresDOM.value) new PhantomJSEnv(None, env)
    else new NodeJSEnv
  }
)
With this I am able to use moment.js in my unit tests.
UPDATE: The relevant bug in Scala.js (#865) has been fixed. This should work without a workaround.
This is indeed a bug in Scala.js (issue #865). For future reference: if you would like to modify the environment of a jsEnv, you have two options (this applies to Node.js and PhantomJS equally):
Pass in additional environment variables as an argument (just like in #KingCub's example):
new PhantomJSEnv(None, env)
// env: Map[String, String]
Passed-in values will take precedence over default values.
Override getVMEnv:
protected def getVMEnv(args: RunJSArgs): Map[String, String] =
  sys.env ++ additionalEnv // this is the default
This will allow you to:
Read/Modify the environment provided by super.getVMEnv
Make your environment depend on the arguments to the runJS method.
The same applies for arguments that are passed to the executable.