I'm pulling my hair out. I've got an on-premises installation of Azure DevOps and a pair of build agents. We're trying to move to .NET Core, but we have never been able to get pushing NuGet packages into a DevOps feed to work. This should be straightforward.
The whole environment is hidden behind a corporate firewall and proxy, and while the proxy config works for nuget pull and any other activity you care to name, we cannot get nuget push (or dotnet push) to our internal package repository to work. The only error I get is a 502 (Bad Gateway) from tunnel.js, even though I've explicitly set the address of the DevOps server in NO_PROXY (environment variables, the agent's .proxy and .proxybypass files, netsh winhttp proxy, the build agent user's internet connection settings, and the %AppData%\Nuget\Nuget.Config file). Git works, nuget restore works, build works, packaging works, but dotnet push (or nuget push) fails with this error.
Can anyone suggest any other places I might need to set a proxy bypass or no_proxy setting?
There can be many reasons for this problem; it may be related to your organization's network, to user roles and permissions, or the task may even be restricted by a policy.
But if none of the above applies in your case, then you should configure your NuGet tools to authenticate with Azure Artifacts and other NuGet repositories. If all of the Azure Artifacts feeds you use are in the same organization as your pipeline, you can use the NuGetAuthenticate task without specifying any inputs. Check the Restore and push NuGet packages within your organization document for more information.
This task must run before you use a NuGet tool to restore or push packages to an authenticated package source such as Azure Artifacts. This task installs the Azure Artifacts Credential Provider into the NuGet plugins directory if it is not already installed.
If your agent is behind a web proxy, the NuGetAuthenticate task will not set up nuget.exe, dotnet, or MSBuild to use the proxy. In that case, set the http_proxy (and optionally no_proxy) environment variables to your proxy settings, or configure the proxy in NuGet itself as shown below.
nuget.exe config -set http_proxy=http://my.proxy.address:port
nuget.exe config -set http_proxy.user=mydomain\myUserName
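For the original question's scenario (bypassing the proxy for an on-premises DevOps server), here is a hedged sketch of what this can look like on a Windows build agent. The host names below are placeholders, nuget.exe stores the proxy password in the user-level NuGet.Config, and a self-hosted agent only picks up new user/machine environment variables after the agent service is restarted:
rem proxy password for nuget.exe, stored in the user-level NuGet.Config
nuget.exe config -set http_proxy.password=myProxyPassword
rem environment variables honored by nuget.exe and by recent dotnet CLI versions
setx HTTP_PROXY http://my.proxy.address:port
setx NO_PROXY devops.mycompany.local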
Check the NuGet CLI environment variables documentation for more information.
Visual Studio now generates a Dockerfile for dotnet projects, and we are using it (with slight tweaks) for our continuous integration.
However, that Dockerfile does not have any provision for configuring NuGet. It even copies only the .csproj file from the build context before running dotnet restore, to avoid re-running that step during development.
But our project requires some packages from an internal, password-protected repository, so I need to provide package sources and credentials to the dotnet restore command inside the Dockerfile.
What is the current best practice for injecting an (environment-specific) NuGet configuration?
This is documented here: https://github.com/dotnet/dotnet-docker/blob/main/documentation/scenarios/nuget-credentials.md.
To summarize, there are a variety of ways in which this can be done:
Use a multi-stage build to protect nuget.config that contains hard-coded credentials. Only recommended if you ensure that credentials are kept out of source code control and the nuget.config file is ephemeral.
Pass secrets by file with BuildKit. This is similar to the previous option but uses Dockerfile secrets to provide access to the nuget.config file.
Use environment variables in nuget.config. In this scenario, the nuget.config file references environment variables for its credential values, and those environment variables are set by the build machine when executing docker build (a sketch of this option follows below).
Use the Azure Artifact Credential Provider. This is only possible if you make use of Azure Artifacts for your package feed.
No matter which option you choose, be sure that credentials are never stored within an image layer that is published.
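To make the environment-variable option above concrete, here is a minimal sketch. The feed name, file names, and variable names are placeholders; NuGet expands %VAR% references in nuget.config values, and the credentials are passed in as build arguments (plain build args remain visible in the history of the stage that uses them, so combine this with a multi-stage build or BuildKit secrets if that is a concern):
<!-- nuget.config: credentials for a package source named InternalFeed, taken from environment variables -->
<packageSourceCredentials>
  <InternalFeed>
    <add key="Username" value="%FEED_USER%" />
    <add key="ClearTextPassword" value="%FEED_PASSWORD%" />
  </InternalFeed>
</packageSourceCredentials>

# Dockerfile: accept the credentials as build arguments and expose them to dotnet restore
ARG FEED_USER
ARG FEED_PASSWORD
ENV FEED_USER=$FEED_USER FEED_PASSWORD=$FEED_PASSWORD
COPY ["MyApp.csproj", "nuget.config", "./"]
RUN dotnet restore

# Build command run by the CI machine, injecting the real values
docker build --build-arg FEED_USER=ci-user --build-arg FEED_PASSWORD=$FEED_PASSWORD .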
Bamboo supports shared credentials.
I would like to use these shared credentials to commit tags to Git using the Artifactory Maven add-on.
According to this ticket, it should work: https://www.jfrog.com/jira/browse/BAP-189
However, I do not know where to configure it.
We are using Bamboo 5.14.3.1 with Artifactory plugin 2.1.0.
Was this feature removed?
What are the options for deploying a web-application that is built daily using Visual Studio Online (and hosted controller) and its new build definitions to an on-premises IIS behind a firewall?
If we opened up the firewall, would it be possible to add some kind of WebDeploy build step to the Visual Studio Online build? I haven't seen any WebDeploy build steps so far, though...
...or could we write a PowerShell script running daily on the IIS server that fetches the output of the daily build from Visual Studio Online? If that's possible, how can those files be accessed?
...or could something like OctopusDeploy help out here?
We would like to refrain from having to set up an on-premises build controller.
VSO agents are lightweight and easy to set up if you have a server available, and Octopus Deploy integrates nicely with most on-premises scenarios.
That said, if you still want to keep hosted builds, Octopus would still work.
Create a VSO build definition and include OctoPack.
Pick a hosted NuGet server, presumably with a private repo subscription. A couple of options are MyGet and Artifactory.
In "MSBuild Arguements" include the parameters.
/p:RunOctoPack=true
/p:OctoPackPublishPackageToHttp=http://nugetrepoofyourchoice.com/nuget/packages
/p:OctoPackPublishApiKey=$(NugetAPI)
/p:OctoPackPackageVersion=$(Build.BuildNumber)
"NugetAPI" is actually a user defined variable (name of your choice) that references a Secret build variable. You will get this API Key from your Nuget Repo vendor.
On premises, in your Octopus installation, you would define your hosted NuGet feed as an external feed.
Your deployment project would then pull the NuGet package from that hosted repo.
VSO pushes to the hosted NuGet feed, and Octopus pulls from it.
Octopus deploy would work for this in a couple of ways.
1) Install a Tentacle on the IIS server (either polling or listening will work, depending on where you install the Octopus Manager)
or
2) Install the main Octopus Deploy server on a machine that has WebDeploy access to your IIS server, and use the MS Deploy script from the custom step library
Getting the package to the main Octopus Deploy machine could be a bit tricky. The easiest way would be to set up a MyGet server and have the Octopus server check it periodically; that way you don't need to open up firewalls to the public.
These features are coming to VSO and were announced at Build. You will be able to deploy on-premises using agents, without needing to open a firewall port.
While this is still in the works, you can use Release Management for Visual Studio 2015 to pull the bits from VSO and deploy locally.
I recently created a droplet on Digital Ocean, and then just used Meteor Up to deploy my site to it.
As awesome as it was to not have to mess with all of the details, I'm feeling a little worried and out of the loop about what's happening with my server.
For example, I was using the console management that Digital Ocean provides, and I tried to use the meteor mongo command to investigate what was happening with my database. It just errored, with command not found: meteor.
I know my database works, since records are persistent across accesses, but it seems like Meteor Up accomplished this without retaining any of the testing and development interfaces I grew used to on my own machine.
What does it do??? And how can I get a closer look at things going on behind the scenes?
Meteor Up installs your application to the remote server, but does not install the global meteor command-line utilities.
For those, simply run curl https://install.meteor.com | /bin/sh.
MUP does a few things. Note that MUP is currently under active development and some of this process will likely change soon. The new version will manage deployment via Docker, add support for meteor build options, and include other cool stuff. Notes on the development version (mupx) can be found here: https://github.com/arunoda/meteor-up/tree/mupx.
mup setup installs (depending on your mup.json file) Node, PhantomJS, MongoDB, and stud (for SSL support). It also installs the shell script that sets up your environment variables, as well as your upstart configuration file.
mup deploy runs meteor build on your local machine to package your meteor app as a bundled and zipped node app for deployment. It then copies the packaged app to the remote server, unbundles it, installs npm modules, and runs it as a node app.
Note that meteor build packages your app in production mode rather than the debug mode that runs by default on localhost when you call meteor or meteor run. The next version of MUP will have a buildOptions property in mup.json that you can use to set the debug and mobileSettings options when you deploy.
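As a rough sketch of what that mup.json addition is described to look like (the property names come from the description above; the exact shape is an assumption, so check the mupx notes linked earlier):
{
  "appName": "myapp",
  "buildOptions": {
    "debug": true,
    "mobileSettings": { "public": { "analyticsKey": "placeholder" } }
  }
}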
Also, since your app is running directly via Node (rather than Meteor), meteor mongo won't work. Instead, you need to ssh into the remote server and call mongo appName.
From there, @SLaks is right about how it sets things up on the server (from https://github.com/arunoda/meteor-up#server-setup-details):
This is how Meteor Up will configure the server for you based on the given appName or using "meteor" as default appName. This information will help you customize the server for your needs.
your app lives at /opt/<appName>/app
mup uses upstart with a config file at /etc/init/<appName>.conf
you can start and stop the app with upstart: start <appName> and stop <appName>
logs are located at: /var/log/upstart/<appName>.log
MongoDB installed and bound to the local interface (cannot access from the outside)
the database is named <appName>
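Putting those details together, here is a quick sketch of a typical maintenance session (assuming the default appName of "meteor" and a placeholder server address):
ssh user@your.server.ip
# restart the app via upstart
sudo stop meteor
sudo start meteor
# follow the application log
tail -f /var/log/upstart/meteor.log
# open the app's database (this replaces `meteor mongo`)
mongo meteor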
Can anyone provide insights on using Jenkins to automate deployment in controlled and uncontrolled environments? We have different environments - dev/qa/uat/prod - and currently we are using batch files that call msbuild/nant scripts to deploy to web and DB servers (a web farm). Developers only have access to dev/qa, and production support will deploy to uat/prod. Prod support will get the source code from the SVN tag folder and run the batch file to deploy the application.
By using Jenkins, is it possible to eliminate the step of the prod support team getting the script from SVN, by having them run the jobs with their credentials via a URL? And what is the general practice for using source control and a CI tool to deploy applications?
My recommendation is to reserve Jenkins for just building the software. That way the users of Jenkins only have access to development and perhaps QA systems.
To decouple the build system from the process that deploys the software I recommend the use of a binary repository manager like:
Nexus
Artifactory
Archiva
In that way, deployment scripts can retrieve any version of a previous build. The use of a repository manager would enable your QA team to certify a release prior to its deployment to production.
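As a hedged illustration of that hand-off (the repository URL, artifact path, and script names below are entirely hypothetical, and authentication would be added per your repository manager), a deployment script run by prod support could pull a certified version and then invoke the existing msbuild/nant deployment steps:
# PowerShell sketch: fetch a specific, QA-certified build from the repository manager
$version = "1.4.2"
$url = "https://repo.example.com/repository/releases/webapp/$version/webapp-$version.zip"
Invoke-WebRequest -Uri $url -OutFile "webapp-$version.zip"
# then hand the package to the existing deployment batch file
.\deploy.bat "webapp-$version.zip"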
Finally, consider one of the emerging deployment automation tools. Tools like Chef, Puppet, and Rundeck can be used to further version-control the configuration of your infrastructure.