After switching from Artifactory OSS to Artifactory Pro, I tried to connect S3, but when loading artifacts the following error appears:
"java.lang.RuntimeException: S3 binary provider is supported only with HA license addon.filestore.type.s3.S3AwsBinaryProvider.permittedByLicense"
I am using the binarystore file below. Where am I going wrong?
<config version="2">
<chain template="s3"/>
<provider id="s3" type="s3">
<endpoint>https://s3.amazonaws.com</endpoint>
<identity>***</identity>
<credential>****</credential>
<path>filestore</path>
<region>eu-central-1</region>
<bucketName>bucket-name</bucketName>
</provider>
</config>
I will answer myself: prior to version 6.15, the S3 binary provider was only available with an Enterprise license. From 6.15 onwards, S3 is also available with a Pro license.
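For reference, if I read the JFrog documentation correctly, the s3 template used above is roughly shorthand for the expanded chain below (cache-fs -> eventual -> retry -> s3). This is only a sketch of what the template does; the provider settings stay exactly as in my config:
<config version="2">
    <!-- Roughly what <chain template="s3"/> expands to: a local cache, an eventual-upload queue, a retry wrapper, then the S3 provider -->
    <chain>
        <provider id="cache-fs" type="cache-fs">
            <provider id="eventual" type="eventual">
                <provider id="retry" type="retry">
                    <provider id="s3" type="s3"/>
                </provider>
            </provider>
        </provider>
    </chain>
    <provider id="s3" type="s3">
        <endpoint>https://s3.amazonaws.com</endpoint>
        <identity>***</identity>
        <credential>****</credential>
        <path>filestore</path>
        <region>eu-central-1</region>
        <bucketName>bucket-name</bucketName>
    </provider>
</config>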
In our .NET Core 3.1 project (a REST API), we have multiple NuGet packages. General packages come from the nuget.org source, while some custom-made packages are retrieved from a private source.
In Azure DevOps, we have a build pipeline with a task to restore the NuGet packages. Here we saw that every package was checked against every source: a general package such as Swashbuckle.AspNetCore.SwaggerGen was also searched for on our private source.
Due to the number of requests from DevOps, the first run of the pipeline was interpreted as a DoS attack on our system. When the failed run was started again, the task succeeded without any error.
- task: DotNetCoreCLI@2
  displayName: dotnet restore
  inputs:
    command: 'restore'
    projects: '**/*.sln'
    feedsToUse: 'config'
    nugetConfigPath: 'src/NuGet.config'
In the task details, we see the message below returning for every package.
GET private_source/nuget/FindPackagesById()?id='xunit.analyzers'&semVerLevel=2.0.0
Retrying 'FindPackagesByIdAsyncCore' for source 'private_source/nuget/FindPackagesById()?id='Microsoft.AspNetCore.Mvc.Razor'&semVerLevel=2.0.0'.
An error occurred while sending the request.
The response ended prematurely.
How can we avoid that every package in our solution is checked against every NuGet source? Or what can we change to get a successful build the first time?
NuGet recently introduced a feature called Package Source Mapping: https://devblogs.microsoft.com/nuget/introducing-package-source-mapping/
Here's the nuget.config snippet from the blog post:
<!-- Define my package sources, nuget.org and contoso.com. -->
<!-- `clear` ensures no additional sources are inherited from another config file. -->
<packageSources>
    <clear />
    <!-- `key` can be any identifier for your source. -->
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="contoso.com" value="https://contoso.com/packages/" />
</packageSources>
<!-- Define mappings by adding package ID patterns beneath the target source. -->
<!-- Contoso.* packages will be restored from contoso.com, everything else from nuget.org. -->
<packageSourceMapping>
    <!-- key value for <packageSource> should match key values from <packageSources> element -->
    <packageSource key="nuget.org">
        <package pattern="*" />
    </packageSource>
    <packageSource key="contoso.com">
        <package pattern="Contoso.*" />
    </packageSource>
</packageSourceMapping>
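Applied to your case, a minimal sketch could look like the following (private_source and MyCompany.* are placeholders for your actual feed URL and the ID prefix your custom packages share): general packages are then resolved only from nuget.org, and your custom packages only from the private feed, so a package like Swashbuckle.AspNetCore.SwaggerGen is never looked up on the private source. Note that this needs reasonably recent NuGet tooling on the build agent.
<packageSources>
    <clear />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <!-- Placeholder URL: use your real private feed address -->
    <add key="private_source" value="https://private_source/nuget/v3/index.json" />
</packageSources>
<packageSourceMapping>
    <packageSource key="nuget.org">
        <package pattern="*" />
    </packageSource>
    <packageSource key="private_source">
        <!-- MyCompany.* is a hypothetical prefix: use whatever prefix your private packages share -->
        <package pattern="MyCompany.*" />
    </packageSource>
</packageSourceMapping>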
Regarding the error message:
An error occurred while sending the request. The response ended prematurely.
This suggests there's something wrong with the server or networking. A good NuGet server should return HTTP 404 for packages that don't exist on it. Implementing Package Source Mapping might not solve your restore problem.
I suggest creating an Azure DevOps Artifacts feed with upstream sources for both nuget.org and your private feed. There is no other way you can use multiple sources and still do a partial restore.
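For example, if you create such a feed (MyOrg and MyFeed below are placeholders for your organization and feed name), your NuGet.config would then point only at that single feed, and the feed itself pulls from nuget.org and your private source through its upstream configuration, roughly:
<configuration>
    <packageSources>
        <clear />
        <!-- Single Azure Artifacts feed, with nuget.org and the private feed configured as upstream sources in Azure DevOps -->
        <add key="AzureArtifacts" value="https://pkgs.dev.azure.com/MyOrg/_packaging/MyFeed/nuget/v3/index.json" />
    </packageSources>
</configuration>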
How can we avoid that every package in our .NET Core 3.1 project is checked against every NuGet source during the Azure DevOps pipeline?
I am afraid there is no out-of-the-box way to remove this restriction.
That's because no matter how we configure the sources, when we restore the packages for the first time, nuget.exe will iterate over each source for every package. The problem is alleviated when we run the pipeline again, because the packages that come from nuget.org will by then be cached in our private feed, and on the next restore they will be retrieved from the private feed first.
Check my previous thread for some more details.
Besides, if you want to avoid this problem on the first run, you can try not to restore the entire .sln file and instead restore only the packages.config of the specific project.
Short version:
- Is MidoNet still on the roadmap for VPC support in Eucalyptus?
- If so, what version from their non-enterprise repo should work with Euca 4.4.5 VPC? (http://builds.midonet.org/)
Long version with context:
I was trying to install Eucalyptus 4.4.5 with VPC and MidoNet. It appears that the enterprise MidoNet repos/services are not available and that Midokura isn't taking emails at its sales@ or info@ addresses. This, for example, is broken: https://www.midokura.com/midonet-enterprise/
From my perspective it looks like Midokura dropped enterprise support entirely and midonet.org is the only resource available.
I took a swing at the installation with MidoNet 5.2 from their builds (http://builds.midonet.org/), based on the most recent Eucalyptus 4.4.5 install docs, which specify the enterprise version mem-5.2.
Trying this, I ran into tons of .rpm dependency issues installing on RHEL 7.6/7.7 and never got off the ground.
Midonet VPC support is currently planned for Eucalyptus 5.
5.2.x is the correct version; you would need these yum repositories enabled:
http://builds.midonet.org/midonet-5.2/stable/el7/
http://builds.midonet.org/misc/stable/el7/
which use the GPG key:
http://builds.midonet.org/midorepo.key
So something like:
# midokura.repo
[midokura]
name=Midokura Enterprise MidoNet
baseurl=http://builds.midonet.org/midonet-5.2/stable/el7/
enabled=1
fastestmirror_enabled=0
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key
#midokura-misc.repo
[midokura-misc]
name=MEM 3rd Party Tools and Libraries
baseurl=http://builds.midonet.org/misc/stable/el7/
enabled=1
fastestmirror_enabled=0
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key
I have a burn/bootstrapper WiX bundle with:
<?define DotNetVersion = "2.1.11"?>
<!-- The Min and Next .Net version that our installed version must be between -->
<Variable Name='MinDotNetVersion' Type='string' Value='2.1.0' bal:Overridable='no'/>
<Variable Name='NextDotNetVersion' Type='string' Value='2.2.0' bal:Overridable='no'/>
<?define HostingBundleUrl = "https://www.microsoft.com/net/download/thank-you/dotnet-runtime-$(var.DotNetVersion)-windows-hosting-bundle-installer" ?>
<util:RegistrySearch Id="DotNetHostingBundle86"
Root="HKLM"
Key="SOFTWARE\dotnet\Setup\InstalledVersions\$(var.Platform)\sharedhost"
Value="Version"
Variable="DotNetHostingBundleVersion" />
<!-- Running the installer in a 32-bit process redirects the key path above to Wow6432Node etc., so the value won't be found. So, if it wasn't found and the OS is x64, get the variable
this way: -->
<util:RegistrySearch Id="DotNetHostingBundle64"
After="DotNetHostingBundle86"
Condition="NOT DotNetHostingBundleVersion AND VersionNT64"
Root="HKLM"
Key="SOFTWARE\dotnet\Setup\InstalledVersions\$(var.Platform)\sharedhost"
Value="Version"
Variable="DotNetHostingBundleVersion"
Win64="yes"/>
<Chain>
<ExePackage Id="DotNetCoreHostingBundle"
Vital="yes"
Name=".Net Hosting Bundle Setup"
DownloadUrl="$(var.HostingBundleUrl)"
Compressed="no"
SourceFile=".\ExtResourcesCopy\dotnet-hosting-$(var.DotNetVersion)-win.exe"
InstallCondition="NOT DotNetHostingBundleVersion OR (DotNetHostingBundleVersion < MinDotNetVersion OR DotNetHostingBundleVersion >= NextDotNetVersion)"
Description="Installing .Net Hosting Bundle $(var.DotNetVersion). This includes the 32 bit and 64 bit runtimes, the Asp.net runtime packages (Microsoft.AspNetCore.App and .All), and the IIS Hosting Components."
/>
...
I've downloaded the dotnet-hosting-2.1.11-win.exe file into the ExtResourcesCopy folder, and since I set the SourceFile to this it gets its own Payload data.
I then build my .exe and run it on another computer and get the following error:
Acquiring package: DotNetCoreHostingBundle, payload: DotNetCoreHostingBundle, download from: https://www.microsoft.com/net/download/thank-you/dotnet-runtime-2.1.11-windows-hosting-bundle-installer
Error 0x80091007: Hash mismatch for path: C:\ProgramData\Package Cache\.unverified\DotNetCoreHostingBundle, expected: 1ED626AD403D6E5D99AB69DB7C281FB8E8A8D0A2, actual: F21BF2F13F89D1C9DFD2844D57728102D5714EAA
Error 0x80091007: Failed to verify hash of payload: DotNetCoreHostingBundle
Failed to verify payload: DotNetCoreHostingBundle at path: C:\ProgramData\Package Cache\.unverified\DotNetCoreHostingBundle, error: 0x80091007. Deleting file.
I then checked the SHA1 hash on the target computer by downloading the dotnet-hosting-2.1.11-win.exe file manually from Microsoft, using the exact url from the log file, and running:
certutil -hashfile dotnet-hosting-2.1.11-win.exe
This gave me the expected hash of: 1ed626ad403d6e5d99ab69db7c281fb8e8a8d0a2
So where is this "actual" hash of F21BF2F13F89D1C9DFD2844D57728102D5714EAA coming from? Is there a way of pausing the installer so I can inspect the file in the .unverified folder? And/or what can I do about this?
The URL you're using is an HTML page that uses JavaScript to download the package. Burn just sees the HTML.
You should be able to use dark.exe to decompile your bundle executable and see the hashes for all of the payloads.
Note: Those hashes are present for security purposes. They prevent bad actors from tampering with the install content.
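One way to fix this would be to point DownloadUrl at a link that returns the installer binary itself, since Burn compares the downloaded file against the hash it computed from your SourceFile at build time. A sketch, where the download.visualstudio.microsoft.com URL is a placeholder (take the real direct link from the Microsoft download page and confirm it serves the raw .exe, not an HTML page):
<!-- Placeholder URL: substitute the actual direct link to dotnet-hosting-$(var.DotNetVersion)-win.exe -->
<?define HostingBundleUrl = "https://download.visualstudio.microsoft.com/download/pr/PLACEHOLDER/dotnet-hosting-$(var.DotNetVersion)-win.exe" ?>
<!-- InstallCondition and Description stay exactly as in the bundle above -->
<ExePackage Id="DotNetCoreHostingBundle"
            Vital="yes"
            Name=".Net Hosting Bundle Setup"
            DownloadUrl="$(var.HostingBundleUrl)"
            Compressed="no"
            SourceFile=".\ExtResourcesCopy\dotnet-hosting-$(var.DotNetVersion)-win.exe" />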
I'm attempting to use a .tpk file with Maximo Anywhere Work Execution on Android 6, but I'm experiencing an application crash when trying to access the Map view.
LogCat shows that the code recognises and processes the .tpk but then produces the error:
11-02 12:46:19.873: W/System.err(8251): java.lang.ClassNotFoundException:
org.apache.cordova.geolocation.GeoBroker
I've rebuilt the APK and updated MobileFirst Studio to the latest iFix version, but this doesn't seem to help. Am I missing something?
Maximo Anywhere 7.6.1
Mobilefirst Studio 7.1.0.00-20171026-1607
The MobileFirst team changed the way GPS is used in the iFixes from 2017 onwards; basically, they removed org.apache.cordova.geolocation.GeoBroker. On Anywhere 762HF we have applied the necessary fixes to support the recent iFix. My suggestion is to remove GeoBroker from the config.xml file:
<feature name="Geolocation">
<param name="android-package" value="org.apache.cordova.geolocation.GeoBroker" />
</feature>
You will find this config file in the appName/android/native/res/xml folder.
Make sure that this config.xml contains the new element used to provide the GPS feature, that is:
<feature name="WLGeolocationPlugin">
<param name="android-package" value="com.worklight.androidgap.plugin.WLGeolocationPlugin"/>
</feature>
Let me know if this is enough to fix your problem.
Best Regards.
I'm moving a Flex 3 site to Flex 4, but when I run the application, it attempts to download a .swz file from Adobe, and gives the following error:
*** Security Sandbox Violation ***
Connection to http://fpdownload.adobe.com/pub/swz/tlf/1.1.0.604/textLayout_1.1.0.604.swz halted - not permitted from http://localhost/Fl/CityGIS/main.swf
Error #2048: Security sandbox violation: http://localhost/Fl/CityGIS/main.swf cannot load data from http://fpdownload.adobe.com/pub/swz/tlf/1.1.0.604/textLayout_1.1.0.604.swz.
Failed to load RSL http://fpdownload.adobe.com/pub/swz/tlf/1.1.0.604/textLayout_1.1.0.604.swz
Failing over to RSL textLayout_1.1.0.604.swz
Following this is an attempt to download the same file from localhost.
Is there a way to configure the SDK to get these files, or is this an issue with the configuration of my application?
I found I only had this problem when using -use-network=false while attempting to run the HTML locally, with the .swf accessing local files (outside the folders exempted from Flex security restrictions).
My workaround is to update sdks//frameworks/flex-config.xml (in the Flash Builder directory) and swap the order of the runtime shared library paths, for example changing:
<runtime-shared-library-path>
    <path-element>libs/textLayout.swc</path-element>
    <rsl-url>http://fpdownload.adobe.com/pub/swz/tlf/1.1.0.604/textLayout_1.1.0.604.swz</rsl-url>
    <policy-file-url>http://fpdownload.adobe.com/pub/swz/crossdomain.xml</policy-file-url>
    <rsl-url>textLayout_1.1.0.604.swz</rsl-url>
    <policy-file-url></policy-file-url>
</runtime-shared-library-path>
TO:
<runtime-shared-library-path>
    <path-element>libs/textLayout.swc</path-element>
    <rsl-url>textLayout_1.1.0.604.swz</rsl-url>
    <policy-file-url></policy-file-url>
    <rsl-url>http://fpdownload.adobe.com/pub/swz/tlf/1.1.0.604/textLayout_1.1.0.604.swz</rsl-url>
    <policy-file-url>http://fpdownload.adobe.com/pub/swz/crossdomain.xml</policy-file-url>
</runtime-shared-library-path>
You'll have to do this for the other 5 or so entries.
Adobe should really look at this, and fix the problem.
Hope this helps.
Cheers
Parmy
I think the problem is that the location it is using for the textLayout swc
http://fpdownload.adobe.com/pub/swz/tlf/1.1.0.604/textLayout_1.1.0.604.swz
redirects to
/pub/swz/flex/4.1.0.15186/textLayout_1.1.0.601.swf
and the cross-domain policy is not happy about that.
I think this points to an issue with the version of the sdk that you are using. You can go into sdks/<FRAMEWORK_VERSION>/frameworks/flex-config.xml (in the Flash Builder directory) and see exactly how the runtime shared library path is configured for textLayout.swc. This is what I have for flex_sdk_4.1.0.15186:
<!-- TextLayout SWC -->
<runtime-shared-library-path>
    <path-element>libs/textLayout.swc</path-element>
    <rsl-url>textLayout_1.1.0.601.swf</rsl-url>
    <policy-file-url></policy-file-url>
    <rsl-url>http://fpdownload.adobe.com/pub/swz/flex/4.1.0.15186/textLayout_1.1.0.601.swf</rsl-url>
    <policy-file-url>http://fpdownload.adobe.com/pub/swz/crossdomain.xml</policy-file-url>
</runtime-shared-library-path>
I suggest switching to the latest 4.1 SDK and recompiling.
Hmmm, generally this happens because the site you are accessing does not contain a crossdomain.xml file. However, I can download it from here.
Try adding this to your compiler options: -use-network=false
Then clean and force build your app.
If that doesn't work, and I'm just grasping at straws here, have you tried manually downloading it and placing it in your project's lib folder?
Also, are you sure you are updated to Flex 4.1?
I just checked my local KB (Evernote), which mentions that Firefox sometimes has an issue with caching and that restarting it solved this for me once.