The Java elastic-agent can access the apm-server normally,
and data reported by the agent is present in the apm index,
but the Kibana APM UI cannot query the data. Why is that?
The ES / Kibana / apm-server versions are all 7.17.5.
Well, I reinstalled a fresh Elasticsearch and it worked. I guess there might have been a problem with the apm-server index initialization.
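If reinstalling Elasticsearch is not an option, a possible alternative (a hedged suggestion, assuming a standard self-managed 7.x apm-server install) is to re-run apm-server's index setup against the existing cluster so the index template, ILM policy, and write aliases are recreated:
apm-server setup --index-management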
Our organization has two separate collections:
Application Development (Collection)
    Foobar Inc (Project)
        Repo / Build pipeline (Pipeline)
External Applications (Collection)
    External Applications (Project)
        Artifacts
            XYZ_SharedPackages (NuGet feed)
When I run the restore command for a project in the Application Development collection (the Foobar Inc project), I get:
"C:\agent\_work\76\s\Foobar_Inc\Foobar_IncUI\Foobar_IncUI.csproj" (Restore target) (1) ->
(Restore target) ->
C:\Program Files\dotnet\sdk\6.0.200\NuGet.targets(130,5): error : Unable to load the service index for source
http://svp042iis/tfs/Application%20Development/_packaging/XYZ_SharedPackages/nuget/v3/index.json.
[C:\agent\_work\76\s\Foobar_Inc\Foobar_IncUI\Foobar_IncUI.csproj]
C:\Program Files\dotnet\sdk\6.0.200\NuGet.targets(130,5): error : Response status code does not indicate
success: 404 (Not Found - The feed with ID 'XYZ_SharedPackages' doesn't exist. (DevOps Activity ID: 5C76EC84-96B7-4125-BA30-296CF33B1754)).
[C:\agent\_work\76\s\Foobar_Inc\Foobar_IncUI\Foobar_IncUI.csproj]
The 404 error comes from the source not existing; however, I selected that feed by going into the restore command and choosing that option under "Feeds to use". My question is: does anyone know if it's possible to share feeds across collections in Azure DevOps?
If your two organizations are NOT in the same AAD, you can use the following methods:
Method 1:
Use a NuGet authenticate task and a PowerShell task to run the nuget install command in the pipeline with the feed URL of the target feed.
1. Create a NuGet service connection to the target organization.
Target feed URL: https://pkgs.dev.azure.com/{orgname}/_packaging/{feedname}/nuget/v3/index.json. The password is a PAT (personal access token).
2. Add a NuGet authenticate task before your restore step.
3. Use a PowerShell task with an inline script. The inline script:
nuget install {package name} -version {package version} -Source https://pkgs.dev.azure.com/{orgname}/_packaging/{feedname}/nuget/v3/index.json
This will restore the package successfully (see the pipeline sketch after Method 2 below).
Method 2:
Directly use a PowerShell task with an inline script:
nuget install {package name} -version {package version} -Source https://pkgs.dev.azure.com/{orgname}/_packaging/{feedname}/nuget/v3/index.json
Two environment variables need to be set (see the pipeline sketch after this list):
• NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED=true
• VSS_NUGET_EXTERNAL_FEED_ENDPOINTS={"endpointCredentials": [{"endpoint":"https://pkgs.dev.azure.com/{orgname}/_packaging/{feedname}/nuget/v3/index.json", "username":"optional", "password":"$(PAT)"}]}
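For reference, a rough azure-pipelines.yml sketch combining both methods; the service connection name, package name/version, and the PAT variable are placeholders, and task versions may differ in your organization:
steps:
  # Method 1: authenticate against the target organization via a NuGet service connection
  - task: NuGetAuthenticate@0
    inputs:
      nuGetServiceConnections: 'TargetOrgFeedConnection'   # placeholder service connection name
  # Run nuget install against the target feed from an inline PowerShell script
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: |
        nuget install MyPackage -version 1.0.0 -Source https://pkgs.dev.azure.com/{orgname}/_packaging/{feedname}/nuget/v3/index.json
    env:
      # Method 2: supply the credentials through the credential provider instead of a service connection
      NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED: true
      VSS_NUGET_EXTERNAL_FEED_ENDPOINTS: '{"endpointCredentials": [{"endpoint":"https://pkgs.dev.azure.com/{orgname}/_packaging/{feedname}/nuget/v3/index.json", "username":"optional", "password":"$(PAT)"}]}'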
If your two organizations are in the same AAD, you can follow these steps:
Check the permissions in Feed Settings -> Views. Then change the access permissions for the Local view: under more options -> Edit, choose "All feeds and people in organizations associated with my Azure Active Directory".
Set up the upstream source:
On the upstream feed, add the Project Collection Build Service account with the Contributor, Collaborator, or Owner role.
I recently built my Rundeck server, created a database using MariaDB, and pointed Rundeck at it, following the official documentation on the Rundeck site. Since I changed from the built-in system DB to MariaDB, the service no longer starts.
My rundeck-config.properties file looks like this:
#loglevel.default is the default log level for jobs: ERROR,WARN,INFO,VERBOSE,DEBUG
loglevel.default=INFO
rdeck.base=/var/lib/rundeck
#rss.enabled if set to true enables RSS feeds that are public (non-authenticated)
rss.enabled=false
#change hostname here
grails.serverURL=http://IP OF SERVER:4440
dataSource.driverClassName=
dataSource.url = jdbc:mysql://IP OF SERVER/rundeck?autoReconnect=true&useSSL=false
dataSource.username = DB User
dataSource.password = Password
grails.plugin.databasemigration.updateOnStart=true
autoReconnect=true
#to store projects on backend
rundeck.projectsStorageType=db
#Encryption for key storage
rundeck.storage.provider.1.type=
rundeck.storage.provider.1.path=keys
rundeck.storage.converter.1.type=jasypt-encryption
rundeck.storage.converter.1.path=keys
rundeck.storage.converter.1.config.encryptorType=custom
rundeck.storage.converter.1.config.password=7ee99cf09ffc59e7
rundeck.storage.converter.1.config.algorithm=PBEWITHSHA256AND128BITAES-CBC-BC
rundeck.storage.converter.1.config.provider=BC
#Encryption for project config storage
rundeck.projectsStorageType=db
rundeck.config.storage.converter.1.type=jasypt-encryption
rundeck.config.storage.converter.1.path=projects
rundeck.config.storage.converter.1.config.password=7ee99cf09ffc59e7
rundeck.config.storage.converter.1.config.encryptorType=custom
rundeck.config.storage.converter.1.config.algorithm=PBEWITHSHA256AND128BITAES-CBC-BC
rundeck.config.storage.converter.1.config.provider=BC
rundeck.feature.repository.enabled=true
Can anyone help with this?
A couple of things here:
Your dataSource.driverClassName is empty; set it to org.mariadb.jdbc.Driver (see the full MariaDB configuration example in the Rundeck docs).
Your rundeck.storage.provider.1.type is also empty; set it to rundeck.storage.provider.1.type=db.
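With those two changes, the relevant lines of rundeck-config.properties would look like this (the JDBC URL and credentials stay as the values from your question):
dataSource.driverClassName = org.mariadb.jdbc.Driver
dataSource.url = jdbc:mysql://IP OF SERVER/rundeck?autoReconnect=true&useSSL=false
dataSource.username = DB User
dataSource.password = Password
#store projects and key storage on the database backend
rundeck.projectsStorageType=db
rundeck.storage.provider.1.type=db
rundeck.storage.provider.1.path=keys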
Snowflake is not showing in the connections dropdown.
I am using MWAA (Airflow 2.0) and the providers are already in requirements.txt.
MWAA uses Python 3.7; I don't know if that could be a factor.
requirements.txt:
--constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.0.2/constraints-3.7.txt"
asn1crypto
azure-common
azure-core
azure-storage-blob
boto3
botocore
certifi
cffi
chardet
cryptography
greenlet
idna
isodate
jmespath
msrest
numpy
oauthlib
oscrypto
pandas
pyarrow
pycparser
pycryptodomex
PyJWT
pyOpenSSL
python-dateutil
pytz
requests
requests-oauthlib
s3transfer
six
urllib3
apache-airflow-providers-http
apache-airflow-providers-snowflake
#apache-airflow-providers-snowflake[slack]
#apache-airflow-providers-slack
snowflake-connector-python >=2.4.1
snowflake-sqlalchemy >=1.1.0
If anyone runs into this: instead of choosing Snowflake in the dropdown, you can choose AWS as the connection type and it will work fine.
It took me a while to finally figure this one out after trying many different parameter combinations.
My full Snowflake URL is:
https://xx12345.us-east-2.aws.snowflakecomputing.com
The correct format for the Host field is:
xx12345.us-east-2.snowflakecomputing.com
For the Extra field, this is what worked for me:
{
"account": "xx12345.us-east-2.aws",
"warehouse": "my_warehouse_name",
"database": "my_database_name"
}
Make sure you set the Conn Type to Amazon Web Services, as #AXI said.
Also, I have these modules defined in my requirements.txt file:
apache-airflow-providers-snowflake==1.3.0
snowflake-connector-python==2.4.5
snowflake-sqlalchemy==1.2.4
My Airflow version is 2.0.2.
According to the MWAA docs, it should be enough to add apache-airflow-providers-snowflake==1.3.0 to the requirements file. When I added it to an existing MWAA environment, where I had already tried many different combinations of packages, it helped only partially: it was possible to create the connection via the CLI, but not via the UI.
But when I created a new, clean MWAA environment with the requirements file as stated in the AWS doc mentioned above, it worked well. The connection type was available in the UI.
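For reference, a hedged sketch of creating the connection through the Airflow 2.0 CLI (the connection name, credentials, and account values below are placeholders; on MWAA the command is issued through the MWAA CLI endpoint, and the snowflake connection type is only recognized once apache-airflow-providers-snowflake is installed):
airflow connections add 'snowflake_default' \
    --conn-type 'snowflake' \
    --conn-host 'xx12345.us-east-2.snowflakecomputing.com' \
    --conn-login 'MY_USER' \
    --conn-password 'MY_PASSWORD' \
    --conn-extra '{"account": "xx12345.us-east-2.aws", "warehouse": "my_warehouse_name", "database": "my_database_name"}'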
I'm trying to create a new collection on a SolrCloud setup, and it fails with the following:
ERROR: Error loading config name for collection test
I tried deleting the collection:
sudo /opt/solr/bin/solr delete -c test
but got the same result.
My setup: SolrCloud with an external ZooKeeper ensemble and 5 Solr nodes.
How do I purge it or reload it again?
Thanks.
Solr is not able to find the configuration files in ZooKeeper. SolrCloud tries to recreate the core from the ZooKeeper configuration.
It looks like you have deleted the ZooKeeper configuration node for collection test.
Two steps to completely purge collection test:
1. Stop Solr and delete the "test" folder, if it exists, from the Solr home folder (default: /var/lib/solr).
2. Navigate to the ZooKeeper node and edit clusterstate.json. Remove the entries for collection test. I wanted to start fresh, so I reset the clusterstate.json file to its default, i.e. {}.
You should disable the autoAddReplicas property of Solr before shutting down any Solr node.
Check whether the config folder exists in ZooKeeper. If it does, try to link the collection with the config name using the command:
zkcli.sh -cmd linkconfig -collection collectionname -confname configname
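If the configset is missing from ZooKeeper entirely, a possible sequence (the paths and ZooKeeper host list below are assumptions based on a default Solr install) is to re-upload a configset and relink or recreate the collection:
# Upload a configset to ZooKeeper (here reusing the built-in _default configset)
/opt/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181,zk2:2181,zk3:2181 -cmd upconfig -confdir /opt/solr/server/solr/configsets/_default/conf -confname test
# Link the existing collection to that config name
/opt/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181,zk2:2181,zk3:2181 -cmd linkconfig -collection test -confname test
# Or, after purging the old collection, recreate it against the uploaded config
sudo /opt/solr/bin/solr create -c test -n test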
Creating a proxy repository on Nexus that proxies the Oracle Maven repository (http://download.oracle.com/maven/) marks the Oracle repo as "Attempting to Proxy and Remote Unavailable".
The problem might be that Oracle disabled directory listing, so every attempt to get content without the full GAV returns a 404.
How can I work around this on Nexus?
I'm using Nexus OSS Edition 1.9.2.2.
Configuration:
Remote Storage Location = http://download.oracle.com/maven/
Download Remote Indexes = True
Auto Blocking Active = False
File Content Validation = True
CheckSum Policy = Warn
There should be no need to proxy the Oracle repos; we've merged all of that content into Central now, so you can safely remove these from your Nexus.
The URL you are using is wrong. Did you mean the java.net repo at http://download.java.net/maven/2/ ?