I was trying the free trial version of the reCAPTCHA solver by Apify on the following page: https://www.google.com/recaptcha/api2/demo
Input:
{
    "key": "apify_api_KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK",
    "webUrl": "https://www.google.com/recaptcha/api2/demo",
    "siteKey": "6Le-wvkSAAAAAPBMRTvw0Q4Muexq9bi0DJwx_mJ-"
}
I tried running it with the above input, but I get the following error:
2022-08-14T10:13:34.970Z ACTOR: Pulling Docker image from repository.
2022-08-14T10:13:35.109Z ACTOR: Creating Docker container.
2022-08-14T10:13:35.168Z ACTOR: Starting Docker container.
2022-08-14T10:13:36.323Z
2022-08-14T10:13:36.327Z WARNING: The npm start script not found in package.json. Using "node main.js" instead. Please update your package.json file. For more information see https://github.com/apifytech/apify-cli/blob/master/MIGRATIONS.md
2022-08-14T10:13:36.330Z
2022-08-14T10:13:38.579Z Solving re-captcha with Anticaptcha: https://www.google.com/recaptcha/api2/demo
2022-08-14T10:13:38.780Z User function threw an exception:
2022-08-14T10:13:38.782Z Account authorization key not found in the system
You need to have an anti-captcha subscription to be able to use it: https://apify.com/petr_cermak/anti-captcha-recaptcha. The "key" input must contain your anti-captcha.com account key ("key": ANTI_CAPTCHA_KEY), not your Apify API key. Because you are trying to use an Apify API key to authorize access to the anti-captcha.com service, it fails with the error Account authorization key not found in the system.
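In other words, assuming the actor takes the same input shape shown above, the input would look something like this (the key value is a placeholder for your anti-captcha.com account key):
{
    "key": "YOUR_ANTI_CAPTCHA_ACCOUNT_KEY",
    "webUrl": "https://www.google.com/recaptcha/api2/demo",
    "siteKey": "6Le-wvkSAAAAAPBMRTvw0Q4Muexq9bi0DJwx_mJ-"
}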
I'm trying to set up Flyway for Google Cloud Spanner (beta) using the Flyway Gradle plugin, but it fails with the error below when executing ./gradlew flywayInfo.
> Error occured while executing flywayInfo
No database found to handle jdbc:cloudspanner:/projects/<my-project>/instances/<my-instance>/databases/<my-db>
build.gradle
plugins {
    id 'java'
    id 'org.flywaydb.flyway' version '7.13.0'
}
...
dependencies {
    implementation(
        'org.flywaydb:flyway-gcp-spanner:7.13.0-beta'
    )
}
flyway {
    url = 'jdbc:cloudspanner:/projects/<my-project>/instances/<my-instance>/databases/<my-db>'
}
The values in the url correspond to my project and instance names.
I've also tried:
appending a service account key to the end of the URL (sketched below, after this list)
adding the com.google.cloud:google-cloud-spanner-jdbc:2.3.2 JDBC driver dependency (implementation)
I'm behind a proxy, but I have set it in my gradle.properties with systemProp.http.proxyHost and systemProp.http.proxyPort (and likewise for https)
Using Flyway CLI and the API programmatically works.
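For reference, a sketch of what appending a service account key to the URL looked like (the credentials property is the one documented by the Cloud Spanner JDBC driver; the path is a placeholder):
flyway {
    url = 'jdbc:cloudspanner:/projects/<my-project>/instances/<my-instance>/databases/<my-db>?credentials=/path/to/service-account.json'
}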
It seems like the error comes from the Flyway implementation here. Your issue seems somewhat similar to https://github.com/flyway/flyway/issues/3028.
Consider opening a new issue here: https://github.com/flyway/flyway/issues
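One more thing that may be worth ruling out (an assumption on my part, not something confirmed in the linked issue): the Flyway Gradle plugin resolves its database support modules from the buildscript classpath, so declaring the Spanner module only as an implementation dependency may not make it visible to the plugin. A sketch (a buildscript block has to appear before the plugins block):
buildscript {
    dependencies {
        // make the Spanner support module and its JDBC driver visible to the plugin itself,
        // not just to the project's compile classpath
        classpath 'org.flywaydb:flyway-gcp-spanner:7.13.0-beta'
        classpath 'com.google.cloud:google-cloud-spanner-jdbc:2.3.2'
    }
}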
We moved to a new authenticated Nexus instance that acts as a proxy for fetching dependencies.
I've tried to give SBT (1.1.1) the credentials it needs in multiple ways, but I always end up getting:
[error] Unable to find credentials for [Sonatype Nexus Repository Manager @ nexus3.company.com]
[debug] CLIENT ERROR: Unauthorized url=https://nexus3.company.com/repository/maven2-proxy-all/org/scala-sbt/actions_2.12/1.1.1/actions_2.12-1.1.1.pom
It's repeated for a lot of dependencies.
I've created a .credentials file in my project as follows:
realm=Sonatype Nexus Repository Manager
host=nexus3.company.com
user=xxxxx
password=xxxxx
Here's what I've tried, based on inputs I got from other threads on the internet:
Adding the path to this credentials file to the command: -Dsbt.boot.credentials=.credentials
Adding the path to this credentials file to an environment variable: $SBT_CREDENTIALS = PATH (see the sketch after this list)
Adding the following line in the build.sbt : credentials += Credentials(new File(".credentials"))
Adding the following line in the build.sbt : credentials += Credentials("Sonatype Nexus Repository Manager", "nexus3.company.com", "xxxxx", "xxxxxx")
Checking what's going on with a proxy: my requests don't seem to have any Authorization header and all come back as HTTP 401
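For illustration, setting that variable could look like this (the path is simply wherever the .credentials file lives):
export SBT_CREDENTIALS="$HOME/.credentials"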
And yet, when I access the URL mentioned from the same machine, with the credentials in the file, there is no issue at all.
I'm running out of ideas here :(
After more attempts, adding the following to ~/.sbt/1.0/credentials.sbt:
credentials += Credentials("Sonatype Nexus Repository Manager", "nexus3.company.com", USER, PWD)
AND the SBT_CREDENTIALS variable mentioned above seems to do the job.
I also updated the image we use for our pipelines; I'm not sure whether that helped.
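For reference, a minimal sketch (the fallback path is an assumption) that makes credentials.sbt read whichever file SBT_CREDENTIALS points to, so both mechanisms stay in sync:
// ~/.sbt/1.0/credentials.sbt
credentials += Credentials(
  sys.env.get("SBT_CREDENTIALS").map(new File(_))
    .getOrElse(Path.userHome / ".sbt" / ".credentials")
)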
I'm trying to use Concourse to grab a Dockerfile definition from a git repository, do some work, build the Docker image, and push the new image to Artifactory. See below for the pipeline definition. At this time I have all stages up to the artifactory stage (the one that pushes to Artifactory) working. The artifactory stage exits with an error and the following output:
waiting for docker to come up...
sha256:c6039bfb6ac572503c8d97f42b6a419b94139f37876ad331d03cb7c3e8811ff2
The push refers to repository [artifactory.server.com:2077/base/golang/alpine]
a4ab5bf94afd: Preparing
unauthorized: The client does not have permission to push to the repository.
This would seem straightforward as an Artifactory permissions issue, except that I've tested locally with the Docker CLI and am able to push using the same user/pass as specified within destination_username and destination_password. I double-checked the credentials to make sure I'm using the same ones, and I am.
Question #1: Is there any other known cause for getting this error? I've scoured the resource's GitHub page without finding anything. Any ideas why I may be getting the permissions error?
Without having an answer to the above question, I'd really like to dig deeper into troubleshooting the problem. To do so I use fly hijack to get a shell in the corresponding container. I notice that Docker is installed in the container, so my next step, I think, would be to do a docker import on the tarball for the image I'm trying to push and then perform a docker push to push it to the repo. When attempting to run the import I get the error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
Question #2: Why can't I use Docker commands from within the container? Perhaps this has something to do with the issue I'm seeing when pushing to the repo from the pipeline (I don't think so)? Is it because the container isn't running with privilege? I thought the privileged argument would be supplied in the resource type definition, but if not, how can I run with privilege?
resources:
- name: image-repo
  type: git
  source:
    branch: master
    private_key: ((private_key))
    uri: ssh://git@git-server/repo.git
- name: artifactory
  type: docker-image
  source:
    repository: artifactory.server.com:2077/((repo))
    tag: latest
    username: ((destination_username))
    password: ((destination_password))
jobs:
- name: update-image
  plan:
  - get: image-repo
  - task: do-stuff
    file: image-repo/scripts/do-stuff.yml
    vars:
      repository-directory: ((repo))
  - task: build-image
    privileged: true
    file: image-repo/scripts/build-image.yml
  - put: artifactory
    params:
      import_file: image/image.tar
Arghhhh. I found after much troubleshooting that the destination_password wasn't being picked up properly due to special characters and a lack of quotes. I fixed the issue by properly quoting the password within the YAML file being included with the --load-vars-from flag.
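For illustration, a hypothetical vars file (the values are made up) showing the quoting that avoids the problem:
# vars.yml -- quoting keeps YAML from misreading special characters in the password
destination_username: ci-push-user
destination_password: "p@ss:w0rd!%"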
I am attempting to deploy new resources and update existing ones using the original ARM templates I deployed with a few months ago. Unfortunately, the deployment never reaches Azure, as I do not see a deployment entry in my resource group. The error that is presented appears to be local, occurring before anything starts calling into Azure.
I am deploying using the Visual Studio 2017 ARM Template Deployment context menu.
Here is the error that is output. I get this and a nearly identical one when running the validation command as well.
08:58:22 - VERBOSE: Performing the operation "Creating Deployment" on target "MigrationPlaybook_Prod".
08:58:23 - New-AzureRmResourceGroupDeployment : Multiple error occurred: Forbidden,Forbidden. Please see details.
08:58:23 - At C:\workspaces\Migration Playbook\MigrationPlaybookRegion\ProductionResourceGroup\bin\Debug\staging\ProductionResourc
08:58:23 - eGroup\Deploy-AzureResourceGroup.ps1:108 char:5
08:58:23 - + New-AzureRmResourceGroupDeployment -Name ((Get-ChildItem $Templat ...
08:58:23 - + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
08:58:23 - + CategoryInfo : CloseError: (:) [New-AzureRmResourceGroupDeployment], CloudException
08:58:23 - + FullyQualifiedErrorId : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzureResourceGroupDep
08:58:23 - loymentCmdlet
08:58:23 -
08:58:24 -
08:58:24 - Template deployment returned the following errors:
08:58:24 - Multiple error occurred: Forbidden,Forbidden. Please see details.
Mitigations:
The template involves a Key Vault - I made sure that the ARM template deployment permission was enabled
The project is several months old - I generated a new project to make sure that the PowerShell script, which is generated into the project at creation time, did not have any significant changes
Account permissions - I have verified that my account permissions on the subscription have not changed in a way that would prevent me from adding or modifying resources
While the mitigations address issues that arise when the ARM template is being deployed, the error, and its resulting records, suggest that there is an issue before it gets to Azure.
What could be the issue here and what can I do to remedy this?
I have seen the same error message when using a Key Vault to store username and password secrets and referencing those secrets to pass credentials in the deployment of a SQL server, but forgetting to 'Enable for template deployment' on the Key Vault resource. Have you ensured that the Key Vault section in the ARM template has "enabledForTemplateDeployment": true?
Mine looks something like:
{
    "type": "Microsoft.KeyVault/vaults",
    "apiVersion": "2016-10-01",
    "name": "[variables('keyVaultName')]",
    "location": "[resourceGroup().location]",
    "tags": "[parameters('baseParameters').tagValues]",
    "scale": null,
    "dependsOn": [],
    "properties": {
        "sku": {
            "family": "A",
            "name": "standard"
        },
        "tenantId": "[subscription().tenantId]",
        "accessPolicies": [],
        "enabledForDeployment": true,
        "enabledForDiskEncryption": false,
        "enabledForTemplateDeployment": true
    }
},
When enabled, it will look like the following in the portal:
Just to check, I intentionally removed (disabled) the setting, and the result looks similar to your error. Using the -Verbose and -Debug arguments helped me see the details.
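For example, a minimal sketch (the resource group and file names are placeholders) of invoking the deployment cmdlet with those extra diagnostics:
New-AzureRmResourceGroupDeployment `
    -Name "MigrationPlaybook_Prod" `
    -ResourceGroupName "MyResourceGroup" `
    -TemplateFile ".\ProductionResourceGroup.json" `
    -TemplateParameterFile ".\ProductionResourceGroup.parameters.json" `
    -Verbose -Debug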
I know that this isn't a real answer, but I don't have enough reputation yet to just comment. I've seen similar error messages in a subscription that had a limited number of cores when I was trying to deploy more VMs. However, I'm also not convinced that this is your problem ...
What I wanted to ask is: what happens when you try to deploy the ARM template from the Azure portal using "Deploy a Custom Template"? This might give you some better hints as to what exactly is going wrong.
As part of our Chef infrastructure I'm trying to set up and configure a berks-api server. I have created an Ubuntu server in Azure, bootstrapped it, and it appears as a node in my Chef server.
I have followed the instructions at GitHub - berkshelf-api installation to install the berks-api via a cookbook. I have run
sudo chef-client
on my node and the cookbook appears to have been run successfully.
The problem is that the berks-api doesn't appear to run. My Linux terminology isn't great, so sorry if I'm making mistakes in what I say, but it appears as if the berks-api service isn't able to run. If I navigate to /etc/service/berks-api and run this command
sudo berks-api
I get this error
I, [2015-07-23T11:56:37.490075 #16643] INFO -- : Cache manager starting...
I, [2015-07-23T11:56:37.491006 #16643] INFO -- : Cache Builder starting...
E, [2015-07-23T11:56:37.493137 #16643] ERROR -- : Actor crashed!
Errno::EACCES: Permission denied @ rb_sysopen - /etc/chef/client.pem
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `read'
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `initialize'
If anyone could help me figure out what is going on, I'd really appreciate it. If you need to explain the setup any more let me know.
It turns out I misunderstood the configuration of the berks-api. I needed to get a new private key for my client (berkshelf) from manage.chef.io for our organization. I then needed to upload the new key (berkshelf.pem) to /etc/berkshelf/api-server and reconfigure the berks-api to use the new key. So my config for the berks-api now looks like the below:
{
    "home_path": "/etc/berkshelf/api-server",
    "endpoints": [
        {
            "type": "chef_server",
            "options": {
                "url": "https://api.opscode.com/organizations/my-organization",
                "client_key": "/etc/berkshelf/api-server/berkshelf.pem",
                "client_name": "berkshelf"
            }
        }
    ],
    "build_interval": 5.0
}
I couldn't upload berkshelf.pem directly to the target location; I had to upload it to my home directory, then copy it into place from within Linux (commands below).
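Roughly along these lines (the home-directory path is just where the key happened to be uploaded; the owning user depends on how the cookbook set up the service):
# copy the uploaded key into the location referenced by client_key above
sudo cp ~/berkshelf.pem /etc/berkshelf/api-server/berkshelf.pem
# tighten permissions on the private key
sudo chmod 600 /etc/berkshelf/api-server/berkshelf.pem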
Having done this, the service starts and works perfectly.