Can't control rate limit on Google Cloud Tasks API - google-cloud-tasks

I'm trying to rate limit Google Cloud Tasks to no more than 1 processed task per second.
I've created my queue with:
gcloud tasks queues create my-queue \
  --max-dispatches-per-second=1 \
  --max-concurrent-dispatches=1 \
  --max-attempts=2 \
  --min-backoff=60s
Describing it gives me:
name: projects/my-project/locations/us-central1/queues/my-queue
rateLimits:
  maxBurstSize: 10
  maxConcurrentDispatches: 1
  maxDispatchesPerSecond: 1.0
retryConfig:
  maxAttempts: 2
  maxBackoff: 3600s
  maxDoublings: 16
  minBackoff: 60s
state: RUNNING
After creating a bunch of tasks I can see in the logs that many of them are being processed within the same one-second window, which is not what I want:
2019-07-27 02:37:48 default[20190727t043306] Received task with payload: {'id': 51}
2019-07-27 02:37:48 default[20190727t043306] "POST /my_handler HTTP/1.1" 200
2019-07-27 02:37:49 default[20190727t043306] Received task with payload: {'id': 52}
2019-07-27 02:37:49 default[20190727t043306] "POST /my_handler HTTP/1.1" 200
2019-07-27 02:37:49 default[20190727t043306] Received task with payload: {'id': 53}
2019-07-27 02:37:49 default[20190727t043306] "POST /my_handler HTTP/1.1" 200
2019-07-27 02:37:49 default[20190727t043306] Received task with payload: {'id': 54}
2019-07-27 02:37:49 default[20190727t043306] "POST /my_handler HTTP/1.1" 200
2019-07-27 02:37:49 default[20190727t043306] Received task with payload: {'id': 55}
2019-07-27 02:37:49 default[20190727t043306] "POST /my_handler HTTP/1.1" 200
2019-07-27 02:37:49 default[20190727t043306] Received task with payload: {'id': 56}
2019-07-27 02:37:49 default[20190727t043306] "POST /my_handler HTTP/1.1" 200
2019-07-27 02:37:49 default[20190727t043306] Received task with payload: {'id': 57}
2019-07-27 02:37:49 default[20190727t043306] "POST /my_handler HTTP/1.1" 200
2019-07-27 02:37:49 default[20190727t043306] Received task with payload: {'id': 58}
How do I properly enforce a limit of no more than 1 task dispatched per second?
Update 30/07:
I've tried it again with a basic setup, same issue.
More details on setup and process:
Source code https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/appengine/flexible/tasks, no modifications
Deploy app.yaml, not app.flexible.yaml
Trigger a task several times: python create_app_engine_queue_task.py --project=$PROJECT_ID --queue=$QUEUE_ID --location=$LOCATION_ID --payload=hello (see the client-library sketch after these steps)
Check logs: gcloud app logs read
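For reference, the sample script builds and submits tasks roughly like this with the Cloud Tasks Python client (a simplified sketch assuming the google-cloud-tasks library; the project, location, queue and payload values below are placeholders):

# Sketch: enqueue several App Engine tasks in quick succession.
# Assumes the google-cloud-tasks client library; names are placeholders.
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "my-queue")

for i in range(20):
    task = {
        "app_engine_http_request": {
            "http_method": "POST",
            "relative_uri": "/example_task_handler",
            "body": "hello-{}".format(i).encode(),
        }
    }
    client.create_task(parent=parent, task=task)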
This time they took a while to start processing, but after that it seems they were all processed more or less simultaneously:
Full logs:
2019-07-30 00:22:37 default[20190730t021951] [2019-07-30 00:22:37 +0000] [9] [INFO] Starting gunicorn 19.9.0
2019-07-30 00:22:37 default[20190730t021951] [2019-07-30 00:22:37 +0000] [9] [INFO] Listening at: http://0.0.0.0:8081 (9)
2019-07-30 00:22:37 default[20190730t021951] [2019-07-30 00:22:37 +0000] [9] [INFO] Using worker: threads
2019-07-30 00:22:37 default[20190730t021951] [2019-07-30 00:22:37 +0000] [23] [INFO] Booting worker with pid: 23
2019-07-30 00:22:37 default[20190730t021951] [2019-07-30 00:22:37 +0000] [26] [INFO] Booting worker with pid: 26
2019-07-30 00:27:41 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:27:41 default[20190730t021951] Received task with payload: hello
2019-07-30 00:27:41 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:27:41 default[20190730t021951] Received task with payload: hello
2019-07-30 00:27:41 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:27:41 default[20190730t021951] Received task with payload: hello
2019-07-30 00:37:41 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:37:41 default[20190730t021951] Received task with payload: hello
2019-07-30 00:37:41 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:37:41 default[20190730t021951] Received task with payload: hello
2019-07-30 00:37:41 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:37:41 default[20190730t021951] Received task with payload: hello
2019-07-30 00:37:41 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:37:41 default[20190730t021951] Received task with payload: hello
2019-07-30 00:37:41 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:37:41 default[20190730t021951] Received task with payload: hello
2019-07-30 00:37:41 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:37:41 default[20190730t021951] Received task with payload: hello
2019-07-30 00:37:41 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:37:41 default[20190730t021951] Received task with payload: hello
2019-07-30 00:37:41 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:37:41 default[20190730t021951] Received task with payload: hello
2019-07-30 00:37:41 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:37:41 default[20190730t021951] Received task with payload: hello
2019-07-30 00:37:41 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:37:41 default[20190730t021951] Received task with payload: hello
2019-07-30 00:37:42 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:37:42 default[20190730t021951] Received task with payload: hello
2019-07-30 00:37:43 default[20190730t021951] "POST /example_task_handler HTTP/1.1" 200
2019-07-30 00:37:43 default[20190730t021951] Received task with payload: hello

tl;dr  It's probably working as intended.  You can expect an initial burst of maxBurstSize tasks which then slows down to maxDispatchesPerSecond.
The reason for this is the "token bucket" algorithm: there's a bucket which can hold at most maxBurstSize tokens, and it initially has that many tokens in it. A task is dispatched if its scheduled time has arrived AND there's a token in the bucket AND fewer than maxConcurrentDispatches are in flight; otherwise we wait for those conditions to be met. When a task is dispatched, a token is removed from the bucket. When the bucket is not full, tokens are added at a rate of maxDispatchesPerSecond.
So the rate isn't precisely a limit on task dispatch.  Tasks can be sent at an arbitrary rate as long as there are tokens in the bucket and tasks ready to run.  Only when tasks have to wait for tokens do we have to slow down to the given rate.  Since the bucket starts full you can get an initial burst.
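To make the behaviour concrete, here's a rough simulation of that token bucket (illustration only, not Cloud Tasks' actual implementation; the parameter names mirror the queue settings above):

import time

max_burst_size = 10               # bucket capacity, starts full
max_dispatches_per_second = 1.0   # refill rate

tokens = float(max_burst_size)
last_refill = time.monotonic()

def try_dispatch():
    """Dispatch one ready task if a token is available; return True on success."""
    global tokens, last_refill
    now = time.monotonic()
    # Refill at max_dispatches_per_second, capped at max_burst_size.
    tokens = min(max_burst_size, tokens + (now - last_refill) * max_dispatches_per_second)
    last_refill = now
    if tokens >= 1:
        tokens -= 1
        return True    # dispatched immediately (this is where the burst comes from)
    return False       # caller must wait for the next token

# With a full bucket, the first 10 ready tasks go out back to back; after that,
# dispatches settle to roughly one per second.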
In the Cloud Tasks API and console the bucket size is read-only (the API calls it max_burst_size). But using the older queue.yaml configuration you can control the bucket size along with the other parameters, e.g.
queue:
- name: my-appengine-queue
  rate: 2/s
  bucket_size: 20
  max_concurrent_requests: 5
Then gcloud app deploy queue.yaml.  If you do that however, be aware of these pitfalls: https://cloud.google.com/tasks/docs/queue-yaml#pitfalls
FYI there's an issue open to see if docs can be improved.

The queues created using queue.yaml and the queues created by Cloud Tasks - however you do it - are the same queues. There are, however, issues that can arise if you mix using queue.yaml and Cloud Tasks Queue Management methods. See https://cloud.google.com/tasks/docs/queue-yaml for more information.

Related

HTTP and HTTPS targets under one upstream

I have been trying to add two targets under one upstream, one of which is on HTTP and the other on HTTPS. I am not sure how to achieve that; I tried adding a target like https:10.32.9.123:443, but that didn't work.
It seems like there is a limitation that the targets can either all be on HTTP or all on HTTPS. Is there a workaround for this? My Kong config file looks like below:
_format_version: "2.1"
_transform: true

services:
- name: test-server-public
  protocol: http
  host: test-endpoint-upstream
  port: 8000
  retries: 3
  connect_timeout: 5000
  routes:
  - name: test-route
    paths:
    - /test

upstreams:
- name: test-endpoint-upstream
  targets:
  - target: target-url:8080
    weight: 999
  - target: target-https-url:443
    weight: 1
  healthchecks:
    active:
      concurrency: 2
      http_path: /
      type: http
      healthy:
        interval: 0
        successes: 1
        http_statuses:
        - 200
        - 302
      unhealthy:
        http_failures: 3
        interval: 10
        tcp_failures: 3
        timeouts: 3
        http_statuses:
        - 429
        - 404
        - 500
        - 501
        - 502
        - 503
        - 504
        - 505
    passive:
      type: http
      healthy:
        successes: 1
        http_statuses:
        - 200
        - 201
        - 202
        - 203
        - 204
        - 205
        - 206
        - 207
        - 208
        - 226
        - 300
        - 301
        - 302
        - 303
        - 304
        - 305
        - 306
        - 307
        - 308
      unhealthy:
        http_failures: 1
        tcp_failures: 1
        timeouts: 1
        http_statuses:
        - 429
        - 500
        - 503
  slots: 1000
There's no workaround for that.
You should configure your web server to redirect HTTP to HTTPS instead.
The http-to-https-redirect plugin may be helpful in your case.
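If you'd rather handle the redirect in Kong itself, a declarative sketch of attaching that plugin to the route might look like this (assuming the third-party plugin is installed and listed in kong.conf, e.g. plugins = bundled,http-to-https-redirect; its exact config fields depend on the plugin version):

# Sketch only: attach the plugin to the existing route (inside the service above).
routes:
- name: test-route
  paths:
  - /test
  plugins:
  - name: http-to-https-redirect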

Apache Airflow : Dag task marked zombie, with background process running on remote server

Apache Airflow version: 1.10.9-composer
Kubernetes Version : Client Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.12-gke.6002", GitCommit:"035184604aff4de66f7db7fddadb8e7be76b6717", GitTreeState:"clean", BuildDate:"2020-12-01T23:13:35Z", GoVersion:"go1.12.17b4", Compiler:"gc", Platform:"linux/amd64"}
Environment: Airflow, running on top of Kubernetes - Linux version 4.19.112
OS : Linux version 4.19.112+ (builder#7fc5cdead624) (Chromium OS 9.0_pre361749_p20190714-r4 clang version 9.0.0 (/var/cache/chromeos-cache/distfiles/host/egit-src/llvm-project c11de5eada2decd0a495ea02676b6f4838cd54fb) (based on LLVM 9.0.0svn)) #1 SMP Fri Sep 4 12:00:04 PDT 2020
Kernel : Linux gke-europe-west2-asset-c-default-pool-dc35e2f2-0vgz
4.19.112+ #1 SMP Fri Sep 4 12:00:04 PDT 2020 x86_64 Intel(R) Xeon(R) CPU @ 2.20GHz GenuineIntel GNU/Linux
What happened ?
A running task is marked as a zombie after the execution time crosses the latest heartbeat + 5 minutes.
The task runs in the background on another application server and is triggered using SSHOperator.
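For context, the task is wired up roughly like this (a simplified sketch for Airflow 1.10.x; the connection id, command and schedule are placeholders, not the exact production DAG):

# Simplified sketch of the SSHOperator task described above (Airflow 1.10.x).
# The connection id, command and schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.contrib.operators.ssh_operator import SSHOperator

dag = DAG(
    dag_id="dsp_etrade_process_trds_option_composite_0530",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
)

load_trds_option_composite_file = SSHOperator(
    task_id="load_trds_option_composite_file",
    ssh_conn_id="etrade_app_server",                        # placeholder SSH connection
    command="/opt/etl/load_trds_option_composite.sh full",  # long-running remote script (placeholder)
    dag=dag,
)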
[2021-01-18 11:53:37,491] {taskinstance.py:888} INFO - Executing <Task(SSHOperator): load_trds_option_composite_file> on 2021-01-17T11:40:00+00:00
[2021-01-18 11:53:37,495] {base_task_runner.py:131} INFO - Running on host: airflow-worker-6f6fd78665-lm98m
[2021-01-18 11:53:37,495] {base_task_runner.py:132} INFO - Running: ['airflow', 'run', 'dsp_etrade_process_trds_option_composite_0530', 'load_trds_option_composite_file', '2021-01-17T11:40:00+00:00', '--job_id', '282759', '--pool', 'default_pool', '--raw', '-sd', 'DAGS_FOLDER/dsp_etrade_trds_option_composite_0530.py', '--cfg_path', '/tmp/tmpge4_nva0']
Task execution time:
dag_id dsp_etrade_process_trds_option_composite_0530
duration 7270.47
start_date 2021-01-18 11:53:37,491
end_date 2021-01-18 13:54:47.799728+00:00
Scheduler Logs during that time:
[2021-01-18 13:54:54,432] {taskinstance.py:1135} ERROR - <TaskInstance: dsp_etrade_process_etrd.push_run_date 2021-01-18 13:30:00+00:00 [running]> detected as zombie
{
textPayload: "[2021-01-18 13:54:54,432] {taskinstance.py:1135} ERROR - <TaskInstance: dsp_etrade_process_etrd.push_run_date 2021-01-18 13:30:00+00:00 [running]> detected as zombie"
insertId: "1ca8zyfg3zvma66"
resource: {
type: "cloud_composer_environment"
labels: {3}
}
timestamp: "2021-01-18T13:54:54.432862699Z"
severity: "ERROR"
logName: "projects/asset-control-composer-prod/logs/airflow-scheduler"
receiveTimestamp: "2021-01-18T13:54:55.714437665Z"
}
Airflow-webserver log :
X.X.X.X - - [18/Jan/2021:13:54:39 +0000] "GET /_ah/health HTTP/1.1" 200 187 "-" "GoogleHC/1.0"
{
textPayload: "172.17.0.5 - - [18/Jan/2021:13:54:39 +0000] "GET /_ah/health HTTP/1.1" 200 187 "-" "GoogleHC/1.0"
"
insertId: "1sne0gqg43o95n3"
resource: {2}
timestamp: "2021-01-18T13:54:45.401670481Z"
logName: "projects/asset-control-composer-prod/logs/airflow-webserver"
receiveTimestamp: "2021-01-18T13:54:50.598807514Z"
}
Airflow Info logs :
2021-01-18 08:54:47.799 EST
{
textPayload: "NoneType: None
"
insertId: "1ne3hqgg47yzrpf"
resource: {2}
timestamp: "2021-01-18T13:54:47.799661030Z"
severity: "INFO"
logName: "projects/asset-control-composer-prod/logs/airflow-scheduler"
receiveTimestamp: "2021-01-18T13:54:50.914461159Z"
}
[2021-01-18 13:54:47,800] {taskinstance.py:1192} INFO - Marking task as FAILED.dag_id=dsp_etrade_process_trds_option_composite_0530, task_id=load_trds_option_composite_file, execution_date=20210117T114000, start_date=20210118T115337, end_date=20210118T135447
{
textPayload: "[2021-01-18 13:54:47,800] {taskinstance.py:1192} INFO - Marking task as FAILED.dag_id=dsp_etrade_process_trds_option_composite_0530, task_id=load_trds_option_composite_file, execution_date=20210117T114000, start_date=20210118T115337, end_date=20210118T135447"
insertId: "1ne3hqgg47yzrpg"
resource: {2}
timestamp: "2021-01-18T13:54:47.800605248Z"
severity: "INFO"
logName: "projects/asset-control-composer-prod/logs/airflow-scheduler"
receiveTimestamp: "2021-01-18T13:54:50.914461159Z"
}
Airflow Database shows the latest heartbeat as:
select state, latest_heartbeat from job where id=282759
 state   | latest_heartbeat
---------+----------------------------
 running | 2021-01-18 13:48:41.891934
Airflow Configurations:
[celery]
worker_concurrency = 6

[scheduler]
scheduler_health_check_threshold = 60
scheduler_zombie_task_threshold = 300
max_threads = 2

[core]
dag_concurrency = 6
Kubernetes Cluster :
Worker nodes : 6
What was expected to happen?
The backend process takes around 2 hours 30 minutes to finish. During such long-running jobs the task is detected as a zombie, even though the worker node is still processing the task and the job's state is still marked as 'running'. The state of the task is not known during the run time.

Why does my website get many "/favicon.ico" requests?

I know that the browser will automatically request "/favicon.ico".
But many requests never visit my pages; they only request "/favicon.ico".
They don't have a Referer or User-Agent.
The nginx log looks like this:
47.88.135.135 - - [02/Sep/2019:03:31:22 -0400] "HEAD /favicon.ico HTTP/1.1" 404 0 "-" "-"
47.74.138.142 - - [02/Sep/2019:03:31:24 -0400] "HEAD /favicon.ico HTTP/1.1" 404 0 "-" "-"
47.254.113.21 - - [02/Sep/2019:03:31:24 -0400] "HEAD /favicon.ico HTTP/1.1" 404 0 "-" "-"
64.71.142.65 - - [02/Sep/2019:03:31:25 -0400] "HEAD /favicon.ico HTTP/1.1" 404 0 "-" "-"
Then I added a "favicon.ico" file, but I still receive a lot of "/favicon.ico" requests, and the nginx log looks like:
161.117.143.163 - - [02/Sep/2019:06:25:26 -0400] "HEAD /favicon.ico HTTP/1.1" 200 0 "-" "-"
64.71.142.65 - - [02/Sep/2019:06:25:26 -0400] "HEAD /favicon.ico HTTP/1.1" 200 0 "-" "-"
47.91.195.140 - - [02/Sep/2019:06:25:27 -0400] "HEAD /favicon.ico HTTP/1.1" 200 0 "-" "-"
80.231.126.202 - - [02/Sep/2019:06:25:27 -0400] "HEAD /favicon.ico HTTP/1.1" 200 0 "-" "-"
47.244.73.59 - - [02/Sep/2019:06:25:28 -0400] "HEAD /favicon.ico HTTP/1.1" 200 0 "-" "-"
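For what it's worth, a common nginx way to serve the icon and keep these probe requests out of the access log is a dedicated location block like this sketch (the root path is a placeholder):

# Sketch: serve favicon.ico and stop logging the probe requests.
location = /favicon.ico {
    root /var/www/html;   # placeholder path to the directory holding favicon.ico
    access_log off;
    log_not_found off;
}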

Alfresco: Error accepting site invitation happening randomly

I am facing an issue when accepting a site invitation from email: when I click the accept link in the email I get an error, but the user is still added to the site.
It works fine if I already have a browser session open for Share. When I close the browser and then try to accept the link, it errors out. When I get the error I see the accept-link request twice in the logs: the first request succeeds and the second one fails.
"Processing invite acceptance failed Unfortunately, your invite acceptance could not be registered. Either you have already accepted or rejected the invite, or the inviter cancelled your invitation."
Fiddler session summary from client side:
Result Protocol Host URL Body Caching Content-Type Process Comments Custom
1 200 HTTPS www.fiddler2.com /UpdateCheck.aspx?isBeta=False 659 private text/plain; charset=utf-8
2 200 HTTP Tunnel to safebrowsing.google.com:443 2,018 chrome:13808 [#1]
3 200 HTTP share.xxx.com /share/page/accept-invite?inviteId=activiti$2287867&inviteeUserName=user1&siteShortName=test-invitation-site&inviteTicket=30457b99-a49f-4159-a304-8730920bc932 17,708 no-cache text/html;charset=utf-8 chrome:13808 [#2]
4 401 HTTP share.xxx.com /share/page/site-index?site=test-invitation-site 164 text/html;charset=ISO-8859-1 chrome:13808 [#3]
5 302 HTTP share.xxx.com /share/page/site-index?site=test-invitation-site 0 no-cache text/html;charset=utf-8 chrome:13808 [#4]
6 401 HTTP share.xxx.com /share/proxy/alfresco/api/people/guest/preferences 975 no-cache; Expires: Thu, 01 Jan 1970 00:00:00 GMT text/html;charset=utf-8 chrome:13808 [#5]
7 200 HTTP share.xxx.com /share/page/accept-invite?inviteId=activiti$2287867&inviteeUserName=user1&siteShortName=test-invitation-site&inviteTicket=30457b99-a49f-4159-a304-8730920bc932 20,767 no-cache text/html;charset=utf-8 chrome:13808 [#6]
here is the access log:
10.64.11.250 - A11949874768A1CAD071FD9CD9DF5BB9.alfresco02 - [28/Feb/2017:16:14:53 -0500] "GET /share/page/accept-invite?inviteId=activiti$2233022&inviteeUserName=myuser&siteShortName=test-invitation-error&inviteTicket=7ce8693f-e105-4eee-b53a-7e605cb28fba HTTP/1.1" 200 17683 200
127.0.0.1 - - - [28/Feb/2017:16:14:53 -0500] "PUT /alfresco/s/api/invite/activiti$2233022/7ce8693f-e105-4eee-b53a-7e605cb28fba/accept?inviteeUserName=myuser HTTP/1.1" 200 79 187
127.0.0.1 - 5FF1ACC4670F3607F8104779F3BB8E0A.alfresco02 - [28/Feb/2017:16:14:54 -0500] "GET /alfresco/wcs/touch HTTP/1.1" 200 - 3
10.64.11.249 - 345B5286F8F72790BBF8484D125344F5.alfresco02 - [28/Feb/2017:16:14:54 -0500] "GET /share/keepAlive.jsp HTTP/1.1" 200 317 1
127.0.0.1 - 70C5283382FC5038FAEF2603CCAE82B8.alfresco02 - [28/Feb/2017:16:14:54 -0500] "GET /alfresco/wcs/touch HTTP/1.1" 401 162 1
10.64.11.250 - A11949874768A1CAD071FD9CD9DF5BB9.alfresco02 - [28/Feb/2017:16:14:54 -0500] "GET /share/page/site-index?site=test-invitation-error HTTP/1.1" 401 164 3
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:54 -0500] "GET /alfresco/wcs/touch HTTP/1.1" 401 162 0
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:54 -0500] "GET /alfresco/wcs/touch HTTP/1.1" 200 - 31
10.64.11.251 - BC8EDCF38E2520AD658140F65C3B1E57.alfresco02 - [28/Feb/2017:16:14:54 -0500] "GET /share/keepAlive.jsp HTTP/1.1" 200 317 1
127.0.0.1 - 5FF1ACC4670F3607F8104779F3BB8E0A.alfresco02 - [28/Feb/2017:16:14:54 -0500] "GET /alfresco/wcs/common/downloadFiles?nodeRef=workspace://SpacesStore/439802eb-bc72-479e-b5d5-ba88f717600c&a=true HTTP/1.1" 200 2588898 496
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:54 -0500] "GET /alfresco/wcs/touch HTTP/1.1" 200 - 4
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/api/people/myuser?groups=true HTTP/1.1" 200 4809 656
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/webframework/content/metadata?user=myuser HTTP/1.1" 200 5082 46
10.64.11.250 - A11949874768A1CAD071FD9CD9DF5BB9.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /share/page/site-index?site=test-invitation-error HTTP/1.1" 302 - 757
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/remoteadm/has/alfresco/site-data/pages/user/myuser/dashboard.xml?s=sitestore HTTP/1.1" 200 4 7
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/api/people/myuser?groups=true HTTP/1.1" 200 4809 595
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/touch HTTP/1.1" 200 - 4
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/remoteadm/has/alfresco/site-data/pages/site/test-invitation-error/dashboard.xml?s=sitestore HTTP/1.1" 200 4 12
10.64.11.250 - A11949874768A1CAD071FD9CD9DF5BB9.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /share/proxy/alfresco/api/people/guest/preferences HTTP/1.1" 401 975 616
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/api/people/guest/preferences HTTP/1.1" 401 514 15
127.0.0.1 - B6FE21069F0F5E7F1282FD2596560B63.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/touch HTTP/1.1" 200 - 1
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/remoteadm/get/alfresco/site-data/pages/site/test-invitation-error/dashboard.xml?s=sitestore HTTP/1.1" 200 657 32
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/api/people/myuser/preferences HTTP/1.1" 200 402 17
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/api/sites/test-invitation-error HTTP/1.1" 200 514 13
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/api/admin/usage HTTP/1.1" 200 288 14
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/remoteadm/has/alfresco/site-data/components/page.full-width-dashlet.site~test-invitation-error~dashboard.xml?s=sitestore HTTP/1.1" 200 4 15
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/remoteadm/get/alfresco/site-data/components/page.full-width-dashlet.site~test-invitation-error~dashboard.xml?s=sitestore HTTP/1.1" 200 423 24
127.0.0.1 - D1754CF6E7351D045BD6B15D6764890B.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /alfresco/wcs/remoteadm/has/alfresco/site-data/components/page.component-1-1.site~test-invitation-error~dashboard.xml?s=sitestore HTTP/1.1" 200 4 8
127.0.0.1 - - - [28/Feb/2017:16:14:55 -0500] "PUT /alfresco/s/api/invite/activiti$2233022/7ce8693f-e105-4eee-b53a-7e605cb28fba/accept?inviteeUserName=myuser HTTP/1.1" 409 5993 47
10.64.11.250 - A11949874768A1CAD071FD9CD9DF5BB9.alfresco02 - [28/Feb/2017:16:14:55 -0500] "GET /share/page/accept-invite?inviteId=activiti$2233022&inviteeUserName=myuser&siteShortName=test-invitation-error&inviteTicket=7ce8693f-e105-4eee-b53a-7e605cb28fba HTTP/1.1" 200 19476 95
Here is alfresco.log
16:14:55,356 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] Exception from executeScript - redirecting to status template error: 012873015 org.alfresco.service.cmr.invitation.InvitationExceptionUserError: 0128576624 "Invitation, activiti$2233022 has already been accepted, cancelled or rejected"
org.springframework.extensions.webscripts.WebScriptException: 012873015 org.alfresco.service.cmr.invitation.InvitationExceptionUserError: 0128576624 "Invitation, activiti$2233022 has already been accepted, cancelled or rejected"
at org.alfresco.repo.web.scripts.invite.InviteResponse.execute(InviteResponse.java:146)
at org.alfresco.repo.web.scripts.invite.InviteResponse.access$000(InviteResponse.java:45)
at org.alfresco.repo.web.scripts.invite.InviteResponse$1.doWork(InviteResponse.java:105)
at org.alfresco.repo.web.scripts.invite.InviteResponse$1.doWork(InviteResponse.java:94)
at org.alfresco.repo.tenant.TenantUtil.runAsWork(TenantUtil.java:119)
at org.alfresco.repo.tenant.TenantUtil.runAsTenant(TenantUtil.java:88)
at org.alfresco.repo.tenant.TenantUtil$1.doWork(TenantUtil.java:62)
at org.alfresco.repo.security.authentication.AuthenticationUtil.runAs(AuthenticationUtil.java:548)
at org.alfresco.repo.tenant.TenantUtil.runAsUserTenant(TenantUtil.java:58)
at org.alfresco.repo.tenant.TenantUtil.runAsSystemTenant(TenantUtil.java:112)
at org.alfresco.repo.web.scripts.invite.InviteResponse.executeImpl(InviteResponse.java:93)
at org.springframework.extensions.webscripts.DeclarativeWebScript.executeImpl(DeclarativeWebScript.java:235)
at org.springframework.extensions.webscripts.DeclarativeWebScript.__AW_execute(DeclarativeWebScript.java:64)
at org.springframework.extensions.webscripts.DeclarativeWebScript.execute(DeclarativeWebScript.java)
at org.alfresco.repo.web.scripts.RepositoryContainer$3.__AW_execute(RepositoryContainer.java:489)
at org.alfresco.repo.web.scripts.RepositoryContainer$3.execute(RepositoryContainer.java)
at org.alfresco.repo.transaction.RetryingTransactionHelper.__AW_doInTransaction(RetryingTransactionHelper.java:457)
at org.alfresco.repo.transaction.RetryingTransactionHelper.doInTransaction(RetryingTransactionHelper.java)
at org.alfresco.repo.web.scripts.RepositoryContainer.transactionedExecute(RepositoryContainer.java:551)
at org.alfresco.repo.web.scripts.RepositoryContainer.transactionedExecuteAs(RepositoryContainer.java:619)
at org.alfresco.repo.web.scripts.RepositoryContainer.executeScriptInternal(RepositoryContainer.java:326)
at org.alfresco.repo.web.scripts.RepositoryContainer.executeScript(RepositoryContainer.java:280)
at org.springframework.extensions.webscripts.AbstractRuntime.executeScript(AbstractRuntime.java:378)
at org.springframework.extensions.webscripts.AbstractRuntime.__AW_executeScript(AbstractRuntime.java:209)
at org.springframework.extensions.webscripts.AbstractRuntime.executeScript(AbstractRuntime.java)
at org.springframework.extensions.webscripts.servlet.WebScriptServlet.service(WebScriptServlet.java:132)
at javax.servlet.http.HttpServlet.__AW_service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.alfresco.web.app.servlet.GlobalLocalizationFilter.doFilter(GlobalLocalizationFilter.java:61)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:683)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1074)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)

HTTP crawler in Erlang

I'm writing a simple HTTP crawler, but I have an issue running the code at the bottom. I request 50 URLs and get the content of only 20+ of them back. I've generated a few files of 150 kB each to test the crawler. So are the 20+ responses limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file has been fetched? The test data server is online, so please try the code out; any hints are welcome :)
-module(crawler).
-define(BASE_URL, "http://46.4.117.69/").
-export([start/0, send_reqs/0, do_send_req/1]).

start() ->
    ibrowse:start(),
    proc_lib:spawn(?MODULE, send_reqs, []).

to_url(Id) ->
    ?BASE_URL ++ integer_to_list(Id).

fetch_ids() ->
    lists:seq(1, 50).

send_reqs() ->
    spawn_workers(fetch_ids()).

spawn_workers(Ids) ->
    lists:foreach(fun do_spawn/1, Ids).

do_spawn(Id) ->
    proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

do_send_req(Id) ->
    io:format("Requesting ID ~p ... ~n", [Id]),
    Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
    case Result of
        {ok, Status, _H, B} ->
            io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]);
        Err ->
            io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
    end.
That's the output:
Requesting ID 1 ...
Requesting ID 2 ...
Requesting ID 3 ...
Requesting ID 4 ...
Requesting ID 5 ...
Requesting ID 6 ...
Requesting ID 7 ...
Requesting ID 8 ...
Requesting ID 9 ...
Requesting ID 10 ...
Requesting ID 11 ...
Requesting ID 12 ...
Requesting ID 13 ...
Requesting ID 14 ...
Requesting ID 15 ...
Requesting ID 16 ...
Requesting ID 17 ...
Requesting ID 18 ...
Requesting ID 19 ...
Requesting ID 20 ...
Requesting ID 21 ...
Requesting ID 22 ...
Requesting ID 23 ...
Requesting ID 24 ...
Requesting ID 25 ...
Requesting ID 26 ...
Requesting ID 27 ...
Requesting ID 28 ...
Requesting ID 29 ...
Requesting ID 30 ...
Requesting ID 31 ...
Requesting ID 32 ...
Requesting ID 33 ...
Requesting ID 34 ...
Requesting ID 35 ...
Requesting ID 36 ...
Requesting ID 37 ...
Requesting ID 38 ...
Requesting ID 39 ...
Requesting ID 40 ...
Requesting ID 41 ...
Requesting ID 42 ...
Requesting ID 43 ...
Requesting ID 44 ...
Requesting ID 45 ...
Requesting ID 46 ...
Requesting ID 47 ...
Requesting ID 48 ...
Requesting ID 49 ...
Requesting ID 50 ...
OK -- ID: 49 -- Status: "200" -- Content length: 150000
OK -- ID: 47 -- Status: "200" -- Content length: 150000
OK -- ID: 50 -- Status: "200" -- Content length: 150000
OK -- ID: 17 -- Status: "200" -- Content length: 150000
OK -- ID: 48 -- Status: "200" -- Content length: 150000
OK -- ID: 45 -- Status: "200" -- Content length: 150000
OK -- ID: 46 -- Status: "200" -- Content length: 150000
OK -- ID: 10 -- Status: "200" -- Content length: 150000
OK -- ID: 09 -- Status: "200" -- Content length: 150000
OK -- ID: 19 -- Status: "200" -- Content length: 150000
OK -- ID: 13 -- Status: "200" -- Content length: 150000
OK -- ID: 21 -- Status: "200" -- Content length: 150000
OK -- ID: 16 -- Status: "200" -- Content length: 150000
OK -- ID: 27 -- Status: "200" -- Content length: 150000
OK -- ID: 03 -- Status: "200" -- Content length: 150000
OK -- ID: 23 -- Status: "200" -- Content length: 150000
OK -- ID: 29 -- Status: "200" -- Content length: 150000
OK -- ID: 14 -- Status: "200" -- Content length: 150000
OK -- ID: 18 -- Status: "200" -- Content length: 150000
OK -- ID: 01 -- Status: "200" -- Content length: 150000
OK -- ID: 30 -- Status: "200" -- Content length: 150000
OK -- ID: 40 -- Status: "200" -- Content length: 150000
OK -- ID: 05 -- Status: "200" -- Content length: 150000
Update:
Thanks stemm for the hint with wait_workers. I've combined your code and mine, but I see the same behaviour :(
-module(crawler).
-define(BASE_URL, "http://46.4.117.69/").
-export([start/0, send_reqs/0, do_send_req/2]).

start() ->
    ibrowse:start(),
    proc_lib:spawn(?MODULE, send_reqs, []).

to_url(Id) ->
    ?BASE_URL ++ integer_to_list(Id).

fetch_ids() ->
    lists:seq(1, 50).

send_reqs() ->
    spawn_workers(fetch_ids()).

spawn_workers(Ids) ->
    %% collect reference to each worker
    Refs = [ do_spawn(Id) || Id <- Ids ],
    %% wait for response from each worker
    wait_workers(Refs).

wait_workers(Refs) ->
    lists:foreach(fun receive_by_ref/1, Refs).

receive_by_ref(Ref) ->
    %% receive message only from worker with specific reference
    receive
        {Ref, done} ->
            done
    end.

do_spawn(Id) ->
    Ref = make_ref(),
    proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
    Ref.

do_send_req(Id, {Pid, Ref}) ->
    io:format("Requesting ID ~p ... ~n", [Id]),
    Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
    case Result of
        {ok, Status, _H, B} ->
            io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]),
            %% send message that work is done
            Pid ! {Ref, done};
        Err ->
            io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
            %% repeat request if there was error while fetching a page,
            do_send_req(Id, {Pid, Ref})
            %% or - if you don't want to repeat request, put there:
            %% Pid ! {Ref, done}
    end.
Running the crawler works fine for a handful of files, but then the code doesn't even fetch the files completely (each file is 150000 bytes): the crawler fetches some files only partially, see the following web server log :(
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"
Any hints are welcome. I have no clue what's going wrong there :(
So, if I've understood you correctly, you don't want to return control from the spawn_workers function until every worker has stopped (and fetched its page)? If so, you may change your code this way:
spawn_workers(Ids) ->
    %% collect reference to each worker
    Refs = [ do_spawn(Id) || Id <- Ids ],
    %% wait for response from each worker
    wait_workers(Refs).

wait_workers(Refs) ->
    lists:foreach(fun receive_by_ref/1, Refs).

receive_by_ref(Ref) ->
    %% receive message only from worker with specific reference
    receive
        {Ref, done} ->
            done
    end.

do_spawn(Id) ->
    Ref = make_ref(),
    proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
    Ref.

do_send_req(Id, {Pid, Ref}) ->
    io:format("Requesting ID ~p ... ~n", [Id]),
    Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
    case Result of
        {ok, Status, _H, B} ->
            io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]),
            %% send message that work is done
            Pid ! {Ref, done};
        Err ->
            io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
            %% repeat request if there was error while fetching a page,
            do_send_req(Id, {Pid, Ref})
            %% or - if you don't want to repeat request, put there:
            %% Pid ! {Ref, done}
    end.
Edit:
I've noticed that your entry point (the start function) returns control without waiting for all workers to finish their tasks (because you call spawn there). If you want to wait there too, just do a similar trick:
start() ->
    ibrowse:start(),
    Ref = make_ref(),
    proc_lib:spawn(?MODULE, send_reqs, [self(), Ref]),
    receive_by_ref(Ref).

send_reqs(Pid, Ref) ->
    spawn_workers(fetch_ids()),
    Pid ! {Ref, done}.
You can use a combination of supervisors and the queue module: spawn N fetching children, each child fetches one item from the queue and processes it; when done, it notifies the parent process to continue with the next item in the queue. That way you can put a cap on the number of concurrent requests.
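A rough sketch of that idea, reusing do_spawn/1 from above (illustration only; a real setup would put this under a supervisor):

%% Sketch: cap concurrency by handing out at most MaxConcurrent ids at once and
%% starting the next request only when a worker reports {_Ref, done} back.
run_pool(Ids, MaxConcurrent) ->
    {Active, Queued} = lists:split(min(MaxConcurrent, length(Ids)), Ids),
    [do_spawn(Id) || Id <- Active],
    pool_loop(Queued, length(Active)).

pool_loop(_Queued, 0) ->
    done;                              % nothing left in flight
pool_loop([], InFlight) ->
    receive {_Ref, done} -> pool_loop([], InFlight - 1) end;
pool_loop([Next | Rest], InFlight) ->
    receive
        {_Ref, done} ->
            do_spawn(Next),            % replace the finished worker
            pool_loop(Rest, InFlight)
    end.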
If you spawn 500 requests at a time, ibrowse might get confused. Do you get any errors in the console?
See ibrowse:get_config_value/1 and ibrowse:set_config_value/2
