In the taskman section of the hallway config there is a block about riak. I wonder if riak is necessary to run the API.
"taskman": {
"numWorkers": 4,
"pagingTiming": 2000,
"defaultScanTime": 5000,
"attempts": 8,
"heartbeat": 10000,
"store": {
"type": "riak",
"servers": ["localhost:8098"]
},
"redis": {
"host": "localhost",
"port": 6379
}
}
If you look at the commit that added that on github:
https://github.com/Singly/hallway/commit/5eb59ae0185373c89f401cf5024f35cb2719f033
It would appear that, as of 17 days ago, yes: Riak is the default K/V store for task metadata in the config.
Poking around a bit more shows that this is user-configurable and can be changed to 'fs' or 'mem':
https://github.com/Singly/hallway/pull/758#issuecomment-12725475
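For example, switching the task-metadata store to the in-memory backend for local development would presumably look like this (a sketch based on the config above; I haven't verified which extra keys the 'mem' and 'fs' types accept):
"store": {
    "type": "mem"
}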
I am working on an e-commerce site using Next.js. I am trying to improve the page load speed; this website loads a lot of JS files due to the number of third-party vendors it has (which I can't remove). I am planning to cache some static assets with Service Workers.
I am going to use the library next-offline, which uses the workbox-webpack-plugin. This is the configuration I am planning to use:
workboxOpts: {
  swDest: '../public/service-worker.js',
  maximumFileSizeToCacheInBytes: 20000000,
  runtimeCaching: [
    {
      urlPattern: /https:\/\/fonts\.googleapis\.com\/icon[\w\-_\/\.\:\?\=\&\+]*/,
      handler: 'CacheFirst',
      options: {
        cacheName: 'google-fonts',
        expiration: {
          maxEntries: 10,
          maxAgeSeconds: 30 * 24 * 60 * 60, // 1 month
        },
      },
    },
    {
      urlPattern: /[\w\-_\/\.:]*\.(jpeg|png|jpg|ico|svg)\??.*/,
      handler: 'CacheFirst',
      options: {
        cacheName: 'cache-img',
        expiration: {
          maxEntries: 15,
          maxAgeSeconds: 2 * 24 * 60 * 60, // 2 days
        },
      },
    },
    {
      urlPattern: /^((?!monetate).)*\.(js)$/,
      handler: 'StaleWhileRevalidate',
    },
    {
      urlPattern: /\/(browse.*|catalogsearch.*)$/,
      handler: 'StaleWhileRevalidate',
    },
  ],
},
So, my questions are the following:
Do you think this configuration is risky? Would you change anything? I had several problems in the past with Service Workers caching JS files, where I had to set a version for every file to make it work. However, it now seems that workbox has fixed this issue.
Should I set a maxAge for the StaleWhileRevalidate strategy? I want to revalidate the data every time the user reloads the page. However, here it says that the cache will be used without revalidating until the maxAge time has elapsed. What happens if I don't set a maxAge in the workbox settings?
I am testing this on a Vercel deploy (in a testing environment); it seems to be working fine, and I am thinking of moving it to production.
Thanks,
You can use next-offline (I haven't used it much), or you can go ahead with workbox-webpack-plugin directly. Workbox takes care of which files to refetch after caching (it uses a revision and/or a hash on the URL).
Read this section : https://developers.google.com/web/tools/workbox/modules/workbox-precaching#how_workbox-precaching_works
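On the maxAge question: as I understand StaleWhileRevalidate, it answers from the cache and always fires a background network request to refresh the cached entry, so leaving maxAgeSeconds unset just means entries never expire by age; each reload still triggers a revalidation, and the next load picks up the new file. If you still want a cap, a sketch of adding expiration options to your JS entry could look like this (the cacheName and the maxEntries/maxAgeSeconds values are placeholders of mine):
{
  urlPattern: /^((?!monetate).)*\.(js)$/,
  handler: 'StaleWhileRevalidate',
  options: {
    cacheName: 'cache-js',           // placeholder name
    expiration: {
      maxEntries: 50,                // placeholder limit
      maxAgeSeconds: 24 * 60 * 60,   // 1 day, placeholder
    },
  },
},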
I deploy 2 resources where one depends on the other, but there seems to be a delay between the first resource becoming fully operational and the second resource being deployed. Code is below. The first resource being deployed is a DNS record pointing to the App Service, and the second resource adds a custom hostname binding to the App Service. The issue is that there seems to be a delay of up to 30 seconds before the App Service is able to validate that the DNS record is available. Is it possible to somehow add a small delay between resource deployments, since just using dependsOn is not sufficient in this case?
{
    "apiVersion": "2020-09-01",
    "name": "[concat(parameters('webAppName'), '-mysite','/mysite.', variables('dnsZoneName'))]",
    "type": "Microsoft.Web/sites/hostNameBindings",
    "location": "[variables('location')]",
    "dependsOn": [
        "[resourceId('Microsoft.Network/dnszones/CNAME', variables('dnsZoneName'), 'mysite')]"
    ],
    "properties": {
        "domainId": null,
        "siteName": "[concat(parameters('webAppName'), '-mysite')]",
        "customHostNameDnsRecordType": "CName",
        "hostNameType": "Verified"
    }
},
{
    "type": "Microsoft.Network/dnszones/CNAME",
    "apiVersion": "2018-05-01",
    "dependsOn": [
        "[concat(parameters('webAppName'), '-mysite')]"
    ],
    "name": "[concat(variables('dnsZoneName'), '/mysite')]",
    "properties": {
        "TTL": 3600,
        "CNAMERecord": {
            "cname": "[reference(concat(parameters('webAppName'), '-mysite'), '2016-03-01', 'Full').properties.defaultHostName]"
        },
        "targetResource": {}
    }
},
No, it's not possible to do that directly, but you have a couple of alternatives:
- Deploy a dummy resource between the two; you can find a resource type that doesn't cost anything.
- Do some fancy stuff with nested templates, like calling an empty nested template 10 times in a row (in sequence, not in parallel).
- Use a deploymentScript resource to just issue a sleep 30 command.
To give an example of a deployment script that can sleep:
I would put this in its own file so it can be used as a module in multiple places.
BICEP
param location string = resourceGroup().location
param utcValue string = utcNow()
param sleepName string = 'sleep-1'
param sleepSeconds int = 30

resource sleepDelay 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
  name: sleepName
  location: location
  kind: 'AzurePowerShell'
  properties: {
    forceUpdateTag: utcValue
    azPowerShellVersion: '8.3'
    timeout: 'PT10M'
    arguments: '-seconds ${sleepSeconds}'
    scriptContent: '''
      param ( [string] $seconds )
      Write-Output "Sleeping for: $seconds ...."
      Start-Sleep -Seconds $seconds
      Write-Output "Sleep over - resuming ...."
    '''
    cleanupPreference: 'OnSuccess'
    retentionInterval: 'P1D'
  }
}
You can compile this with: az bicep build --file module_name.bicep to get the ARM version...
ARM
{
    "type": "Microsoft.Resources/deploymentScripts",
    "apiVersion": "2020-10-01",
    "name": "[parameters('sleepName')]",
    "location": "[parameters('location')]",
    "kind": "AzurePowerShell",
    "properties": {
        "forceUpdateTag": "[parameters('utcValue')]",
        "azPowerShellVersion": "8.3",
        "timeout": "PT10M",
        "arguments": "[format('-seconds {0}', parameters('sleepSeconds'))]",
        "scriptContent": "param ( [string] $seconds )\nWrite-Output \"Sleeping for: $seconds ....\"\nStart-Sleep -Seconds $seconds\nWrite-Output \"Sleep over - resuming ....\"\n",
        "cleanupPreference": "OnSuccess",
        "retentionInterval": "P1D"
    }
}
You must also ensure that any resources you want to delay depend on this module/resource; otherwise they will run in parallel, not after the delay...
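To illustrate the wiring, a minimal sketch of consuming that file as a module from the main Bicep deployment (the './sleep.bicep' path and the symbolic names cnameRecord/sleepDelay are mine, not from the original template):
// assumes the script above was saved as sleep.bicep next to the main template
module sleepDelay './sleep.bicep' = {
  name: 'sleepDelay'
  params: {
    sleepSeconds: 30
  }
  dependsOn: [
    cnameRecord // symbolic name of the Microsoft.Network/dnszones/CNAME resource
  ]
}
// ...and the Microsoft.Web/sites/hostNameBindings resource then declares
// dependsOn: [ sleepDelay ] so it deploys only after the pause.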
I'm trying to run Scrapoxy with Digital Ocean. I successfully created a droplet image and configured Scrapoxy.
When I start Scrapoxy, it keeps creating new instances, bypassing the max limit. It stops only when it reaches 10 droplets. What annoys me is that the GUI shows "No instance found". Also, when I test the proxy server I get this message: "Error: No running instance found". It seems that Scrapoxy only creates droplets and can't connect to them.
I installed Scrapoxy manually. Here is my config file:
{
    "commander": {
        "password": ".........."
    },
    "instance": {
        "port": 3128,
        "scaling": {
            "min": 1,
            "max": 2
        }
    },
    "providers": [
        {
            "type": "digitalocean",
            "token": "5204b9654f301.............c281036bd19e283321c09680ac9c",
            "region": "FRA1",
            "size": "s-1vcpu-1gb",
            "sshKeyName": "scrapoxy",
            "imageName": "forward-proxy",
            "tags": "Proxy,Amazon"
        }
    ]
}
Did you try putting your region in lowercase in the config file?
For example, like this:
"region": "fra1"
instead of
"region": "FRA1"
There are other troubleshooting steps you could take on the following GitHub issue pages: Issue 84 & Issue 62.
I have been trying to use requests v2.19.1 in Python 3.6.5 to download a ~2GB file from a remote URL. However, I have been repeatedly facing an issue where the code seems to get stuck forever in the for loop while trying to download the data.
My code snippet:
with requests.get(self.model_url, stream=True, headers=headers) as response:
    if response.status_code not in [200, 201]:
        raise Exception(
            'Error downloading model({}). Got response code {} with content {}'.format(
                self.model_id,
                response.status_code,
                response.content
            )
        )
    with open(self.download_path, 'wb') as f:
        for chunk in response.iter_content(chunk_size=1024):
            if chunk:
                f.write(chunk)
Each time I try to run this code, the download seems to stop at different points, and rarely reaches completion.
I have tried playing around with different chunk sizes, but I still keep seeing this issue.
Some additional details:
python -m requests.help
{
    "chardet": {
        "version": "3.0.4"
    },
    "cryptography": {
        "version": "2.3.1"
    },
    "idna": {
        "version": "2.7"
    },
    "implementation": {
        "name": "CPython",
        "version": "3.6.5"
    },
    "platform": {
        "release": "3.10.0-693.11.1.el7.x86_64",
        "system": "Linux"
    },
    "pyOpenSSL": {
        "openssl_version": "1010009f",
        "version": "18.0.0"
    },
    "requests": {
        "version": "2.19.1"
    },
    "system_ssl": {
        "version": "100020bf"
    },
    "urllib3": {
        "version": "1.23"
    },
    "using_pyopenssl": true
}
Has anyone else faced a similar issue? If so, how did you resolve it?
It seems that if there is any interruption to the network during the download, the stream hangs and the connection goes dead. However, because no timeout is specified, the code keeps expecting more packets to arrive over the dead connection. The best way I have found to handle this is to set a reasonable timeout. Once the timeout is reached after the last received packet, the code exits the for loop with an exception, which can be handled.
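A minimal sketch of what that could look like (the 60-second read timeout, the placeholder URL/path, and the exception handling are illustrative choices, not from the original code):
import requests

MODEL_URL = "https://example.com/model.bin"  # placeholder URL
DOWNLOAD_PATH = "model.bin"                  # placeholder path

def download(url, path, read_timeout=60):
    # timeout=(connect, read): the read timeout applies to every socket read,
    # i.e. between chunks, so a dead connection raises an exception instead of
    # blocking in iter_content() forever
    with requests.get(url, stream=True, timeout=(5, read_timeout)) as response:
        response.raise_for_status()
        with open(path, 'wb') as f:
            for chunk in response.iter_content(chunk_size=1024):
                if chunk:
                    f.write(chunk)

try:
    download(MODEL_URL, DOWNLOAD_PATH)
except requests.exceptions.RequestException as exc:
    # a timeout mid-stream surfaces here; retry, or resume with a Range header
    print('download failed:', exc)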
I tried everything I found on the net, but nothing helped. I am trying to deploy an app to my server on Debian 8.2, and every time after mup deploy I get this:
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
Building Started: /Volumes/Macintosh HD/Users/myName/Google Drive/_projects/Coda/lottato_com
events.js:72
throw er; // Unhandled 'error' event
^
Error: spawn ENOENT
at errnoException (child_process.js:1011:11)
at Process.ChildProcess._handle.onexit (child_process.js:802:34)
My mup.json looks like:
{
    "servers": [
        {
            "host": "server IP",
            "username": "root",
            "password": "blablabla"
        }
    ],
    "setupMongo": false,
    "setupNode": true,
    "nodeVersion": "0.10.36",
    "enableUploadProgressBar": true,
    "appName": "myAppName",
    "app": "/Volumes/Macintosh HD/Users/myName/Google Drive/_projects/Coda/myAppName",
    "env": {
        "MONGO_URL": "//<login>:<password>@ds061464.mongolab.com:61111/myAppdb",
        "ROOT_URL": "http://myApp.com"
    },
    "deployCheckWaitTime": 15
}
I haven't been able to solve this issue for almost 3 days! I tried deploying from the server and changing the path, but it still doesn't work.
And when I look at the log, I get this:
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
[178.63.41.196] tail: cannot open ‘/var/log/upstart/lottato.log’ for reading: No such file or directory
tail: no files remaining
I also tried using mupx instead of mup, and now I get:
Invalid configuration file mup.json: There is no meteor app in the current app path.
The new mup.json looks like:
{
    "servers": [
        {
            "host": "server IP",
            "username": "root",
            "password": "blablabla",
            "env": {}
        }
    ],
    "setupMongo": false,
    "appName": "appName",
    "app": "~/Google Drive/_projects/Coda/appName",
    "env": {
        "PORT": 80,
        "ROOT_URL": "http://appName.com",
        "MONGO_URL": "mongodb://login:pass@ds035735.mongolab.com:35735/appName"
    },
    "deployCheckWaitTime": 15,
    "enableUploadProgressBar": true
}
But I have tried every kind of path, with ~ or the full path, and it's always the same; the installation only starts when I put this in the app path field:
"app": ".",
After increasing the nodeVersion to 0.10.40 you should run 'mup setup' again, followed by 'mup deploy'.
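That is, from the project directory:
mup setup
mup deploy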
In my project I have mup.json in the project root (same level as .meteor), and instead of
"app": "/Volumes/Macintosh HD/Users/myName/Google Drive/_projects/Coda/myAppName"
it looks like
"app": ".",
Not sure if that is important.
I resolved this problem only with mupx, plus I moved the project onto the server and deployed it from the server to the same server.