Issue with routing to files with syslog

I am trying to set up syslog so that we can have our app in different environments log to different files.
Everything works great for our development environment, but no logs are coming through for our staging environment.
Here is the snippet from our config file, 01-app.conf:
# staging
if ($programname == "api-staging") then {
action(type="omfile" file="/var/log/staging/api.log")
stop
}
# development
if ($programname == "api-development") then {
action(type="omfile" file="/var/log/development/api.log")
stop
}
user.* /var/log/other/user.log
stop
I prefix our config file with 01 so it is processed before the default config; otherwise the app logs end up in multiple places.
Given that the development logs are routed correctly, and that removing stop from the staging rule sends the logs to /var/log/other/user.log, I am fairly confident the logs are reaching the box itself and the problem is somewhere in the routing.
An example log from /var/log/other/user.log that should be in /var/log/staging/api.log is this:
Sep 14 17:28:33 RD0003FF77E220 api-staging[58340]: "...", so I know that the programname I am looking for in the config is the correct name.

It turned out the syslog user did not have write access to the staging directory, so it could not write the logs there.
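A minimal sketch of the fix, using the directory from the config above. The user and group names are assumptions; on Debian/Ubuntu rsyslog typically runs as syslog:adm, but check what your distribution uses:

```shell
# Check which user rsyslog actually runs as
ps -o user= -C rsyslogd

# Create the staging log directory and hand it to that user
# (syslog:adm is an assumption based on Debian/Ubuntu defaults)
sudo mkdir -p /var/log/staging
sudo chown syslog:adm /var/log/staging
sudo chmod 755 /var/log/staging

# Restart rsyslog so it re-reads the config and retries the file action
sudo systemctl restart rsyslog
```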

Angular module federation site not able to deploy in production properly (404 error when calling remotes)

I have already attempted many things, so any help is appreciated.
By following this documentation:
https://nx.dev/recipes/module-federation/faster-builds#production-build-and-deployment-with-nx-cloud
I want to reproduce the same exercise: create a host and 3 remotes, and deploy the 4 applications on my local IIS, each app on a different port (as if each were on a different CDN and deployed independently).
A) I created a MF site with a few characteristics:
host (local port:4201, prod port:6001)
remote shop (local port:4202, prod port:6002)
remote cart (local port:4203, prod port:6003)
remote about(local port:4204, prod port:6004)
B) To simulate production, I created all the web apps on my local IIS with the mentioned prod ports:
host:6001
shop:6002
cart:6003
about:6004
C) As per the documentation, I configured the following in the prod config (note the same port for all):
module.exports = withModuleFederation({
...moduleFederationConfig,
remotes: [
['shop', 'http://localhost:6001/shop'],
['cart', 'http://localhost:6001/cart'],
['about', 'http://localhost:6001/about'],
],
});
which is incorrect; the console throws an error like
"localhost:6001/shop/remoteEntry.mjs net::ERR_ABORTED 404"
This makes sense to me, because that port+folder doesn't exist,
so it is not correct to look for the remoteEntry at that URL. I assume the example meant to
have all the remotes under the same site, which breaks the purpose of MF being deployed to
different sites.
In other words, the documentation is not correct for this example.
Based on the error, I modified the config to be like this:
module.exports = withModuleFederation({
...moduleFederationConfig,
remotes: [
['shop', 'http://localhost:6002'],
['cart', 'http://localhost:6003'],
['about', 'http://localhost:6004'],
],
});
This makes more sense; the 404 error while fetching remoteEntry.mjs disappeared, so the host is able to fetch the MJS files.
D) Now, trying to navigate
localhost:6001 = OK
localhost:6001/shop = ERR 404
localhost:6001/cart = ERR 404
localhost:6001/about = ERR 404
Now I don't know what else to configure. I assumed the module federation setup would understand the route /shop and look up the proper URL on port 6002, but instead I constantly receive a 404.
Am I misunderstanding module federation? Does it support deploying all the remotes as different sites? If so, how can I achieve this?
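One way to narrow this down, outside the browser, is to check separately that the remotes serve their entry files and that the host answers deep links. A hedged sketch with curl (ports as above; the remote entry filename is an assumption from the earlier error message). Note that /shop is a client-side route, so a 404 there commonly means the IIS site lacks a SPA fallback rewrite sending unknown paths to the host's index.html:

```shell
# Each remote should serve its entry file at the site root
curl -I http://localhost:6002/remoteEntry.mjs
curl -I http://localhost:6003/remoteEntry.mjs
curl -I http://localhost:6004/remoteEntry.mjs

# The host itself must answer deep links; if this returns 404 while
# http://localhost:6001/ returns 200, the host site on IIS is not
# rewriting unknown routes back to index.html
curl -I http://localhost:6001/shop
```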
Thanks in advance.

Failing to open Kibana homepage from development environment

I set up Kibana local development by following the wiki at
https://github.com/elastic/kibana/blob/7.1/CONTRIBUTING.md#setting-up-your-development-environment
yarn es snapshot
I'm able to run Elasticsearch locally at http://localhost:9200/ with the above command.
yarn start
I'm able to start the Kibana server with the above command, and according to the log it prompts me to open http://localhost:5601/ykl:
server log [15:57:39.991] [info][listening] Server running at http://localhost:5603/ykl
server log [15:57:40.150] [info][status][plugin:spaces#8.0.0] Status changed from yellow to green - Ready
After I logged in with the default user/password, it returned an error response:
{"statusCode":403,"error":"Forbidden","message":"Forbidden"}
I'm not able to access the page
http://localhost:5601/ykl/app/kibana#/management;
it redirects me to http://localhost:5601/ykl/#/management with the same error as the JSON response above.
My question is: what's wrong with the default user account that it cannot access the homepage? How do I change the Kibana configuration to allow me to access the homepage?
ps:
I'm able to open the status page without any problem: http://localhost:5601/ykl/status#?_g=()
I found the answer myself:
just use another built-in user account which has permission. I logged in with elastic and it works.
https://www.elastic.co/guide/en/elastic-stack-overview/7.1/built-in-users.html
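For reference, a quick way to check which user can reach the app from the command line. The base path /ykl is taken from the question; the elastic password is an assumption based on the default for development snapshots and may differ in your setup:

```shell
# Expect 403 Forbidden with the restricted default account,
# and 200 with the elastic superuser
curl -u elastic:changeme -I http://localhost:5601/ykl/api/status
```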

Phabricator feed.http-hooks not notifying

I am trying to set up Slack notifications for Phabricator using etcinit/phabulous. However, Phabricator does not seem to be notifying the server.
My config looks like this:
{
feed.http-hooks: [ "http://127.0.0.1:8085/v1/feed/receive" ]
}
If I run curl http://127.0.0.1:8085 from within the server I get
{"messages":["Welcome to the Phabulous API"],"status":"success","version":"2.4.0-beta1"}
I am running Phabulous in debug mode, but I can see no request is ever made to 127.0.0.1:8085 since Gin shows no debug message.
Am I missing some configuration in Phabricator to actually make feed.http-hooks work?
It turns out I had to restart the daemons.
The above configuration didn't work for me, but this works:
{"feed.http-hooks":"https://callback_domain.xyz"}
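For completeness, feed events are pushed by the Phabricator daemons, so they must be restarted after changing feed.http-hooks. A sketch, assuming a standard install layout (the install path is a placeholder):

```shell
# Adjust to your actual Phabricator install path
cd /path/to/phabricator

# Setting the hook via the CLI instead of editing the config file by hand
./bin/config set feed.http-hooks '["http://127.0.0.1:8085/v1/feed/receive"]'

# Restart the daemons so they pick up the new hook configuration
./bin/phd restart
```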

Browser-sync not loading ASP.NET 5 website using proxy

Browsersync seems to get stuck loading the website when run in proxy mode. When using the same config with another website it does work, so it must be something in my local setup, but I have been unable to figure out what.
I'm running it on Windows, proxying an ASP.NET 5 application which runs on localhost:5000. Navigating directly to this location works fine. Hooking into the pipeline on the ASP.NET side shows the HTML is sent through the pipeline, but the browser never receives a response and the request stays pending.
The logging output shows no difference besides a different session and the obvious proxy URL when I run it against another website; other sites on localhost (on IIS) also seem to work.
Configuration used (gulp):
var gulp = require('gulp');
var browserSync = require('browser-sync');

gulp.task('browsersync', function () {
browserSync({
proxy: 'localhost:5000',
notify: true,
open: true,
logLevel: 'debug'
});
});

Amazon S3 Permission Issue

I have 2 buckets for my application:
- gambify-dev-devil ( for development)
- gambify-prod (for production)
I have set them up absolutely identically, but in production I have issues accessing some resources. My production environment is a Pagoda Box. I use Gaufrette, LiipImagine and VichUploader for my file handling. The issue is that in my production environment either my application requests the wrong resources or there is an access issue, because I have a lot of logs indicating an AccessDenied error within my bucket:
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>D90C05F182C91003</RequestId>
<HostId>
i7SkwNCbyUnCCBCnkyyrv7x9pOLGtr4sUgqWYkJMqk0X0lXYIW5zeu4688FCqBiA
</HostId>
</Error>
In order to investigate this issue further (I really have no idea where it is coming from, because it works fine in every other environment, and in production it was working fine 2 weeks ago), I would like to see which resource was requested. Is there a way to find the URL that was requested, or who tried to request what, that caused this issue? If I provide a correct path to an existing resource the bucket works fine:
e.g: https://s3-eu-west-1.amazonaws.com/gambify-prod/profile/default.png
Update:
Now I found the real error message that is causing me problems:
04fadbab7a82c23143855d5c918e1ba8fa32ef1d622c00a3daa9fcdc6daf5d90
gambify-prod [05/Aug/2013:19:03:57 +0000] 173.193.185.250 -
133EF43443891C63 REST.HEAD.OBJECT
profile_thumb_small/51e9a03453c80.jpeg "HEAD
/profile_thumb_small/51e9a03453c80.jpeg HTTP/1.1" 403
SignatureDoesNotMatch 1015 - 7 -
"https://gambify-prod.s3.amazonaws.com/profile_thumb_small/51e9a03453c80.jpeg"
"aws-sdk-php/1.5.17.1 PHP/5.3.23 Linux/2.6.32-042stab068.8 Arch/x86_64
SAPI/fpm-fcgi Integer/9223372036854775807 Build/20121126140000
simplexml/0.1 json/1.2.1 pcre/8.31 spl/0.2 curl/7.19.7 openssl/0.9.8k
apc/3.1.9 pdo/1.0.4dev pdo_sqlite/1.0.1 sqlite/2.0-dev sqlite3/0.7-dev
zlib/1.1 memory_limit/200M date.timezone/Europe.Berlin
open_basedir/off safe_mode/off zend.enable_gc/on" -
I still have no idea what is causing the initial issue.
Moved the discussion about the signature error to: Amazon S3 signature not working with SDK
If you haven't already done so, you can configure your production bucket to keep a log of all the requests made against it, similar to an Apache or other web server access log.
http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
Once you have logging enabled, you will be able to find out the URL of the request, who requested it and when it was requested.
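Enabling server access logging can be done from the console or with the AWS CLI. A sketch using the bucket name from the question; the target bucket and prefix are assumptions, and the target bucket must already grant the S3 log delivery group write access:

```shell
# Turn on server access logging for the production bucket,
# delivering logs to a separate bucket under a prefix
aws s3api put-bucket-logging --bucket gambify-prod \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "gambify-prod-logs",
      "TargetPrefix": "access-logs/"
    }
  }'
```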
Update:
If an AccessDenied error is returned when trying to access the S3 server log files through the API or the AWS console, the problem is caused by missing permissions (ACLs) on the log files.
To access those log files, the Open/Download permission should be granted for the user that owns them. Having a bucket policy with public read enabled is not enough to get access to the server log files.
More details on the issue are available in the comments below.
These look like responses that S3 sends back when the ACL/Grant permissions aren't set correctly. I'd check those first. If your bucket is behind a CloudFront distribution, make sure you invalidate the CloudFront cache as well.
