Angular module federation site does not deploy properly in production (404 error when calling remotes) - web-deployment

I have already attempted many things, so please help me.
By following this documentation:
https://nx.dev/recipes/module-federation/faster-builds#production-build-and-deployment-with-nx-cloud
I want to reproduce the same exercise: create a host and 3 remotes, and deploy the 4 applications on my LOCAL IIS, each app on a different port (as if each were on a different CDN and deployed independently).
A) I created an MF site with the following characteristics:
host (local port:4201, prod port:6001)
remote shop (local port:4202, prod port:6002)
remote cart (local port:4203, prod port:6003)
remote about (local port:4204, prod port:6004)
B) To simulate PROD, I created all the web apps on my local IIS with the mentioned PROD ports:
host:6001
shop:6002
cart:6003
about:6004
C) As per the documentation, I configured the following in the prod config (note the same port for all):
module.exports = withModuleFederation({
  ...moduleFederationConfig,
  remotes: [
    ['shop', 'http://localhost:6001/shop'],
    ['cart', 'http://localhost:6001/cart'],
    ['about', 'http://localhost:6001/about'],
  ],
});
This turned out to be incorrect; the console throws an error like:
"localhost:6001/shop/remoteEntry.mjs net::ERR_ABORTED 404"
This makes sense to me, because that port + folder combination doesn't exist, so it is not correct to look for the remoteEntry at that URL. I assume the example meant for all the remotes to live under the same site, which defeats the purpose of MF being deployable to different sites.
In other words, the documentation is not correct for this example.
Based on the error, I modified the config to be like this:
module.exports = withModuleFederation({
  ...moduleFederationConfig,
  remotes: [
    ['shop', 'http://localhost:6002'],
    ['cart', 'http://localhost:6003'],
    ['about', 'http://localhost:6004'],
  ],
});
This makes more sense now: the 404 error while fetching remoteEntry.mjs disappeared, so the host is able to fetch the .mjs files.
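If I understand the Nx setup correctly, each remote's entry URL is derived by appending remoteEntry.mjs to the configured base URL, which would explain both behaviors. A tiny illustration (my assumption, not Nx source code):

// Assumed resolution rule: baseUrl + '/remoteEntry.mjs'
const remoteEntryUrl = (baseUrl: string) => `${baseUrl}/remoteEntry.mjs`;

remoteEntryUrl('http://localhost:6001/shop'); // 404: no such path under the host site
remoteEntryUrl('http://localhost:6002');      // OK: root of the shop site, where the file actually lives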
D) Now, trying to navigate
localhost:6001 = OK
localhost:6001/shop = ERR 404
localhost:6001/cart = ERR 404
localhost:6001/about = ERR 404
Now I don't know what else I should configure. Supposedly, the MF configuration would understand the route /shop and fetch from the proper URL on port :6002, but instead I constantly receive a 404.
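For reference, my expectation comes from the way Nx wires the host's routes to the remotes. A sketch of what I believe the generated host routing looks like (loadRemoteModule comes from Nx; './Module' and RemoteEntryModule are the usual Nx naming conventions, so treat the details as assumptions):

// app.routes.ts in the host (sketch)
import { Route } from '@angular/router';
import { loadRemoteModule } from '@nx/angular/mf';

export const appRoutes: Route[] = [
  // Each path lazy-loads the matching remote's exposed entry module;
  // the URL it is fetched from comes from the remotes config above.
  { path: 'shop',  loadChildren: () => loadRemoteModule('shop', './Module').then((m) => m.RemoteEntryModule) },
  { path: 'cart',  loadChildren: () => loadRemoteModule('cart', './Module').then((m) => m.RemoteEntryModule) },
  { path: 'about', loadChildren: () => loadRemoteModule('about', './Module').then((m) => m.RemoteEntryModule) },
];

I also suspect that, because these routes only exist client-side, IIS needs a URL-rewrite (SPA fallback) rule on the host site so that a deep link like /shop serves the host's index.html instead of looking for a physical folder.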
Do I have a wrong understanding of module federation? Does it support deploying all the remotes as different sites? If so, how can I achieve this?
Thanks in advance.

Related

Stuck with woocommerce_rest_authentication_error: Invalid signature - provided signature does not match

I posted the issue below on https://github.com/XiaoFaye/WooCommerce.NET/issues/414, but since this may not be related to WooCommerce.NET at all but rather, at a lower level, to Apache/WordPress/WooCommerce itself, I am posting the same question here.
I am really stuck with the famous error:
WebException: {"code":"woocommerce_rest_authentication_error","message":"Invalid signature - provided signature does not match.","data":{"status":401}}
FYI:
I have two WordPress instances running: one on my local machine and one on a remote server. The remote server, like my local machine, is on our company's LAN.
I am running WAMP on both machines to run Apache and host WordPress on port 80.
The error ONLY occurs when calling the REST API on the remote server. Connecting to the local REST API, WooCommerce.NET works like a charm :-)
From my local browser I can log in to the remote WooCommerce instance without any problem.
On the remote server I have defined WP_SITEURL as 'http://[ip address]/webshop/' and WP_HOME as 'http://[ip address]/webshop' in wp-config.php.
Calling the API URL (http://[ip address]/webshop/wp-json/wc/v3/) from my local browser works OK; I get the normal JSON response.
Authentication is done through the WooCommerce.NET wrapper, which only requires a consumer key, consumer secret, and the API URL. I am sure I am using the right consumer key and secret and the proper API URL, http://[ip address]/webshop/wp-json/wc/v3/ (see previous bullet).
I already played around with the authorizedHeader variable (true/false) when instantiating a WooCommerce RestAPI, but this has no effect.
Is there anybody that can point me into the direction of a solution?
Your help will be much appreciated!
In my case, the problem was in my URL: it had two slashes before wp-json.
URL before the fix: http://localhost:8080/wordpress//wp-json/wc/v3/
URL now, which works OK: http://localhost:8080/wordpress/wp-json/wc/v3/
I use it with this code:
// the final argument is the authorizedHeader flag mentioned in the question
RestAPI rest = new RestAPI(cUrlApi, Funciones.CK, Funciones.CS, false);
WCObject wc = new WCObject(rest);
var lstWooCategorias = await wc.Category.GetAll();
I hope my answer helps you.
Had the same issue. My fault was defining my URL incorrectly: http:// instead of https://.
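Both fixes above point at the same root cause: over plain HTTP, the WooCommerce REST API uses one-legged OAuth 1.0a, and the signature is computed over the exact normalized request URL. If the URL the client signs differs from the URL the server reconstructs (a doubled slash, http vs https, a rewriting proxy), the signatures no longer match and you get the 401. A minimal sketch of the signing step (illustrative; this is not WooCommerce.NET's actual internals):

import { createHmac } from 'crypto';

const enc = encodeURIComponent;

// One-legged OAuth 1.0a request signature. The base string embeds the
// exact request URL, so 'http://host/webshop//wp-json/...' and
// 'http://host/webshop/wp-json/...' sign differently.
function oauthSignature(
  method: string,
  url: string,                    // e.g. 'http://[ip address]/webshop/wp-json/wc/v3/products'
  params: Record<string, string>, // oauth_consumer_key, oauth_nonce, oauth_timestamp, ...
  consumerSecret: string,
): string {
  const paramString = Object.keys(params)
    .sort()
    .map((k) => `${enc(k)}=${enc(params[k])}`)
    .join('&');
  const baseString = [method.toUpperCase(), enc(url), enc(paramString)].join('&');
  // One-legged OAuth has no token secret, hence the trailing '&' in the key.
  return createHmac('sha1', `${enc(consumerSecret)}&`).update(baseString).digest('base64');
}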

Why would a callback URL not work (for a GroupMe bot)?

I am creating a GroupMe bot, and I'm testing out the callback URL and the basic WSGI app I've set up so far. I am planning to host the bot on Heroku, but am testing it on my local machine first. I registered a bot with the callback URL http://MY_IP_ADDRESS:8000. When I open a different shell and run requests.post('http://MY_IP_ADDRESS:8000', data = 'something') in the Python interpreter, everything works fine. However, when there is activity in the GroupMe group, nothing happens, not even an error message.
Here's my (simplified) code:
from wsgiref.simple_server import make_server

def app(environ, startResponse):
    # Guard against a missing or malformed CONTENT_LENGTH header.
    try:
        requestBodySize = int(environ.get('CONTENT_LENGTH', 0))
    except ValueError:
        requestBodySize = 0
    # requestBody = environ['wsgi.input'].read(requestBodySize)
    print('something')
    responseBody = bytes('successful', 'utf-8')
    status = '200 OK'
    responseHeaders = [('Content-Type', 'text/plain'),
                       ('Content-Length', str(len(responseBody)))]
    startResponse(status, responseHeaders)
    return [responseBody]

server = make_server('', 8000, app)
server.serve_forever()
I'm sure I'm doing something obvious, but I can't for the life of me figure out what. I'd appreciate any help!
I never figured out why the callback URL wasn't working with localhost, but when I deployed the app on Heroku, everything worked fine! It must have had something to do with my firewall settings.
When you run servers on your local machine, your firewall doesn't really like that. GroupMe also can't send to anything but public-facing addresses, which is why Heroku works. One thing I can recommend in the future is Ngrok (https://ngrok.com/): it gives the server on your machine a public-facing address (e.g. run ngrok http 8000) that you can use as the callback URL. I use Ngrok to test my bots and quickly iterate before pushing to a dedicated server like Heroku; honestly, looking through Heroku log files is a pain...

Issue with routing to files w/ Syslog

I am trying to set up syslog so that we can have our app in different environments log to different files.
Everything works great for our development environment, but no logs are coming through for our staging environment.
Here is the snippet from our config file, 01-app.conf:
# staging
if ($programname == "api-staging") then {
    action(type="omfile" file="/var/log/staging/api.log")
    stop
}
# development
if ($programname == "api-development") then {
    action(type="omfile" file="/var/log/development/api.log")
    stop
}
user.* /var/log/other/user.log
stop
I have our config file start with 01 because otherwise the app logs go to multiple places, as they respect the default config before our own.
Given that the development logs are routed correctly, and that removing stop from the staging rule sends logs to /var/log/other/user.log, I am pretty confident there is no issue with getting the logs to the box itself; the problem is somehow in the routing.
An example log from /var/log/other/user.log that should be in /var/log/staging/api.log is this:
Sep 14 17:28:33 RD0003FF77E220 api-staging[58340]: "...", so I know that the programname I am looking for in the config is the correct name.
The syslog user did not have write access to the staging directory, so it could not write the logs there. Granting that user access to /var/log/staging (e.g. via chown) fixes it.

DotNetNuke website migration to Azure fails with NXDOMAIN DNS error

I am currently working on a DotNetNuke website (07.03.02) and I am trying to migrate it to Azure. The website works on my local machine with IIS.
I followed this tutorial to migrate the website: http://www.dnnsoftware.com/community-blog/cid/154975/moving-a-dnn-install-to-microsoft-azure-websites
So I created a new web application on Azure to host the website files. I also created a new database on Azure and imported my DNN backup database.
I changed the connection strings in my web.config to use the Azure database, and I uploaded the website folder to Azure.
Now if I try to browse my web app at [sitename].azurewebsites.net, I get the following error:
DNN Error Domain Name Does Not Exist In The Database
DotNetNuke supports multiple websites from a single database/codebase.
It accomplishes this by converting the URL of the client browser
Request to a valid PortalID in the Portals database table. The
following steps describe the process:
Web Server Processing When a web server receives a Request from a
client browser, it compares the file name extension on the target URL
resource to its Application Extension Mappings defined in IIS. Based
on the corresponding match, IIS then sends the Request to the defined
Executable Path ( aspnet_isapi.dll in the case of ASP.NET Requests ).
The aspnet_isapi.dll engine processes the Request in an ordered series
of events beginning with Application_BeginRequest.
HttpModule.URLRewrite OnBeginRequest ( UrlRewriteModule.vb ): The Request URL is parsed based on the "/" character. A Domain Name is constructed using each of the relevant parsed URL segments.
Examples:
URL: http://www.exemple.com/default.aspx = Domain Name: www.exemple.com
URL: http://209.75.24.131/default.aspx = Domain Name: 209.75.24.131
URL: http://localhost/DotNetNuke/default.aspx = Domain Name: localhost/DotNetNuke
URL: http://www.exemple.com/virtualdirectory/default.aspx = Domain Name: www.exemple.com/virtualdirectory
URL: http://www.exemple.com/directory/default.aspx = Domain Name: www.exemple.com/directory
Using the Domain Name, the application queries the database ( Portals
table - PortalAlias field ) to locate a matching record.
Note: If there are multiple URLs which correspond to the same website
then the website alias field must contain each valid Domain Name in a
comma separated list.
Example:
URL: http://localhost/DotNetNuke/default.aspx
URL: http://MACHINENAME/DotNetNuke/default.aspx
URL: http://209.32.134.65/DotNetNuke/default.aspx
PortalAlias: localhost/DotNetNuke,MACHINENAME/DotNetNuke,209.32.134.65/DotNetNuke
Note: If you are installing the application to a remote server you
must modify the PortalAlias field value for the default record in the
Portals table according to the rules defined above.
So I inserted the Site Alias ([sitename].azurewebsites.net) record into the PortalAlias table as mentioned in the tutorial.
Now when I try to reach the website [sitename].azurewebsites.net, I don't get the previous DNN error, but it loads for a long time and then I get the following error:
www.[sitename].azurewebsites.net's server DNS address could not be found. DNS_PROBE_FINISHED_NXDOMAIN
After the load ends, the URL curiously becomes https://www.[sitename].azurewebsites.net and the DNS error occurs.
Is there something I need to change in Azure or in my web.config file? Maybe there is something to configure in DotNetNuke or in the ASP.NET version?
I don't get why my browser changes the URL and why this DNS error occurs (I have no issues with my local IIS server).
(I also tried the automatic portal alias transfer as mentioned in the tutorial, but I got the same result: the alias is inserted in the database but I still get the NXDOMAIN error.)
Thank you for your help!
Etienne.
In your original post you have:
www.[sitename].azurewebsites.net's server DNS address could not be found. DNS_PROBE_FINISHED_NXDOMAIN
Try manually typing the full URL (including the http:// prefix). Sometimes the web browser adds the www automatically, thinking you wanted it (I hate when they do that). Azure doesn't know about the www subdomain, so that is probably why you are getting that error.
Edit: Oh, and the long load time is good - it means that Azure compiled your site and you didn't get a compile error.
The fact that you get an error that comes from DNN is good news, and means that you have (probably) done the major work correctly.
Now, you need to get into your database and modify the PortalAlias table so that there is an alias for sitename.azurewebsites.net. (I'm assuming that the brackets around sitename are incorrect and "[sitename]" needs to be replaced by the actual domain name for your site.)

Amazon S3 Permission Issue

I have 2 buckets for my application:
- gambify-dev-devil (for development)
- gambify-prod (for production)
I have set them up absolutely identically, but in my production environment I have issues accessing some resources. My production environment is Pagoda Box. I use Gaufrette, LiipImagine, and VichUploader for my file handling. The issue is that in production, either my application requests the wrong resources or there is an access issue, because I have a lot of logs indicating an AccessDenied error within my bucket:
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>D90C05F182C91003</RequestId>
<HostId>
i7SkwNCbyUnCCBCnkyyrv7x9pOLGtr4sUgqWYkJMqk0X0lXYIW5zeu4688FCqBiA
</HostId>
</Error>
In order to investigate this issue further (I really have no idea where it is coming from, because it's working fine in every other environment, and in production it was working fine 2 weeks ago), I would like to see which resource was requested. Is there a chance to find the URL that was requested, or who tried to request what, that caused this issue? Because if I provide a correct path to an existing resource, the bucket works fine:
e.g.: https://s3-eu-west-1.amazonaws.com/gambify-prod/profile/default.png
Update:
Now I found the real error message that is causing me problems:
04fadbab7a82c23143855d5c918e1ba8fa32ef1d622c00a3daa9fcdc6daf5d90 gambify-prod [05/Aug/2013:19:03:57 +0000] 173.193.185.250 - 133EF43443891C63 REST.HEAD.OBJECT profile_thumb_small/51e9a03453c80.jpeg "HEAD /profile_thumb_small/51e9a03453c80.jpeg HTTP/1.1" 403 SignatureDoesNotMatch 1015 - 7 - "https://gambify-prod.s3.amazonaws.com/profile_thumb_small/51e9a03453c80.jpeg" "aws-sdk-php/1.5.17.1 PHP/5.3.23 Linux/2.6.32-042stab068.8 Arch/x86_64 SAPI/fpm-fcgi Integer/9223372036854775807 Build/20121126140000 simplexml/0.1 json/1.2.1 pcre/8.31 spl/0.2 curl/7.19.7 openssl/0.9.8k apc/3.1.9 pdo/1.0.4dev pdo_sqlite/1.0.1 sqlite/2.0-dev sqlite3/0.7-dev zlib/1.1 memory_limit/200M date.timezone/Europe.Berlin open_basedir/off safe_mode/off zend.enable_gc/on" -
I still have no idea what is causing the initial issue.
Moved the discussion about the signature error to: Amazon S3 signature not working with SDK
If you haven't already done so, you can configure your production bucket to keep a log of all the requests made against it, similar to an Apache or other web server access log.
http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
Once you have logging enabled, you will be able to find out the URL of the request, who requested it and when it was requested.
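As an aside, logging can also be enabled programmatically. A sketch with the AWS SDK for JavaScript v3 (the question used the PHP SDK, and the target log bucket here is hypothetical; it must grant the S3 log-delivery group write access):

import { S3Client, PutBucketLoggingCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'eu-west-1' });

// Write access logs for gambify-prod into a separate bucket under 's3-access/'.
await s3.send(
  new PutBucketLoggingCommand({
    Bucket: 'gambify-prod',
    BucketLoggingStatus: {
      LoggingEnabled: {
        TargetBucket: 'gambify-logs', // hypothetical log bucket
        TargetPrefix: 's3-access/',
      },
    },
  }),
);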
Update:
If an AccessDenied error is returned when trying to access the S3 server log files through the API or the AWS console, the problem is caused by missing permissions (ACLs) on the log files.
To access those log files, the Open/Download permission should be granted for the user that owns them. Having a bucket policy with public read enabled is not enough to get access to the server log files.
These look like responses that S3 sends back when the ACL/Grant permissions aren't set correctly. I'd check those first. If your bucket is behind a CloudFront distribution, make sure you invalidate the CloudFront cache as well.
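For the CloudFront part, a sketch of invalidating a cached object with the AWS SDK for JavaScript v3 (the distribution ID is hypothetical; '/*' would flush everything at a higher cost):

import { CloudFrontClient, CreateInvalidationCommand } from '@aws-sdk/client-cloudfront';

const cf = new CloudFrontClient({ region: 'us-east-1' });

await cf.send(
  new CreateInvalidationCommand({
    DistributionId: 'E1234567890ABC', // hypothetical
    InvalidationBatch: {
      CallerReference: `invalidate-${Date.now()}`, // must be unique per request
      Paths: { Quantity: 1, Items: ['/profile_thumb_small/51e9a03453c80.jpeg'] },
    },
  }),
);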
