I have a URL in the following form
https://abc.domain.com//xyz/test
and I'm fetching the contents via Volley (Android).
Is it a valid URL? Do I need to remove one / from //xyz?
Yes, it is still a valid URL.
However, both servers and search engines treat the two forms (with / and //) as separate URLs.
So be sure which URL is the correct one to fetch.
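A quick Node.js sketch (using the question's host name as a placeholder) shows that the WHATWG URL parser accepts the double slash and does not normalize it away, so the two forms really do name different resources:

```javascript
// The URL parser keeps the extra slash in the path — it is valid,
// but distinct from the single-slash form.
const single = new URL('https://abc.domain.com/xyz/test');
const double = new URL('https://abc.domain.com//xyz/test');

console.log(double.pathname);                     // '//xyz/test'
console.log(single.pathname === double.pathname); // false
```

Whether the server treats them as the same resource is up to the server; the client should simply use the canonical form.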
I was going to post this in the Workbox Github repo, but I doubt it's a bug and more likely my own misunderstanding. I have found some slightly similar questions, but none of the answers seem to clearly explain how I can resolve my issue.
In my sw.js file I am precaching the Home URL and the Start URL. The Start URL is the exact same as the Home URL, except it appends ?utm_source=pwa to the URL. This is a technique I've read that others do to track PWA usage in Google Analytics and I like the idea.
However, now when a new user arrives at the website, they load the initial page and then Workbox fetches the Home URL and then fetches the Start URL. This means that if the user arrives at the homepage of the website they will have loaded that page 3 times. I'd like to figure out how to get Workbox to realize that the Home URL and Start URL are essentially the same and to not need that third fetch request.
I understand that ignoreUrlParametersMatching defaults to [/^utm_/], which I would expect to do what I described above, but perhaps I'm understanding it incorrectly and it does not apply to precached URLs...? Does it apply automatically if I don't explicitly pass it to precacheAndRoute()?
To clarify my expectation of ignoreUrlParametersMatching would be that it precaches the Home URL and then when it attempts to cache the Start URL it ignores (removes) the UTM parameter, sees that it already has that URL cached and does not fetch. Then, when the Start URL is requested from cache, it again would ignore the UTM parameter and respond with the URL it has in cache. Is this far off from reality? If so, how should I do this to achieve both my tracking and reduce the "duplicate" fetch?
Here are some excerpts of my sw.js file:
const HOME_URL = 'https://gearside.com/nebula/';
const START_URL = 'https://gearside.com/nebula/?utm_source=pwa';
workbox.precaching.precacheAndRoute([
//...other precached files
{url: HOME_URL, revision: revisionNumber},
{url: START_URL, revision: revisionNumber},
]);
(Screenshots showed that both URLs are precached, and that both fetch requests are made.)
Note: I've noticed this problem with or without revision numbers.
TL;DR
Do not include https://gearside.com/nebula/?utm_source=pwa in the precache manifest.
Use the workbox-google-analytics module:
import * as googleAnalytics from 'workbox-google-analytics';
googleAnalytics.initialize();
Long version
You should precache based on unique resources. Every entry defined in the precache manifest will be downloaded and cached.
If https://gearside.com/nebula/ and https://gearside.com/nebula/?utm_source=pwa serve the exact same content, only precache one of them (preferably the one without the query string).
The ignoreURLParametersMatching option takes an array of regexes that are tested against the names of the query parameters; any parameter whose name matches one of the regexes is ignored when matching a request against the precache.
For example,
precacheAndRoute([
{url: '/styles/main.css', revision: '777'},
], {
ignoreURLParametersMatching: [/.*/]
});
Will match any of these requests:
/styles/main.css
/styles/main.css?minified=0
/styles/main.css?minified=0&renew=1
and serve /styles/main.css, because the regex .* matches any query parameter name.
The default value of ignoreURLParametersMatching is [/^utm_/]. If in the example above we skip ignoreURLParametersMatching, any of the following requests would be matched (and resolved with the precached /styles/main.css):
/styles/main.css
/styles/main.css?utm_hello=yes
/styles/main.css?utm_yes_what=dunno&utm_really=yeah
But the following requests will not go through the precache:
/styles/main.css?remodelate=expensive&utm_pwa=no
/styles/main.css?utm_spa=neither&trees=awesome
because neither of them has only query parameters starting with utm_; each includes at least one parameter that does not match the default regex.
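The filtering logic can be sketched in plain JavaScript (this is not Workbox's own code, just an approximation of what it does before looking a request up in the precache):

```javascript
// Strip query parameters whose names match any regex in
// ignoreURLParametersMatching, then use the resulting URL as the cache key.
function stripIgnoredParams(urlString, ignoreURLParametersMatching = [/^utm_/]) {
  const url = new URL(urlString, 'https://example.com');
  for (const name of [...url.searchParams.keys()]) {
    if (ignoreURLParametersMatching.some((re) => re.test(name))) {
      url.searchParams.delete(name);
    }
  }
  return url.href;
}

// Only utm_* parameters are removed, so the first two collapse to the
// precached entry while the third keeps its non-utm parameter:
stripIgnoredParams('/styles/main.css?utm_hello=yes');           // .../styles/main.css
stripIgnoredParams('/styles/main.css?utm_yes=1&utm_really=2');  // .../styles/main.css
stripIgnoredParams('/styles/main.css?trees=awesome&utm_spa=1'); // .../styles/main.css?trees=awesome
```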
More info about the workbox-google-analytics module can be found here: Workbox Google Analytics
I need help rewriting a URL in the nginx configuration. It should work as below:
/products/#details to /produce/#items
but it is not working, because the # is creating a problem.
Note: the # in the URL denotes a page section,
e.g. www.test.com/products/#details should get redirected to www.test.com/produce/#items
This is impossible in nginx, because browsers don't send the fragment (#details) to the server. So you cannot rewrite it in nginx or any other web server.
In other words, the fragment is available to the browser only, so you have to deal with it in JavaScript. The server cannot read it.
https://www.rfc-editor.org/rfc/rfc2396#section-4
When a URI reference is used to perform a retrieval action on the identified resource, the optional fragment identifier, separated from the URI by a crosshatch ("#") character, consists of additional reference information to be interpreted by the user agent after the retrieval action has been successfully completed. As such, it is not part of a URI, but is often used in conjunction with a URI.
There is no way to do this rewrite. The # and everything that follows it is never sent to the server; it is handled entirely on the client side.
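A small sketch makes this concrete: nginx can rewrite the path (/products/ to /produce/), but the fragment mapping has to run in the browser. The fragmentMap below is a hypothetical mapping based on the question's anchors:

```javascript
// What the server receives in the request line — no fragment at all:
const url = new URL('https://www.test.com/products/#details');
console.log(url.pathname + url.search); // '/products/'
console.log(url.hash);                  // '#details' — browser-only

// Client-side sketch: remap the old anchor to the new one.
const fragmentMap = { '#details': '#items' };

function remapFragment(urlString) {
  const u = new URL(urlString);
  if (u.hash in fragmentMap) {
    u.hash = fragmentMap[u.hash];
  }
  return u.href;
}

remapFragment('https://www.test.com/produce/#details');
// In a real page you would run something like:
//   location.hash = fragmentMap[location.hash] ?? location.hash;
```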
I'm working on a direct-to-S3 upload service that operates in two parts described below. This service would not be used by browsers, but would be a RESTful API used by other software clients.
Make a request to an endpoint which certifies and validates the upload, returning an upload URL if all's well.
Make a PUT request to the URL returned from #1 to actually do the upload to S3.
How should the server structure the response for the first endpoint?
The first option I am considering is to use GET and return status code 302 with a Location header containing the URL to upload to. However, the intent behind the redirect descriptions in the spec seems to be focused on redirecting after a form submission.
The other option I'm considering is to use POST for the first endpoint and returning a Location header with the URL, as described here:
If a resource has been created on the origin server, the response
SHOULD be 201 (Created) and contain an entity which describes the
status of the request and refers to the new resource, and a Location
header. RFC 2616 #9.5
What have other people used in these circumstances?
I think it mainly depends on whether your API itself will have a resource referencing the uploaded file or not. Is S3 the only party with knowledge of the uploaded file, or does your API keep something referencing it?
In the first case, where only S3 knows about it, it is OK to use GET, since the endpoint acts merely as a generator for the upload parameters, including the URI.
In the second case, it shouldn't be a GET, since you're changing something on your side. Yes, you should make a POST, but the Location header should be used to return the URI of the created resource that references the uploaded file. That resource may hold the upload URI and can act like a state machine, tracking whether the file has been uploaded or not. To avoid clients having to GET that resource before being able to upload, you may return the upload URI in a Link header, with a rel reflecting that purpose.
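The second approach can be sketched as follows. All URIs and the rel value here are hypothetical, purely for illustration:

```javascript
// POST creates an "upload" resource on the API; the 201 response carries
// both the tracking resource's URI (Location) and the pre-signed S3 upload
// URI (Link with a custom rel), so the client can PUT without an extra GET.
function buildCreatedResponse(uploadId, s3UploadUrl) {
  return {
    status: 201,
    headers: {
      // URI of the resource that tracks the upload's state:
      Location: `https://api.example.com/uploads/${uploadId}`,
      // Pre-signed S3 URL for the actual PUT, exposed via a Link relation:
      Link: `<${s3UploadUrl}>; rel="https://api.example.com/rels/upload"`,
    },
  };
}

const res = buildCreatedResponse('42', 'https://bucket.s3.amazonaws.com/key?sig=abc');
// The client PUTs the file to the Link target, then can GET the Location
// URI to check (or update) the upload's state.
```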
How to maintain constant URL?
For example:
http://test23232/temp/temp.aspx?a=1&b=1
The query string parameter names (a, b) differ dynamically from page to page. I want to use those parameters, but not display them to users.
When redirecting, whatever value is present after the ? should be removed, and the final URL displayed to users should be:
http://test23232/temp/temp.aspx or http://test23232/temp
Or any constant URL after login, used throughout the entire application.
I can achieve this with an iframe, but how can I do it in web.config through a rewrite rule, or in Global.asax?
Or: whatever page I redirect to, e.g.
http://localhost/test/security/login.aspx
http://localhost/test/security/main.aspx
http://localhost/test/security/details.aspx
I want the browser to show
http://localhost/reap/security/
Is this possible?
Use Session to store the values of a and b, and keep the URL simple.
You can send the necessary parameters using the POST method instead of GET.
A more secure way of passing them is to store them in session variables:
that makes them more "secure", since the client cannot change the variables by editing the source.
It depends on how you want to do it.
If the parameters keep changing dynamically with every request, you can go for cookies instead of session state.
This link might also be useful for URL rewriting.
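The session technique can be sketched in a language-agnostic way (here in JavaScript rather than ASP.NET, with an in-memory Map standing in for real session storage): on the first request, move the query parameters into the server-side session and redirect to the clean URL; later requests read them back from the session.

```javascript
const sessions = new Map(); // sessionId -> stored parameters

function handleRequest(sessionId, urlString) {
  const url = new URL(urlString);
  if ([...url.searchParams].length > 0) {
    // Stash a and b (or whatever parameters arrived) in the session…
    sessions.set(sessionId, Object.fromEntries(url.searchParams));
    url.search = '';
    // …and redirect the browser to the parameter-free URL.
    return { redirect: url.href };
  }
  return { params: sessions.get(sessionId) ?? {} };
}

handleRequest('s1', 'http://test23232/temp/temp.aspx?a=1&b=1');
// → redirect to 'http://test23232/temp/temp.aspx'
handleRequest('s1', 'http://test23232/temp/temp.aspx');
// → the stored parameters { a: '1', b: '1' }
```

The browser only ever displays the clean URL, while the server still has access to the parameter values.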
These are valid URLs
URL1:
http://www.itsmywebsite.com/showproduct.aspx?id=127
http://www.itsmywebsite.com/browseproduct.aspx?catid=35
but these are not
URL2:
http://www.itsmywebsite.com/showproduct.aspx?id=-1%27
http://www.itsmywebsite.com/browseproduct.aspx?catid=-1%27
How can I block URL2, and any request containing a string of the form "-1%27", and invalidate the request? It's an automated bot sending these requests, so basically I want to block the request, probably in Global.asax? Please advise.
Well, those are both perfectly valid URLs. Your "URL2" examples are simply percent-encoded. Since 0x27 is the ASCII apostrophe, your percent-encoded URL2s are exactly the same as
http://www.itsmywebsite.com/showproduct.aspx?id=-1'
http://www.itsmywebsite.com/browseproduct.aspx?catid=-1'
Perhaps your web page should be validating the data it receives on the query string and throwing an error.
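A minimal sketch of that validation (in JavaScript here for illustration; the same allow-list idea applies in your ASP.NET code): decode the parameter, then accept only values that look like integer ids before they ever reach the database.

```javascript
// %27 decodes to an apostrophe — the classic SQL-injection probe.
console.log(decodeURIComponent('-1%27')); // "-1'"

// Allow-list check: only an optional minus sign followed by digits.
function isValidId(raw) {
  return /^-?\d+$/.test(raw);
}

isValidId('127');   // true  — legitimate request
isValidId("-1'");   // false — decoded injection attempt
isValidId('-1%27'); // false — even undecoded, it fails the allow-list
```

Rejecting on a pattern like this is more robust than blocking the single string "-1%27", since the bot can trivially vary its payload.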
Which version of IIS are you using? If 7.0 or later, use the URL Rewrite module to reject invalid URLs, such as those ending in =-1'.
See an example blocking domains ( regex patterns ) here: http://www.hanselman.com/blog/BlockingImageHotlinkingLeechingAndEvilSploggersWithIISUrlRewrite.aspx