I'd like to use Mod-rewrite to serve different versions of my page to different devices. For example, I would send a different version to a device with a 1920 x 1024 monitor than I would to an iPhone. It seems like I want to have Mod-rewrite make its decision based on the content of HTTP_USER_AGENT and I'm wondering who is keeping track of what an iPad Air 2 puts in that variable, what an iPhone 6 puts, etc., etc., etc. There must be a huge table somewhere that's up to date.
Thanks for any help
What's in a User-Agent?
It's not the device but the browsing software running on it that reports the HTTP_USER_AGENT to a web server. In addition to the browser's name and version information, it typically includes the platform or device name as well as the OS version it's currently running on.
For example, the latest Firefox 36 reports the following user-agent:
Mozilla/5.0 (Windows NT 6.3; rv:36.0) Gecko/20100101 Firefox/36.0
which includes its version (36.0), the platform it's running on (Windows) and the OS version (6.3, i.e. Windows 8.1). Browsers usually also report whether the system is 64-bit (WOW64) and the user's locale, although the default en-US is often omitted.
Here's what Safari reports on Windows 7 and an iPad:
Mozilla/5.0 (Windows; U; Windows NT 6.1; fr-FR) AppleWebKit/533.20.25 (KHTML, like Gecko) Version/5.0.4 Safari/533.20.27
Mozilla/5.0 (iPad; CPU OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5355d Safari/8536.25
And, that's pretty much it. You won't receive detailed information like the device generation (is it an iPad 2?) or its display metrics (the resolution the screen is running on).
So, while you can determine the class of the device (desktop or mobile) and serve alternate content, you can't serve resolution-specific content. So, how do other websites do it? Well, the answer is...
HTML5
HTML5 lets you write dynamic layouts that adjust to the available screen real estate automatically. So, while on a desktop it might render your site in a 3x2 layout, it can automatically switch to a 1x6 one in a mobile browser. The best part is that it works independently of the user-agent; you'll see it auto-adjust the layout in a desktop browser too (when you resize the window), and it keeps working in a mobile browser with desktop mode on, where the browser purposely reports a different user-agent to receive the desktop version of your site.
User-Agent Repositories
But user-agents still come in handy. There's no standardized repository as such, but there are plenty of sites that keep track of user-agent strings, like user-agents.org, httpuseragent.org, useragentstring.com etc. The last one has been kept the most up to date.
You obviously can't match all of them, so you would do a sub-string match instead, like:
RewriteCond %{HTTP_USER_AGENT} (ipad|iphone|ipod|mobile|android|blackberry|palmos|webos) [NC]
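A RewriteCond applies only to the RewriteRule that follows it, so a complete mobile redirect could look like this (a sketch; the /mobile/ target directory is an assumption, not something from the question):

```apache
RewriteEngine On
# If the user-agent looks mobile and we're not already under /mobile/,
# redirect the request to the mobile version of the same path.
RewriteCond %{HTTP_USER_AGENT} (ipad|iphone|ipod|mobile|android|blackberry|palmos|webos) [NC]
RewriteCond %{REQUEST_URI} !^/mobile/
RewriteRule ^(.*)$ /mobile/$1 [R=302,L]
```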
Take a look at Mobile Redirect using htaccess for the different user-agents others have been using successfully in their .htaccess files.
Using JavaScript
I remember a couple of years ago, when I used to visit Google Search, it would profile my browser and redirect to itself with the display resolution attached to the query string. I believe they did it for analytics rather than for layout. Now they encode everything (even their cookies), so you can't tell exactly what they're making a note of.
If you're interested in something similar, JavaScript can help you with that: window.screen.width and window.screen.height will give you the client's resolution. If you're only interested in the actual screen space the browser has to render your site, use availWidth and availHeight instead.
So, your <script> could redirect to resolution-specific pages like this:
<script type="text/javascript">
    if ((screen.width >= 1280) && (screen.height >= 720)) {
        window.location.replace('http://example.com/index-hi.html');
    } else if ((screen.width >= 1024) && (screen.height >= 600)) {
        window.location.replace('http://example.com/index-med.html');
    } else {
        window.location.replace('http://example.com/index-low.html');
    }
</script>
Or, you could set a cookie and serve pages from a resolution specific directory using .htaccess.
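For instance (a sketch: the cookie name res and the directory layout are assumptions, with the cookie set client-side by your JavaScript):

```apache
# Serve index.html from a resolution-specific directory (/hi/, /med/ or
# /low/) based on a "res" cookie set by the client-side script.
RewriteCond %{HTTP_COOKIE} res=(hi|med|low) [NC]
RewriteRule ^index\.html$ /%1/index.html [L]
```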
Related
Given a website (for example stackoverflow.com) I want to download all the files under:
(Right Click) -> Inspect -> Sources -> Page
Please try it yourself and see the files you get.
How can I do that in python?
I know how to retrieve the page source but not the source files.
I tried searching for this multiple times with no success; there is some confusion between the sources (files) and the page source.
Please note, I'm looking for an approach or example rather than ready-to-use code.
For example, I want to gather all of these files under top:
To download website source files (mirroring websites / copying source files from websites), you may try the PyWebCopy library.
To save any single page -
from pywebcopy import save_webpage

save_webpage(
    url="https://httpbin.org/",
    project_folder="E://savedpages//",
    project_name="my_site",
    bypass_robots=True,
    debug=True,
    open_in_browser=True,
    delay=None,
    threaded=False,
)
To save a full website -
from pywebcopy import save_website

save_website(
    url="https://httpbin.org/",
    project_folder="E://savedpages//",
    project_name="my_site",
    bypass_robots=True,
    debug=True,
    open_in_browser=True,
    delay=None,
    threaded=False,
)
You can also check out tools like HTTrack, which comes with a GUI, to download (mirror) website files.
On the other hand, to download a web page's source code (the HTML itself) -
import requests

url = 'https://stackoverflow.com/questions/72462419/how-to-download-website-source-files-in-python'
html_output_name = 'test2.html'

req = requests.get(url, headers={
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.67 Safari/537.36'})

with open(html_output_name, 'w', encoding='utf-8') as f:
    f.write(req.text)
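As for the "files under Sources" the question asks about: those are simply the assets (scripts, stylesheets, images) the page references, so one approach is to parse the HTML and collect their URLs before downloading each one. A minimal sketch using only the standard library (the AssetCollector class and the sample HTML are illustrative, not part of any library):

```python
import urllib.parse
from html.parser import HTMLParser

class AssetCollector(HTMLParser):
    """Collect absolute URLs of scripts, stylesheets and images on a page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        url = None
        if tag == 'script' and 'src' in attrs:
            url = attrs['src']
        elif tag == 'link' and attrs.get('rel') == 'stylesheet':
            url = attrs.get('href')
        elif tag == 'img' and 'src' in attrs:
            url = attrs['src']
        if url:
            # Resolve relative references against the page URL.
            self.assets.append(urllib.parse.urljoin(self.base_url, url))

html = ('<html><head><script src="/js/app.js"></script>'
        '<link rel="stylesheet" href="style.css"></head>'
        '<body><img src="img/logo.png"></body></html>')
collector = AssetCollector('https://example.com/page/')
collector.feed(html)
print(collector.assets)
```

Each collected URL can then be fetched with requests and written to disk, mirroring the directory structure if desired.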
The easiest way to do this is definitely not with Python.
As you seem to know, you can download the code of a single page with Command click > View Page Source or the Sources tab of Inspect Element. To download all the files in a website's structure, you should use a web scraper.
For Mac, SiteSucker is your best option if you don't care about having all of the site assets (videos, images, etc. hosted on the website) downloaded locally to your computer. Videos especially can take up a lot of space, so this is sometimes helpful. (SiteSucker is not free, but pretty cheap.) The GUI in SiteSucker is self-explanatory, so there's no learning curve.
If you do want all assets to be downloaded locally on your computer (you may want to do this if you want to access a site’s content offline, for example), HTTrack is the best option, in my opinion, for Mac and Windows. (Free). HTTrack is harder to use than SiteSucker, but allows more options about which files to grab, and again will download things locally. There are many good tutorials/pages about how to use the GUI for HTTrack, like this one: http://www.httrack.com/html/shelldoc.html
You could also use wget (Free) to download content, but wget does not have a GUI and has less flexibility, so I prefer HTTrack.
I would like to see if anyone knows how chrome-custom-tabs handles the prompt window for an Android permission.
Let's take location as an example:
if we list it in the manifest, then in the WebView case, I get a chance to decide whether I want to show the permission prompt.
In the regular browser case, Chrome will show the permission prompt.
Does anyone know how chrome-custom-tabs handles the prompt window? Also, chrome-custom-tabs shows me the same user-agent as the mobile browser: Mozilla/5.0 (Linux; Android 6.0; Nexus 6 Build/MPA44I) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.84 Mobile Safari/537.36. Is this all expected?
chrome-custom-tabs is just a Chrome app, with customized UI.
Hence:
chrome-custom-tabs does not do anything special about permissions; it requests location the same way Chrome would, and if granted, the permission applies only to Chrome. As far as I know, there are no plans to share permissions between Chrome and other apps.
chrome-custom-tabs intentionally uses the same user-agent as Chrome for Android.
I'm having a strange issue:
I can't login at http://maskatel.info/login, when I try to click the login button (the blue button that says Connexion), nothing happens at all.
So I opened up the developer tools in Chrome (f12) and saw the following JavaScript error every time I click the button: Uncaught ReferenceError: WebForm_PostBackOptions
I then found out that this function should be in WebResource.axd, I then went to the Resources tab in the developers tool and found out that this 'file' is not there and it is not loaded in the HTML source.
I tried a lot of different things without any success and finally tried another browser; it works fine in every other browser. The same page previously worked perfectly in Chrome on the same computer.
So then I tried to click the small gear in the Chrome developer tools and went to the overrides section and changed the UserAgent to something else and refreshed the page and it works perfectly with any other UserAgent string. The correct UserAgent when not overridden for my browser is Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.57 Safari/537.36
So right now I really don't know what to do next:
Is this issue related to the latest version of Chrome? I have not found any information on release dates for Chrome.
It could also be a DotNetNuke problem, but I doubt it since nothing there changed before and after the problem appeared.
It could also be ASP.NET related (I renamed App_Browsers to App_Browsers2) and still no luck.
Any help would be appreciated.
A data file which addresses this issue is available to download from the following url.
http://51degrees.mobi/portals/0/downloads/51Degrees.mobi-Premium-20130823.dat
.NET users will need to perform the following steps.
Download the above data file.
Replace the file 51Degrees.mobi-Premium.dat in the App_Data folder of the web site with the downloaded data file, renaming it to 51Degrees.mobi-Premium.dat.
Restart the application pool servicing the web site to apply the new data file.
Some configurations may place the 51Degrees.mobi-Premium.dat file in a location other than App_Data. The web site's current location can be found in the 51Degrees.mobi.config file, located in either the web site's root folder or the App_Data folder. See the following page for more details.
https://51degrees.mobi/Support/Documentation/NET/WebApplications/Config/Detection.aspx
Please contact us if you have any issues deploying this data file.
We are having this problem on all our DNN6 sites at work (we can't update to DNN7 since we are stuck on SQL Server 2005 and Windows 2003 boxes). DNN support ticket response was:
"This is a known issue with the Google Chrome update to version 29, the browser is having many issues with ASP.Net pages. The current workaround is to use a different web browser until Google can release a new update."
but I know big ASP.NET sites like Redbox and msdn.microsoft.com are working fine, so it's definitely not a global problem.
Our servers are patched by our infrastructure folks, and they are usually up to date (patched regularly), so not sure what specifically is the issue.
I have personal sites on DNN6 (3essentials hosting) that are working fine. So it's definitely not all DNN6/7 sites that are having problems. Maybe it's DNN6 sites that are running on Windows 2003 boxes?
It looks like someone has found the culprit at Google. It is related to 51Degrees, which reports a version 0 for the Chrome 29 user-agent string.
More details at https://code.google.com/p/chromium/issues/detail?id=277303
I tried to update the premium data (it is a Professional edition installation) but I only get the same version that was already there, dating from 2013-08-15 and having 109 properties.
Then I tried renaming App_Data/51Degrees.mobi-Premium.dat to add a .old at the end, but the system re-downloads that file (the same one, it looks like) to that directory.
So I went ahead and commented out the fiftyOne configuration in the web.config file, which instantly made the site work again for Chrome 29.
Let's hope for an update with a better solution, but I think the culprit has finally been found at least.
On a DNN 7.1.0 site that uses the Popup feature in DNN (the login window opens in a modal popup), the login functionality appears to work fine.
I would recommend you try the Popup option, and if that doesn't work, look at upgrading to the latest release of DNN.
update: I tested the same 7.1.0 site using /login instead of the login popup and it also still works fine, so I would encourage you to look at upgrading your DNN instance.
On my device, an S60 5th edition:
OS: Symbian S60 5th Edition Browser: 7.1
Useragent: Mozilla/5.0 (SymbianOS/9.4; Series60/5.0 NokiaN97-1/12.0.024; Profile/MIDP-2.1 Configuration/CLDC-1.1; en-us) AppleWebKit/525 (KHTML, like Gecko) BrowserNG/7.1.12344
There is no issue with cookies; cookies are working normally. But the LinkButton control does not work. Actually, as I understand it, the ASP.NET server does not send the JavaScript code to perform a postback. That's why it says '__doPostBack()' not found.
It got fixed if I change target framework version from 4.0 to 3.5.
What is the easiest solution for this problem?
The reason that some controls do not work in the Symbian browser is that .NET injects a JavaScript function called __doPostBack() into the page.
The controls call this function to cause a postback.
Symbian has a problem with the double underscore and cannot find the function.
Although I do not know how to fix it, I do have a workaround;
My default.aspx page has a JavaScript function called __Redirect(), which redirects to the normal page that uses link buttons etc. Devices that do not recognize the double underscore are not redirected and stay on this "basic" page, where I use hyperlinks etc.
I think you should check whether the same link button fails on all sites. I think it's a site-specific issue, considering the fact that changing the .NET version makes the button work. As for the "easiest" solution, it's certainly not on the device.
Been getting a lot of errors like this lately. I did some research and found that this happens because HTML was detected in the input text. Does this mean that someone is trying to hack my website?
I can stop this from happening by turning off page validation, but that hardly seems like a good solution.
Here is some info from one of the errors:
HTTP_CONNECTION: keep-alive
HTTP_ACCEPT: */*
HTTP_ACCEPT_ENCODING: gzip, deflate
HTTP_ACCEPT_LANGUAGE: en-us
HTTP_HOST: www.easymuaythai.com
HTTP_REFERER: http://www.google.com/search?q=symbolic+tattoos&hl=en&client=safari&tbo=d&source=lnms&tbm=isch&ei=u5c1T8L-JfLYiAKRs5ixCg&sa=X&oi=mode_link&ct=mode&cd=2&ved=0CAkQ_AUoAQ&biw=1024&bih=622
HTTP_USER_AGENT: Mozilla/5.0 (iPad; CPU OS 5_0_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9A405 Safari/7534.48.3
Don't know if it matters, but I have a rule in my IIS to prevent image hotlinking.
Thanks.
A few days ago, while working on an ASP.NET 4.0 web project, I ran into an issue: when a user entered unencoded HTML content into a comment text box, s/he got something like the following error message:
"A potentially dangerous Request.Form value was detected from the client".
This was because .NET detected something in the entered text which looked like an HTML statement. Then I found a link about Request Validation, which is a feature put in place to protect your application against cross-site scripting attacks, and followed it accordingly.
To disable request validation, I added the following to the existing "page" directive in that .aspx file.
ValidateRequest="false"
But still I got the same error.
Later I found that, for .NET 4, we need to add requestValidationMode="2.0" to the httpRuntime configuration section of the web.config file, like the following:
<httpRuntime requestValidationMode="2.0"/>
But if there is no httpRuntime section in the web.config file, then this goes inside the <system.web> section.
If anyone wants to turn off request validation globally, add the following line to the web.config file within the <system.web> section:
<pages validateRequest="false" />
(source)
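Putting the two pieces together, the relevant fragment of web.config could look like this (a sketch combining the settings mentioned above):

```xml
<system.web>
  <!-- revert to .NET 2.0 validation behaviour so ValidateRequest is honored -->
  <httpRuntime requestValidationMode="2.0" />
  <!-- optional: turn request validation off for every page -->
  <pages validateRequest="false" />
</system.web>
```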
First, the string you give here looks like a Google Images search for two words (symbolic tattoos) that ended up at your site. Maybe I'm wrong, but the words seem to have to do with your site.
This request is 99.9% certainly not an attack.
Now, ASP.NET by default takes care of every input that might be used for script injection or to render anything on the page. Once you are familiar with this and know what you must do, you can disable it.
What to do: you can read any input, but write it to the page using HtmlEncode, or UrlEncode if you place it in a URL, or AttributeEncode if you place it in attributes. If you insert it into SQL, also take care to build your SQL queries with parameters.
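The same principle applies in any stack: store input verbatim, encode on output, and bind SQL parameters. An illustrative sketch in Python (the answer's context is ASP.NET's HtmlEncode; html.escape plays the same role here, and the table and input are made up):

```python
import html
import sqlite3

user_input = '<script>alert("xss")</script>'

# Encode before rendering, so the markup is displayed rather than executed.
safe_output = html.escape(user_input)
print(safe_output)

# Pass user input to SQL as a bound parameter, never by string concatenation.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE comments (body TEXT)')
conn.execute('INSERT INTO comments (body) VALUES (?)', (user_input,))
stored = conn.execute('SELECT body FROM comments').fetchone()[0]
```

Note that the raw text is stored unchanged; encoding happens only at the point of output.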
HotLinking
The image hotlinking rule just checks whether the referrer comes from your site, and I do not think it has anything to do with this error. However, because this is an image search, maybe when someone clicks on this Google image, Google creates a script to show the image in an overlay, and that somehow throws the error... hmm, it might be related.
Update
I found that the link you gave is here. Here is what your users come and see from the above referrer. From Google Chrome it does not produce any error.
This link is found on the above reference link:
http://www.google.com/search?q=symbolic+tattoos&hl=en&client=safari&tbo=d&source=lnms&tbm=isch&ei=u5c1T8L-JfLYiAKRs5ixCg&sa=X&oi=mode_link&ct=mode&cd=2&ved=0CAkQ_AUoAQ&biw=1024&bih=622