Xamarin.Forms 4 URI Route Navigation

I'm trying to get a head start with the pre-releases of Xamarin.Forms 4 and I've hit a roadblock with some of the navigation features that have been included. The Microsoft docs state:
Shell includes a URI-based navigation experience. URIs provide an improved navigation experience that permits navigation to any page in the application, without having to follow a set navigation hierarchy. In addition, it also provides the ability to navigate backwards without having to visit all of the pages on the navigation stack.
I have been able to set up my Shell file and set my route names. The following code works as expected and navigates me to the appropriate Page:
private async void NavigateToAbout_Execute()
{
    await (App.Current.MainPage as Shell).GoToAsync("app://Testapp/Test/modal");
}
The Problem:
Once I have successfully navigated to this Page, it seems I have no way of navigating away from the Page. In the quote above it states that we should have the ability to navigate backwards (to the previous Page) but I can't see any way of achieving this. Has anyone had some experience with this yet? I appreciate it's a pre-release so I may not get a response but any thoughts would be helpful.

I had the same question. It seems to be OK now. This works for me:
await Shell.Current.Navigation.PopAsync();
You can also have a look at this property:
Shell.Current.Navigation.NavigationStack
It's a list of all the detail pages that were pushed on top of the pages defined in the Shell. I've noticed that index 0 in this list is always null; it represents the last page of the Shell itself. The "detail" pages pushed onto the Shell with Shell.Current.GoToAsync("detailpage") occupy indexes 1 to n in the NavigationStack.

Should you use next/link (prefetched client side transitions) for pages with any dynamic content?

From the next/link documentation: the <Link> component enables client-side transitions and link prefetching, which are great features, but maybe not for all cases.
Please see the caveat I've run into. Let's say I have the following pages:
Home - Some landing page with a nav bar
Latest - Here I can see my latest posts
Admin - Here I can add more posts
The Latest page from the example above uses getStaticProps with revalidate. Something like:
export const getStaticProps: GetStaticProps<HomeRoute> = async () => {
  const preloadedState = await getPreloadedState();
  return {
    revalidate: 1,
    props: {
      preloadedState
    }
  };
};
In theory, after 1 second, the next request should receive the last stale response and trigger a new static regeneration to be served to subsequent requests. After another second the process repeats, so you get fresh data roughly every second, which is pretty much immediate.
Now, see the caveat I've run into with next/link:
User lands on the Home page. There is a Link on the nav bar pointing to Latest. That link will be prefetched by next/link.
In some other browser, an admin goes to the Admin page and adds one more post (which should appear on the Latest page at some point).
Now user clicks on the Latest page link. The new post is not there.
Clicks on Home again. And clicks again on Latest. New post is still not there and never will be.
The transitions in this case are blazing fast, which is nice. But from my experience so far, I think that user is locked inside a version of my website where the new post will never be available, because that first prefetch happened at a time when the new post didn't exist.
The only way that user will ever see the new post is if he/she presses F5 to do a full website reload. And it might be necessary to refresh twice, because the 1st one might return the previous stale version while triggering the regeneration for the next one.
I mean, what is the workaround to this issue? Should I not use next/link for pages that contain any dynamic data? Should I just use normal <a> tags?
UPDATE
From: https://nextjs.org/docs/basic-features/data-fetching#statically-generates-both-html-and-json
From that part of the docs, we can see that client-side transitions will indeed not trigger a page regeneration, because they don't call getStaticProps; they only fetch the pre-built JSON object for the page to use as its props.
AFAIK, it means that you'll be locked to the version of the page that existed when you first visited the website. You can go back and forth and nothing in the pages would change, because the JSON data is probably cached on the client anyway.
PS: I've tested this (like I've mentioned in the question above) and this is exactly what happens.
So how to workaround that? I would like for users that keep an open tab of my website to be able to get updates for the pages when they navigate from one page to the other.
A POSSIBLE SOLUTION
Set some kind of idle-time counter, and if the user accumulates, say, 10 minutes of idle time (meaning they left the tab open), then whenever they come back and perform some action, refresh the whole website to make sure they get the new version of the pages.
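One way I could approximate that idle check is with the Page Visibility API. This is only a rough sketch of the idea; the hook name, the 10-minute threshold, and the plain window.location.reload() are placeholders, not tested code:

import { useEffect } from "react";

// Reload the page when the user comes back after the tab has been hidden for
// longer than IDLE_LIMIT_MS. The 10-minute threshold is arbitrary.
const IDLE_LIMIT_MS = 10 * 60 * 1000;

export function useReloadAfterIdle() {
  useEffect(() => {
    let hiddenSince: number | null = null;

    const onVisibilityChange = () => {
      if (document.visibilityState === "hidden") {
        hiddenSince = Date.now();
      } else if (hiddenSince !== null && Date.now() - hiddenSince > IDLE_LIMIT_MS) {
        // A full reload, unlike a next/link transition, fetches fresh HTML from
        // the server instead of reusing the cached client-side JSON.
        window.location.reload();
      }
    };

    document.addEventListener("visibilitychange", onVisibilityChange);
    return () => document.removeEventListener("visibilitychange", onVisibilityChange);
  }, []);
}

Calling this hook from a custom _app component would apply it to every page.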
Has anyone faced that problem before?
I've posted this very same question in several forums and this is the response I've got:
It seems what you described is true. next/link caches results on the client side, and your visitor will not fetch a revalidated result out of the box unless there is a full-page reload.
Depending on the likelihood of content changes, you might want to use <a> instead, or you can look at some client-side content reload strategy that kicks in after mount and queries the data source for updated content.
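A rough sketch of that second suggestion (refetch after mount and overwrite the statically generated props) could look like this; the /api/latest endpoint and the Post shape are made-up names for illustration:

import { useEffect, useState } from "react";

// Hypothetical shape of a post; adjust to the real data model.
type Post = { id: string; title: string };

export default function Latest({ preloadedState }: { preloadedState: Post[] }) {
  const [posts, setPosts] = useState<Post[]>(preloadedState);

  useEffect(() => {
    // After the client-side transition, ask the server for fresh data and
    // replace the possibly stale prefetched props.
    fetch("/api/latest")
      .then((res) => res.json())
      .then((fresh: Post[]) => setPosts(fresh))
      .catch(() => {
        // Keep the prefetched data if the request fails.
      });
  }, []);

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}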
Given that fact, I'll stick to using next/link and client-side transitions. But I'll also use something like a setInterval() to do a full website reload from time to time, so I'm sure my users will keep getting revalidated pages eventually.

Plone - embed the content of an internal link in a page

I have a list of pages that have to appear in different places of my Plone site. If I use an internal link, I see an HTML link on the page, but instead I would like to see the embedded content of the linked page.
I've tried to install some link plugins (Smart Link, vs.alias...) but I'm not able to find the solution.
I'm using Plone 4.3.
I don't know of any Plone plugin that satisfies your requirement.
A long time ago I wrote this small piece of JS to show internal links in a popup using Plone's prepOverlay.
In this case you can put a custom "popup" CSS class on the internal link with TinyMCE.
It simply shows the content area of the given URL.
jq(function() {
    // Turn every internal link carrying the "popup" class into an AJAX overlay
    jq('a.popup').prepOverlay({
        subtype: 'ajax',
        // Load only the #content area of the linked page into the overlay
        urlmatch: '$',
        urlreplace: ' #content > *'
    });
});
I guess this is a good starting point for your own implementation.
You could think of a criterion such as location, content type, etc. to decide which articles should be picked (in the worst case use collective.flag), then fetch them with a collection to give you the links as a result list, and set its view to all_content, a nice feature introduced in the Plone 4 series.

Restrict user to a single window

In a project I'm working on (ASP.NET 3.5 Web Forms), there is a requirement to restrict the user to working in only one window/tab at a time. I found this post detailing a solution: http://www.codeproject.com/KB/aspnet/MultipleTabWindows.aspx
However, in one of the pages of my project there is a requirement to open a private (related to the logged-in user) PDF document in a new window. The way I'm doing it is by building a request to a page inside my project and, from that page, streaming the PDF document. So, the URL of my document looks something like: http://localhost:4087/PdfPage.aspx?type=1&id=2
Q: Is there a way to bypass the "single window" rule for just the PDF page, or should I say "No, the only way is to open the PDF in the same window"?
Thanks in advance
When I used the example, I put the code in the master page that most of the pages use. The exceptions are links to PDF documents, the login page, and assorted error pages.
If that doesn't work, you could add logic to the JavaScript block that looks at window.location to allow certain pages through.
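Something along these lines might work as the exemption check. This is only a sketch, not the actual code from the CodeProject article, and the list of exempt pages is illustrative:

// Pages exempt from the single-window rule (illustrative list).
var exemptPaths = ["/pdfpage.aspx", "/login.aspx"];
var currentPath = window.location.pathname.toLowerCase();

var isExempt = false;
for (var i = 0; i < exemptPaths.length; i++) {
    if (currentPath.indexOf(exemptPaths[i]) !== -1) {
        isExempt = true;
        break;
    }
}

if (!isExempt) {
    // ...run the single-window enforcement from the CodeProject example here...
}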
Someone needs to say it: implementing any kind of security through JavaScript is inherently weak. All this really gets you is a shortcut to state management.
Under ideal conditions you should work with your client to make them receptive to the advice their IT department has to offer, instead of them mandating the implementation of whatever feature they see someone else use. Easier said than done, I know.
Best of luck!!

Why is search functionality not working on this page?

We deliver micro-site content for our client. Our content is injected into a wrapper supplied by another developer.
To deliver our content we host the wrapper as well as the content. The user can access this at
http://fundcentre.[redacted].ie/ (try a search for '[redacted]')
For the other content that is not ours, the other developer hosts a similar (though slightly different) wrapper and delivers the content. The user accesses this here:
http://www.[redacted].ie/ (try a search for '[redacted]')
The wrapper contains a search box, which does not work for us but works for the other developer. I took a look at the network traffic with Firebug, and it appears that when I do the search from the wrapper we're hosting, I get a "407 Proxy Access Denied" error. My guess is their proxy has a problem with the fact that the search is being conducted from a page hosted outside its scope.
It was also suggested that there were JavaScript errors on the page preventing the search from executing, but I can't see any. Also, I don't think I'd get as far as the proxy error if that were the case.
I don't really understand this stuff too well though, so could somebody with a bit more experience please take a look and maybe shed some light on this for me? Thanks.
The problem appears to be that the search box and the button next to it (the magnifying glass) are both causing the whole page form to submit after they try to set the page URL to the search URL. When you type into the search field and hit "Enter", the outer form that's wrapped around the entire page is submitted. When the magnifying glass is clicked, it tries to load the search results but because it's an image button the click also causes the outer form to be submitted.
I'm not exactly sure how best to fix it, partly because I think the entire page design should be thrown out. But if you're stuck with it, it might be possible to get it working by ditching that inline JavaScript on the button (since it's not working anyway) and then wrapping the search stuff in its own <form> directed to the search page. Having a <form> within a <form> is bad mojo, but that's hard to avoid in a design that puts the whole page in a <form> to start with.
Alternatively, you could try handling keypress events on the search input to detect "Enter", and have that handler and the code on the button both return "false" to stop the outer form submission.
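As a sketch only (the real element ids and search URL on that page will differ), that might look like:

// Hypothetical element ids and search URL; substitute the wrapper's real ones.
var searchBox = document.getElementById("searchBox");
var searchButton = document.getElementById("searchButton");

function runSearch() {
    // Navigate to the results page ourselves instead of posting the outer form.
    window.location.href = "/search.aspx?q=" + encodeURIComponent(searchBox.value);
    return false; // cancel the outer <form> submission / image-button postback
}

searchBox.onkeypress = function (e) {
    var key = (e || window.event).keyCode;
    if (key === 13) { // Enter
        return runSearch();
    }
};

searchButton.onclick = runSearch;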
Edit: as to why that works on the other site, it appears to me that there the outer form really is the "search" functionality somehow, as they don't have a click handler on the search button at all, so all it will do is submit the outer form anyway.
Edit again: also, I never see that "proxy" issue. The search from your page works fine for me if I first fix the inline JavaScript on the button so that it ends with ; return false. That may actually be all you need to do.
It could be a problem that your <form> tags' actions are pointing to different scripts. One points to "Home.aspx" and the other to "/Default.aspx".
The two links are on different subdomains, so maybe you want to change the action on that subdomain so it contains the full location (e.g. "http://www.newireland.ie/Default.aspx").

Issue with IHttpHandler and relative URLs

I've developed an IHttpHandler class and configured it with verb="*" path="*", so I'm handling all requests with it in an attempt to create my own REST implementation for a test web site that generates the HTML dynamically.
So, when a request for a .css file arrives, I have to do something like context.Response.WriteFile(Server.MapPath(url)); the same goes for pictures and so on. I have to handle every response myself.
My main issue is when I put relative URLs in the anchors. For example, I have a main page with a link like "Go to Page 1", and in Page 1 I have another link, "Go to Page 2". Page 1 and Page 2 are supposed to be at the same level (http://host/page1 and http://host/page2), but when I click "Go to Page 2", the handler receives this URL: ~/page1/~/page2 ... which is a pain, because I have to do url = url.Substring(url.LastIndexOf('~')) to clean it up, although I feel there is nothing wrong and this behavior is totally normal.
Right now I can cope with it, but I think in the future this is going to bring me headaches. I've tried setting all the links as absolute URLs using the information from context.Request.Url, but that's also a pain :D, so I'd like to know if there is a nicer way to do this kind of thing.
Don't hesitate to give me pretty obvious answers, because I'm pretty new to web development and I'm probably missing something basic about URLs, HTTP and so on.
Thanks in advance and kind regards.
First of all I would take a look at the output HTML delivered to the browser and specifically the links that you are describing.
You say that the link is "Go to Page 2", but according to your result I would guess its href is something like "~/page2" rather than "/page2".
You can confirm this by placing a breakpoint in the handler; when it triggers with "~/page1/~/page2", look in the address bar of your browser and it should say something like "http://www.example.com/page1/~/page2".
You should first look at the code generating the link. If it is generated from some kind of function call, make sure you get the web address and not the script address.
In any case, these kinds of links that switch between first-level pages should all start with a "/", indicating that their location is relative to the root of your website rather than to the currently shown page.
