I'd like to use the offline routing feature, so I downloaded the MapPackage for my region. I know the download went well because I can zoom in on towns in the region while offline and all details are shown.
But when I try to compute a route on this region, it does not work, and I get no error message. The onCalculateRouteFinished callback is just never called.
I'm using the RouteManager#calculateRoute method (as for online calculation). Should I be using something else to make it work?
I'm using an evaluation license; is that related?
Thank you
Edit to answer Marco's comment:
I'm using the "Bretagne" region in France/Europe.
RouteOptions ro = new RouteOptions();
ro.setTransportMode(RouteOptions.TransportMode.TRUCK);
ro.setRouteType(RouteOptions.Type.FASTEST);

RoutePlan routePlan = new RoutePlan();
routePlan.setRouteOptions(ro);
// addWaypoint(coord) is called several times; all coordinates are within the Bretagne region

RouteManager routeManager = new RouteManager();
routeManager.calculateRoute(routePlan, new RouteManager.Listener() {
    @Override
    public void onProgress(int i) {
        // Log.d(TAG, "progress: " + i);
    }

    @Override
    public void onCalculateRouteFinished(RouteManager.Error error, List<RouteResult> list) {
        if (error != RouteManager.Error.NONE) {
            Log.e(TAG, "Could not calculate route: " + error.name());
            return;
        }
        if (list != null && list.size() > 0) {
            // do something with the result
        }
    }
});
Using TransportMode.CAR instead of TRUCK fixed the problem. As stated by Marco in the comments, offline routing is only available for cars.
Our Google domain has groups (synced copies of our Active Directory email listservs/distribution groups) that have a lot of external accounts (currently kept as contacts in Active Directory).
As part of an intranet site I'm building, I'm trying to do a mass search-and-replace of individual contact email addresses when, for example, a school district changes its domain name. One of the visual/verification steps I'm working on is to list the Google group membership of any selected external account, but I'm getting mixed results. For some accounts it lists the groups properly, and for others it doesn't pull any. I have verified the external account's group membership in both Active Directory and in Google Admin group management, but when I query Google via code I don't get valid results every time... What am I missing? Code below.
// in Global.asax
public static List<string> GOOGLE_GetListOfUsersGroups(string useremail)
{
List<string> groupList = new List<string>();
try
{
// stripped out credential/service setup...
var groups = service.Groups.List();
groups.UserKey = useremail;
Groups gs = groups.Execute();
if (gs != null && gs.GroupsValue != null) // GroupsValue is null when the response contains no "groups" entry
{
foreach (Google.Apis.Admin.Directory.directory_v1.Data.Group g in gs.GroupsValue)
groupList.Add(g.Email);
}
}
catch (Exception ex)
{
SendERROREmail("GLOBAL<HR>GOOGLE_GetListOfUsersGroups()<HR>useremail:" + useremail + "<HR>" + ex.ToString());
}
return groupList;
}
and the consuming function:
// in Page.aspx
protected void ddlADExternalContacts_SelectedIndexChanged(object sender, EventArgs e)
{
lbContactsGoogleGroups.Items.Clear();
if (ddlADExternalContacts.SelectedIndex > 0)
{
//show what google has for same group
List<string> memberList = Global.GOOGLE_GetListOfUsersGroups(ddlADExternalContacts.SelectedValue);
if (memberList != null)
{
foreach (string s in memberList)
lbContactsGoogleGroups.Items.Add(new ListItem(s, s));
}
}
}
Also, does anyone have a good example of how to handle this in Google's 'preferred' JSON format rather than the API client route?
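For what it's worth, this is roughly what the underlying REST call behind Groups.List looks like. A minimal sketch only: it assumes you already have an OAuth 2.0 access token from your existing credential/service setup, and it simply returns the raw JSON.
using System;
using System.Net;

// Minimal sketch of the raw Directory API call behind Groups.List.
// "accessToken" is assumed to come from your existing OAuth2/service-account setup.
public static string GetGroupsJson(string userEmail, string accessToken)
{
    string url = "https://www.googleapis.com/admin/directory/v1/groups?userKey=" +
                 Uri.EscapeDataString(userEmail);

    using (var client = new WebClient())
    {
        client.Headers[HttpRequestHeader.Authorization] = "Bearer " + accessToken;
        // Returns JSON such as: { "kind": "admin#directory#groups", "groups": [ ... ] }
        return client.DownloadString(url);
    }
}
The response body is the same JSON the 'Try it' console shows, so it can be useful for comparing what the client library gives you against the raw service output.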
UPDATE: OK, it's not my code, it's something with the group/Google. When I use the 'Try it' functionality on the Admin SDK site, I get the same results: the groups that work in my code also work there, and the same groups that should be showing results return nothing...
{
"kind": "admin#directory#groups",
"etag": "\"HKdfSgTnCxrWl3RtRnlZSCPY3NjdWJxz53nrhwSz7ob4/oMWMqbsluP5m2PCo8Y7WmWeHGP4\""
}
Not that that helps me any, as there's no error or anything, just the 'no groups' result as if it can't find the external account...
UPDATE 2: OK, based on what I'm seeing after some testing, I have a sneaking suspicion that Google is doing some validation of emails before checking for group membership and reporting anything. I.e., if the email being searched for is no longer valid (the client's server doesn't respond that the account is reachable/enabled/exists...), it won't bother going any further... I will try it out with a few more email addresses that I know should be invalid and update later.
It looks like what you are experiencing might be a bug.
It has been reported on the Google Issue Tracker here.
What you can do in this situation is star the issue above and, optionally, add a comment saying that you are affected by it.
I hope I have another simple question for someone out there.
I'm currently working on a website where a user posts information to the server, and upon posting they are taken to a verification page that tells them whether it was successfully stored in the .mdb table or not. I have had no problems doing this for a single table, but I seem to be running into an issue when verifying the information stored in multiple tables.
Here is the code I'm using that works for posting to a single table:
protected void Page_Load(object sender, EventArgs e)
{
    txtVerifiedInfo.Text = Session["txtUserFirstName"].ToString() +
        "\n" + Session["txtUserLastName"].ToString() +
        "\n" + Session["txtUserName"].ToString() +
        "\n" + Session["txtUserPassword"].ToString();

    // Check if the record was successfully saved in the tblUserLogin table and print the appropriate message in the text box txtVerifiedInfo
    if (clsDataLayer.SavePersonnel(Server.MapPath("App_Data\\WSC_DB.mdb"),
        Session["txtUserFirstName"].ToString(),
        Session["txtUserLastName"].ToString(),
        Session["txtUserName"].ToString(),
        Session["txtUserPassword"].ToString()))
    {
        txtVerifiedInfo.Text = txtVerifiedInfo.Text +
            "\nThe information was successfully saved!";
    }
    else
    {
        txtVerifiedInfo.Text = txtVerifiedInfo.Text +
            "\nThe information was NOT saved.";
    }
}
Here is what I have attempted, without much luck at all:
if (clsDataLayer.Saveneworder(Server.MapPath("App_Data\\WSC_DB.mdb"),
        Session["txtfirstName"].ToString(),
        Session["txtlastName"].ToString(),
        Session["txtstreetAddress"].ToString(),
        Session["txtcity"].ToString(),
        Session["txtzipCode"].ToString())
    && clsDataLayer.Savenewitem(Server.MapPath("App_Data\\WSC_DB.mdb"),
        Session["jobType"].ToString(),
        Session["txtmediaContent"].ToString()))
{
    txtVerifiedInfo.Text = txtVerifiedInfo.Text +
        "\nThe Order successfully submitted!";
}
else
{
    txtVerifiedInfo.Text = txtVerifiedInfo.Text +
        "\n The order did not save, please return to the previous screen and verify all of your data is correct, thank you.";
}
I imagine I'm not that close to doing this correctly, but hopefully I'm in the ballpark.
Any help with this would be great.
Thank you for your time.
To simplify, this is roughly what you have now:
var path = Server.MapPath("App_Data\\WSC_DB.mdb");
if (clsDataLayer.Saveorder(path, [order parameters])
    && clsDataLayer.Saveitem(path, [item parameters]))
{
txtVerifiedInfo.Text = txtVerifiedInfo.Text +
"\nThe Order successfully submitted!";
}
else
{
txtVerifiedInfo.Text = txtVerifiedInfo.Text +
"\n The order did not save, please return to the previous screen and verify all of yourr data is correct, thank you.";
}
The code you've presented is not well structured, in that Server.MapPath is called more than once; that call might be better made just once, for example as an argument to the constructor of your data layer class.
What you may not be aware of is that && is a short-circuit operator: the second operand will not be evaluated if the first one is false. Is this the behaviour you want?
I suspect you want Saveorder AND Saveitem to succeed or fail together, as part of one transaction. That being the case, you might be better off writing a new method in your datalayer class that does exactly that.
I'm assuming the .mdb in your filename indicates you're using MS Access? I'm not sure whether it handles transactions as well as SQL Server (for instance), so you might need to write your own code to remove the order if its item cannot be saved (or whatever on-failure behaviour you need).
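For illustration, here is a rough sketch of what such a combined, all-or-nothing save method could look like. The table and column names (tblOrders, tblItems, and so on) are made up, and it assumes the data layer talks to the Access file via OleDb; adapt it to whatever Saveneworder and Savenewitem actually do.
using System.Data.OleDb;

// Sketch only: hypothetical table/column names, both inserts run in one transaction.
public static bool SaveNewOrderWithItem(string dbPath,
    string firstName, string lastName, string streetAddress, string city, string zipCode,
    string jobType, string mediaContent)
{
    using (var conn = new OleDbConnection(
        "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dbPath))
    {
        conn.Open();
        using (OleDbTransaction tx = conn.BeginTransaction())
        {
            try
            {
                using (OleDbCommand cmd = conn.CreateCommand())
                {
                    cmd.Transaction = tx;
                    cmd.CommandText =
                        "INSERT INTO tblOrders (FirstName, LastName, StreetAddress, City, ZipCode) " +
                        "VALUES (?, ?, ?, ?, ?)";
                    cmd.Parameters.AddWithValue("?", firstName);
                    cmd.Parameters.AddWithValue("?", lastName);
                    cmd.Parameters.AddWithValue("?", streetAddress);
                    cmd.Parameters.AddWithValue("?", city);
                    cmd.Parameters.AddWithValue("?", zipCode);
                    cmd.ExecuteNonQuery();
                }

                using (OleDbCommand cmd = conn.CreateCommand())
                {
                    cmd.Transaction = tx;
                    cmd.CommandText = "INSERT INTO tblItems (JobType, MediaContent) VALUES (?, ?)";
                    cmd.Parameters.AddWithValue("?", jobType);
                    cmd.Parameters.AddWithValue("?", mediaContent);
                    cmd.ExecuteNonQuery();
                }

                tx.Commit();   // both inserts are kept together...
                return true;
            }
            catch
            {
                tx.Rollback(); // ...or neither is kept
                return false;
            }
        }
    }
}
The page code then has a single call to check, and a partial save (order without item) can no longer occur.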
I am trying to retrieve the Binary Url of a multimedia component's file that is published as a dynamic Component Presentation.
I can see the URL in the Binaries table within the Broker database, but I can't seem to get the binary URL using either of the following bits of code:
Using SQLBinaryMetaHome:
using (var sqlBinMetaHome = new Com.Tridion.Broker.Binaries.Meta.SQLBinaryMetaHome())
{
int componentItemId = int.Parse(queryStringId.Split('-')[1]);
var binaryMeta = sqlBinMetaHome.FindByPrimaryKey(new TCDURI(publicationId, 16, componentItemId));
if (binaryMeta != null)
{
VideoBinaryUrl = binaryMeta.GetURLPath();
}
else
{
Logger.Log.ErrorFormat("Failed ot load via SQL Binary Meta {0}", queryStringId);
}
}
Using BinaryMetaFactory:
using (var b = new BinaryMetaFactory())
{
var binaryMeta = b.GetMeta(queryStringId);
if (binaryMeta != null)
{
VideoBinaryUrl = binaryMeta.UrlPath;
}
else
{
Logger.Log.ErrorFormat("Failed to load binary meta {0}", queryStringId);
}
}
I can load the Component Meta data using the ComponentMetaFactory.
Any ideas on why I can't load the Binary Meta? Am I on the right track?
Rob
It looks like your first example is importing (auto-generated) methods from an internal DLL (Tridion.ContentDelivery.Interop.dll). Please don't use those and stick to the ones in the Tridion.ContentDelivery namespace (Tridion.ContentDelivery.dll).
You can find the official documentation for the Content Delivery .NET API in CHM format on SDL Tridion World (click the link, log in to the site and click the link again). From that documentation comes this example:
//create a new BinaryMetaFactory instance:
BinaryMetaFactory binaryMetaFactory = new BinaryMetaFactory();
//find the metadata for the specified binary
BinaryMeta binaryMeta = binaryMetaFactory.GetBinaryMeta("tcm:1-123");
//print the path to the output stream:
if(binaryMeta!=null) {
Response.Write("Path of the binary: " + binaryMeta.UrlPath);
}
//Dispose the BinaryMetaFactory
binaryMetaFactory.Dispose();
The factory class is Tridion.ContentDelivery.Meta.BinaryMetaFactory from Tridion.ContentDelivery.dll. I indeed also can't find a GetBinaryMeta method in that class, so it seems there is a mistake in the code sample. The most likely method that you should use is GetMeta.
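In other words, the documentation snippet most likely should call GetMeta instead; a corrected line, reusing the identifiers from the sample above, would presumably be:
// Same as the documented example, but using the method that actually exists on BinaryMetaFactory
BinaryMeta binaryMeta = binaryMetaFactory.GetMeta("tcm:1-123");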
Is there a reason you are not using a Binary Link to get a Link object to the specific Variant of the binary you want? Keep in mind that any DCP may render multiple variations of your multimedia component. From the Link object you can then get the URL to the binary.
Look for BinaryLink in the documentation for more details.
Try this:-
BinaryMeta binaryMeta = b.GetBinaryMeta(queryStringId);
if(binaryMeta != null) {
VideoBinaryUrl = binaryMeta.URLPath;
}
I ran SQL Profiler on the code and noticed that, because I had deployed my test app, it wasn't calling the broker. Running the code within the actual Tridion-published site did hit the database, but it was passing the value "[#def#]" for the variantId column.
I have now got it working with the following code:
IComponentMeta cm = cmf.GetMeta(queryStringId);
if (cm != null)
{
TcmId = queryStringId;
Title = cm.TryGetValue("title");
Summary = cm.TryGetValue("summary");
Product = cm.TryGetValue("product");
if (cm.SchemaId == StreamingContentSchemaId)
{
VideoId = cm.TryGetValue("video_url");
IsVimeo = true;
}
else if (cm.SchemaId == WebcastSchemaId)
{
using (var b = new BinaryMetaFactory())
{
var binaryMeta = b.GetMeta(queryStringId, "tcm:0-" + cm.OwningPublicationId + "-1");
if (binaryMeta != null)
{
VideoBinaryUrl = binaryMeta.UrlPath;
}
else
{
Logger.Log.ErrorFormat("Failed to load binary meta {0}", queryStringId);
}
}
}
}
We are trying to use WF with multiple tracking participants, which essentially listen to different queries: one for activity states, one for custom tracking records that are a subclass of CustomTrackingRecord.
The problem is that we can use both TrackingParticipants individually, but not together: we never get our subclass of CustomTrackingRecord, only a plain CustomTrackingRecord.
If I put both queries into one TrackingParticipant and handle everything there, both work perfectly (which indicates the error is not where we emit them).
The code in question for the combined one is:
public WorkflowServiceTrackingParticipant ()
{
this.TrackingProfile = new TrackingProfile()
{
ActivityDefinitionId = "*",
ImplementationVisibility = ImplementationVisibility.All,
Name = "WorkflowServiceTrackingProfile",
Queries = {
new CustomTrackingQuery() { Name = "*", ActivityName = "*" },
new ActivityStateQuery() {
States = {
ActivityStates.Canceled,
ActivityStates.Closed,
ActivityStates.Executing,
ActivityStates.Faulted
}
},
}
};
}
When using two TrackingParticipants, we have two TrackingProfiles (with different names) that each contain one of the queries.
In the Track method, when using both participants separately, the lines:
protected override void Track(TrackingRecord record, TimeSpan timeout)
{
Console.WriteLine("*** ActivityTracking: " + record.GetType());
if (record is ActivityBasedTrackingRecord)
{
System.Diagnostics.Debugger.Break();
}
never cause the debugger to break. When using only the one participant that tracks our CustomTrackingRecord subclass (ActivityBasedTrackingRecord), it works.
Does anyone else know about this? For now we have combined both TrackingParticipants into one, but this has the bad side effect that we cannot dynamically extend the logging possibilities, which we would love to do. Is this a known issue with WF somewhere?
Version used: 4.0 SP1, Feature Update 1.
I guess I encountered the exact same problem.
This problem occurs due to the restrictions of the extension mechanism. There can be only one instance per extension type per workflow instance (according to Microsoft's documentation). Interestingly enough, though, one can add multiple instances of the same type to one workflow's extensions, which, in the case of TrackingParticipant derivatives, causes weird behaviour: only one of their tracking profiles is used for all participants of the respective type, but all their overrides of the Track method are invoked.
There is an (IMHO) ugly workaround for this: derive a new participant class from TrackingParticipant for each task (task1, task2, logging, ...).
Regards,
Jacob
I think that this problem isn't caused by the extension mechanism, since DerivedParticipant1 and DerivedParticipant2 are not the same type (the WF internals just use polymorphism on the base class).
I was running into the same issue: my Derived1 was tracking records that weren't described in its profile.
Derived1.TrackingProfile.Name was "Foo" and Derived2.TrackingProfile.Name was null.
I changed the name from null to "Bar" and it worked as expected.
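For illustration, a minimal sketch of the pattern that worked, assuming two hypothetical participant subclasses; the important part is simply that each TrackingProfile gets a distinct, non-null Name.
using System;
using System.Activities.Tracking;

// Hypothetical participant for activity state records; its profile has a unique name.
class ActivityStateTrackingParticipant : TrackingParticipant
{
    public ActivityStateTrackingParticipant()
    {
        TrackingProfile = new TrackingProfile
        {
            Name = "ActivityStateProfile",
            ActivityDefinitionId = "*",
            Queries =
            {
                new ActivityStateQuery
                {
                    States = { ActivityStates.Closed, ActivityStates.Faulted }
                }
            }
        };
    }

    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        Console.WriteLine("Activity record: " + record);
    }
}

// Hypothetical participant for the custom records; its profile name was null before the fix.
class CustomRecordTrackingParticipant : TrackingParticipant
{
    public CustomRecordTrackingParticipant()
    {
        TrackingProfile = new TrackingProfile
        {
            Name = "CustomRecordProfile", // a distinct, non-null name is what matters
            ActivityDefinitionId = "*",
            Queries = { new CustomTrackingQuery { Name = "*", ActivityName = "*" } }
        };
    }

    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        Console.WriteLine("Custom record: " + record);
    }
}
With distinct profile names, the cache lookup in the internal code below no longer hands both participants the same cached runtime profile.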
Here is the WF internal reference code, showing how the profile is selected:
// System.Activities.Tracking.RuntimeTrackingProfile.RuntimeTrackingProfileCache
public RuntimeTrackingProfile GetRuntimeTrackingProfile(TrackingProfile profile, Activity rootElement)
{
RuntimeTrackingProfile runtimeTrackingProfile = null;
HybridCollection<RuntimeTrackingProfile> hybridCollection = null;
lock (this.cache)
{
if (!this.cache.TryGetValue(rootElement, out hybridCollection))
{
runtimeTrackingProfile = new RuntimeTrackingProfile(profile, rootElement);
hybridCollection = new HybridCollection<RuntimeTrackingProfile>();
hybridCollection.Add(runtimeTrackingProfile);
this.cache.Add(rootElement, hybridCollection);
}
else
{
ReadOnlyCollection<RuntimeTrackingProfile> readOnlyCollection = hybridCollection.AsReadOnly();
foreach (RuntimeTrackingProfile current in readOnlyCollection)
{
if (string.CompareOrdinal(profile.Name, current.associatedProfile.Name) == 0 && string.CompareOrdinal(profile.ActivityDefinitionId, current.associatedProfile.ActivityDefinitionId) == 0)
{
runtimeTrackingProfile = current;
break;
}
}
if (runtimeTrackingProfile == null)
{
runtimeTrackingProfile = new RuntimeTrackingProfile(profile, rootElement);
hybridCollection.Add(runtimeTrackingProfile);
}
}
}
return runtimeTrackingProfile;
}
Have a real puzzler here. I'm using Atalasoft DotImage to allow the user to add annotations to an image. When I add two annotations of the same type that contain text and have the same name, I get a JavaScript "permission denied" error in Atalasoft's compressed JS. The error occurs when accessing the style member of a rule:
In the debugger (Visual Studio 2010 .Net 4.0) I can access
h._rule
but not
h._rule.style
What in JavaScript would cause "permission denied" when accessing a member of an object?
Just wondering if anyone else has encountered this. I see several people using Atalasoft on SO, and I even saw a response from someone at Atalasoft. And yes, I'm talking to them, but it never hurts to throw it out to the crowd. This only happens in IE8, not Firefox.
Thanks, Brian
Updates: Yes, using latest version: 9.0.2.43666
By "same name" (see comment below) I mean that I created default annotations, and they are named so they can be added with JavaScript later.
// create a default annotation
TextData text = new TextData();
text.Name = "DefaultTextAnnotation";
text.Text = "Default Text Annotation:\n double-click to edit";
//text.Font = new AnnotationFont("Arial", 12f);
text.Font = new AnnotationFont(_strAnnotationFontName, _fltAnnotationFontSize);
text.Font.Bold = true;
text.FontBrush = new AnnotationBrush(Color.Black);
text.Fill = new AnnotationBrush(Color.Ivory);
text.Outline = new AnnotationPen(new AnnotationBrush(Color.White), 2);
WebAnnotationViewer1.Annotations.DefaultAnnotations.Add(text);
In javascript:
CreateAnnotation('TextData', 'DefaultTextAnnotation');
function CreateAnnotation(type, name) {
SetAnnotationModified(true);
WebAnnotationViewer1.DeselectAll();
var ann = WebAnnotationViewer1.CreateAnnotation(type, name);
WebThumbnailViewer1.Update();
}
There was a bug in an earlier version that allowed annotations to be saved with the same unique ids. This generally doesn't cause problems for any annotations except TextAnnotations, since they use the unique id to create a CSS class for the text editor. CSS doesn't like having two or more classes defined with the same name, and this is what causes the "Permission denied" error.
You can remove the unique id's from the annotations without it causing problems. I have provided a few code snippets below that demonstrate how this can be done. Calling ResetUniques() after you load the annotation data (on the server side) should make everything run smoothly.
-Dave C. from Atalasoft
protected void ResetUniques()
{
foreach (LayerAnnotation layerAnn in WebAnnotationViewer1.Annotations.Layers)
{
ResetLayer(layerAnn.Data as LayerData);
}
}
protected void ResetLayer(LayerData layer)
{
ResetUniqueID(layer);
foreach (AnnotationData data in layer.Items)
{
LayerData group = data as LayerData;
if (group != null)
{
ResetLayer(data as LayerData);
}
else
{
ResetUniqueID(data);
}
}
}
protected void ResetUniqueID(AnnotationData data)
{
data.SetExtraProperty("_atalaUniqueIndex", null);
}