I created a linked service using Key Vault and then used that linked service in a data linked service (Azure SQL Database). Both linked services test successfully on their own. I then used the data linked service in a very simple pipeline, but when I debug the pipeline, it fails with the error:
'Invalid linked service reference. Name: '.
This refers to the Key Vault linked service.
When I trigger the pipeline, it works fine. I have published my changes many times, but with no success.
So my basic question is: why does my pipeline fail on Debug while working fine via Trigger?
I faced exactly the same problem and resolved it with the following steps:
Saved all existing pipelines
Validated all
Published all
Closed the Data Factory browser window/tab
Logged back into Data Factory
Opened the pipeline again, and debug worked fine. I didn't have to touch the Azure Key Vault configuration. It's most likely to do with cached vault configuration (or a sync issue with the cached vault config).
When a pipeline works by trigger but not by debug, that suggests one of two things: there is a difference between the published version and the version in the UI, or you have parameters that depend on the trigger.
I have noticed a very strange thing with linked services in ADF. I selected Azure Key Vault next to the password field, passed just the AKV linked service name there, and it worked.
That suggests the JSON reference to the Azure Key Vault linked service was not resolving properly. My issue has been resolved, although logically I am still unclear why.
If anyone is looking for a resolution to the same problem, please refer below. Thank you.
Key Vault Linked Service
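For reference, a minimal sketch of what the data linked service JSON looks like when the connection string comes from Key Vault; the names AzureSqlLinkedService, KeyVaultLinkedService, and SqlConnectionString here are placeholders:

{
    "name": "AzureSqlLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "KeyVaultLinkedService",
                    "type": "LinkedServiceReference"
                },
                "secretName": "SqlConnectionString"
            }
        }
    }
}

The "referenceName" under "store" must exactly match the name of the Key Vault linked service, which is what the 'Invalid linked service reference' error complains about when the debug session holds a stale copy.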
I'm getting the warning "Data won't be synchronized. No GraphQL endpoint configured. Did you forget Amplify.configure(awsconfig)?", which I assume is the reason my data isn't being synced to the cloud and only works locally while testing my application.
I have read many of the past posts about this issue and have followed all the steps, such as configuring the endpoint URL in aws-exports.js, but nothing I'm finding online is working. Has anyone had success with this issue using API key as the default auth setting?
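For reference, here is roughly what my setup looks like; the endpoint and API key values below are placeholders, not my real ones:

// aws-exports.js (generated by the Amplify CLI)
const awsmobile = {
    aws_appsync_graphqlEndpoint: 'https://xxxxxxxx.appsync-api.us-east-1.amazonaws.com/graphql',
    aws_appsync_region: 'us-east-1',
    aws_appsync_authenticationType: 'API_KEY',
    aws_appsync_apiKey: 'da2-xxxxxxxx',
};
export default awsmobile;

// App entry point, before any DataStore/API calls
import Amplify from 'aws-amplify';
import awsconfig from './aws-exports';
Amplify.configure(awsconfig);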
I have a backend system built in AWS, and I'm using CloudWatch across all of the services for logging and monitoring. I really like the ability to send structured JSON logs into CloudWatch that are consistent and provide a lot of context around the log message. Querying the logs to get to the root of an issue, or just exploring the health of the environment, is simple, which makes CloudWatch a must-have for my backend.
Now I'm working on the frontend side of things, mobile applications using Xamarin.Forms. I know AWS has Amplify but I really wanted to stick with Xamarin.Forms as that's a skill set I've already got and I'm comfortable with. Since Amplify didn't support Xamarin.Forms I've been stuck looking at other options for logging - one of them being Microsoft's AppCenter.
If I go the AppCenter route I'll end up having to build out a mapping of the AppCenter installation identifier and my users between the AWS environment and the AppCenter environment. Before I start down that path I wanted to ask a couple questions around best practice and security of an alternative approach.
I'm considering using the AWS SDK for .NET: creating an IAM Role with a Policy that allows X-Ray and CloudWatch PUT operations on a specific log group (sketched below), and then assigning it to an IAM User. I can issue access keys for the user and embed them in my app's config files. This would let me send log data right into CloudWatch from the mobile apps using something like NLog.
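The policy I have in mind would look something like this; the region, account ID, and log group name are just examples:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:/mobile/app-logs:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "xray:PutTraceSegments",
                "xray:PutTelemetryRecords"
            ],
            "Resource": "*"
        }
    ]
}

(The X-Ray actions don't support resource-level permissions, hence the "*" resource on that statement; the log statement is scoped to the one log group.)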
I noticed with AppCenter I have to provide a client secret to the app, which wouldn't be any different than providing an IAM User access key to my app for pushing into CloudWatch. I'm typically a little shy about issuing access keys from AWS but as long as the Policy is tight I can't think of any negative side-effects... other than someone flooding me with log data should they pull the key out of the app data.
An alternative route I'm exploring: instead of embedding the access keys in my config files, I could request them from my API services and hold them in memory. The only downside is that when the user doesn't have internet connectivity, logging might be a pain (I will need to look at how NLog handles targets that aren't currently available - queueing and flushing).
Is there anything else I'm not considering or is this approach a feasible solution with minimal risk?
I'm currently running an A/B test on a Remote Config value in Firebase. The test targets users whose user property X matches a regex.
My problem is that I fetch the remote config for the user BEFORE setting the user property, so I need to update the remote config after the user property is set. Basically, I need to force an update of the remote config (which changes because the user is now part of the A/B test).
Any ideas?
This article may help you:
Force your users to update your app using Firebase
This code sets the fetch cache expiration to 60 seconds, so the app fetches new values at most once per minute:
firebaseRemoteConfig.fetch(60)
But this method does not force the app to update.
So, I suggest implementing version checking in all of your server APIs. For example, if the app version is outdated, every API returns a "You need to update your app" error.
Be careful with Firebase updates on both iOS and Android. You can read more about it here: https://medium.com/@elye.project/be-careful-when-using-firebase-remote-config-control-for-pre-announced-feature-52f6dd4ecc18
On iOS, the config can be force-updated with the following code (an expiration duration of 0 bypasses the local cache):
config.fetch(withExpirationDuration: 0, completionHandler: fetchCompletion)
I've implemented real time remote config updates via the documentation here.
In general, it works as expected, except when it comes to experiments via A/B Testing. Changes to A/B Testing that affect remote config do not fire the update cloud function hook.
Does anyone know if it's possible to have the functions.remoteConfig.onUpdate Cloud Function hook trigger when a change to Remote Config is made via an A/B Testing experiment change?
The only workaround I can think of is to have a dummy value in remote config itself that I change whenever an experiment is created/updated.
firebaser here
There is nothing built into Remote Config for that at the moment. But thanks to the integration between Cloud Functions and Remote Config, you can build it yourself.
One of our engineers actually just gave a demo for this last week. I recommend you check it out here: https://youtu.be/lIzQJC21uus?t=3351.
In this demo, there are a few steps:
You publish a change from the Remote Config console.
This change triggers Cloud Functions through a functions.remoteConfig.onUpdate event.
The Cloud Function sends an FCM message to all apps through a topic.
When an app receives this message, it shows a prompt that the configuration is out of date.
When the user clicks the "fetch" button, the app fetches the new configuration data from Remote Config.
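To make the first three steps concrete, here is a minimal sketch of the Cloud Function side; the topic name PUSH_RC and the CONFIG_STATE flag are illustrative choices, similar to the demo:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Fires whenever a new Remote Config version is published
exports.pushConfigUpdate = functions.remoteConfig.onUpdate(versionMetadata => {
    // Notify every app subscribed to the topic that its cached config is stale
    return admin.messaging().sendToTopic('PUSH_RC', {
        data: { CONFIG_STATE: 'STALE' }
    });
});

Note that, as discussed above, this event fires on a console publish; a change made purely through an A/B Testing experiment may still need the dummy-parameter workaround to trigger it.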
I am using Hangfire.AspNetCore with ASP.NET Core v1.0.
My database is SQLite.
As far as I can tell, there is no proper SQLite storage provider for Hangfire on .NET Core.
So, I decided to work without any dashboard.
What I have configured is the following:
In Startup.cs, in the ConfigureServices method:
services.AddHangfire(configuration => {});
And in the Configure method, I am using this:
app.UseHangfireServer();
But I am getting the following error:
An exception of type 'System.InvalidOperationException' occurred in Hangfire.Core.dll but was not handled in user code
Additional information: JobStorage.Current property value has not been initialized. You must set it before using Hangfire Client or Server API.
I don't need the dashboard, so I did not configure it.
Can anyone please help?
The error is telling you that you have not configured a job storage provider. It's got nothing to do with the dashboard. Even without the dashboard you must have a storage provider.
There is an in-memory storage provider available via NuGet, called Hangfire.MemoryStorage, that you can use if you don't require persistent storage for your background jobs.
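For example, a minimal setup after installing the Hangfire.MemoryStorage package would look like this:

// Startup.cs
// using Hangfire;
// using Hangfire.MemoryStorage;

public void ConfigureServices(IServiceCollection services)
{
    // UseMemoryStorage() sets JobStorage.Current, which is exactly
    // what the InvalidOperationException is complaining about
    services.AddHangfire(configuration => configuration.UseMemoryStorage());
}

public void Configure(IApplicationBuilder app)
{
    // The server now starts without throwing; the dashboard stays optional
    app.UseHangfireServer();
}

Keep in mind that in-memory storage means all queued and scheduled jobs are lost when the process restarts.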