Reflection on Jetpack Compose

Currently, I’m creating a new SDK that contains sensitive fields that shouldn’t be readable by consumers (think of a credit card number field), and I’m using Jetpack Compose to build the forms. My question is: is it possible to use reflection on Jetpack Compose to compromise user privacy and read, for example, the credit card numbers?
I tried reading declared fields and declared methods via reflection but didn’t find anything important that could be used to compromise the data. Are there any other ways to do this, or to prevent it from happening?

The consumers will start a specific activity that contains the composables, and they will receive results once the user finishes this activity.
For your specific concern, this is the best possible implementation for an in-app SDK that has its own UI. It is theoretically possible for a developer to obtain the credit card number being collected by your TextField(), BasicTextField(), or other composable. However, it would be even easier for them to obtain the credit card number from an EditText, so your use of Compose UI is, if anything, improving your position.
Basically, so long as you have the UI in the consumer's app, there will be some way to get at that UI and its contents.


How to do updates with GraphQL mutations (Hot Chocolate)

We recently introduced GraphQL to our project. I've never used it before; however, I like it a lot.
We decided to go with the Hot Chocolate library for .NET Core and Apollo on the client side.
One thing I am not quite sure about is mutations, specifically performing updates and partial updates with them.
I read somewhere that the best practice is to stick with creating a specific mutation for each update; for instance, updateUsername(), updateAddress(), and updateCity() should each have their own mutation.
The issue with that is that my codebase will grow enormously if I go in that direction, as we are very much data-driven, with a lot of tables and columns per table.
Another question is how to handle nullable properties. I can create a mutation which accepts some input object, but I'll end up with my entity being overwritten, since all nullable properties not provided on the calling end will be set to null.
Is there a way to handle these updates partially, or should I go with a specific update mutation for each property I want updated?
I think you understood the best practice around specific mutations wrong. It's less "have one mutation to update one field" and more "have specific mutations that encapsulate actions in your domain". A concrete example would be creating an "addItemToBasket" mutation, instead of having 3 mutations that update the individual tables related to your shopping basket, etc.
GraphQL is very much centered around front-end development, so your mutations should, in general, closely resemble actions a user can perform in your front-end. E.g. your front-end has an "Add to basket" button and your backend has an "addItemToBasket" mutation that resembles the action of placing an item in the user's basket.
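As an illustration, here is a minimal code-first sketch of such a mutation in Hot Chocolate. The basket types and IBasketService are placeholder names of mine, not anything shipped by the library:

using System;
using System.Threading.Tasks;
using HotChocolate;

public record AddItemToBasketInput(Guid BasketId, Guid ItemId, int Quantity);

public class Basket
{
    public Guid Id { get; set; }
    // line items, totals, etc.
}

// Placeholder domain service that encapsulates the table updates.
public interface IBasketService
{
    Task<Basket> AddItemAsync(Guid basketId, Guid itemId, int quantity);
}

public class Mutation
{
    // One mutation per domain action: the resolver knows every table it
    // touches, so no generic field-by-field update is exposed to clients.
    public async Task<Basket> AddItemToBasketAsync(
        AddItemToBasketInput input,
        [Service] IBasketService baskets)
    {
        return await baskets.AddItemAsync(input.BasketId, input.ItemId, input.Quantity);
    }
}

The schema then exposes addItemToBasket(input: ...) as a verb from the domain, rather than as a row update.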
If you design your mutations with this in mind, most of the time you shouldn't have an issue with partial updates, since the mutation knows exactly what to do and you are not just letting the user of your schema update fields at will.
If for whatever reason you need these partial updates, you likely won't get around implementing the patching yourself, unless your data provider supports it. That means you will have an input object type with nullable properties and a mutation that decides which fields have been supplied and updates them using your data provider.
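If you do go the manual route, a hedged sketch of that nullable-input approach could look like this (User, AppDbContext, and the method names are invented; note that a plain nullable property cannot distinguish "omitted" from "explicitly set to null", which is one of the things the patching proposal below aims to address):

using System;
using System.Threading.Tasks;
using HotChocolate;
using Microsoft.EntityFrameworkCore;

public class User
{
    public Guid Id { get; set; }
    public string Username { get; set; }
    public string City { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<User> Users => Set<User>();
}

public class UpdateUserInput
{
    public Guid Id { get; set; }
    public string Username { get; set; } // null is treated as "not provided"
    public string City { get; set; }
}

public class UserMutations
{
    public async Task<User> UpdateUserAsync(
        UpdateUserInput input,
        [Service] AppDbContext db)
    {
        var user = await db.Users.FindAsync(input.Id); // (null check omitted for brevity)

        // Only overwrite fields the caller actually supplied, so omitted
        // properties keep their current values instead of being nulled out.
        if (input.Username != null) user.Username = input.Username;
        if (input.City != null) user.City = input.City;

        await db.SaveChangesAsync();
        return user;
    }
}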
That being said, there's also a proposal for patching types in the Hot Chocolate repository that should simplify the patching part: https://github.com/ChilliCream/hotchocolate/issues/1326
for instance, updateUsername(), updateAddress(), and updateCity() should each have their own mutation. The issue with that is that my codebase will grow enormously if I go in that direction, as we are very much data-driven, with a lot of tables and columns per table.
Correct. That approach is practically impossible to follow for any more or less big data-driven application.
Consider how we implement the patching in our API here. Also consider following the discussion about the patching feature in the Hot Chocolate GitHub thread. Hope that helps!

Pre-Built Intents for Watson Conversation

After setting up the IBM Watson Conversation service on my Raspberry Pi today, I was disappointed to see that I'd have to write out every possible input (intent) and output (entity). Chalk this up to my extreme naivety around machine learning, but isn't there a way to tie into an existing set of conversation capabilities?
For example, I'm sure Watson already knows all the words for Hello and their proper responses. Or how to answer a variety of silly questions. Is there any way to tap into the Watson we all saw on Jeopardy?
Thanks for your help!
There are a number of options here.
System Entities
These are pre-defined to let you capture certain common concepts. Numbers, currency, and dates are the ones available at the moment, but more are coming.
Entities
You can also pull public lists and import them as CSV. For example: http://data.okfn.org/data/core/country-list. You may need to check the licensing before using them, though.
Intents
As you mentioned, there are no pre-defined intents. But there are two solutions available that augment Conversation.
The first is "Watson Virtual Agent". This is a SaaS offering that contains pre-defined training sets for certain industries, as well as a custom UI you can slot into your application. It's not cheap, but you can get a trial to play with it.
The other option is "Project Intu". It's still experimental, but its purpose is to help in building robots/IoT devices. It contains pre-defined chit-chat and some off-topic stuff.
They have a "TJ Bot" project which can be used with a Raspberry Pi.

Which method will yield better performance?

Consider an example data structure with nested items, i.e., schools.classes.students.
The UI is used to add/edit/delete the students only.
Do I connect the Firebase ref at the root (schools) or at classes and have the UI iterate over and manage the data, or does each student get its own ref, since that is where the data will be changed?
If each student has its own Firebase ref (let's say 100 students), does this add any performance/latency issues?
I can explain further if needed but wanted to keep post short.
Firebase is intended to be used with large numbers of references and callbacks. Creating a reference ("new Firebase(...)") is an extremely lightweight operation, so you should feel comfortable doing this very often.
In your app, usually rendering and network activity are the major bottlenecks, so loading and re-rendering large chunks is likely to be slow. I'd recommend accessing Firebase in a very granular way, and tying it tightly to your GUI so that only the minimum set of GUI elements need to be re-rendered when things change.
If this wasn't clear, please provide some example code and I can make some additional recommendations.
[I work at Firebase]

Dynamic form creation in ASP.NET C#

So, I need some input on refactoring an ASP.NET (C#) application that is basically a framework for creating dynamic forms (any forms). From a high-level point of view, there is a table that holds the forms, and a table that holds all the form fields, with a one-to-many relationship between the two. There is also a validation table, where each field can have multiple types of validation: a one-to-many from the form fields table to the validation table.
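For concreteness, those relationships look roughly like the following C# entities; every name here is a placeholder rather than the application's actual schema:

using System.Collections.Generic;

// One form has many fields; each field has many validations.
public class Form
{
    public int FormId { get; set; }
    public string Name { get; set; }
    public List<FormField> Fields { get; set; } = new List<FormField>();
}

public class FormField
{
    public int FormFieldId { get; set; }
    public int FormId { get; set; }            // FK: many fields per form
    public string FieldName { get; set; }
    public List<FieldValidation> Validations { get; set; } = new List<FieldValidation>();
}

public class FieldValidation
{
    public int FieldValidationId { get; set; }
    public int FormFieldId { get; set; }       // FK: many validations per field
    public string ValidationType { get; set; } // e.g. "Required", "Regex"
}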
So the issue is that this application has been sold as the be-all-end-all customizable solution to all the clients. The idea is that whatever form they want, we can build it just using DB configuration. The thing is, that is not always possible, because there are complex relationships between the fields, and complex relationships between the forms themselves. Also, there is only one codebase, and this is for multiple clients, all of whom host it on their own. There is very specific logic for each of the clients, and it is ALL in the same codebase, with no real separation. Sometimes it was too difficult to make it generic, so there are instances of hard-coded logic (as in: if formID = XXX then do _). You can also have nested forms, as in one set of fields on its own within each form.
So usually, when one client requests a change, we make the change and deploy it to that client. But then another client requests a different change, and we make that change and deploy it for THAT client, but the change from the earlier client breaks it, and it's a headache trying to debug, because EVERYTHING is dynamic. There is no way we can roll back the earlier change, because then the other client would be screwed.
It's not done in a real 3-tier architecture: it's a web site with references to a DB class and a class library. There is business logic in the web site itself, in the class library, and in the database stored procs (validation is done in the stored procs).
I've been put in charge of re-organizing the whole thing, and these are my thoughts/questions:
I think this is a bad model in general, because one of the things I heard one of the developers say is that any time any client makes a change, we should deploy to everybody. But that is not realistic if we have, say, 20 clients: there would need to be regression testing on EVERYTHING, since we don't know the impact...
There are about 100 forms in total, and there is some similarity between them (not much). But I think the idea that a dynamic engine could solve ALL form requests was not realistic either. Clients come up with the weirdest requests. For example, they have this engine doing a regular data entry form AND a search form.
There is a lot of preserving state between pages, and it is all done using session variables, which is OK, except that it is not really tracked, so sessions from the same user keep getting overwritten. I think the sessions should be gotten rid of.
Should I really just rewrite the whole thing? This app is about 3 years old, there has been lots of testing done, and serious business logic has been implemented, so I hate to throw all that away (Joel's advice). But it's really a mess of spaghetti code, everything takes forever to do, and things break all the time because of minor changes.
I've been reading Martin Fowler's "Refactoring" and Michael Feathers' "Working Effectively with Legacy Code", and they are good, but I feel they were written for an application that was 'slightly' better architected, where there is still a 3-tier architecture and 'some' resemblance of logic...
Thoughts/input anyone?
Oh, and "Help!"
My current project sounds like almost exactly the same product you're describing. Fortunately, I learned most of my hardest lessons on a former product, and so I was able to start my current project with a clean slate. You should probably read through my answer to this question, which describes my experiences, and the lessons I learned.
The main thing to focus on is the idea that you are building a product. If you can't find a way to implement a particular feature using your current product feature set, you need to spend some additional time thinking about how you could turn this custom one-off feature into a configurable feature that can benefit all (or at least many) of your clients.
So:
If you're referring to the model of being able to create a fully customizable form that makes client-specific code almost unnecessary, that model is perfectly valid and I have a maintainable working product with real, paying clients that can prove it. Regression testing is performed on specific features and configuration combinations, rather than a specific client implementation. The key pieces that make this possible are:
An administrative interface that is effective at disallowing problematic combinations of configuration options.
A rules engine that allows certain actions in the system to invoke customizable triggers and cause other actions to happen.
An integration framework that allows data to be pulled from and pushed to a variety of sources in a configurable manner.
The option to inject custom code as a plugin when absolutely necessary.
Yes, clients come up with weird requests. It's usually worthwhile to suggest alternative solutions that solve the client's problem while still allowing your product to remain robust and configurable for other clients. Sometimes you just have to push back. Other times you'll have to do what they say, but use wise architectural practices to minimize the impact this could have on other clients' code.
Minimize use of the session to track state. Each page should have enough information on it to track the current page's state. Information that needs to persist even if the user clicks "Back" and starts doing something else should be stored in a database. I have found it useful, however, to keep a sort of breadcrumb tree in the session, to track how users got to a specific place and where to take them back to when they finish. But the ID of the node they're actually on needs to be persisted on a page-by-page basis and sent back with each request, so weird things don't happen when the user is browsing to different pages in different tabs (see the sketch after this list).
Use incremental refactoring. You may end up re-writing the whole thing twice by the time you're done, or you may never really "finish" the refactoring. But in the meantime, everything will still work, and you'll have new features every so often. As a rule, rewriting the whole thing will take you several times as long as you think it will, so don't try to take the whole thing in a single bite.
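Coming back to the session point above, here is a minimal Web Forms sketch of that split, where only the breadcrumb trail lives in session and the current node ID travels with the page. CurrentNodeIdHidden and the session key BreadcrumbTrail are invented names:

using System;
using System.Collections.Generic;

public partial class FormPage : System.Web.UI.Page
{
    // CurrentNodeIdHidden is assumed to be an <asp:HiddenField> declared
    // in the page markup and round-tripped with every request.
    protected void Page_Load(object sender, EventArgs e)
    {
        // The node the user is on comes from the page itself, so two
        // browser tabs cannot overwrite each other's position.
        int currentNodeId = int.Parse(CurrentNodeIdHidden.Value);

        // Only the "how did I get here" trail lives in session.
        var trail = Session["BreadcrumbTrail"] as Stack<int> ?? new Stack<int>();
        if (trail.Count == 0 || trail.Peek() != currentNodeId)
        {
            trail.Push(currentNodeId);
        }
        Session["BreadcrumbTrail"] = trail;
    }
}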
I have a number of similar apps for building dynamic forms that I support.
There are a whole lot of things you could or couldn't do, and you're right to think hard before throwing away 3 years of testing and development.
My input is for you to consider implementing a plug-in architecture on top of what you've got. Any custom code for a form goes in the plug-in, and the name of the plug-in is stored with the form. When you generate a form, the correct plug-in is called to enhance the base functionality. That way you get to move all the custom code out of the existing library. It should also mean fewer breaking changes, since each plug-in only affects the form it's attached to.
From that point it'll be easy to refactor the core engine, as it's common functionality across all clients and forms.
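As a rough sketch of that plug-in seam (every name here is invented for illustration, not from the actual codebase):

using System;

public class Form
{
    // The generated form: controls, field values, etc.
}

public class FormDefinition
{
    // Stored with the form in the DB; null or empty means "no plug-in".
    public string PluginTypeName { get; set; }
}

public interface IFormPlugin
{
    // Client-specific tweaks live here, out of the core library.
    void Enhance(Form form);
}

public class FormEngine
{
    public Form GenerateForm(FormDefinition definition)
    {
        var form = BuildBaseForm(definition);

        if (!string.IsNullOrEmpty(definition.PluginTypeName))
        {
            // Resolve and run the custom code for just this one form.
            var type = Type.GetType(definition.PluginTypeName, throwOnError: true);
            var plugin = (IFormPlugin)Activator.CreateInstance(type);
            plugin.Enhance(form);
        }
        return form;
    }

    private Form BuildBaseForm(FormDefinition definition)
    {
        // The common, client-agnostic engine functionality goes here.
        return new Form();
    }
}

Because each plug-in is only loaded for the form whose definition names it, a change for one client cannot be picked up by another client's forms.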
Since your application seems to have become a big ball of mud, a complete (or almost complete) rewrite might make sense.
You should also take into account newer technologies like document-oriented databases (CouchDB, MongoDB).
Most of the form definitions could probably fit pretty well in a document-oriented database. For example, to define a customer form, you could use a document that looks like:
{Type: "FormDefinition",
 EntityType: "Customer",
 Fields: [
   {FieldName: "CustomerName",
    FieldType: "String",
    Validations: [
      {ValidationType: "Required"},
      {ValidationType: "StringLength", Minimum: 15, Maximum: 50}
    ]},
   ...
   {FieldName: "CustomerType",
    FieldType: "Dropdown",
    PossibleValues: ["Standard", "Valued", "Gold"],
    DefaultValue: ["Standard"],
    Validations: [
      {ValidationType: "Required"},
      {ValidationType: "Custom",
       ValidationClass: "MySystem.CustomerName.CustomValidations.CustomerStatus"}
    ]}
   ...
 ]
};
With this kind of document to define your forms, you could easily add forms and validations which are customer-specific.
You could easily add subforms using a FieldType of "SubForm" or whatever.
You could define FieldTypes for all the common types of fields, like e-mail, phone number, address, etc.
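For instance, a nested form might be expressed in the same ad-hoc notation (SubFormEntityType is an invented attribute):

{FieldName: "BillingAddress",
 FieldType: "SubForm",
 SubFormEntityType: "Address",
 Validations: [
   {ValidationType: "Required"}
 ]}

A custom validation class such as the MySystem.CustomerName.CustomValidations.CustomerStatus referenced above could then look like: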
namespace MySystem.CustomerName.CustomValidations {
    // IValidator, FormContext and ValidationError are assumed framework types.
    public class CustomerStatus : IValidator {
        private readonly FormContext form;
        private readonly List<ValidationError> validationErrors;

        public CustomerStatus(FormContext fc) {
            this.validationErrors = new List<ValidationError>();
            this.form = fc;
        }

        public List<ValidationError> Validate() {
            // Cross-field rules: the customer type constrains the order count.
            if (this.form.Fields["CustomerType"] == "Gold" && int.Parse(this.form.Fields["OrderCount"]) < 10) {
                this.validationErrors.Add(new ValidationError("A gold customer must have at least 10 orders"));
            }
            if (this.form.Fields["CustomerType"] == "Valued" && int.Parse(this.form.Fields["OrderCount"]) < 5) {
                this.validationErrors.Add(new ValidationError("A valued customer must have at least 5 orders"));
            }
            return this.validationErrors;
        }
    }
}
A record of a document with that definition could look like this:
{Type: "Record",
 EntityType: "Customer",
 Fields: [
   {FieldName: "CustomerName", Value: "ABC Corp."},
   {FieldName: "CustomerType", Value: "Gold"},
   ...
 ]
};
Sure, this solution is a lot of work, but once realized it could make it really easy to create, update, and customize forms.
This is a common but (IMO) somewhat naive design approach: "Instead of solving the customer's problem, let's build a tool to let them solve their own problems!" But the reality is that customers generally want YOU to solve their ACTUAL problems. So build things that solve their problems.
If you can architect it in a way that allows you to reuse some parts for different customers, fine. But that is generally what frameworks have done for you already: worked out the common features that applications need and made them available in neat packages.

To Develop an LMS and SCORM Sequencing Engine

We want an LMS (coded in ASP.NET/VB.NET) which is able to import SCORM packages and display the content to learners. I am totally new to SCORM and have been shifted to this project. I want to know how I can access a SCORM assessment object's (test) results, such as learner ID, passed/failed, and time.
Can you please guide me on what I will need to implement in ASP.NET code to accomplish my goal?
The task that I have done so far is:
Reading a manifest zip file, unzipping it, and getting all information from the file (content name, description, items, and launching page); when a user clicks on a particular course, a pop-up window launches the page.
I eagerly want to know what I can do next to communicate with the LMS via the APIs. Do I need to develop my own LMS to get the results? If there is a quiz running, all I need to know is the number of questions attempted by the user and whether the user passed or failed, and I need to store all this information in the database for each individual user so that I can review the results afterwards.
So the tasks remaining are:
A tracking mechanism to deliver the content.
A SCORM/LMS sequencing engine that controls the navigation between parts of a SCORM-conformant course.
Please help.
SLK at CodePlex provides a good starting point. However, if you truly want to provide an in-house written SCORM player that is fully compliant, you have a major task ahead of you. In essence there are three parts you need to fully develop:
CAM - the unzipping process, which it sounds like you have already achieved.
RTE - the JavaScript host for SCORM, providing the 8 specified API methods. Behind this you also need to implement the SCORM data model, which SLK does help with. If you have implemented all of this, then there should be entries in the data model that indicate completion, etc.
SN - the sequencing and navigation processing. This is by far the most complex part. I am still in the process of trying to implement this, using SLK, and it is hard. Completing this is what will potentially give you more of the information that lets you know what has been done.
It is also worth looking at scorm.com: they are a consultancy, but they provide a lot of useful information about the SCORM standard.
That is true. SCORM is one of those standards where you can implement as little as possible. But you will need some JavaScript talking to a backend script (JSON to the rescue) so you can track the SCORM data and save it to your database.
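As a rough illustration of that backend script (ASP.NET Web API flavor; the DTO, controller, and storage method are invented names): the page-side SCORM API adapter would POST the data model values as JSON on each commit, and the server stores them per learner.

using System;
using System.Web.Http;

// Field names mirror SCORM 1.2 data model elements; the class itself is
// an invented shape for what the in-page JavaScript posts on LMSCommit.
public class ScormCommitDto
{
    public string StudentId { get; set; }    // cmi.core.student_id
    public string LessonStatus { get; set; } // cmi.core.lesson_status ("passed", "failed", ...)
    public string ScoreRaw { get; set; }     // cmi.core.score.raw
    public string SessionTime { get; set; }  // cmi.core.session_time
}

public class ScormTrackingController : ApiController
{
    [HttpPost]
    public IHttpActionResult Commit(ScormCommitDto dto)
    {
        // Persist one row per commit so results can be reviewed later.
        SaveAttempt(dto, DateTime.UtcNow);
        return Ok();
    }

    private void SaveAttempt(ScormCommitDto dto, DateTime recordedAt)
    {
        // Placeholder for your own data access, e.g.
        // INSERT INTO ScormAttempts (StudentId, Status, Score, SessionTime, RecordedAt) VALUES (...)
    }
}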
But let me tell you this: that is the easiest task! Making your own course creator is a whole other beast.
