In my project (Next.js v10), the immutable.js library is used to work with Redux. I am now tackling optimization, because I ran into the problem of a "red" First Load JS in the build output.
I am not very strong in this yet, but I am trying to learn and understand everything. I applied dynamic import on the pages themselves, as is advised everywhere, and it helped a lot, since the situation was even worse before. I checked _document.js and _app.js, and everything seems to be fine except this:
//_app.js
const {serialize, deserialize} = require('json-immutable');
...
const wRedux = withRedux(makeStore, {
serializeState: state => state ? serialize(state) : state,
deserializeState: state => state ? deserialize(state) : state
})(MyApp);
export default wRedux;
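The dynamic imports on the pages look roughly like this (simplified; the component name and path are just examples):

// pages/index.tsx -- illustrative sketch: the heavy component gets its own chunk
import dynamic from 'next/dynamic';

const HeavyEditor = dynamic(() => import('../components/HeavyEditor'), {
  ssr: false,           // skip server-side rendering for this chunk
  loading: () => null,  // optional placeholder while the chunk loads
});

export default function Home() {
  return <HeavyEditor />;
}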
As it works now, I get:
If I turn off the use of serialize and deserialize completely (in _app.js), and index.tsx (with no Redux request and no imports other than React) just returns an empty div, I get this:
const wRedux = withRedux(makeStore, {
serializeState: state => state,
deserializeState: state => state
})(MyApp);
Some chunks are missing, but the immutable chunk remains in place (though for some reason its size is slightly different, even though the hash is the same):
I found this article: https://betterprogramming.pub/try-these-instead-of-using-immutable-js-with-redux-f5bc3bd30190 and checked https://www.npmtrends.com/immutable-vs-immer-vs-seamless-immutable.
The problem is that the whole project already uses the immutable-js syntax (post.get('prop')).
My questions:
How much better would Immer be?
Will it (Immer) also end up in the shared chunk?
What other ways are there to reduce the size of "First Load JS shared by all"?
Perhaps there are other shortcomings that I don't notice due to lack of experience, but that can be seen in the reports?
Thanks for any help!
I published the results of my work; I hope this will help someone (sorry for my English :)).
Ditching immutable.js in favor of Immer did make sense (156 kB => 123 kB):
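For anyone wondering what the migration looked like in code, here is a rough sketch (the reducer and field names are made up for illustration); the main work was replacing .get()/.setIn() calls with plain property access inside produce(), after which the custom (de)serialization step is no longer needed:

// Before (immutable.js) -- illustrative reducer
import { Map } from 'immutable';

const initialState = Map({ post: Map({ title: '' }) });

function postReducer(state = initialState, action) {
  switch (action.type) {
    case 'SET_TITLE':
      return state.setIn(['post', 'title'], action.title);
    default:
      return state;
  }
}

// After (Immer) -- plain objects, so serializeState/deserializeState can go away
import produce from 'immer';

const plainInitialState = { post: { title: '' } };

const postReducerWithImmer = produce((draft, action) => {
  switch (action.type) {
    case 'SET_TITLE':
      draft.post.title = action.title;
      break;
  }
}, plainInitialState);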
Also, if anyone is interested, take a closer look at your chunks. As you can see from my question, in addition to Immutable, http-status.js was also pulled into the shared First Load JS. It is a standard file with a set of response codes, of which I needed only one (I simply wrote the number manually and removed the import), yet the file that imported it is shipped to the entire application. Additionally, I revised how third-party scripts are loaded and used the built-in font optimization in Next v10:
Also, json-immutable was used in conjunction with immutable; it is no longer required, which removed two more small chunks.
And my previously problematic chunk now looks like this:
Finally: "First Load JS shared by all" has been reduced from 156 kB to 111 kB (a 28.85% reduction).
P.S. I have such a big _app.js chunk because Automatic Static Optimization is disabled due to getInitialProps.
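If the data fetching can ever be moved out of _app.js and into the individual pages, a bare _app without getInitialProps lets Next.js apply Automatic Static Optimization again. A minimal sketch, leaving the Redux wrapper aside:

// _app.js without getInitialProps -- sketch only; pages fetch their own data instead
function MyApp({ Component, pageProps }) {
  return <Component {...pageProps} />;
}

export default MyApp;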
I noticed by accident that openstack.connect() automatically tries to access the clouds.yaml file. I tried to replicate this for the cinderclient, but it didn't work. I know of no documentation for that feature, so I just guessed:
from cinderclient import client
from keystoneauth1 import loading
loader = loading.get_plugin_loader('password')
auth_cinder = loader.load_from_options()
I also tried the other load commands provided by loading, but none of them worked without further parameters the way openstack.connect() did.
If I have just missed the full documentation of this feature, I would love to be pointed in the right direction.
I am working on a website with React.js and ASP.NET MVC 4. I am planning to use Flux as my front-end architecture, but I ran into some problems and am very confused about how to use Flux:
In the beginning, I thought Flux would be a perfect front-end architecture for my website, but after reading a lot of articles about Flux, I found that nearly all of them use Node.js, even the demos from the Facebook team. Does that mean they all do the React.js/Flux rendering on the server side? How can I use Flux on the client side, I mean in the user's browser?
I am very confused. Am I wrong to treat React.js/Flux as a client-side solution? If I am not wrong, why do they all use Node.js and ES6 (like Facebook's Dispatcher.js)? That is fine on the server side, but what about the client side? Most users' browsers don't support ES6. I tried using Babel to convert Dispatcher.js from ES6 to ES5, but the ES5 version had some errors and didn't work.
I also found some implementations of Flux that claim to support the client side, like Fluxxor, but I haven't had a chance to try it before writing this post, because I am too confused.
I hope someone can help me figure out these problems.
P.S. Sorry for my English. If you don't understand my words, please let me know and I will explain.
I think you want:
$ bower install flux
Then you could do something like this (if using require.js):
require(
  ['bower_components/flux/dist/Flux'],
  function (Flux) {
    var dispatcher = new Flux.Dispatcher();

    dispatcher.register(function (payload) {
      // note the comparison (===) here -- an assignment (=) would always be truthy
      if (payload.actionType === 'test') {
        console.log('i got a ', payload);
      }
    });

    dispatcher.dispatch({
      actionType: 'test',
      otherData: { foo: 'bar' }
    });
  });
(This answer uses : https://bower.io/, https://libraries.io/bower/flux, http://requirejs.org/)
React is a client-side library. You can serve a React app with virtually any backend language. The reason a lot of examples use Node is that it is easy and fast to set up.
You should try this tutorial:
https://facebook.github.io/react/docs/getting-started.html
It is pretty straightforward and doesn't require Node.
Also, maybe you should start by serving the React app statically to get a better understanding of React itself.
ES6 works in browsers thanks to Babel. If you believe you are having trouble with Babel, you might want to first play around with its REPL to get a feeling for it: https://babeljs.io/repl/
The idea is that the code can run on the client and the server (universal JS, which used to be called isomorphic JavaScript, though it goes a little further than that with server-side rendering etc.).
There are many Flux implementations; Reflux is the most promising at this point. I'm using MartyJS (but they stopped development; it will be taken over by Alt), and even then, the Flux architecture itself just gives you the dispatcher / event emitter and some ideas :D
In short, you can install the npm packages (flux, react, babel, etc.), but you need something like http://browserify.org/ (with reactify) or webpack to run them in the browser. You don't need to run them on Node.js once the code is bundled; webpack/browserify bundles the code so it can be used within the browser independently.
https://github.com/christianalfoni/flux-react-boilerplate/ <-- there is some boilerplate that provides a nice guide on how to bundle the code.
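As a concrete starting point, a minimal webpack config could look roughly like this (the paths and loader are assumptions; adjust them to your project):

// webpack.config.js -- illustrative sketch
module.exports = {
  entry: './src/app.js',            // your client-side entry point
  output: {
    path: __dirname + '/public/js', // any backend (ASP.NET included) can serve this statically
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      // transpile ES6/JSX down to ES5 so older browsers can run it
      { test: /\.jsx?$/, exclude: /node_modules/, loader: 'babel-loader' }
    ]
  }
};

Then you only need a script tag pointing at /js/bundle.js in the page your ASP.NET MVC view renders.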
I want to give my users the possibility to create document templates (contracts, emails, etc.).
The best option I could figure out was to store these document templates in Mongo (maybe I'm wrong...).
I've been searching for a couple of hours now, but I can't figure out how to render these document templates with their data context.
Example:
Template stored in Mongo: "Dear {{firstname}}"
data context: {firstname: "Tom"}
On Tom's website, He should read: "Dear Tom"
How can I do this?
EDIT
After some research, I discovered a package called spacebars-compiler that makes it possible to compile templates on the client:
meteor add spacebars-compiler
I then tried something like this:
Template.doctypesList.rendered = ->
content = "<div>" + this.data.content + "</div>"
template = Spacebars.compile content
rendered = UI.dynamic(template,{name:"nicolas"})
UI.insert(rendered, $(this).closest(".widget-body"))
but it doesn't work.
The template gets compiled, but then I don't know how to interpret it with its data context and send the result back to the web page.
EDIT 2
I'm getting closer thanks to Tom.
This is what I did:
Template.doctypesList.rendered = ->
content = this.data.content
console.log content
templateName = "template_#{this.data._id}"
Template.__define__(templateName, () -> content)
rendered = UI.renderWithData(eval("Template.#{templateName}"),{name:"nicolas"})
UI.insert(rendered, $("#content_" + this.data._id).get(0))
This works except that the name is not injected into the template. UI.renderWithData renders the template, but without the data context...
The thing you are missing is the call to the (undocumented!) Template.__define__, which takes the template name (pick something unique and clever) as the first argument and the render function, which you get from the Spacebars compiler, as the second. Once that is done, you can use {{> UI.dynamic}} as @Slava suggested.
There is also another way to do it, using the UI.Component API, but I guess it's pretty unstable at the moment, so maybe I will skip that, at least for now.
Use UI.dynamic: https://www.discovermeteor.com/blog/blaze-dynamic-template-includes/
It is fairly new and hasn't made its way into the docs for some reason.
There are few ways to achieve what you want, but I would do it like this:
You're probably already using underscore.js; if not, Meteor has a core package for it.
You could use underscore templates (http://underscorejs.org/#template) like this:
var templateString = 'Dear <%= firstname %>'
and later compile it using
_.template(templateString, {firstname: "Tom"})
to get "Dear Tom".
Of course you can store templateString in MongoDB in the meantime.
You can set delimiters to whatever you want, <%= %> is just the default.
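Putting it together, a rough sketch (the collection and field names here are made up for illustration):

// Illustrative only: DocumentTemplates and its 'body' field are assumed names
var DocumentTemplates = new Meteor.Collection('documentTemplates');

// e.g. a stored document: { body: 'Dear <%= firstname %>' }
var doc = DocumentTemplates.findOne();
var compiled = _.template(doc.body);        // compile the stored template string
var html = compiled({ firstname: 'Tom' });  // "Dear Tom"
$('#contract-preview').html(html);          // render it wherever it is needed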
The compiled template is essentially the HTMLjs notation Meteor uses (or so I suppose), and it uses Template.template_name.lookup to resolve the correct data. Check in the console whether Template.template_name.lookup("data_helper")() returns the correct data.
I recently had to solve this exact (or a similar) problem of compiling templates client-side. You need to make sure things happen in this order:
The compiled template is present on the client
The template data is present (verify with Template.template_name.lookup("data_name")())
Only then render the template on the page
To compile the template, as @apendua suggested, use the following (this is how I use it, and it works for me):
Template.__define__(name, eval(Spacebars.compile(
newHtml, {
isTemplate: true,
sourceName: 'Template "' + name + '"'
}
)));
After this, you need to make sure the data you want to render in the template is available before you actually render the template on the page. This is what I use for rendering the template on the page:
UI.DomRange.insert(UI.render(Template.template_name).dom, document.body);
Although my use case for rendering templates client-side is somewhat different (my task was to live-update a changed template, overriding Meteor's hot code push), this worked best among the different methods of rendering the template.
You can check my very-early-stage package, which does this, here: https://github.com/channikhabra/meteor-live-update/blob/master/js/live-update.js
I am fairly new to real-world programming, so my code might be ugly, but maybe it will give you some pointers for solving your problem. (If you find me doing something stupid in there, or see something that would be better done another way, please feel free to drop a comment. That's the only way I get feedback for improvement, since I am new and essentially code alone, sitting in my dark corner.)
Is there a simple way to check whether a content type, or a specific object, has versioning enabled or disabled in Plone (4.3.2)?
For context, I am building some custom conditions around portal_actions. Instead of only checking path('object/@@iterate_control').checkout_allowed(), I first need to see whether versioning is even enabled. Otherwise, the action in question does not display for items that have versioning disabled, because obviously checkout isn't allowed for them.
I didn't have any luck with good ole Google, and couldn't find this question anywhere here, so I hope it's not a dupe. Thanks!
I was able to get this working by creating a new script, importing getToolByName, and checking the current content type against portal_repository.getVersionableContentTypes(). Then I just included that script in the condition.
I was looking for something like this that already existed, so if anyone knows of one, let me know. Otherwise, I've got my own now. Thanks again!
The first thing checkout_allowed does is check whether the object in question supports versioning at all:
if not interfaces.IIterateAware.providedBy(context):
return False
(the interface being plone.app.iterate.interfaces.IIterateAware:
class IIterateAware( Interface ):
"""An object that can be used for check-in/check-out operations.
"""
The semantics of Interface.providedBy(instance) are a bit unfortunate for use in conditions or TAL expressions, because you'd need to import the interface, but there's a helper for the reverse lookup:
context.portal_interface.objectImplements(context,
'plone.app.iterate.interfaces.IIterateAware')
I'm working in some old code that was originally designed to handle two different kinds of files. I was recently tasked with adding a new kind of file to this code. Most of my problems were solved by filling out an extensive XML file with a new entry that handles everything from what the lists are named to how the file type is written in lower-case plural. But this turned out to be insufficient, as there were maybe 50 different places across 24 code files where I had to update hardcoded switch statements that only branched for the original two file types.
Unfortunately there is no consistency in this: some methods work half from the XML file and half off hardcoded values. Some of the files that look like they would use the XML file don't, and some where I would expect to have to update hardcoded values don't need it. So the only way to find most of these spots is to run the whole system while only part of it is operational, find the one step to fix (when I'm lucky enough that the error logging actually tells me what is going on), and then run the whole thing again. This wastes time re-testing the parts of the code that are already confirmed to work, time better spent testing the new parts I have to add on top of it all.
It's a hassle and a half, and with my luck I can expect to have to add yet another new kind of file in the near future.
Are there any solutions out there that can help with this kind of endeavour? Something where I can input some parameters of the current features, document which points in the whole code project actually need to be updated, and then run something helpful the next time I need to add a new feature. It needn't even be fully automated; something that helps me navigate straight to the specific points and maybe even records what kind of parameters need to be loaded would do.
I doubt it matters specifically, but the code consists of ASP.NET pages, some ASP.NET controls, hundreds of C# code files, and a handful of additional XML files, all currently in a couple of big Visual Studio 2008 projects.
Not exactly what you are describing, but if you can introduce a seam into the code and lay down some interfaces you can break out and mock, a suite of unit/integration tests would go a long way toward helping you modify old code you may not fully understand.
I completely agree with the comment about using Michael Feathers' book (Working Effectively with Legacy Code) to learn how to wedge new tests into legacy code. I'd also strongly recommend Refactoring, by Martin Fowler. What it sounds like you need to do for your code is the "Replace Conditional with Polymorphism" refactoring.
I imagine your code today looks somewhat like this:
if (filetype == 23)
{
type23parser.parse(file);
}
else if (filetype == 69)
{
filestore = type69reader.read(file);
File newfile = convertFSto23(filestore);
type23parser.parse(newfile);
}
What you want to do is abstract away all the "if (type == foo)" logic into strategy patterns that are created by a factory.
class FileRules
{
private:
    FileReaderRules *pReader;
    FileParserRules *pParser;
public:
    FileRules() : pReader(NULL), pParser(NULL) {}
    void setReader(FileReaderRules *r) { pReader = r; }
    void setParser(FileParserRules *p) { pParser = p; }
    void read(File* inFile)  { pReader->read(inFile); }
    void parse(File* inFile) { pParser->parse(inFile); }
};

class FileRulesFactory
{
public:
    FileRules* GetRules(int inputFiletype, int parserType)
    {
        FileRules* rules = new FileRules;
        switch (inputFiletype)
        {
        case 23:
            rules->setReader(new ASCIIReader);
            break;
        case 69:
            rules->setReader(new EBCDICReader);
            break;
        }
        switch (parserType)
        ... etc...
Then your main line of code looks like this:
FileRules* rules = FileRulesFactory().GetRules(filetype, parsertype);
rules->read(file);
rules->parse(file);
Pull off this refactoring, and adding a new set of file types, parsers, readers, etc. becomes as simple as writing a new one for your new type.
Of course, go read the book. I vastly oversimplified it here, and probably got some details wrong, but you should get the general idea of how to approach it. I can also recommend another book, Head First Design Patterns, which has a great section on the Factory patterns (if you like those "Head First" kinds of books).