Inspired by a comment on this question: What does @with_kw do in Julia?, what is the difference between @with_kw from Parameters.jl and Base.@kwdef? Why would I use one versus the other?
The biggest difference that I can see is the level of support for each macro. Base.@kwdef, while accessible through Julia, is an un-exported internal macro, meaning it is not fully supported as part of the public API. You can read more about that here: https://github.com/JuliaLang/julia/issues/33192
Based on that fact alone, it is likely better practice to use Parameters.jl rather than the macro from Base, as it will be more stable until @kwdef is publicly supported.
As for the underlying technical details, there does not appear to be any significant difference in how you would use either macro.
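As a quick illustration (a minimal sketch; the struct names, fields, and defaults here are made up), the two macros are used in essentially the same way:

```julia
using Parameters   # provides @with_kw

# Base's version (un-exported, hence the explicit Base. prefix)
Base.@kwdef struct A
    x::Int = 1
    y::Float64 = 2.0
end

# Parameters.jl's version
@with_kw struct B
    x::Int = 1
    y::Float64 = 2.0
end

A()          # A(1, 2.0)
B(y = 3.0)   # B(x = 1, y = 3.0)
```

Parameters.jl does layer extra conveniences on top of this (for example @unpack/@pack! and customized printing), but the basic keyword-constructor usage is the same.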
I have seen it mentioned in many places that Julia is "composable". I know that the word itself means:
Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides components that can be selected and assembled in various combinations to satisfy specific user requirements.
But I am curious what the specific components of Julia are that make it composable. Is it the ability to override base functions with my own implementation?
I guess I'll hazard an answer, though my understanding may be no more complete than yours!
As far as I understand it (in no small part from Stefan's "Unreasonable Effectiveness of Multiple Dispatch" JuliaCon talk as linked by Oscar in the comments), I would say that it is in part:
1) As you say, the ability to override base functions with your own implementation [and, critically, then have it "just work" (be dispatched to) whenever appropriate, thanks to multiple dispatch]. This means that if you make a custom type and define all the fundamental / primitive operations on that type (as in https://docs.julialang.org/en/v1/manual/interfaces/ -- say +-*/ et al. for numeric types, or getindex, setindex! et al. for an array-like type, etc.), then any more complex program built on those fundamentals will also "just work" with your new custom type (see the sketch after this list). That in turn means your custom type will also work (AKA compose) with other people's packages without any need for (e.g.) explicit compatibility shims, as long as people haven't over-constrained their function argument types (which is, incidentally, why over-constraining function argument types is a Julia antipattern).
2) Following on 1), the fact that so many Base methods are also just plain Julia, so they will also work with your new custom type as long as the proper fundamental operations are defined.
3) The fact that Julia's base types and methods are generally performant and convenient enough that in many cases there's no need to do anything custom, so you can just put together blocks that all operate on, e.g., plain Julia arrays or tuples.

This last point is perhaps most notable in contrast to a language like Python where, for example, every sufficiently large part of the ecosystem (numpy, tensorflow, etc.) has its own reimplementation of (e.g.) arrays, which for better performance are all ultimately implemented in some other language entirely (C for numpy, C++ for TensorFlow) and thus probably do not compose with each other.
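To make point 1) concrete, here is a minimal sketch (the Squares type is purely illustrative, in the spirit of the manual's interfaces chapter): define the small array interface for a custom type, and generic code written against AbstractArray composes with it for free.

```julia
# A lazy collection of the first n squares, implemented via the array interface.
struct Squares <: AbstractVector{Int}
    n::Int
end

Base.size(s::Squares) = (s.n,)
Base.getindex(s::Squares, i::Int) = i^2

s = Squares(5)
sum(s)       # 55 -- generic Base code, never written with Squares in mind
collect(s)   # [1, 4, 9, 16, 25]
s .+ 1       # broadcasting also "just works"
```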
I'm in a situation where I'm modifying an existing compiler written in OCaml. I've added locations to the AST of the compiled language, but this has caused a bunch of bugs, because equality checks that previously succeeded now fail when identical ASTs have different locations attached.
In particular, I'm seeing List.mem return false when it should return true, since it relies on equality.
I'm wondering: is there a way to specify that = should always return true for any two values of my location type?
It would be a ton of work to refactor the entire compiler to use a custom equality everywhere, particularly since many polymorphic functions rely on being able to use = on any type.
There's no existing OCaml mechanism to do what you want.
You can use ppx to write OCaml syntax extensions, and (as I understand it) the behavior can depend on types. So there's some chance you could get things working that way. But it wouldn't be as straightforward as what you're asking for. I suspect you would need to explicitly handle = and any standard functions (like List.mem) that use = implicitly. (Note that I have no experience with ppx.)
I found a description of PPX here: http://ocamllabs.io/doc/ppx.html
Many experienced OCaml programmers avoid the use of built-in polymorphic equality because its behavior is often surprising. So it might be worth converting to a custom comparison function after all.
What an annoying problem to have.
If you are desperate and willing to write a little C code you can change the representation of locations to Custom_tag blocks, which allow customising the behaviour of some of the polymorphic operations. It's a nasty solution, and I suggest you look hard for a better approach before resorting to this one.
One possibility is that most of the compiler does not use locations at all. If so, you might be able to get away with replacing every location in the AST with the same dummy location. That should allow equality to behave as if locations were not there at all. This is rather hacky, and may not be possible if passes later in the compiler make any use of location info.
The 'clean' solution is to define a sane equality operation for ASTs (or to derive one using ppx) and to change the code to use that. As you say, this would be a lot more work.
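As a rough sketch of those last two suggestions (the AST type below is invented for illustration; your compiler's types will differ):

```ocaml
type location = { line : int; col : int }

type expr =
  | Var of string * location
  | App of expr * expr * location

(* Option 1: normalise every location to a dummy value, so the built-in
   polymorphic (=) behaves as if locations were not there. *)
let dummy_loc = { line = 0; col = 0 }

let rec strip_locs = function
  | Var (x, _) -> Var (x, dummy_loc)
  | App (f, a, _) -> App (strip_locs f, strip_locs a, dummy_loc)

(* Option 2: a custom equality that simply ignores locations. *)
let rec equal_expr a b =
  match a, b with
  | Var (x, _), Var (y, _) -> String.equal x y
  | App (f1, a1, _), App (f2, a2, _) -> equal_expr f1 f2 && equal_expr a1 a2
  | _ -> false

(* e.g. List.mem e es becomes List.exists (equal_expr e) es *)
```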
The Racket FFI's documentation has types for _ptr, _cpointer, and _pointer.[1]
However, the documentation (as of writing this question) does not seem to compare the three different types. Obviously the first two are functions that produce ctype?s, whereas the last one is a ctype? itself. But when would I use one type over the other?
[1] It also has other types such as _box, _list, _gcpointer, and _cpointer/null. These are all variants of those three.
_ptr is a macro that is used to create types that are suitable for function types in which you need to pass data via a pointer passed as an argument (a very common idiom in C).
_pointer is a generic pointer ctype that can be used pretty much wherever a pointer is expected or returned. On the Racket side, it becomes an opaque value that you can't manipulate very easily (you can use ptr-ref if you need it). Note the docs have some caveats about interactions with GC when using this.
_cpointer constructs safer variants of _pointer that use tags to ensure that you don't mix up pointers of different types. It's generally more convenient to use define-cpointer-type instead of manually constructing these. In other words, these help you build abstractions represented by Racket's C pointers. You can do it manually with cpointer-push-tag! and _pointer but that's less convenient.
There's also a blog post I wrote that goes into more detail about some of these pointer issues: http://prl.ccs.neu.edu/blog/2016/06/27/tutorial-using-racket-s-ffi/
Is there a way to define arithmetic operators between structs?
I'm using a decimal package to work with fixed decimal positions and avoid float rounding errors. It defines operations by calling functions like mul, add, sub, etc.
I'd like to use that structure like I do with floats: 6 / 2, not decimal.newfromfloat(6).div(newfromfloat(2)).
I was hoping to find some interface to implement which would allow me to do that kind of operation, or maybe some kind of getter/setter to work with the underlying values... Any ideas?
No, you can't overload operators in Go. There is a FAQ entry about it:
Why does Go not support overloading of methods and operators?
Method dispatch is simplified if it doesn't need to do type matching as well. Experience with other languages told us that having a variety of methods with the same name but different signatures was occasionally useful but that it could also be confusing and fragile in practice. Matching only by name and requiring consistency in the types was a major simplifying decision in Go's type system.
Regarding operator overloading, it seems more a convenience than an absolute requirement. Again, things are simpler without it.
https://golang.org/doc/faq#overloading
If you need a working solution, look at how package math/big deals with arithmetic sans operator overloading.
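For a flavor of that style, here is a minimal sketch using big.Rat from the standard library (a decimal package will have its own method names, e.g. the mul/div mentioned in the question):

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// 6 / 2 expressed through methods rather than operators.
	a := new(big.Rat).SetInt64(6)
	b := new(big.Rat).SetInt64(2)

	// Quo stores a/b in its receiver and returns it, which allows chaining.
	q := new(big.Rat).Quo(a, b)
	fmt.Println(q.RatString()) // 3
}
```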
In Python, map() works on any data that follows the sequence protocol. It does The Right Thing™ whether I feed it a string or a list or even a tuple.
Can't I have my cake in OCaml too? Do I really have no other choice but to look at the collection type I'm using and find a corresponding List.map or an Array.map or a Buffer.map or a String.map? Some of these don't even exist! Is what I'm asking for unusual? I must be missing something.
The closest you will get to this is the module Enum in OCaml Batteries Included (formerly of Extlib). Enum defines maps and folds over Enum.t; you just have to use a conversion to/from Enum.t for your datatype. The conversions can be fairly light-weight, because Enum.t is lazy.
What you really want is Haskell-style type classes, like Foldable and Functor (which generalizes "maps"). The Haskell libraries define instances of Foldable and Functor for lists, arrays, and trees. Another relevant technique is the "Scrap Your Boilerplate" approach to generic programming. Since OCaml doesn't support type classes or higher-kinded polymorphism, I don't think you'd be able to express patterns like these in its type system.
There are two main solutions in OCaml:
Jacques Garrigue already implemented a syntactically-light but inefficient approach for many data structures several years ago. You just wrap the collections in objects that provide a map method. Then you can do collection#map to use the map function for any kind of collection. This is more general than your requirements because it allows different kinds of data structures to be substituted at run time. However, this is not very useful in practice so the approach was never widely adopted.
A syntactically-heavier but efficient, robust and static solution is to use functors to parameterize your code over the data structure you are using (see the sketch after this answer). This makes it trivial to reuse your code with different data structures. See Markus Mottl's OCaml translations of Okasaki's book "Purely Functional Data Structures" for some great examples.
If you aren't looking for that kind of power and just want brevity then, of course, you can just create a module alias with a shorter name (e.g. module S = String).
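As a small sketch of the functor approach (the signature and module names here are invented): the generic code only asks for a map function, and you instantiate it once per container.

```ocaml
module type MAPPABLE = sig
  type 'a t
  val map : ('a -> 'b) -> 'a t -> 'b t
end

(* Generic code, written once, knowing nothing about the concrete container. *)
module Doubler (M : MAPPABLE) = struct
  let double xs = M.map (fun x -> x * 2) xs
end

module ListDoubler = Doubler (struct
  type 'a t = 'a list
  let map = List.map
end)

module ArrayDoubler = Doubler (struct
  type 'a t = 'a array
  let map = Array.map
end)

let _ = ListDoubler.double [1; 2; 3]      (* [2; 4; 6] *)
let _ = ArrayDoubler.double [|1; 2; 3|]   (* [|2; 4; 6|] *)
```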
The problem is that each container has a different representation and requires different code for map/reduce to iterate over it. This is why there are separate functions. Most languages provide some sort of general interface for containers (such as the sequence protocol you mentioned) so functions like map/reduce can be implemented abstractly, but this is not done for the types you mentioned.
As long as you define a type t and a comparison function (val compare : t -> t -> int) in your module, Map.Make will give you the map you want.
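For completeness, a minimal sketch of what this last answer describes (note that Map.Make builds a map data structure keyed by your type, rather than a map function; the Point module here is made up):

```ocaml
module Point = struct
  type t = int * int
  let compare = Stdlib.compare
end

module PointMap = Map.Make (Point)

let m = PointMap.(empty |> add (1, 2) "a" |> add (3, 4) "b")
let () = print_endline (PointMap.find (1, 2) m)   (* prints "a" *)
```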