#BartoszMilewski : « it would seem that #mathematical proofs are irrelevant to #programming. Or so it may seem, until you learn about the #CurryHoward isomorphism — or #propositions as #types, as it is sometimes called — which says that there is a one to one correspondence between #logic and #programs, and that every #function can be seen as a #proof of a #theorem. »
https://bartoszmilewski.com/2018/01/23/pointwise-kan-extensions/
#CategoryTheory #Haskell
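A minimal illustration of propositions-as-types in #Haskell (function names are mine, for illustration): each polymorphic function is a proof of the proposition its type states.

```haskell
-- Proof that A implies A (identity).
proofId :: a -> a
proofId x = x

-- Proof that (A and B) implies A (projection).
proofFst :: (a, b) -> a
proofFst (x, _) = x

-- Proof of modus ponens: ((A implies B) and A) implies B.
modusPonens :: (a -> b, a) -> b
modusPonens (f, x) = f x
```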
@amiloradovsky
It's already great the extent to which #TypeClasses enable generating code mechanically, be it through deriving or dictionary inference.
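For instance (a minimal sketch, names are mine): « deriving » mechanically generates the type-class dictionaries, and dictionary inference picks the right instance at each call site.

```haskell
-- GHC derives the Eq, Ord and Show dictionaries mechanically.
data Color = Red | Green | Blue
  deriving (Eq, Ord, Show)

-- Dictionary inference selects the derived instances here.
describe :: Color -> String
describe c = show c ++ " <= Blue is " ++ show (c <= Blue)
```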
In #Haskell, I try to avoid #DependentTypes unless the gain overwhelms the pain — which for me happened only once, in https://hackage.haskell.org/package/symantic — code which I hope won't change a lot, because each redesign has been a HUGE stress: even if I know I can write a correct term-level algorithm, I'm never sure I'll be able to satisfy the #TypeSystem…
@amiloradovsky
Often hard to follow, but here is #ConorMcBride's take on types:
« #ConorMcBride - What are #Types for, or are they only Against?»
https://www.youtube.com/watch?v=3U3lV5VPmOU
#ConorMcBride : « I'm trying to convince the Haskellers to let types depend on runtime values and the dependently typed programmers to let types depend on erasable values »
Anyway, I agree when he says we need more tools to support redesign, and better error reporting.
https://www.youtube.com/watch?v=ad4BVmPni7A
« #TypeInference needs Revolution »
@amiloradovsky
Also, since I've discovered #symantics (aka. #TaglessFinal), I find it useful in some cases to think the other way around: not generating #code from human-readable #specs, but generating human-readable #specs by running developer-readable #code. That way, #specs and #code are always in sync.
For example, the same code, instantiated either at a Parser type or at an EBNF type, generates either an actual parser or a list of grammar rules. I'm also using this approach to generate XML schemas.
@amiloradovsky
to be less ambiguous: s/discovered/learnt/
Amongst the many great courses and experiments of #OlegKiselyov: http://okmij.org/ftp/tagless-final/
@julm I'm sorry in advance for responding without actually checking out the links, esp. on the "typed final style".
If I said something stupid, please feel free to point it out :) Or I'll do it myself as I'll have more time…
@julm First, I like the dichotomy between human-readable and developer-readable. 😉
I don't think formal code can be generated from informal specs algorithmically.
In the other direction, generation of human-readable specs, a.k.a. documentation, from a developer-readable code, a.k.a. sources — sure, isn't that literate programming or "self-documenting"?
Sources often do (and should) contain both a human-readable free-form part (i.e. comments) and a developer-readable part to be machine-translated.
@julm
So the point here is to not just quote the relevant pieces of the source code in the docs/specs, but to process them more deeply in order to make it look more natural for a human reader (typesetting, diagrams, etc.)?
Yep, that sounds good.
Lastly, I suppose it would confuse a lot of folks to call the generated parts of the documentation a "specification" — it's just counter-intuitive, from a project-management PoV.
@amiloradovsky
Yep, that's it.
@amiloradovsky
I'd say using #symantics to generate specs is one step further than #LiterateProgramming: it's not just extracting comments to build documentation. It's actually running the code itself, but to print it, not to evaluate it; eg. 1+2 outputs "1+2" instead of 3.
The code prints itself, in a syntax that we can call specs.
#symantics are just an (extensible) way to do an embedded Domain-Specific Language. eDSLs exist mainly to be analyzed, and here one such analysis is printing specs :)
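A minimal tagless-final sketch of that idea (class and instance names are mine, not from the symantic package): the same term, instantiated at two interpreters, either evaluates or prints itself.

```haskell
-- The symantics: syntax whose meaning is given by each instance.
class Sym repr where
  int :: Int -> repr
  add :: repr -> repr -> repr

-- Interpreter 1: evaluate.
instance Sym Int where
  int n   = n
  add x y = x + y

-- Interpreter 2: print (the « specs »).
newtype Pretty = Pretty { pretty :: String }
instance Sym Pretty where
  int n                     = Pretty (show n)
  add (Pretty x) (Pretty y) = Pretty (x ++ "+" ++ y)

-- One term, two meanings: (term :: Int) is 3, pretty term is "1+2".
term :: Sym repr => repr
term = add (int 1) (int 2)
```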
@julm I'd prefer to illustrate the ideas of dependent typing, both "full" (say, #MLTT) and "partial" (say, parametric modules, #GADT's, polymorphic types, and so on), with languages like #Coq and #OCaml, since I believe they are much cleaner.
And the "error reporting" issue seems to be Haskell's only, that's what happens when the language designers themselves lose control over all the terminology and abstractions they utilize; no offense.
PS: IDK what he means by "erasable values"; will check.
@amiloradovsky
By « erasable » #ConorMcBride means not available at runtime, erased by the compiler.
Talking about #Haskell: « this presumption that runtime vs. erasable lines up with terms vs. types is now causing genuine pain and adding a great deal of unnecessary complexity to the way we do things.
I claim that term-versus-types and runtime-versus-erasable are orthogonal. »
[…/…]
@amiloradovsky
[…/…]
« Also, I like to write types which are going to get erased in order to get inferred runtime terms.
I'm perfectly happy to write simple high-level specifications that don't make it to runtime, as long as the execution of my design is inferred for me and executed, that is such a win! »
@julm I may be a bit ignorant (especially about the Haskell branch), but Haskell, with its "semi-dependent" type system, is probably as good an illustration of dependent types as C++ is of OOP…
So I think it's OK to be confused and frustrated by Haskell's type system and Haskellers' view on satisfaction…
Although type classes, as a sort of syntactic sugar (overloading, generic functions, whatever), are not that bad a thing — but I'd like to have them in addition to parametric modules, and not in place of them.
@amiloradovsky
Sure, #DependentTypes in #Haskell are quite hacky.
Going back and forth between monomorphic types and terms is now easy with the #Typeable machinery and #GADTs.
However doing the same with rank-1 polymorphic types (eg. forall a. a -> Maybe a) is much harder.
Though I was able to do it in #symantic by maintaining a type-level list of type variables, and using the bleeding edge #TypeInType extension to deconstruct higher kinds (eg. getting types f and a from: f a).
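For instance, the monomorphic round-trip can be sketched like this (a simplified example of mine, not the symantic package's actual types): a #GADT witness goes from term to type, and #Typeable goes from type back to a term-level test.

```haskell
{-# LANGUAGE GADTs, ScopedTypeVariables #-}
import Data.Typeable (Typeable, eqT)
import Data.Type.Equality ((:~:)(Refl))

-- Term-level witnesses of a couple of monomorphic types.
data Ty a where
  TyInt    :: Ty Int
  TyString :: Ty String

-- Term to type: matching the witness refines « a ».
defaultOf :: Ty a -> a
defaultOf TyInt    = 0
defaultOf TyString = ""

-- Type to term: Typeable decides type equality at runtime.
sameType :: forall proxy a b. (Typeable a, Typeable b)
         => proxy a -> proxy b -> Bool
sameType _ _ = case eqT :: Maybe (a :~: b) of
  Just Refl -> True
  Nothing   -> False
```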
@amiloradovsky
And concerning rank-N polymorphic types (eg. (forall a. a -> a) -> Bool), I wasn't able to find a systematic way to write a term encoding them. I tried a few things using #GHC's #ImpredicativeTypes extension, but to no avail.
Will likely require more work directly on #GHC :\
@julm My guess would be that Haskell is simply a rather inappropriate tool for that kind of thing. Or I'm missing something.
Type families, and sum and product types on them, are relatively easy concepts, and there are languages which do not complicate them as much.
Polymorphic types and parametric modules are essentially just sum types, and they are relatively easy to implement and type-check automatically.
But the product types aren't — composing and comparing functions is work for humans.
@julm It appears that the troubles you ran into with Haskell have to do with universes and stratification of types.
But the last example does not look like a "higher type"; it all seems to happen within Type_0 ("small types").
Yet, I'm a bit confused about the notation "(forall a. a -> a) -> Bool" — "a" has a type (but I never saw anybody write it explicitly; always Type_0?), and "forall a. a -> a" is a product type over a family of endofunctions, parametrized by the type of "a"?
@amiloradovsky
In #Haskell, « forall » is like a lambda, but at the type-level, not the term-level. It (explicitly) binds a fresh type variable instead of a term variable.
https://en.wikibooks.org/wiki/Haskell/Existentially_quantified_types
https://wiki.haskell.org/Rank-N_types
https://ghc.haskell.org/trac/ghc/wiki/DependentHaskell
Useful, for instance, to allow using a term « f » on terms whose types aren't the same:
foo :: (forall a. a -> a) -> (Int, String)
foo f = (f 42, f "42")
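Made runnable, the example just needs #GHC's RankNTypes extension:

```haskell
{-# LANGUAGE RankNTypes #-}

-- « f » stays polymorphic inside « foo »:
-- it is applied at Int AND at String.
foo :: (forall a. a -> a) -> (Int, String)
foo f = (f 42, f "42")
```

So `foo id` yields `(42, "42")`, the identity function being used at both types.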
@amiloradovsky
My problem with the term bijectively encoding a type was that the type-level list of type variables couldn't be nested within a type: it could only be at the top level (so-called prenex form).
A #GADT allows introducing constraints on its type parameters, but it cannot introduce « forall ».
@amiloradovsky
Consider a #Haskell #GADT named « Type », to bijectively encode (almost) any #Haskell type:
data Type vs a where
  TyInt :: Type vs Int
  -- so that matching TyInt will say « a » is « Int »
  TyVar :: Var vs v -> Type vs v
  -- so that matching « TyVar var » will say « a » is « v », where « v » is indexed in the list of types « vs »
  TyApp :: Type vs f -> Type vs b -> Type vs (f b)
  -- so that matching « TyApp lam arg » will say « a » is the type application « f b »
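A compilable sketch of that idea (my guesses, not the symantic package's actual code): « Var vs v » is a De Bruijn-style proof that « v » occurs in the type-level list « vs », and a TyMaybe constructor stands in for the kind-polymorphic TyApp, which, as said above, needs the bleeding-edge extensions.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

-- « Var vs v » proves that type « v » occurs in the list « vs ».
data Var (vs :: [*]) (v :: *) where
  VarZ :: Var (v ': vs) v
  VarS :: Var vs v -> Var (w ': vs) v

-- A cut-down « Type » witness (TyMaybe instead of a general TyApp).
data Type (vs :: [*]) (a :: *) where
  TyInt   :: Type vs Int
  TyVar   :: Var vs v -> Type vs v
  TyMaybe :: Type vs a -> Type vs (Maybe a)

-- The index of a variable in the list.
varIx :: Var vs v -> Int
varIx VarZ     = 0
varIx (VarS n) = 1 + varIx n

-- The encoded type prints itself.
showType :: Type vs a -> String
showType TyInt       = "Int"
showType (TyVar v)   = "v" ++ show (varIx v)
showType (TyMaybe t) = "(Maybe " ++ showType t ++ ")"
```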
@amiloradovsky
But to support rank-N polymorphism, I would need something in that #GADT able to say « a » is a « forall b. <the only type expression that could use b> » — effectively instantiating « a » with a polymorphic type: so-called #ImpredicativePolymorphism.
But this is too bleeding edge:
foo :: Maybe (forall a. a -> a)
<interactive>:17:8: error:
    • Illegal polymorphic type: forall a. a -> a
      GHC doesn't yet support impredicative polymorphism
@amiloradovsky
Sorry for the multiple deletes/reposts, mastodon is not easy on me for discussing technical stuff ^^
@julm Whatever, I need a bit of time to process it anyway, so I'm not going to reply very quickly. Although it's likely better to just append a correction.
@julm So you're writing a "macro processor" (à la CL's defmacro) for Haskell, and you can't implement it in Haskell itself, since it doesn't allow you to match some constructors (e.g. the dependent product's)?
In Common Lisp, for example, you could change the syntax (parse the expressions) however you like by creating special read-tables.
Then maybe for Haskell too the solution is to go a level lower? (I don't know how exactly it is structured.)
Anyway, Haskell appears very inconsistent to me.
@julm Again, on pure type theory and #Coq: I haven't noticed the distinction between "term-level" and "type-level" variables/values playing any role there — there are variables, and they all have types: the "type variables" are just variables of type "Type", which is built-in, in a sense.
I can't however competently compare the extensibility of Haskell+Agda/Idris/… and OCaml+Coq. So I'm not quite sure it would be easier to implement an EDSL in the latter, but I have "reasons to believe" :)
@amiloradovsky
I wouldn't call #symantic a macro processor; rather a compiler, projecting some chosen types and terms from #Haskell into a custom runtime language. While reusing #GHC's machinery abundantly to do that, hence carrying along, with every runtime term, a proof that this term represents such or such #Haskell type.
@amiloradovsky
One of #symantic's applications is to build a user-facing language for an #accounting tool:
a well-known typed lambda calculus, with primitives helpful for #accounting.
The goal is a middle ground between expressive-but-brittle spreadsheet formulas, and robust-but-rigid software where you need to be (or pay) a fully-fledged developer (and overcome any legal or governance problems around the software) just to program the simple but missing calculation you need.
@amiloradovsky
« practical software freedom », that's what this #Emacs user calls it: « In the case of #Emacs the boundary between user and programmer is blurred as adapting the environment to one’s needs is already an act of #programming with a very low barrier to entry. #Emacs provides practical software freedom »
https://elephly.net/posts/2016-02-14-ilovefs-emacs.html
@amiloradovsky
It's certainly still too optimistic to expect casual end-users to configure their needs and express their queries in a typed lambda calculus — even with the best error-reporting feedback, and even after seeing the complex spreadsheets accountants write and suffer through.
But I hope it may smooth the learning curve for the most hacking-inclined amongst them, and maybe foster a diverse but robust plugin ecosystem. :]
@julm Here is my attempt to translate it into Coq:
First, define an option type:
Inductive Maybe (X: Type): Type := Just (x: X) | Nothing.
Then observe how it works:
Definition no_thing := Nothing.
Definition some_thing := Just nat 0.
Here "nat" is needed as an (otherwise implicit) argument for "Maybe".
And apply it to the example:
Definition foo: Maybe (forall a, a -> a).
To build a term we could just say:
apply Nothing. Qed.
@julm Or…
assert (id := fun X: Type => fun x: X => x).
apply (Just (forall x: Type, x -> x) id). Qed.
Where "id" is the identity function, defined locally within the proof.
Here I didn't try to make the example clean & short: I just wanted to demonstrate that there's nothing extraordinary about that kind of expression.
@julm Yes, a (sub)expression binding a variable.
∀ from predicate calculus, in type theory, gets translated into ∏ (the product type constructor). But programming languages just use "forall", for convenience — not only for propositions, but also for sets, and small types generally.
Does #Haskell, BTW, also have an "exists" (sum type constructor)?
In #Coq, for example, the type signature would look like:
Definition foo: (forall a, a -> a) -> (Z * string).
where "a: Type" is inferred.
@amiloradovsky
It's confusing at first, but #Haskell has no « exists » keyword: it reuses the « forall » keyword to introduce existentially quantified type variables,
using the fact that both formulas are equivalent:
(exists x. p x) -> q
forall x. (p x -> q)
https://stackoverflow.com/questions/28545545/why-there-is-no-an-exist-keyword-in-haskell-for-existential-quantification
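Concretely, #GHC's ExistentialQuantification extension lets one write that directly (a minimal sketch, type and function names are mine):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- « exists a. Show a ∧ a », written with the « forall » keyword:
-- the quantifier sits on the constructor, before the result type.
data Showable = forall a. Show a => MkShowable a

-- Consuming it follows the equivalence:
-- (exists a. p a) -> q  ≡  forall a. (p a -> q)
render :: Showable -> String
render (MkShowable x) = show x
```

So a heterogeneous list like `[MkShowable (1 :: Int), MkShowable "hi"]` type-checks, each element carrying its own Show dictionary.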
@julm Indeed, some of the "exponentiation rules" hold for types, converting between sums and products (including dependent ones) revolving around an exponent (the function type). But I doubt we can do without, say, variants (sums) just because we already have tuples (products)…
So, meh, not convinced. Another asymmetry, another kludge.
@julm Moreover, I presume that in the near future "programming" will consist primarily of composing dependent type signatures (from human-readable specifications), and maybe giving some hints to a heuristic proof/algorithm search engine.
And writing the algorithms by hand will be seen as low-level programming, like assembler today.