I think you may find some interesting reading, in a similar vein and with some curious twists, in the book “Probability Theory: The Logic of Science” by E. T. Jaynes, specifically chapter 2. It derives the quantitative rules of probability theory from the ‘basic desiderata’ of ‘plausible reasoning’. What I find interesting is, first, that it is another fine example of the approach you have applied here and, I surmise, quite widely elsewhere: the approach is the same and the particular steps are also similar, although since the problem is more general, the equations involved are more general functional equations, one being ‘The Associativity Equation’. It is also interesting that the author took this approach, with some inspiration from G. Polya, around 65 years ago, and while active in applying computation to probability and statistics, was certainly not of a theoretical computer science orientation.

In a category with an initial object, AFAIK it’s standard to encode constants as morphisms from the initial object.

Sorry, of course I meant terminal object (1 or Unit).


```
-- Primitives
Prim  ∷ Prim (a → b) → (a ↣ b)
Const ∷ Prim b → (a ↣ b)
```

I’m not happy with the similarity of Prim and Const. Perhaps there’s a simpler formulation.

Wouldn’t you want to model `Const b` as a primitive `p` of type `Unit → b`, and then construct `Prim p` of type `Unit ↣ b`? In a category with an initial object, AFAIK it’s standard to encode constants as morphisms from the initial object.

The polymorphic `a` in `Const`’s result type seems especially unfortunate, since you can inadvertently pass whatever argument (you will likely still get some error, but probably not the best one).

I see you’re sharing `Prim` with Haskell code, so those constants should also be encoded without the `Unit →` part of the type in some places. Unfortunately I don’t see a perfect solution to that: it might be best to have a different type of primitives for CCCs, with a constructor for function primitives and one for constants which adds the `Unit →` part, because that at least keeps the CCC language precisely matching its spec.
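A minimal sketch of that last suggestion, assuming a stand-in `Prim` type (all names here are hypothetical, not the post’s actual definitions):

```haskell
{-# LANGUAGE GADTs #-}

-- Hypothetical stand-in for the post's Prim type.
data Prim a = Prim String

-- A CCC-specific primitive type, as suggested: one constructor for
-- function primitives, and one for constants that adds the Unit → part.
data CCCPrim a b where
  FunPrim   :: Prim (a -> b) -> CCCPrim a b
  ConstPrim :: Prim b        -> CCCPrim () b

primName :: CCCPrim a b -> String
primName (FunPrim   (Prim s)) = s
primName (ConstPrim (Prim s)) = s
```

This way the CCC language itself enforces that constants are morphisms from `Unit`, while the shared `Prim` type stays untouched.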

(Sorry for my bad quoting; I can’t find how to format comments properly, so I just tried out some Markdown dialect that kind of works.)

```
main :: MMap k v -> MMap kk vv
```

Dictionary lookups on a key block until that key arrives in the dictionary.

```
lookup :: MMap k v -> k -> v
```

Sometimes you want to wait on all keys at the same time and produce a result. This form accomplishes that:

```
mapp :: MMap k v -> (k -> v -> MMap kk vv) -> MMap kk vv
```

The only caveat is that if `mapp` ever detects that the results of the function for two key-value pairs disagree at a key, it must signal an error.

All destruction actions are pushed to the top level of the model. The option to delete input records can be provided. If a record is deleted, all computations that depended on its existence can be automatically recomputed.
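For what it’s worth, the blocking-lookup behaviour can be sketched with one `MVar` per key (all names hypothetical, and in `IO` rather than the pure interface above):

```haskell
import Control.Concurrent.MVar
import qualified Data.Map as Map

-- Each key maps to an MVar that is filled when its value arrives.
newtype MMap k v = MMap (MVar (Map.Map k (MVar v)))

newMMap :: IO (MMap k v)
newMMap = MMap <$> newMVar Map.empty

-- Find or create the slot for a key.
slotFor :: Ord k => MMap k v -> k -> IO (MVar v)
slotFor (MMap ref) k = modifyMVar ref $ \m ->
  case Map.lookup k m of
    Just mv -> return (m, mv)
    Nothing -> do mv <- newEmptyMVar
                  return (Map.insert k mv m, mv)

-- Blocks until that key arrives in the dictionary.
lookupMMap :: Ord k => MMap k v -> k -> IO v
lookupMMap mm k = slotFor mm k >>= readMVar

insertMMap :: Ord k => MMap k v -> k -> v -> IO ()
insertMMap mm k v = slotFor mm k >>= \mv -> () <$ tryPutMVar mv v
```

A `lookupMMap` on a key that has not been inserted yet simply blocks on the empty `MVar` until some other thread fills it.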

Since `f x y = 5` is a 2-argument function, and `f x = 5` is a 1-argument function, I thought `f = 5` must be a 0-argument function. The syntax makes it hard to tell that all functions take exactly one argument.

I admit I was hoping you’d covered this base already in your work or knew someone who did.

I’m not sure I have a good enough sense of what you mean by a “precise denotation” to come up with one.

I can think of some of my own approaches to the problem, and I may try some experiments to see what works out in practice, but I’m not sure if they are going to be as “good” as the procedural version.

For example, if the input is given as a list of lines of text and the output is given as a list of lines, then one could have a function from input lines to output lines. Like:

```
prog []          = ["What is your name?"]
prog [name]      = (prog []) ++ ["Hello, " ++ name ++ "! How old are you?"]
prog [name, age] = (prog [name]) ++ ["Well, you look much younger..."]
```

Presumably I can devise a function from time to a list of input lines, and pass that into this function to get the output lines.

I know this approach has major flaws, but is this an example of what you mean by casting something in terms of denotation?
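If it helps, the time-indexed composition could look something like this (self-contained; the list of timestamped input events is hypothetical, and the last `prog` clause is generalized to make the match total):

```haskell
type Time = Double

-- Output lines as a function of the input lines received so far.
prog :: [String] -> [String]
prog []           = ["What is your name?"]
prog [name]       = prog [] ++ ["Hello, " ++ name ++ "! How old are you?"]
prog (name : _ : _) = prog [name] ++ ["Well, you look much younger..."]

-- A hypothetical input history: lines paired with arrival times.
inputEvents :: [(Time, String)]
inputEvents = [(1.0, "Alice"), (2.5, "30")]

-- Input lines that have arrived by time t.
inputsAt :: Time -> [String]
inputsAt t = [s | (t', s) <- inputEvents, t' <= t]

-- Composing the two gives the output lines as a function of time.
outputAt :: Time -> [String]
outputAt = prog . inputsAt
```

At time 0 the output is just the first prompt; each arriving input line extends the output by one more line.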

`IO` is not. One way to investigate is to cast something like your `IO` example in terms of a (precise) denotation, and then see whether/how to capture that denotation in terms of FRP’s, which is functions of (continuous) time. I expect that doing so would flush out some hidden assumptions and reveal some alternatives.

```
do printLine "What is your name?"
   name ← readLine
   printLine ("Hello, " ++ name ++ "! How old are you?")
   age ← readLine
   printLine "Well, you look much younger..."
```

Is FRP unsuitable for this kind of application?

]]>The basic idea is to represent a 3D object as a “distance field”, i.e. a continuous function from R^3 to R that returns the distance to the object. The value of that function is zero inside the object, and grows as you get further away from it. That technique is well suited to ray marching on the GPU, because when you’re marching along the ray, evaluating the distance field at the current point gives you the size of the next step, so you can make bigger steps when the ray passes far from the object. Simple objects like spheres have distance fields that are easy to define, and simple operations on objects often correspond to simple operations on distance fields. It’s also easy to approximate things like normal direction, ambient occlusion, and even penumbras (send a ray to the light source and note how closely it passed to the occluder).

In a perfect world, we could have a combinator library in Haskell for rendering distance fields that would compile to a single GPU shader program. It’s scary how well this idea fits together.
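For concreteness, here is a tiny CPU-side sketch of what such a combinator library might look like (all names hypothetical; a real version would compile to a GPU shader). It uses signed distances (negative inside the object, a common variant of the field described above) and sphere tracing as the ray-marching loop:

```haskell
type V3   = (Double, Double, Double)
type Dist = V3 -> Double  -- signed distance to the nearest surface

sphere :: Double -> Dist
sphere r (x, y, z) = sqrt (x*x + y*y + z*z) - r

-- Simple operations on objects are simple operations on fields:
union :: Dist -> Dist -> Dist
union f g p = min (f p) (g p)

translate :: V3 -> Dist -> Dist
translate (dx, dy, dz) f (x, y, z) = f (x - dx, y - dy, z - dz)

-- Sphere tracing: the field value at the current point is a safe
-- step size, so we step by it until we hit the surface or give up.
march :: Dist -> V3 -> V3 -> Maybe Double
march f origin dir = go 0
  where
    go t
      | t > 100   = Nothing      -- marched too far: no hit
      | d < 1e-4  = Just t       -- close enough: hit at distance t
      | otherwise = go (t + d)
      where
        d = f (origin `plus` scale t dir)
    plus (a, b, c) (a', b', c') = (a + a', b + b', c + c')
    scale s (a, b, c) = (s * a, s * b, s * c)
```

For example, marching toward a unit sphere at the origin from `(0,0,-5)` along `(0,0,1)` hits at distance 4, taking bigger steps while the ray is far from the surface.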
