## Algebraic Geometry to Geometric Algebra

You would think that with names so similar there would be a connection. Since it is math, there is almost always a connection between everything.

I’m starting to learn algebraic geometry to help me better understand how to choose, and work in, a projective space when using geometric algebra.

GA tends to focus on a few spaces: Euclidean, homogeneous, and conformal. The last two are projective.

Algebraic geometry also deals with the algebra of projective spaces, but focuses on polynomials.

Here is my guess: algebraic geometry can answer questions like what is gained from working in one projective space over another, while GA supplies the operators for working within those spaces.

Maybe GA is closer to the types and AG is closer to the values.

## The File Format of a Game

Thinking about the file format of a game is usually attacked after the game has existed for a while. It is an after-the-fact activity that occurs piecemeal, somewhat unconsciously, as game elements are abstracted out to make the game more data driven.

The abstraction usually hits a roadblock at behaviors. The problem is that most games are compiled, but complex behaviors are best expressed in a programming language.

Enter scripting languages. This is the right idea, but it introduces a new set of issues. Now there are two languages, a fast compiled (and usually complex) language, and a simple interpreted language.

We still don’t have a file format that our game pipeline can load. We can’t interpret most of the code. The code is not written for the pipeline to understand.

This is a problem, and it prevents powerful forms of automation. Between partial application, profile-guided compilation/memory allocation, and lazy incremental computing, I think all of the optimizations that programmers apply by hand can be automated. Doing these types of optimization requires an understandable and predictable language to reason about. Objective-C is too big. It’s too confusing. There are too many primitives.

Haskell is also too complex, but it is closer. One lesson learned from Haskell is that all the built-in data types should be used through an interface. Haskell learned this the hard way with the whole String vs Text/ByteString saga. The specific implementations would be chosen by the compiler from profile-guided optimization. Whether the list interface is backed by a linked list, a vector, or a fixed array would be determined by how the object is used in the code. This is something that can be at least semi-automated.
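
A rough sketch of what I mean (this Sequence class is made up for illustration, not any real library’s API):

```haskell
import qualified Data.Vector as V

-- A toy sequence interface; a profile-guided build step would pick
-- which instance backs each use site.
class Sequence s where
  fromList :: [a] -> s a
  index    :: s a -> Int -> a

-- Linked list: cheap to build, slow to index.
instance Sequence [] where
  fromList = id
  index    = (!!)

-- Vector: contiguous memory, fast to index.
instance Sequence V.Vector where
  fromList = V.fromList
  index    = (V.!)
```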

I don’t know the right language to write games in. It is whatever language is the most productive all the way through optimization. This includes the community.

The question of what language to pick is best posed like this: “If we were hiring as fast as possible, at what rate could we acquire employees from the community, and how many games would be completed from that community and language combo?”

So there are two reasons to have a simple language. One is for optimizations, the other is for hiring employees.

I don’t know what the answer is though.

Back to the format of a game. It’s simple.

```haskell
data Game = Game Code Data
```

I have been talking about the Code part. It’s some language in which we can perform the optimizations I talked about. It’s this unicorn of a language. For the rest of the post I am assuming that it exists and that it supports the automatic optimization I talked about earlier. I’ll go into more detail.

So the data breaks into two things. First there is the data that is never mutated. This is where the art assets go, for the most part. This is our read-only data, with the ability to add local contexts: it is the Reader monad. The other part of the data is the mutable data. This is all the tweaks that producers can make: it’s our State monad. So the data creates an environment that the game executes in. The Writer is where all computationally irrelevant output goes, like metrics; I will leave it out for now.
```haskell
type ReflectableEnvironment = Dictionary String ReflectableType
type ImmutableEnvironment   = ReflectableEnvironment
type MutableEnvironment     = ReflectableEnvironment

-- RWST r w s: the Reader holds the immutable data, the Writer is () since
-- we are leaving it out, and the State holds the mutable data.
type Environment = RWST ImmutableEnvironment () MutableEnvironment
```

This split makes it easy to partially apply at any point in execution, using whatever data is in the Reader monad to make a new function. This would be equivalent to replacing the update loop with a new one specialized to all the available read-only data. Think fixed-size preallocated arrays based off the data.
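
As a toy illustration (all the names here are made up), specialization is just partial application:

```haskell
type Assets    = [Int]  -- stand-in for the read-only data
type GameState = Int    -- stand-in for the mutable data

-- The general update loop, parameterized over the read-only data.
update :: Assets -> GameState -> GameState
update assets s = s + sum assets

-- Fixing the assets yields a new function; a partial evaluator could
-- now fold `sum assets` down to the constant 6.
specializedUpdate :: GameState -> GameState
specializedUpdate = update [1, 2, 3]
```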

Making this concept efficient would require not only that the structure storing the read-only data be amenable to incremental updates, but that the structure storing the Code be incrementally updatable too. Think edit-and-continue. Specializing the program must be something that can be done incrementally.

I have no doubt that the algorithm can be done in polynomial time. Depending on how the specializer works, it might even be sublinear, if whole-program optimizations can be made incremental.

Let’s assume that incremental specialization is cheap. Then you could have the client perform its own specializations. If a user’s mutable state hasn’t changed in x years, or whatever threshold is optimal, we could specialize using all of the user’s slow-changing data. Each user would then have a game that was optimal for their data, based on the probability of the data mutating.

If some of the mutable data was specialized away and then mutated after specialization, that would incur a latency hit, as the game would need to be recreated. It might be impossible to ever make this fast enough to happen regularly, but if the likelihood was once a year, or whatever we set, we could just perform the operation in the background and let the user know that it is updating. The optimal threshold would be a trade-off between the speed of the optimizations and the latency of applying them.

There is no way you do this stuff by hand. I think partial evaluation is the future. The trick to making it efficient is to rethink all the file formats that the game uses, or more simply, the one file format of the whole game, as a list of edits. The list of edits is passed to a specializer along with an evaluator: a JIT. A tracing JIT that logs long runs of user behavior and can process them in the background.

So the trick to incremental algorithms is to just pass in the changes. There is another way to go about it that is less concrete but makes it clearer what the process to optimize is.

There is some way to fit the data to a function; the edits are then changes to the parameters of the function. This is not what I think should be done. It is too hard (MaxEnt seems equivalent, although it goes in the other direction: it creates a functional that optimizes entropy given constraints. The nonlinear solutions from MaxEnt might not be suitable for games, but maybe they could work in the pipeline). Instead, a stopgap that is still pretty efficient is to use the data type definition to generate an edit list that is optimized for general plane trees. This is the gdiff library on Hackage.
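
To give the flavor, here is a made-up edit type (this is not gdiff’s actual API, just the shape of the idea):

```haskell
-- The whole game file becomes a stream of edits against a tree of values.
data Edit v
  = Insert  [Int] v  -- insert a value at a path of child indices
  | Delete  [Int]    -- delete the subtree at a path
  | Replace [Int] v  -- overwrite the value at a path

type GameFile v = [Edit v]

-- The loader and the specializer consume the same stream: applying an
-- edit and incrementally re-specializing are driven by the same data.
load :: (Edit v -> tree -> tree) -> tree -> GameFile v -> tree
load apply = foldl (flip apply)
```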

So for any data type we can generate a reasonably efficient edit list for our entire game. In this way we can stream the game. Enough for now.

## Of Montreal

Just got around to listening to “False Priest”. I never really stopped listening to “Hissing Fauna, Are You the Destroyer?”. I can’t listen to “Our Riotous Defects” without thinking of MC Chris or King Missile. I would like to hear an MC Chris cover. Nothing against King Missile.

I like the story. The chorus is “Crazy Girl” but the implied part is that the guy telling the story is a real asshole. He’s making her act crazy.

## Better Syntax FTW

I was reading the chapter “Tensors and Multilinear Forms” in “Tensor Geometry: The Geometric Viewpoint and its Uses”.

There was the following formula:

$f : L(W;X) \times L(X;Y) \times L(Y;Z) \to L(W;Z) : (A, B, C) \mapsto CBA$

It occurred to me that it was both a function signature and an implementation. I thought about how it would be written in Haskell.

```haskell
class (VectorSpace a, VectorSpace b) => LinearTrans a b l where
  trans :: a -> b

f :: (LinearTrans l) => l w x -> l x y -> l y z -> l w z
f a b c = c . b . a
```

I’ve curried the arguments and made the L more explicit with a constraint. For some reason Haskell is just clear syntax for understanding functions.

Thus:

$f \in L \left\lgroup L(W;X), L(X;Y), L(Y;Z); L(W;Z) \right\rgroup$

```haskell
instance (LinearTrans l) =>
         LinearTrans (l w x, l x y, l y z)
                     (l w z)
                     (l w x -> l x y -> l y z -> l w z) where
  trans = uncurry3 f
```

So if f can compose linear transforms, then it is also a linear transformation. Linear transformations are closed under composition.

## Math Story Time

Bishop Berkeley identified a fundamental flaw in the arguments for both the derivative and the integral made by Newton and other mathematicians of his day. At the time, both the derivative and the integral involved manipulating an infinite number of infinitesimal objects in an imprecise way.

Reasoning with the infinite is not advisable. It is unintuitive. Adding infinite cardinals is the same as multiplying them: you just take the biggest one (only the power set lets you climb up the ladder of infinities).

The problem was that mathematicians had not yet developed the idea of a mechanical reasoning process in Berkeley’s time. Logic, like probability, was outside the realm of mainstream mathematics in the early 1700s.

Anyway, no one cared about the foundations of calculus in the 18th century. They were having too much fun using it. Calculus could solve all these optimization problems that once required more complex proofs. There was this whole class of unexplored well-behaved functions.

Solving optimization problems with calculus is usually pretty easy. We just take the derivative and set it equal to zero. Sometimes the second derivative.

Taking the derivative of a polynomial is no more complicated than division with a remainder. It’s easy.
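
For example, to minimize $f(x) = x^2 - 4x$, set $f'(x) = 2x - 4 = 0$ to get $x = 2$; the second derivative $f''(x) = 2 > 0$ confirms it is a minimum.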

It is convenient to model physical phenomena as a function and its rate of change. Hopefully a polynomial, or something else whose derivative and antiderivative are easy to take. Add some conservation laws and that pretty much sums up physics.

So calculus made it easy to solve things with derivatives. But not everything has a derivative. Fractals don’t; there are no smooth parts of a fractal.

Fractals were not part of the 18th-century vernacular, but by the nineteenth century the “pathological” counterexamples of real analysis were being brought to the center of mathematics as evidence that more rigorous mathematical proofs were needed.

It turns out that the structure of the continuum is more complex than was initially realized. Understanding different continua became all the more important as calculus evolved from Newton’s calculus, where the domain of a function was the real numbers, to the calculus of variations, where the domain is a function space. Infinite-dimensional function spaces and R^n are different beasts. More precise definitions were needed to keep things well behaved.

All of this pushed Riemann to develop a more rigorous definition of the integral, based on the idea of an explicit limiting process. It was an algorithm you let run forever.

Rigour was being added, but just as quickly new problems were emerging.

All of this was the prehistory to my favorite act in the history of mathematics: the Foundational Crisis.

And the point is… after a long war, Agda is winning small battles for the finitists everywhere.

## Super Dag

I never thought about this before, but in Flash, because each node in the scene graph has a timeline, you really have a graph of graphs. A super DAG, to be more precise, where the DAG at each node is labeled by the frame number.
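
A sketch of that structure in Haskell (the names are made up):

```haskell
import qualified Data.IntMap as IM

-- Each scene node carries its own timeline: a map from frame number to
-- the sub-dag that is live on that frame.
data Node a = Node
  { content  :: a
  , timeline :: IM.IntMap (Dag a)
  }

newtype Dag a = Dag [Node a]  -- the children reachable from this point
```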

## The New GHC.Generics

I found a subtle issue with the way GHC.Generics converts the general trees of sum and product types into binary trees. Instead of using the “put all the forests on the left” isomorphism, it tries to create a balanced binary tree. This doesn’t seem too bad at first blush, but it actually makes the system less powerful than it could be.

In my case, I wanted to know the index of a particular sum type’s constructor. No dice. Can’t be done (reasonably; there is a total shit way that it is possible, but that is beside the point). Right, Left, Right, Left means a different index depending on the size of the balanced binary tree. If they had gone with the “left forest” method, I could get the index regardless of the size (laying out a forest is equivalent to laying out the list of immediate children — I guess all children are immediate children — on the left, and we only need the first level, so we can always get an index).
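
A sketch of why the left-forest layout makes indexing trivial (the ConIndex class is made up for illustration):

```haskell
{-# LANGUAGE TypeOperators, FlexibleInstances #-}
import GHC.Generics

class ConIndex f where
  conIndex :: f p -> Int

-- A lone constructor sits at index 0.
instance ConIndex (C1 c f) where
  conIndex _ = 0

-- With the list-like nesting C0 :+: (C1 :+: (C2 :+: ...)), the index is
-- just the number of R1s taken on the path.
instance ConIndex g => ConIndex (f :+: g) where
  conIndex (L1 _) = 0
  conIndex (R1 x) = 1 + conIndex x

-- Under the balanced layout this recursion breaks down: an L1 can hide a
-- whole subtree of constructors, so the path alone no longer determines
-- the index without knowing the size of the tree.
```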

Which brings me to my next point. It would be nice to know the constructor count of the sum type. Then even balanced binary trees would work.

Also, the binary tree layout makes the conversion of tuples annoying and slow, because you have to keep splitting your list of values in half.

I’m going to put together a tutorial and explain in detail how to use generics and what is problematic about the design.

After releasing simple-c-value, I just had to get that off my chest.