We have studied constraints on programs, but we can apply constraints to programmers instead.

Let's assume the following constraints:
  • The programmer must always figure out, for their specific algorithm, the maximum amount of memory it will take.
  • The programmer must use potentially lazy methods as much as possible, because infinite loops need more complex constraints to check (whether execution will always reach back to this point; whether it will always finish after receiving this signal pattern; whether it will always reach this point after receiving this signal pattern, etc.).
This is good coding style - when I download an image viewer, I should know how much memory it needs to run, or to open a 5-megabyte image, or something like that.

For some simulations etc. we cannot be sure - but we will still have those constraints, like memory limits. If the program is related to an open math problem, we also do not know - but that is not a constraint on the program; it is a limitation of the programmer.

When we have a program, it has some memory dump - say, x bytes of memory; the memory dump must contain the current execution point. A memory dump is also an integer - you can serialize it, since x bytes of memory hold integer values.

While running the program, you keep a dictionary with those memory dumps as keys. At each decision point, you store the key - whenever the dictionary already contains the given key, the program has entered a cycle and will never halt.
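As a minimal sketch of this check, assume a deterministic program whose whole memory dump fits in a hashable `state` value, and whose single execution step is the hypothetical function `step` (returning `None` when the program halts):

```python
def run_with_cycle_detection(step, state):
    """Run a deterministic program, detecting non-termination.

    `step` maps a memory dump (here, any hashable value) to the next
    dump, or to None when the program halts.  Since the machine is
    deterministic, seeing the same dump twice proves it cycles forever.
    """
    seen = set()                        # the dictionary of memory dumps
    while state is not None:
        if state in seen:
            return "never halts"        # key already stored -> cycle
        seen.add(state)
        state = step(state)
    return "halts"

# A toy counter that cycles modulo 3 and so never reaches an exit,
# versus one that counts up to 3 and stops.
looping = run_with_cycle_detection(lambda s: (s + 1) % 3, 0)
halting = run_with_cycle_detection(lambda s: None if s >= 3 else s + 1, 0)
print(looping, halting)  # never halts halts
```

The fixed memory bound is what makes this sound: with at most x bytes of state there are finitely many possible dumps, so the run must either halt or revisit one.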

You can make up several kinds of structures to give memory use more free play:
  • A memory request command, which adds memory to the working memory and cannot be called inside loops. This allows dynamic allocation, but not beyond the memory limits. For black-box functions, such allocation might have nothing to do with the top-level program's memory limits (such things need to be checked separately).
  • Memory requirements defined for functions.
  • A defined maximum number of iterations for loops, which effectively allows calculating the total memory need of a loop.
Such things make the memory need more portable. For any fixed memory need or computer, it is always possible to decide the halting problem.
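The three structures above can be combined into a simple static bound. Here is a minimal sketch; the node kinds ("alloc", "call", "seq", "loop") and the toy program tree are illustrative assumptions, not a real language:

```python
# Sketch of a static memory-bound calculator: every construct declares
# its own cost, and loops declare a maximum iteration count, so the
# whole program tree reduces to one number before the program runs.

def memory_bound(node):
    """Return an upper bound on the memory a program tree can use."""
    kind = node["kind"]
    if kind == "alloc":                 # explicit memory request command
        return node["bytes"]
    if kind == "call":                  # function with a declared requirement
        return node["requires"]
    if kind == "seq":                   # sequential block: allocations accumulate
        return sum(memory_bound(child) for child in node["body"])
    if kind == "loop":                  # declared max iterations bound the loop
        return node["max_iters"] * memory_bound(node["body"])
    raise ValueError(f"unknown node kind: {kind}")

# 1 KiB up front, plus a loop calling a 64-byte function at most 10 times.
program = {"kind": "seq", "body": [
    {"kind": "alloc", "bytes": 1024},
    {"kind": "loop", "max_iters": 10,
     "body": {"kind": "call", "requires": 64}},
]}
print(memory_bound(program))  # 1664
```

This deliberately over-approximates (a loop iteration that frees its memory is still charged in full); a tighter bound would track peak rather than total use.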

You can use a set of optimizations; some of the more important ones:
  • Only store the variables used in the iteration process as dictionary keys (you do not need to store strings if all you do with them is display the string at the correct index in some situation, or concatenate two strings without using their lengths to decide when the iteration should stop).
  • Only store the local variables of each function, and keep local dictionaries instead of a global one.
  • Cache the results of functions. Specifically, for functions with fixed-size input you can make sure they always return (by trying all possible inputs), and then turn the checks off for those functions.
  • Do not run checks for loops which exit anyway - like walking through an array - and also leave them out of the list of "decision points".
This leaves open a series of math problems needed to tackle the problem further, but it should be effective for normal programs.
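The first optimization can be sketched by projecting the state before storing it. In this minimal illustration the `key_of` projection and the toy two-field state are assumptions; the approach is sound only when the omitted fields cannot affect control flow:

```python
def detect_cycle(step, state, key_of):
    """Cycle detection storing only the variables that drive the loop.

    `key_of` projects the full state down to the fields that can
    influence termination; large payloads (strings that are only
    displayed or concatenated) are left out of the dictionary key.
    """
    seen = set()
    while state is not None:
        key = key_of(state)
        if key in seen:
            return "never halts"
        seen.add(key)
        state = step(state)
    return "halts"

# Toy state: (counter, log_string).  The string grows forever but is
# never consulted, so only the counter belongs in the key.
def step(state):
    i, log = state
    return ((i + 1) % 5, log + "tick ")     # counter cycles, never halts

result = detect_cycle(step, (0, ""), key_of=lambda s: s[0])
print(result)  # never halts
```

Without the projection, the ever-growing string would make every full dump unique and the checker would never see a repeated key, even though the program plainly loops forever.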

The framework in which it would be practical to actually use such a solver (as it can take quite a lot of time to check a program):
  • Is located in a version control system cluster.
  • Each local checkout is connected to it as a local branch, and is thus included in the check.
  • Goes bottom-up, solving the halting problem for local functions first and then moving one level up.
  • Analyses and learns all the time, finding patterns - I have a very elaborate methodology for that, which is out of scope for this intro.
  • Gives feedback so that the programmer can see in the IDE which methods are checked, which have failed, and which are not yet tackled. The programmer can mark some methods or inputs (unit tests) as more interesting - since the cluster has the data to speed up checking, it is not practical to do this on a local computer.
The framework must be self-learning; thus it will do the following:
  • Find patterns.
  • Simplify patterns.
  • Check patterns for constraints.
There is a lot of work to do to really optimize this thing, but for computers with limited amounts of memory, this is the general case. It is not the general case for all scientific computations on supercomputers; it also might not tell you anything about programs which cannot be run because of hardware requirements. So it is not usable for solving the solvability of math problems - but it will need only optimization to rise to the task for everyday programming.