Let's assume the following constraints:
- The programmer must always work out, for their specific algorithm, the maximum amount of memory it is going to take.
- The programmer must use potentially lazy methods as much as possible, because infinite loops need more complex constraints to check (whether execution will always reach this point again; whether it will always finish after receiving this signal pattern; whether it will always reach this point after receiving this signal pattern, etc.).
For some simulations we cannot be sure - but we will still have those constraints, such as memory limits. If the question is tied to an open mathematical problem, we also do not know - but that is not a constraint on the program, it's a limitation of the programmer.
When we have a program, it has some memory dump - say, x bytes of memory - and that dump must include the current execution point. A memory dump is also an integer: you can serialize it, and x bytes of memory encode an integer value.
While running the program, you keep a dictionary with those memory dumps as keys. At each decision point you store the current dump; whenever the dictionary already contains the given key, the program has failed the check - it will repeat that state forever.
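The dictionary-of-memory-dumps idea above can be sketched as follows. This is a minimal illustration, not the author's implementation: `step` and `halts` are hypothetical names, and the "memory dump" is whatever hashable value fully captures the program state (including the execution point).

```python
# Cycle detection over program states: because the program is
# deterministic, revisiting any previously seen state means it can
# never terminate.

def halts(step, state):
    """step: deterministic transition function that returns the next
    state, or None when the program terminates.
    state: hashable serialization of the full memory dump."""
    seen = set()
    while state is not None:
        if state in seen:
            return False          # revisited state => infinite loop
        seen.add(state)
        state = step(state)
    return True                   # reached a terminating state

# A tiny program counting down from n; its state is just the integer n.
countdown = lambda n: n - 1 if n > 0 else None
print(halts(countdown, 10))       # True: reaches 0 and stops

# A program oscillating between two states never halts.
flip = lambda b: 1 - b
print(halts(flip, 0))             # False: 0 -> 1 -> 0 revisits 0
```

Note the cost that motivates the optimizations later in this text: in the worst case the `seen` set holds one entry per reachable state, which for x bytes of memory is up to 2^(8x) states.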
You can introduce several kinds of structures to give memory management more freedom:
- A memory request command, which adds memory to working memory and cannot be called inside loops. This allows dynamic allocation, but never beyond the memory borders. For black-box functions, such allocation might have nothing to do with the top program's memory limits (these cases need extra checking).
- Memory requirements declared for functions.
- For loops, a declared maximum number of iterations, which makes it possible to calculate the total memory need of a loop.
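The last point reduces a loop's memory need to simple arithmetic. A small sketch, with hypothetical names (`loop_memory_bound`, `bounded`) that are assumptions, not part of the proposal:

```python
# With a declared iteration bound and a known per-iteration allocation,
# the total memory a loop can consume has a static upper bound.

def loop_memory_bound(base_bytes, per_iteration_bytes, max_iterations):
    """Upper bound on the memory a bounded loop can consume."""
    return base_bytes + per_iteration_bytes * max_iterations

def bounded(iterable, max_iterations):
    """Iterate, but fail loudly if the declared bound is exceeded."""
    for i, item in enumerate(iterable):
        if i >= max_iterations:
            raise RuntimeError("loop exceeded its declared iteration bound")
        yield item

# A loop declared with at most 1000 iterations, 64 bytes allocated per pass:
print(loop_memory_bound(4096, 64, 1000))  # 68096
print(list(bounded("abc", 10)))           # ['a', 'b', 'c']
```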
You can use a set of optimizations; some of the more important ones:
- Only store, as dictionary keys, the variables actually used in the iteration process (you don't need to store strings if all you do with them is display them with the correct index in some situation, or concatenate two strings without using their lengths to decide when the iteration should stop).
- Only store the local variables of each function, and keep local dictionaries instead of one global dictionary.
- Cache the results of functions. Specifically, for functions with fixed-size input you can prove that they always return (by trying all possible inputs) and then turn the checks off for those functions.
- Skip the checks for loops that exit anyway - like walking through an array - and also leave them out of the list of "decision points".
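The exhaustive-verification-then-cache optimization can be sketched like this. The names (`verify_and_cache`, `parity`) are illustrative; in practice each trial call would itself run under the halting checker, since a plain call to a diverging function would simply hang:

```python
# For a function with fixed-size input (here: one byte), try every
# possible input. If every call returns, the function is proven total;
# the precomputed table then replaces it and the checks are turned off.

def verify_and_cache(fn, domain):
    """Call fn on every input in its finite domain; the resulting
    table serves as a verified, check-free replacement."""
    table = {x: fn(x) for x in domain}
    return table.__getitem__        # O(1) lookup, no further checking

def parity(byte):
    return bin(byte).count("1") % 2

fast_parity = verify_and_cache(parity, range(256))
print(fast_parity(0b1011))  # 1 (three set bits)
```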
The framework in which it would be practical to actually use such a solver (as checking a program can take quite a lot of time):
- It lives in a version control system cluster.
- Each local checkout is connected to it as a local branch, so it is included in the check.
- It works bottom-up, solving the halting problem for local functions first and then moving one level up.
- It analyses and learns all the time, finding patterns - I have a very complicated methodology for that, which is out of scope for this intro.
- It gives feedback, so the programmer sees in the IDE which methods are checked, which have failed, and which have not yet been tackled. The programmer can mark some methods or inputs (unit tests) as more interesting - since the cluster has the data to speed up checking, it's not practical to do this on a local computer.
- It finds patterns.
- It simplifies patterns.
- It checks patterns against the constraints.
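The bottom-up order mentioned above amounts to a topological sort of the call graph: leaf functions are checked first, and a function is only checked once all of its callees are verified. A sketch under that assumption, with an invented example call graph:

```python
# Schedule halting checks bottom-up: dependencies (callees) come
# before dependents (callers) in the output of a topological sort.

from graphlib import TopologicalSorter

# Hypothetical call graph: function name -> set of functions it calls.
calls = {
    "main":  {"parse", "solve"},
    "solve": {"step"},
    "parse": set(),
    "step":  set(),
}

# TopologicalSorter yields each node only after all its predecessors
# (here: callees), which is exactly the bottom-up checking order.
order = list(TopologicalSorter(calls).static_order())
print(order)  # callees before callers, e.g. parse/step, then solve, then main
```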