The material was mostly standard Wolfram stuff but with some focus on future technology. NKS points of view on AI were of course also present. The most interesting theme for me was about human purpose.
Here are a few points I extracted:
Humans can't predict the future because of computational irreducibility, except within "pockets" of reducibility. I'm not entirely clear on what defines those pockets. The premise seems to be that human society is a sufficiently complex system that humans have to run the program to see what happens.
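Computational irreducibility in miniature: for many simple programs there is no known shortcut formula for the outcome, so the only way to learn step n is to run all n steps. A minimal sketch using Wolfram's own standard example, rule 30 (the helper function here is my own, not anything from the talk):

```python
# Rule 30 elementary cellular automaton, started from a single black cell.
# No closed-form shortcut is known for its center column -- to know the
# value at step n, you simulate steps 1..n. That is irreducibility.

def step(cells, rule=30):
    """One CA step; pad with white (0) cells so the pattern can grow."""
    cells = [0, 0] + cells + [0, 0]
    return [
        (rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[i + 1])) & 1
        for i in range(1, len(cells) - 1)
    ]

cells = [1]           # single black cell
center_column = [1]
for _ in range(20):
    cells = step(cells)
    center_column.append(cells[len(cells) // 2])

print("".join(map(str, center_column)))  # looks effectively random
```

The "pockets of reducibility" would be the cases where a shortcut *does* exist — e.g. rule 250 makes a plain checkerboard you can predict without simulating.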
One potential major thread of future technology could be migrating engineering from iterative methods to search-based ones. In the most extreme cases, an engineer could start from scratch and run an automated search through trillions of programs in a computational space to find the one that exhibits the desired behavior. Knowledge of how it works internally is not necessary. I am wondering as I write this, however, how you could be sure that an internally mysterious computational entity would not at some point exhibit undesirable behavior.
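A toy sketch of what search-based engineering means — not anything Wolfram presented; the program space (the 256 elementary cellular automaton rules) and the target behavior are my own illustrative choices. The point is that we test each candidate's *behavior* and never reason about its internals:

```python
# "Search-based engineering" in miniature: exhaustively search a small
# program space (the 256 elementary CA rules) for programs exhibiting a
# desired behavior, with no analysis of how any rule works internally.

def step(cells, rule):
    """One elementary CA step for `rule` (0-255), padding with zeros."""
    cells = [0, 0] + cells + [0, 0]
    return [
        (rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[i + 1])) & 1
        for i in range(1, len(cells) - 1)
    ]

def evolve(rule, steps):
    cells = [1]  # single black cell seed
    for _ in range(steps):
        cells = step(cells, rule)
    return cells

# Desired behavior (arbitrary spec for illustration): after 10 steps,
# every cell in the pattern's light cone is black.
matches = [r for r in range(256) if all(evolve(r, 10))]
print(matches)  # rules 254 and 255 are among the matches
```

With 256 programs the search is trivial; the "trillions of programs" version is the same loop over a vastly larger space, which is why the found solution can remain internally mysterious.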
On the premise that we do more search-based engineering in the future, we will no longer be limited to mere adjustments and recombinations of existing technology. We could create almost anything immediately, at least in computational spaces. If that becomes the case, then the limiting factor will be human purposes.
If you could make any program in the world relatively quickly, why would you make something alien? We are limited to what we can use and interact with. Of course, I would point out that those usage limitations will change as human minds and bodies change, but limitations of some kind are still there.
This is where an analogy to cats comes in, and where we get the phrase "cat usability testing." I'm not sure whether Wolfram saw this video, but he probably did, since he mentioned apps that cats would like on iPads.