Once upon a time, I was a computer science researcher, tutor, and PhD student at the University of Queensland.
My research was in an area of artificial intelligence (AI) known as automated reasoning - particularly, two techniques known as model checking and (automated) theorem proving.
To explain in a more accessible way, I was working on getting computers to automatically check important things about software - or to be more accurate, prove (check) things from mathematical descriptions of software.
These mathematical descriptions are object-oriented, meaning they organise ideas and data about things ('objects') in the real world into reusable classifications ('classes') which share the same properties.
These classes can then be organised and related to each other mathematically to describe software - what's known as a 'formal specification'.
Why do we want to do this? It turns out, you can prove conclusively whether a system is safe or not. Correct or not. Whether it does the right thing at all times. Before something goes wrong.
This is particularly useful when the software's purpose is life or death. Say, if it's being used to control medical devices, railway systems, planes, even rockets.
The whole field is known as 'software verification', and this area in particular is known as 'formal verification'.
(I really think the world would be a lot safer - and software a lot less buggy - if we did this more often!)
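To make that a little more concrete, here's a rough sketch in Python - not Object-Z, and hugely simplified, with a made-up toy system purely for illustration. The idea is to describe a small system as a class, then exhaustively check a safety property in every state it can reach, which is (very loosely) what a model checker automates for you.

```python
# A toy 'system' described as a class, plus an exhaustive check of a
# safety property over its states - a loose, illustrative analogy for
# what a model checker does automatically.

class Counter:
    """A counter that must never exceed its limit."""

    def __init__(self, limit: int) -> None:
        self.limit = limit
        self.value = 0

    def increment(self) -> None:
        # The system's only operation: count up while below the limit.
        if self.value < self.limit:
            self.value += 1


def check_invariant(limit: int, steps: int) -> bool:
    """Step the system through its states, checking the property each time."""
    counter = Counter(limit)
    for _ in range(steps):
        counter.increment()
        if counter.value > counter.limit:  # the safety property (an invariant)
            return False
    return True


print(check_invariant(limit=3, steps=10))  # True: the property always holds
```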
My research role and subsequent PhD research were about exploring the limits of this automated checking. Like much of my academic work, this was focussed on variants of an object-oriented formal specification language (Object-Z) developed by my then research supervisor, Graeme Smith.
I have one publication to my name, from early in my academic career:
Kassel, G. and Smith, G., 'Model checking Object-Z classes: Some experiments with FDR', Asia-Pacific Software Engineering Conference (APSEC 2001), 2001.
Sadly, I had to drop out of my PhD.
But I still use the computer science I learnt during my time at uni to push forward the edges of AI research in my open-source projects.
I'm interested in pathways to general AI via automated reasoning, as this better lends itself to Explainable AI - which many researchers see as the main way to create AI that's safe and human-friendly.
To this end, I'm currently exploring integrating automated reasoning with natural-language understanding, particularly via the amazing Python library, the Natural Language Toolkit (NLTK).
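For a flavour of what that combination looks like, here's a minimal sketch using NLTK's logic parser and resolution prover. The sentences and predicates are made up purely for illustration - they're not from my actual project.

```python
# Pair natural-language-style statements with automated reasoning using
# NLTK's first-order logic parser and its built-in resolution prover.
from nltk.sem import Expression
from nltk.inference import ResolutionProver

read_expr = Expression.fromstring

# Hand-written logical forms standing in for parsed natural language.
assumptions = [
    read_expr('all x.(bird(x) -> flies(x))'),  # "All birds fly."
    read_expr('bird(tweety)'),                 # "Tweety is a bird."
]
goal = read_expr('flies(tweety)')              # "Does Tweety fly?"

# The prover searches for a proof of the goal from the assumptions.
print(ResolutionProver().prove(goal, assumptions))  # True
```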
I also have an older Python project, pyaixi, that implements the universal artificial intelligence algorithm AIXI. I explain pyaixi more here.
Please check out my blog posts for what else I'm working on currently!
Anyone who's come here from the academic side of the internet may know me under a different name. If you knew me then - or just are curious - please feel free to contact me!