Once upon a time, I was a computer science researcher, tutor, and PhD student at the University of Queensland.
To put it more accessibly, I was working on getting computers to automatically check important things about software - or, more precisely, to prove (check) things from mathematical, object-oriented descriptions of software.
Object-oriented meaning that ideas and data about things in the real world ('objects') are organised into reusable classifications ('classes') which share the same properties.
These classes can then be related to each other mathematically to describe software - what's known as a 'formal specification'.
Why do we want to do this? It turns out you can prove conclusively whether a system is safe or not. Correct or not. Whether it does the right thing at all times. Before something goes wrong.
This is particularly useful when the software's purpose is life or death. Say, if it's being used to control medical devices, railway systems, planes, even rockets.
(I really think the world would be a lot safer - and software a lot less buggy - if we did this more often!)
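To give a flavour of the idea, here's a toy sketch of what 'checking a system before something goes wrong' looks like: exhaustively exploring every reachable state of a tiny model and verifying a safety property in each one. (This is purely illustrative - real tools like FDR work on process-algebra or Object-Z specifications, not hand-rolled Python dictionaries, and the railway-crossing model below is my own made-up example.)

```python
from collections import deque

# Toy model: a railway crossing. States are (train, gate) pairs.
# The safety property we want to prove: the gate is always 'down'
# whenever the train is 'at_crossing'.
TRANSITIONS = {
    ("far", "up"): [("near", "up")],              # train approaches
    ("near", "up"): [("near", "down")],           # gate closes first
    ("near", "down"): [("at_crossing", "down")],  # train crosses safely
    ("at_crossing", "down"): [("far", "down")],   # train leaves
    ("far", "down"): [("far", "up")],             # gate reopens
}

def safe(state):
    train, gate = state
    return not (train == "at_crossing" and gate == "up")

def model_check(initial):
    """Explore every reachable state breadth-first; return the first
    unsafe state found, or None if the property holds everywhere."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not safe(state):
            return state
        for nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

print(model_check(("far", "up")))  # None: no unsafe state is reachable
```

Because the check visits every reachable state, a `None` result is a proof (for this model) rather than just a test that happened to pass - which is the appeal of the whole approach.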
My research role and subsequent PhD research were about exploring the limits of this automated checking. Like much of my academic work, it was focussed on variants of an object-oriented formal specification language, Object-Z, developed by my then research supervisor, Graeme Smith.
I have one publication to my name, from early in my academic career:
Kassel, G. and Smith, G., 'Model checking Object-Z classes: Some experiments with FDR', Asia-Pacific Software Engineering Conference (APSEC 2001), 2001.
Sadly, I had to drop out of my PhD.
But I still use the computer science I learnt during my time at uni to push forward the edges of AI research in my open-source projects.
I'm interested in pathways to general AI via automated reasoning, as this lends itself to Explainable AI, which many researchers see as the main route to AI that's safe and human-friendly.
Please check out my blog posts for what else I'm working on currently!