Research
For us, technology is a living practice – we engage in thinking through making, and we make the things we want to see in the world, grounded in philosophical and anthropological thinking. Our process includes new ways of building AI models, material experiments, critical writing, and collaborative investigations.
We are driven by the idea that humans are complex, that asking the right questions is important, and that observing people is too. Some solutions are complex, some are simple; the simpler ones are usually harder. Technology gives us access to data, and humans give us access to insight.
AI Reading Group
A space for collective thinking about human understanding in an age mediated by algorithmic models and technological artefacts. If you are in Athens, feel free to join us in person at Philosophy Machines HQ (contact us for directions). Otherwise join online. All welcome, no need to book.
January 30th, 2026 - 11:30 am EST, 16:30 GMT, 18:30 EET
Zoom: https://us06web.zoom.us/j/8395952838
Join us to discuss Kant’s brief essay “What is Enlightenment?” and how it relates to our relationship with AI.
In answering the question "What is Enlightenment?", posed in the December 1783 issue of the journal Berlin Monthly, Kant opens with:
"Enlightenment is man's emergence from his self-imposed immaturity. Immaturity is the inability to use one's understanding without guidance from another. This immaturity is self-imposed when its cause lies not in lack of understanding, but in lack of resolve and courage to use it without guidance from another. Sapere Aude! "Have courage to use your own understanding!" – that is the motto of enlightenment."
What do independent thinking and daring to know mean in an age of AI? Does Enlightenment inevitably come from the ability to think freely?
Find notes from past Reading Group sessions here.
Code
Access our open-source code here.
Publications
Written during peak corporate empathy discourse, this piece anticipated AI ethics' central blind spot: empathy can be faked and maintains power asymmetries, while trust demands bilateral vulnerability and time. Eight years later, the distinction matters more than ever—trust isn't a pattern recognition problem but a relational practice that unfolds in duration and cannot be automated.
Despina's PhD thesis revisits the algorithmic grid, inserting affectivity, complexity, and material resonance. Through a series of material encounters and their deliberate "kinking" of established patterns, she demonstrates how algorithmic systems might be recrafted from processes of reduction into expansive sites of co-creation and possibility.
In this keynote at the 2019 Tech Meets Design conference, Despina draws inspiration from biology, cybernetics, craft practices, and the Bauhaus to reposition design as a material practice that forces us to think about the kind of shared futures we want to inhabit.