The Federal Public Service Policy and Support (BOSA) and AI4Belgium are organizing a hackathon on the theme AI4Gov. Anyone who is interested in harnessing the power of AI for more efficient and/or user-friendly government processes is welcome to participate! The hackathon itself is currently scheduled for March 2021.
I’m happy to be part of the steering committee for this event. While my background allows me to do in-depth technical analyses of the projects and solutions, I’ll certainly pay special attention to fairness, accountability, transparency, ethics and privacy. By the way, I find the resources that are available on the Flemish Knowledge Center for Data and Society to be of great value in helping with these kinds of assessments.
An AI project is built on vast amounts of data. Good-quality data can be hard or expensive to gather, and there are serious privacy concerns when the data pertains to real persons. At the European level, the GDPR imposes high standards and restrictions on data gathering, management and usage.
The consumer is optimally protected in this way, but the work of the data scientist does not become easier. As a result, the concept of “synthetic data” is gaining traction: fictitious data that simulates the statistical properties of the original dataset. Applications include, among others, dataset rebalancing, masking or anonymizing sensitive data, and building simulation environments for machine learning applications.
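As a minimal illustration of the idea (a hedged sketch, not the method of any specific project or tool), one crude way to generate synthetic data is to fit a simple statistical model to the real records and sample fictitious ones from it. The function name and the "age, income" columns below are hypothetical; real synthetic-data generators preserve far richer structure than just means and covariances.

```python
import numpy as np

def make_synthetic(real_data: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Sample fictitious rows from a Gaussian fitted to the real data.

    Only the column means and covariances of the original dataset are
    preserved -- a deliberately crude stand-in for real generators.
    """
    rng = np.random.default_rng(seed)
    mean = real_data.mean(axis=0)
    cov = np.cov(real_data, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Hypothetical sensitive dataset: 500 "age, income" records
real = np.column_stack([
    np.random.default_rng(1).normal(40, 10, 500),     # ages
    np.random.default_rng(2).normal(3000, 800, 500),  # incomes
])

# 1000 fictitious records with similar statistics, no real persons
synthetic = make_synthetic(real, n_samples=1000)
```

The synthetic rows can then be shared or used for testing without exposing any individual from the original dataset, though in practice one must still verify that no record can be re-identified.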
In Covid times, all seminars and presentations are naturally converted into webinars - this one included: an updated version of my Pitfalls of AI talk, which I’ve given several times now. I update it every time with the latest and greatest in AI failures; it never ceases to amuse!
This webinar was given at the invitation of Ordina for their JOIN Ordina JWorks event. The presentation, in English, was recorded and put on YouTube - enjoy!
The fast rise of online translation engines (Google/Bing Translate, DeepL, etc.) is changing the job content of professional translators. But are those tools also useful for the simultaneous interpreter? In this article for the Smals Research blog I take a closer look at the specific challenges in that field, and at the state of the technology today.
This article was subsequently polished and republished in IT Daily. (All in Dutch; for an automated translation into English, click here.)
In my first 2020 article for the Smals Research blog (in Dutch) I describe 5 general questions that we ask ourselves before diving head first into a new AI project. The article also links to a bunch of external resources where more AI management wisdom can be found. For an English translation, you may try running it through Google Translate, without any guarantees as to its accuracy of course ;)
With some regularity I speak to a general audience about my study topics at Smals Research. Lately I’m speaking mostly about the risks that come with AI projects, and that deserve some more attention amidst all the hype. AI is not a magic wand that makes everything work right from the start: it’s a complicated set of technologies, and there are many points of attention to consider in order to bring an AI project to a good end. In these presentations I therefore highlight what can go wrong while developing an AI system (training data, confounding variables, objective function), at deployment (attacks against AI systems), the impact on us as citizens (bias, fairness, transparency) and on society as a whole (policy issues, ethics etc.).
This gave rise to a series of presentations given at
In my latest blogpost for Smals Research, I dive into the problem of discovering information in unmanageably large and unknown datasets, and the related problem of anonymizing the results on a large scale. These kinds of problems occur in legal research, investigative (data) journalism and auditing. Learn a thing or two about the concept of e-discovery here (article in Dutch).
AI is hyped, and governments too consider it a possible solution for whatever problems they face. To cut through the promotional talk and put public services back on solid ground, colleague Katy and I gave a series of well-attended, bilingual presentations for Belgian public service personnel. What is true and false about AI, and what can you do with it in an (administrative) government context? The slides are available for download.
Next to a short overview of the various flavours of AI, Machine Learning and Natural Language Processing, we also pay attention to the practical side of things: what about data collection and the law, what are the technical requirements, how does one organize the updating and maintenance of an AI system, and which ethical issues need to be taken into account? We illustrate this with a few small examples that were built within Smals.
This presentation was repeated several times for a varied audience. AI for government will certainly remain a core topic for me in the near future. If you’d like to exchange some ideas, or have some interesting proposals for the application of AI in public services, feel free to contact me!
In my latest blogpost for Smals Research I discuss some of the risks that the latest progress in AI entails for our knowledge society: what’s the impact on e.g. spam, scams, fake news or information warfare? A hot topic with European elections coming up, and of course concluded with some recommendations. (Dutch only for the moment, an English translation will follow.)
In a consultancy assignment for the labour market analysis and prospection section of FOREM, the Walloon Employment Mediation Service, I worked as a member of an expert panel on their report on the evolution of, and opportunities in, AI-related jobs:
Métiers d’avenir - Les métiers de l’intelligence artificielle (document in French).
In this blogpost for Smals Research I present some of the many Facets of Natural Language Processing. This first article deals with parsing and automatic translation. (Articles are in Dutch; I will publish an English translation soon.)
Edit 07/02/2019: in the meantime a second article has also been published. It deals with classification, entity recognition and the more general problem of (syntactic and semantic) ambiguity.
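To make the entity-recognition idea concrete, here is a minimal, purely illustrative sketch (my own toy example, not code from the articles): a rule-based recognizer that tags dates and e-mail addresses with regular expressions. The pattern names and function are hypothetical; modern NLP systems learn such entities statistically rather than from hand-written rules, precisely because rules cope poorly with the ambiguity the article discusses.

```python
import re

# Hand-written patterns for two easy entity types -- real systems
# replace these with statistical models learned from annotated text.
PATTERNS = {
    "DATE":  re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def recognize_entities(text: str) -> list[tuple[str, str]]:
    """Return (label, matched_text) pairs found in the input text."""
    entities = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            entities.append((label, match.group()))
    return entities

print(recognize_entities("Mail info@smals.be before 07/02/2019."))
```

Rules like these break down quickly on ambiguous input (is "07/02/2019" the 7th of February or the 2nd of July?), which is exactly why classification-based approaches dominate in practice.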