After AI
I often catch myself trying to foresee what will come after this "AI" era. There will be an after, for sure. There's always an after.
The current trend, encouraged by governments in every so-called developed country, whatever social orientation they displayed during the election race, is to replace as many civil servants as possible with software that will handle as many citizens' requests as possible, faster and with more impartiality than the most zealous and dedicated civil servants. The feasibility of this endeavour, and the enormous long-term savings of this qualitatively unparalleled initiative that will, without any possible doubt, improve the people's trust in their governments and democracies, are guaranteed by the best-performing consulting companies on the planet: they hire the most talented young people, they make them work hard (they obviously know the magic formula, since they are so successful), and they manage them actively so as to prevent any of the kind of sentiments that could make them doubt or stray from the goal when the switch must be flicked.
The same scenario is currently playing out in the organisations with the healthiest financial indicators, because not making that move now would be suicidal, for two reasons. The first is that their shareholders (read: the pension funds and other speculative hands that sit at the bleeding edge of the neo-liberal machine and can make a company flop overnight) would "diversify" their assets in favour of more forward-looking companies, and that would be very upsetting. The second is that no-one has ever been blamed for choosing the solutions all the winners have chosen. The telecom and IT industries have exemplified that in every single country of the OECD since the 1990s.
So. What will happen when the plan has played out?
Will we all be dissatisfied that each and every one of the bloody AI/automation programs that individually cost an arm and a leg only delivers 80% of the job it was meant to do, and that all together they will do much less? I believe the masses won't care, because the masses will scuttle to fit gently into the square boxes the programs were designed for. The resisting round pegs will eventually be squeezed between the two jaws of the mighty pliers: the necessity of having a bank account on one side, the health and tax systems on the other. Or they will have to evade the whole system, and that is not an option in a de facto authoritarian regime, even one that calls itself a democracy. And as we see it panning out every single day in the news, none are immune.
Will there be a John Connor who hacks the AI so badly that everyone stops reading the AI suggestions when running a search on the www? There is no doubt there are witty people out there, even journalists (to whom I apologise for believing they were all dumb), with enough common sense to understand what some young PhD punks can't grasp between their ears, and to trick the automaton for fun. The latest was the story of the hot-dog eating champion. What the punks can't get into their heads is that, whatever people may blab about "large language models", the actual primary material of the computation that produces the famous search engine's AI suggestions, and the answers from chat-gee-pee-tee, and the awful translations between the three languages I practise every day, (breath) the primary material is the pestilent and sick defecations that seem to cover most of the web. That is the garden of delicacies where the LLM chefs find the ingredients of the answers many people use for everything and anything: school students who copy them to expedite the homework that was meant to train their thinking capabilities, young parents who try to figure out why baby cries, an individual past his 50s who wonders whether an IT solution – proposed by a well-meaning, very qualified and very experienced individual in his 50s – is suitable, and sends the other an automated digest so full of stupidity that it oozes through the screen and onto the keyboard. It is disheartening to contemplate these daily examples of the global loss of human quality in human-to-human relationships.
"After" may be akin to Farenheit 451. Big Brother is already here. Hopefully no-one will figure out how to make a positronic brain before a very long time, because I fear that the economic powers of our time would do away with any of the three laws. I have no clue about what comes next, but I dread it may happen before I'm gone.