The professional classes are currently, and rightly, obsessing over the impact that AI will have on the service sector as ChatGPT gets exponentially smarter.

By Mark Tinker

The current iteration of ChatGPT – the most popular AI – is said to have an IQ equivalent of around 155, meaning that the next one will, quite possibly, be the “cleverest” thing in history.

Mark Twain once said about risk that “the thing that kills you isn’t what you don’t know, but the thing that you know to be absolutely true but turns out not to be so.” Are we ready, then, for an AI that tells us the things we know to be true are not quite so?

Think of the three so-called certainties that have driven western economies to the brink of disaster in recent years – what I refer to as the triple zeroes: Zero Interest Rates, Zero Carbon, and Zero Covid.

All three were globalist policies presented by the white-collar classes as the only option to solve an imminent problem that they had determined based on rather basic computer modelling. None of the three has ever appeared in a political manifesto, none has ever been subject to cost-benefit analysis, and none has been allowed any challenge, either to the thesis or to the models – which, incidentally, all fail the scientific method and all fail to match real-world outcomes.

The fact that the policies have all largely benefitted the 1% rather than the 99% is consistent with Charlie Munger’s observation: “show me the incentive and I will show you the behaviour.” However, this is less about self-interest being the driver of the policy than it is about why the white-collar classes will resist any challenge to it.

So why will this change with AI? For the same reason that most high-priced services like education and medicine will come down in price; because it will break the guild of “academics” who prevent challenge to the “science.”

The first line of defence for all three of these disastrous policies is the logical fallacy known as “appeal to authority,” which not only prevents discussion of the theory but also, more importantly, prevents discussion of the associated policy. “You don’t have a PhD in Economics/Climatology/Virology, so your opinion is invalid” conveniently smuggles in the implication that these experts on the problem also somehow have the most valid insights on the solution.

However, even if you get past the priesthood hurdle, and you are qualified, you usually get hit with the next one – the bandwagon fallacy – that “95% of climate scientists agree…” “every doctor thinks…” “all Central Bankers believe…” etc. You might be an expert, but you can’t be right unless you are in the groupthink.

Even if these bandwagon statistics were true (which they aren’t), as any real scientist would point out, science is not a consensus business. Unfortunately, policy making is. As noted, ChatGPT is on schedule to become “cleverer” than any human who has ever lived, so the exciting thing now is that these first two lines of defence are broken. A smart human with a super-smart AI assistant can now mount a powerful – potentially unstoppable – challenge to this white-collar priesthood.

The current risk is that AI will simply recycle the consensus and thus accidentally reinforce the bandwagon fallacy. However, going forward, the real power of AI depends on the questions you ask it, so I would pick up on my point about logical fallacies and posit the following question:

Without using any of the dozen or so most common logical fallacies – appeal to authority, the bandwagon fallacy, the correlation-equals-causation fallacy, appeal to emotion, straw-man analogies, false dichotomies, the slippery slope, the Texas sharpshooter (cherry-picking statistics), appeal to incredulity, the middle-ground fallacy, and ad hominem attacks – can you make the case for:
1. Quantitative easing to create inflation and higher interest rates to solve it?
2. Man-made global warming and the need for zero carbon by 2050?
3. The cost-benefit of masks, lockdowns, and mandatory vaccinations in terms of medical benefits or risks to the otherwise healthy population?

Then, for all three policies, please also examine the evidence provided to support the original thesis of a problem, the accuracy of the modelling in terms of real-world outcomes, and the case for alternative policies, including doing nothing.

I might also throw in: “please look for patterns of behaviour in people and institutions using any of the listed logical fallacies to shut down debate on the three policies being discussed. Then, present their case without any of the logical fallacies.”

After all, just because people argue from fallacy doesn’t necessarily mean they are wrong (that is known as the “fallacy fallacy”!), but it does mean we should seek out the facts, if any. And then I would ask: “finally, please provide an assessment of which companies, groups, and individuals have gained or lost in monetary terms from the implementation of these policies.”

Obviously, we already know what the answers are to all these questions, but we are currently just not allowed in the room to discuss them. However, as and when ChatGPT can answer all these questions, things are going to get pretty uncomfortable for the white-collar class – a lot of things they know for certain to be true are going to turn out not to be so. It might not literally kill them, in Mark Twain’s words, but it will hopefully kill the policies.

Published by our friends at: investing