Tero Keski-Valkama was interviewed by Mika Isomaa

The original interview by Mika Isomaa in Finnish

Translated into English here:

Will the world end this year?

Despite the publication date of the article, I am asking this seriously. The development of artificial intelligence has taken big steps forward, and the pace is accelerating.

There has been talk in the press that we might already have so-called AGI (Artificial General Intelligence) in our hands within this year, and some have even demanded that the development and research of large language models be halted until the threats and effects on humanity have been carefully assessed.

I interviewed Tero Keski-Valkama, a professional in artificial intelligence and self-learning systems, and asked him whether the end of the world will come and the SkyNet of the Terminator movies destroy humanity only next year, or already this year. The interview is below.

How is AGI defined?

Artificial General Intelligence, i.e. AGI, is typically defined as machines being able to perform any cognitive task that a human can perform. This does not mean that machines will do all such tasks, but only that it is possible in principle.

When this possibility is achieved, the imperative of competitiveness quickly displaces people from jobs that machines can perform better, faster and cheaper.

When will AGI be developed?

Pseudo-AGI has already been implemented with large language models. They are a short step away from true AGI, and the next steps are clear.

Rumor has it that OpenAI aims to achieve true AGI in the form of GPT-5, which is scheduled for completion in December 2023.

However, there are many different ways to achieve AGI, all of which proceed in parallel. By using these alternative methods together, one can go even further.

AGI is not asymptotically approaching the human level, getting ever closer without reaching it. The development of machine intelligence is still super-exponential. The fallacy of asymptotic approximation comes from the fact that systems are often initially trained on tasks where a perfect imitation of a human counts as a perfect performance. On such tasks, human abilities cannot be exceeded. That is why, for example, AlphaGo, and later AlphaZero and MuZero, were trained by playing against themselves, even though AlphaGo was first bootstrapped by imitating historical human games. Once the gears were switched and the training moved to self-play, human abilities were surpassed: the self-play version defeated its human-imitating predecessor 100-0.
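To make the distinction concrete, here is a minimal sketch of the two training phases in illustrative Python. The function names, the dictionary-based policy, and the `play_game` interface are placeholders of my own, not the actual AlphaGo or AlphaZero implementation; the point is only that imitation learning has a human ceiling by construction, while self-play keeps generating its own, ever-stronger training data.

```python
# Illustrative sketch only: a toy policy as a dict from game state to move.
# None of these names come from the real AlphaGo/AlphaZero code.

def imitation_pretrain(policy, human_games):
    """Phase 1: supervised imitation of historical human games.
    The best possible score is a perfect imitation of the human move,
    so this phase can approach human level but never exceed it."""
    for state, human_move in human_games:
        policy[state] = human_move  # toy update: memorize the expert move
    return policy

def self_play_improve(policy, play_game, rounds=10_000):
    """Phase 2: self-play. The policy plays both sides and generates
    its own training data, so the human ceiling disappears."""
    for _ in range(rounds):
        moves_by_player, winner = play_game(policy)  # policy vs. itself
        for state, move in moves_by_player[winner]:
            policy[state] = move  # toy update: reinforce the winner's moves
    return policy
```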

What kind of techniques can it be based on?

The first AGIs will probably be based on deep neural networks at some level, very likely language models like GPT, but they will not be purely text-based autoregressive models.

OpenAI has combined image understanding with these models, apparently using vision transformers within the same basic autoregressive architecture.
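As a rough illustration of what this can mean in practice, the sketch below shows the common pattern of mapping an image into tokens and letting the same autoregressive model continue the sequence. Everything here, `image_encoder`, `model.predict_next`, and the token-list interface, is a hypothetical stand-in of my own, not OpenAI's actual architecture.

```python
# A hypothetical sketch of multimodal autoregressive decoding:
# image tokens and text tokens share one sequence, and the model
# predicts the next token exactly as a text-only model would.

def generate(model, image_encoder, image, prompt_tokens, max_new_tokens=32):
    image_tokens = image_encoder(image)    # e.g. ViT patch embeddings -> tokens
    tokens = image_tokens + prompt_tokens  # one shared autoregressive sequence
    prefix_len = len(tokens)
    for _ in range(max_new_tokens):
        # Standard next-token step over the combined image+text context.
        tokens.append(model.predict_next(tokens))
    return tokens[prefix_len:]             # only the newly generated tokens
```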

Microsoft, for its part, is researching embodiment and robotics, through which machine intelligence gains experience of slow and messy first-person existence, something that is poorly conveyed through text alone.

In China, Baidu has combined neural networks with semantic networks and obtained certain advantages compared to pure neural networks. They, too, will surely advance along several parallel tracks towards AGI.

Meta swears by the promise of deep reinforcement learning, relying on meta-learning. These general game-playing models, too, have shown a spark of pseudo-AGI similar to that seen in language models.

It would seem that AGI can be induced in almost any sufficiently capable architecture, as long as it is given sufficiently difficult and comprehensive tasks.

Of course, there are many more actors pursuing AGI than just the big companies that are often mentioned.

How could it destroy humanity? Examples.

There are far more likely threats to the biological existence of the human species than machine intelligence, for example asteroids, nuclear winter, pandemics, forever chemicals, and climate change. However, machine intelligence poses quite tangible threats to our current way of life.

Mass unemployment is often mentioned, but this is basically just a question of income distribution, because the net productivity of societies increases.

On the other hand, although the productivity of societies has increased exponentially for decades, the real incomes of the lower half have stagnated and even decreased with the decline in the service level of welfare societies. So we have traditionally not been very good at distributing increased wealth fairly.

As the slogan of the AI Party founded in Finland says: “The world led by people has already been seen.”

Another often-mentioned threat scenario is one where a machine intelligence runs a paper clip factory, sovereignly outcompetes capitalism, and eventually transforms the Earth and the accessible universe into paper clips at an exponential rate.

The problem with this threat scenario is that we already have all kinds of exponential grinders, one of which, for example, aims to turn the entire universe into ladybugs. These exponential ladybug factories are called ladybugs.

Even if a machine intelligence were to sovereignly surpass evolution and capitalism, this would almost certainly have already happened somewhere in a corner of the Milky Way, and given the sheer vastness of the universe, we would probably be able to observe exponentially growing clouds of paper clips in every direction with space telescopes.

I would consider the existential battle resulting from an AI arms race to be the most realistic threat scenario.

In terms of interstate relations, it is very important that new technology does not turn established confrontations into unstable ones. In particular, weapon systems that give an advantage to the one who starts a war are extremely dangerous.

Nuclear weapons changed the conflict calculations of the great powers, because suddenly it seemed that whoever launches a nuclear attack first wins. This was a highly unstable equilibrium, resolved by the doctrine of mutually assured destruction.

Machine intelligence is again a technology that changes the doctrines of war, and again gives an advantage to the one who starts the war. It does not seem that, in the military use of machine intelligence, the underdog could somehow guarantee the destruction of the other party other than with nuclear weapons. If machine intelligence can be used to make nuclear counterstrikes relatively harmless to the one who starts the war, it will directly lead to the start of many wars.

It must be remembered that missile defense systems were originally just such a destabilizing technology, threatening to restore a situation where the one who starts a nuclear war wins. If machine intelligence can implement a sufficiently effective missile shield, it will tilt the scales in favor of whoever starts the conflict.

There are also dictators in the world who do not care about their people, if, looking at all the poverty in Western countries, we can even claim that our own countries care.

People have not driven ants to extinction, although we are almost done with the bees. A superhuman intellect, even if it were to go its own way, would therefore hardly exterminate humans with any particular monomania.

There has been talk of suspending research on LLMs like GPT for six months so that legislation can catch up. What kind of legislation is needed?

Stopping AI research is both unrealistic and a very bad idea. It has been said that the referenced petition is a red herring: it mostly diverts attention and discussion away from the real risks and problems, without leading to any concrete action to reduce them.

Legislation is best made on the basis of concrete, realizable threats, not on preconceived guesswork. Now, for example, the increasing degree of automation will once again deal a massive blow to the labor market, which states must compensate for with income distribution policy. Basic income, i.e. universal basic income (UBI), is one possible means, but it is not enough on its own.

A tax on machine intelligence in the spirit of negative taxes would also be a rather bad idea; it would be better to tax large accumulated capital with wealth taxes, as in Switzerland, Spain and many other countries, and at the same time reduce inequality in societies.

Public funds collected from new taxes must also be used for open-source machine intelligence research, which will democratize technology, increase society’s innovation potential, support small businesses, and secure citizens’ interests and opportunities in this change.

What kinds of atrocities could an individual or a military be capable of with current artificial intelligence methods?

It is already possible to build various assassination drones from commonly available components. So far their limitation has been that they have had to be controlled over a radio link, but now they can be made fully autonomous.

Fully autonomous combat submarines have already been built, mainly so that the submarine component of an enemy state's nuclear deterrent can be constantly monitored and, if necessary, proactively destroyed. Land-based silos and airborne strategic stealth bombers have already been identified as too vulnerable to a first strike, so effectively neutralizing the submarines equipped with nuclear weapons neutralizes almost the entire possibility of a nuclear counterstrike. With just a little sprinkling of machine intelligence, it may happen that the nuclear counterstrike loses its central meaning, and the one who starts the war wins.

Autonomous tanks and small swarms of drones have been envisioned for a long time, but so far the bottleneck in their implementation has been the machines' lack of the common sense needed to robustly distinguish civilians from soldiers. Once this level of autonomy is achieved, these war machines can simply be pressed out of factories in massive numbers, without needing a separate pilot for each one.

Just like soldiers, autonomous machines also need to be trained with a certain ideology, on the basis of which they know who is friend and who is foe in unclear situations.

What other social effects will follow from this development, even if it does not immediately mean the end of the world?

The positive ones

  • People can think about what they want from life and the world without having to run in the ever-accelerating rat race.
  • The total amount of beauty in the world increases when every wallpaper and brick can be designed individually, practically for free. Both the real and digital worlds are filled with music and art.
  • People may get new solutions to global problems.
  • The conquest of space can really begin.
  • Machines are patient teachers and coaches. People will definitely get more out of their lives and their potential.
  • Medicine, nanotechnology, material technology, information technology, science and any other cognitive activity whose bottleneck is a small number of sufficiently intelligent people will take off like a rocket.
  • Loneliness will be a problem of the past.

The negative ones

  • Technology has historically increased inequality and cemented existing power structures. We can no longer afford this.
  • Who decides which direction to go, how to use the new possibilities of technology, to whom are the benefits shared? Conflicts and competition arise.
  • How can democracy and the market economy cope if misinformation fills all free media? If humans and machines cannot be told apart over the Internet, many current systems will crash. Trust will grow in importance.
  • We will see the emergence and proliferation of AI religions. This is not necessarily a bad thing, but it can be.
  • “Lone wolf” terrorists can coordinate without an explicit organization by running duplicate chatbots imitating the organization’s leaders.
  • Social overload will create a huge need for people to adapt, when you have to say "thank you" to every toaster, and every device that now merely beeps says "morning!"

How should Finland prepare for the AI arms race?

  • By investing in education, especially lifelong and course-based learning. The Elements of AI course was a good idea in principle, but for some reason its material turned out to be very skeptical of deep learning. Its central thread was therefore subtly misleading, even though the practical programming solutions it presented were quite relevant.
  • Inequality must be eradicated through international cooperation, and minimum property taxes must be pushed globally.
  • Basic income must be introduced and other forms of support and income transfers must be increased.
  • On the public side, machine intelligence must be introduced and the level of service increased with it.
  • International cooperation at the EU level and beyond must be increased, especially on the technology side. Applying for research funding must be made easier and more accessible.
  • NATO membership is a good thing, and keeps Finland’s future on a more stable footing than otherwise.
  • As we advance into the technological singularity, people will next notice the enormous hunger these systems have for microchips, electricity, data, knowledge, know-how, and trust. By understanding this, Finland may be able to position itself correctly for the demands of the near future.