The Intelligist AI Digest
Entering the AI era: a new Cold War?
The January 2 Intelligist AI newsletter
Greetings, intelligists!
Today’s newsletter starts out with a pressing concept: the impact of AI on geopolitics.
I grew up toward the end of the Cold War and have fresh memories of how the proliferation of nuclear weapons kept the world on edge for decades.
Now I am watching AI become potentially the next global superweapon. We are witnessing an unharnessed AI arms race with few controls, agreements, or regulations, and little consensus.
There will undoubtedly be immense benefits to emerge from this. However, there may also be some not-so-desirable side effects. Many companies and states stand to make immense profits and gain extraordinary power from AI. Some will have the public interest in mind; others, only their own power.
In my opinion, AI education is a critical issue. Now is a crucial time for ordinary citizens to learn how AI works and how to use it for their own benefit. This is a unique moment in history in which the doors of opportunity are open. I have the feeling this moment won't last long, and a significant divide will become evident, separating those who know from those who don't.
With that, I give you today’s AI newsletter.
Joaquin
The Economist comments on the rise of AI Nationalism

Image generated by the author in Stable Diffusion XL
A recent Economist editorial points out that AI nationalism is reshaping the global tech landscape.
According to the magazine, countries like the US, China, and India are investing heavily in AI. In these countries, the strategic approaches vary from state-led initiatives to private-sector innovation.
This global race is not just about technological supremacy. It is also about maintaining control over critical tech infrastructures. The geopolitical implications are profound, as nations seek to balance innovation, regulation, and national security.
At the same time, it is hard not to think about the nuclear race of the 20th century and whether this might have similar consequences. Right now, companies across the world are throwing caution out the window, racing to get as far ahead as possible in the AI race. Companies such as OpenAI are open about building AGI, or artificial general intelligence: an AI capable of acting autonomously and of significantly exceeding human intelligence.
The editorial points to a current complicity between states and AI companies. One good example is the EU, where AI companies in France and Germany appeared to exert influence on negotiations over EU AI regulation. As Macron's statements about Mistral and Hugging Face (a French-American company) show, these AI developers are clearly on the minds of national leaders.
For those of us regular people out there, this scene raises the question of what the role of government should be in tech. Should governments allow tech companies to pursue their goals without barriers? Or should there be some kind of regulation in the public interest?
For example, if I tell you that AI can become a dangerous weapon, would it not be logical to control it as one might control guns or nuclear weapons?
In our opinion, 2024 should be the year in which the public wakes up, gets educated on AI issues, and weighs in on the current direction of tech development.
Read More: The Economist Article
How to use AI ethically? Ideas for execs

Image generated by the author in Dall-E 3
AI is revolutionizing industries, offering a potential annual value of up to $4.4 trillion. With over 80% of enterprises expected to adopt generative AI models by 2026, the race is on.
Though many AI companies have thrown ethics under the bus, some people are still thinking about doing things right with AI.
Generative AI is a game-changer for many jobs. Yet it can introduce issues such as hallucinations, discrimination, bias, and harmful uses of data that can damage people and organizations. You might be thinking about how to use AI in your job, but have you considered the potential harm that could come from it?
The data shows that a mere 17% of businesses are addressing AI risks, such as privacy and bias issues.
Meanwhile, meaningful and effective regulation appears far off. How can companies lead in AI ethics? What measures can and should be taken? This article from TechCrunch takes a look at ways in which companies can anticipate ethical issues in applying AI.
Read More: TechCrunch Article
Square Enix's Bold AI Strategy: Revolutionizing Gaming

Image generated by the author in Stable Diffusion XL
Any old-school gamers out there? Did you ever envision a future in which computers design computer games?
It looks like this bizarre concept is close to being real. Square Enix is taking a leap forward, integrating AI aggressively in its development and publishing divisions.
Square Enix president Takashi Kiryu has announced a strategic shift in which his video game company will begin to use generative AI in game development.
For video game designers, this upends their world. AI tools will likely have a dramatic impact on how different aspects of games are designed. They might displace traditional designers and writers, or at least require them to learn to use AI tools.
It remains to be seen how this will affect the quality of games and gameplay.
Read More: Nintendo Life Article
AI in the Courtroom: The Future of Justice?

Image generated by the author in Stable Diffusion XL
By now, you’ve already heard the one about the lawyer who used ChatGPT to file a brief. ChatGPT did him dirty, hallucinating and creating nonexistent court cases that the lawyer then cited. Public humiliation ensued.
Funny as that incident was, it appears to be a sign of the times.
Yesterday, Chief Justice John Roberts of the US Supreme Court acknowledged that AI will significantly affect judicial work. While AI offers promising advancements, its limitations are evident, particularly in courtroom nuances. As the legal system explores AI integration, the technology's role in courtrooms is set to evolve, challenging traditional judicial processes and potentially influencing Supreme Court decisions.
It is likely only a matter of time before specialized AI-powered legal tools emerge that lawyers and judges will use much as they currently use legal databases.
It might sound scary, but it is not necessarily a bad thing. Current databases rely on older-generation search engines. AI-powered ones promise a greater "understanding" of user intent, which can potentially lead to more accurate results.
While we are probably far from having a robo-judge preside over a courtroom, there is no doubt that AI will be present in one way or another.
Read More: USA Today Article