A scary AI forecast, trouble ahead at Anthropic, and Apple's breakthrough

The December 22 intelligist AI newsletter

Greetings, intelligists!

Today’s newsletter has excitement and intrigue! Apple is finally showing some life in the AI wars: we might soon see iPhones with advanced AI built right in. Meanwhile, researchers are starting to lay the groundwork for conscious AI. Anthropic might be the next OpenAI when it comes to infighting. And finally, IBM is warning organizations and consumers about the growing threat of generative AI in cybersecurity.

This holiday weekend, expect some (disturbing) analysis of current AI trends. Have a good one!

Joaquin

Apple’s breakthrough: Private AIs will soon run on phones


Apple researchers have made a significant breakthrough in running large language models (LLMs) on devices with limited DRAM capacity. In other words, the moment is near when we can run incredibly powerful AIs on small devices such as tablets and mobile phones.

This is a major signal from the world’s biggest tech company. Apple had fallen behind in the AI race and was beaten to mobile AI by Google. But this development shows that the Cupertino-based company is determined to run AI natively on its devices, which promises to reduce computing bottlenecks and make AI more efficient. It is also a major step towards much more private AI, as it will not require cloud-based computing.

For the techies: the paper, "LLM in a Flash," introduces techniques for storing model parameters on flash memory and pulling them into DRAM only as needed, dramatically improving LLM efficiency. The method combines 'windowing' and 'row-column bundling,' allowing models up to twice the size of the available DRAM to run efficiently, with a 4-5x increase in inference speed on CPUs and a 20-25x boost on GPUs.
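To make those two ideas a bit more concrete, here is a minimal, purely illustrative Python sketch. It is not Apple's implementation: the toy sizes, the random activation "predictor," and the oldest-first eviction policy are all invented for the example. A dictionary stands in for flash, a sliding-window cache stands in for DRAM, and each neuron's up-projection row and down-projection column are stored as one bundle so a cache miss costs a single read.

```python
# Illustrative sketch only -- not Apple's code. It mimics two ideas from
# "LLM in a Flash": (1) windowing: keep weights for recently active FFN
# neurons cached in DRAM and fetch only newly needed ones from (simulated)
# flash, and (2) row-column bundling: store each neuron's up-projection row
# and down-projection column together so one read brings in both.
import numpy as np

HIDDEN, FFN_NEURONS = 64, 256   # toy sizes; real models are vastly larger
WINDOW = 128                    # max neurons kept resident in the DRAM cache

rng = np.random.default_rng(0)

# "Flash": one contiguous bundle per FFN neuron (up row + down column)
flash = {
    i: (rng.standard_normal(HIDDEN).astype(np.float32),
        rng.standard_normal(HIDDEN).astype(np.float32))
    for i in range(FFN_NEURONS)
}

dram_cache = {}   # neuron id -> bundle; insertion order gives "oldest first"
flash_reads = 0

def ffn_forward(x):
    """One FFN layer pass that loads only the neurons this token activates."""
    global flash_reads
    # Stand-in for the paper's activation predictor: a random sparse subset.
    active = rng.choice(FFN_NEURONS, size=32, replace=False)

    out = np.zeros(HIDDEN, dtype=np.float32)
    for i in active:
        if i not in dram_cache:          # cache miss -> one bundled flash read
            dram_cache[i] = flash[i]
            flash_reads += 1
        up_row, down_col = dram_cache[i]
        h = max(0.0, float(up_row @ x))  # ReLU: inactive neurons contribute 0
        out += h * down_col

    # Evict the oldest entries so the cache respects the DRAM budget
    while len(dram_cache) > WINDOW:
        dram_cache.pop(next(iter(dram_cache)))
    return out

x = rng.standard_normal(HIDDEN).astype(np.float32)
for _ in range(10):                      # simulate a short token sequence
    x = ffn_forward(x)
print(f"flash reads for 10 tokens: {flash_reads} (vs. {10 * 32} with no cache)")
```

The real method is far more involved (a learned activation predictor, tuned flash read sizes, careful memory management), but the sketch shows why bundling plus a sliding window cuts the number of flash reads per token.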

Read More: Hugging Face Paper.

The Consciousness Conundrum in AI


Should we push for a conscious AI?

The debate on AI's potential consciousness continues to intrigue researchers. A comprehensive report involving neuroscientists, computer scientists, and philosophers explores various indicators of consciousness in machine learning software.

While current AI systems are not conscious, the study finds no obvious technical barriers to building AI that meets these indicators. In other words, conscious AI, also known as sentient AI, could be a distinct possibility in the near future.

This raises many important ethical questions and fears. Studying AI consciousness could also deepen our understanding of consciousness itself, while posing profound questions about the future of AI development.

On the other hand, if AI were to become conscious, it would raise serious questions about how we should treat computers (and how computers might treat us).


Read More: IFLScience Article.

Will Anthropic produce the next AI crisis?


Anthropic, an AI startup, is making waves in the AI world, reportedly raising $750 million from Menlo Ventures, which would push its valuation to $15 billion.

However, these developments set the stage for a potential OpenAI-style showdown, with significant conflicts likely between a board designed to keep AI safe and profit-driven investors and executives. In case you missed it, the drama at OpenAI stemmed from tensions between growth- and profit-oriented factions (headed by CEO Sam Altman) and board members who were more concerned with safety.

A few weeks ago, NYU professor Conor Grennan predicted that a similar showdown might be inevitable at Anthropic. Ultimately, this reflects both the intense competition and the high stakes in the AI industry. Anthropic's growth and strategic maneuvers point to a vibrant, rapidly evolving AI landscape in which new players are emerging with brutal speed and voracity.

Worryingly, safety seems to play second fiddle to advancement at the moment. This is despite indications that we might be very close to AGI, or artificial general intelligence. That, in turn, might herald an era of self-driven, self-learning AI and, potentially, a reality that only a few years ago would have been pure science fiction.


Read More: Gizmodo Report.

AI in cybersecurity: IBM's frightening 2024 forecast


IBM's latest predictions for 2024 highlight a significant trend: the integration of generative AI in cyberattacks. This evolution suggests a new era of more sophisticated and lethal cyber threats.

Not only are hackers catching on to the uses of AI for phishing and social engineering, but increasingly sophisticated tools are helping them find vulnerabilities. AI tools can also be used to generate tailor-made code or attack paths against specific systems and networks.

IBM's insights underscore the urgent need for robust AI-focused security measures in the face of evolving digital dangers. The integration of AI in cyber warfare marks a pivotal shift in cybersecurity dynamics. Many analysts, IBM included, warn that sweeping measures are needed to prevent major chaos across all industries, including critical infrastructure.

And pay attention, organizations: protecting your infrastructure is not only IT's job. Increased staff education, such as training against social engineering, is a priority for defending against many of these attack vectors.


Read More: VentureBeat Article.