big lawsuits, AI boosting journalism, and never tell a chatbot your secrets

The December 28 intelligist AI newsletter

Greetings, intelligists!

Today’s biggest headline focuses on OpenAI but could impact virtually all other AI models.

The New York Times has filed a lawsuit against OpenAI, the maker of ChatGPT, and its partner, Microsoft. Microsoft currently powers its own AI apps with OpenAI’s technology.

The New York Times alleges that the tech companies have infringed its copyright by using an undetermined number of its articles to train their AI models.

Of course, a large language model, or LLM, like the one behind ChatGPT can reproduce whatever it “learns” from its training data. So a ChatGPT user might ask a question that overlaps with articles the model was trained on, and the answer could regurgitate some of that material.

The Times claims that OpenAI is using its content without authorization of any kind. As a result, the newspaper (whose online articles sit behind a subscription paywall) wants damages related to subscription, licensing, advertising, and affiliate fees.

Tensions between content publishers and AI companies have been brewing for months. Many critics claim that AI companies are taking content without permission. The output of chatbots and GPTs, according to this view, gives away this content to users without permission or attribution.

These claims have not been limited to news outlets. Many musicians, visual artists, and photographers have also complained that AI models not only train on their work without permission but also illegally reproduce portions of it, sometimes in their entirety.

This latest lawsuit could be a landmark moment for AI and copyright law. Any decision in favor of the Times could reshape how AI models are developed and utilized.

The Transformation of Journalism: One of the World’s Oldest Newspapers Using AI

The UK’s Guardian recently highlighted how some news outlets are embracing the possibilities of AI. This is a curious example, considering that the journalistic world has largely reacted with fear and mistrust when it comes to integrating AI.

Berrow’s Worcester Journal, one of the world's oldest newspapers, is revolutionizing its journalism by employing 'AI-assisted' reporters. This represents a growing trend in the industry toward the use of AI in news production.

It should be noted that the paper is not exactly using AI to generate articles. Rather, it is using a custom GPT to act as an in-house copywriting tool. According to newspaper reps, their reporters use this tool as part of their writing process, mainly to speed up the creation of content.

That being said, the paper’s directors insist that humans are still putting the content together and reviewing it before publishing. This could soon become a normal practice in newsrooms all over the world.

AI Will Drastically Change Work for Almost Everyone in 2024

A recent report in Forbes magazine notes how generative AI will alter the work landscape in 2024. This transformation, driven by rapid advancements in AI, is changing how we approach our jobs and the skills required for future employment.

Here are some of the main ways that work might be transformed, regardless of industry.

  • The rise of creative machines: companies in every sector will step up their use of generative AI.

  • Enhanced decision-making and strategic planning: managers will begin to leverage the data-reading capabilities of AI to inform important decisions.

  • The evolution of customer service: chatbots, virtual assistants, and other customer service tools will become a more central part of operations.

  • Transforming education and training: AI will continue to be used for personalizing learning and providing resources.

  • Reinventing research and development: generative AI will provide many tools for streamlining processes.

  • The emergence of new roles and skills: there will be a great need for AI-literate employees, ranging from prompt engineers to ethicists.

The Risks of Oversharing with AI Chatbots: Insights from Prof Mike Wooldridge

Have you been getting cozy with ChatGPT? Maybe sharing a couple of secrets, getting a bit intimate, recreating scenes from the movie “Her”?

An Oxford AI professor, Mike Wooldridge, has warned users about sharing private info with chatbots. One important point he emphasizes is that there is no guarantee your information will be kept private. Anything you tell the bot could be used for training and revealed to other users of the chatbot.

What's more, Wooldridge warns that people are easily fooled by what they perceive as empathy or sympathy on the part of GPTs. Chatbots often seem like kind, empathetic listeners when in reality they completely lack these qualities. They are cold, heartless machines that merely imitate the data they've been trained on.

That being said, I have personally found that humans can appear equally sympathetic and end up being just as heartless. And I have some family members who will make sure that any secret you tell them becomes community knowledge in a matter of minutes.