This Week in AI: Can we (and will we ever) trust OpenAI?

Keeping up with a fast-moving industry like AI is difficult. So, until AI can do it for you, here’s a helpful summary of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

By the way, TechCrunch plans to launch an AI newsletter on June 5, so keep an eye out for it. In the meantime, we’re increasing the cadence of our semi-regular AI column from twice a month (or so) to weekly, so stay tuned for more editions.

This week, AI startup OpenAI launched discounted plans for nonprofit and education customers and unveiled its latest effort to prevent bad actors from misusing its AI tools. There’s not much to criticize there, at least not in this writer’s opinion. But I will say that the flood of announcements seemed timed to counter the company’s recent bad press.

Let’s start with Scarlett Johansson. OpenAI removed one of the voices used by its AI chatbot ChatGPT after users pointed out that it sounded eerily similar to Johansson’s. Johansson later issued a statement saying that she had hired legal counsel to inquire about the voice and obtain precise details on how it was developed, and that she had refused repeated requests from OpenAI to license her voice for ChatGPT.

Now, a piece in The Washington Post suggests that OpenAI did not actually seek to replicate Johansson’s voice and that any similarities were coincidental. But why, then, did OpenAI CEO Sam Altman reach out to Johansson and urge her to reconsider two days before the splashy demo that featured the soundalike voice? It’s a little suspect.

Then there are the trust and safety issues with OpenAI.

As we reported earlier this month, OpenAI’s since-disbanded Superalignment team, responsible for developing ways to control and steer “super-intelligent” AI systems, was promised 20% of the company’s computing resources, but only ever (and rarely) received a fraction of that. This led (among other reasons) to the resignation of the team’s co-leads, Jan Leike and Ilya Sutskever, OpenAI’s former chief scientist.

Nearly a dozen safety experts have left OpenAI in the past year; many of them, including Leike, have publicly voiced concerns that the company is prioritizing commercial projects over safety and transparency efforts. In response to the criticism, OpenAI formed a new committee to oversee safety and security decisions related to the company’s projects and operations. But it staffed the committee with company insiders, including Altman, rather than outside observers. This comes as OpenAI reportedly considers abandoning its nonprofit structure in favor of a traditional for-profit model.

Incidents like these make it difficult to trust OpenAI, a company whose power and influence grows daily (see: its deals with news publishers). Few companies, if any, are deserving of implicit trust. But OpenAI’s market-disrupting technologies make the breaches of that trust all the more troubling.

It doesn’t help matters that Altman himself isn’t exactly a beacon of honesty.

When news broke of OpenAI’s aggressive tactics toward former employees, tactics that involved threatening them with the loss of their vested equity, or blocking equity sales, if they didn’t sign restrictive nondisclosure agreements, Altman apologized and claimed he had no knowledge of the policies. But, according to Vox, Altman’s signature is on the founding documents that enacted the policies.

And if former OpenAI board member Helen Toner, one of the board members who tried to oust Altman from his post late last year, is to be believed, Altman has withheld information, misrepresented things that were happening at OpenAI, and in some cases outright lied to the board. Toner says the board learned of ChatGPT’s release through Twitter, not from Altman; that Altman gave false information about OpenAI’s formal safety practices; and that Altman, displeased with an academic paper Toner co-authored that cast a critical light on OpenAI, tried to manipulate board members to push Toner off the board.

None of this bodes well.

Here are some other noteworthy AI stories from the past few days:

  • Voice cloning made easy: A new report from the Center for Countering Digital Hate finds that AI-powered voice cloning services make faking a politician’s statement fairly trivial.
  • Google’s AI Overviews struggle: AI Overviews, the AI-generated search results that Google began rolling out more broadly earlier this month in Google Search, need some work. The company admits this, but claims that it’s iterating quickly. (We’ll see.)
  • Paul Graham on Altman: In a series of posts on X, Paul Graham, co-founder of startup accelerator Y Combinator, brushed off claims that Altman was pressured to resign as president of Y Combinator in 2019. (Y Combinator owns a small stake in OpenAI.)
  • xAI raises $6 billion: Elon Musk’s AI startup xAI has raised $6 billion in funding, as Musk shores up capital to compete aggressively with rivals including OpenAI, Microsoft and Alphabet.
  • New AI feature in Perplexity: With its new Perplexity Pages capability, AI startup Perplexity is aiming to help users make reports, articles or guides in a more visually appealing format, Ivan writes.
  • Preferred numbers for AI models: Devin writes about the numbers that different AI models choose when they’re tasked with giving a random answer. As it turns out, the models have favorites, which are a reflection of the data each was trained on (see the sketch after this list).
  • Mistral releases Codestral: Mistral, the Microsoft-backed French AI startup valued at $6 billion, has released its first generative AI model for coding, dubbed Codestral. But it can’t be used commercially, thanks to Mistral’s quite restrictive license.
  • Chatbots and Privacy: Natasha writes about the EU’s ChatGPT Task Force, and how it provides a first look at untangling privacy compliance in an AI-powered chatbot.
  • ElevenLabs’ sound effects generator: Voice cloning startup ElevenLabs has introduced a new tool, first announced in February, that lets users generate sound effects through prompts.
  • AI chip interconnects: Tech giants, including Microsoft, Google and Intel (but not Arm, Nvidia or AWS), have formed an industry group, the UALink Promotion Group, to help develop next-generation AI chip interconnect components.
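
On that “preferred numbers” item: the experiment is easy to reproduce yourself. Here’s a minimal sketch in Python, assuming the official openai SDK and an OPENAI_API_KEY in your environment; the model name, prompt and trial count are illustrative choices, not the exact setup Devin tested.

    # Tally the "random" numbers a model picks across many independent asks.
    from collections import Counter

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def sample_numbers(trials: int = 100) -> Counter:
        """Ask the model for a 'random' number 0-100, `trials` times."""
        counts: Counter = Counter()
        for _ in range(trials):
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model choice
                messages=[{
                    "role": "user",
                    "content": "Pick a random number between 0 and 100. "
                               "Reply with the number only.",
                }],
            )
            reply = resp.choices[0].message.content.strip()
            if reply.isdigit():
                counts[int(reply)] += 1
        return counts

    if __name__ == "__main__":
        # A uniform picker would spread choices roughly evenly; models
        # instead cluster on a handful of "human-random" favorites.
        for number, count in sample_numbers().most_common(5):
            print(number, count)
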