
OpenAI Co-Founder Envisions a Safe Superintelligence as Claude AI Gets Funnier – CNET


Earlier this month, Apple entered the AI market, announcing that its “Apple Intelligence” features would be built into its popular iPhones, Mac computers and iPad tablets starting later this year.

It also said that this fall its Developer Academy will begin AI training for students, including how to build, train and deploy machine learning models across Apple devices. (CNET’s Sareena Dayaram has more details about AI and the Apple Developer Academy.) 


Those announcements come amid an escalating battle for consumer adoption of AI, which involves tech companies including OpenAI (with ChatGPT), Microsoft (with Copilot and Bing), Google (with Gemini), Meta (with Meta AI), Anthropic (with Claude), Perplexity.ai and Elon Musk’s xAI (with Grok). There’s a lot at stake, given that the AI market is projected to surge in value from $124 billion this year to $529 billion by 2028, according to market research firm Statista.

But the latest entrant in the AI market, led by an OpenAI co-founder and former chief scientist, says it’s more concerned with producing safe AI than chasing “short-term commercial” gains. At least that’s the pitch from Ilya Sutskever, who last week launched Safe Superintelligence Inc. (SSI), a Palo Alto, California, and Tel Aviv-based company that intends to “solve the most important technical challenge of our age.”

That challenge: building a safe superintelligence. What does that mean? Creating a “machine that is more intelligent than humans — in a safe way,” SSI spokeswoman Lulu Cheng Meservey told The New York Times. 

“We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” Sutskever said in a mission statement on SSI’s website, which was also signed by SSI’s co-founders: Daniel Gross, who led AI and search projects at Apple, and Daniel Levy, a former member of OpenAI’s technical staff.

Sutskever, if you don’t know, recently exited OpenAI after citing worries about how the San Francisco startup and its CEO, Sam Altman, are (or aren’t) addressing safety concerns around the company’s AI models as OpenAI looks to become a profitable business. He was on OpenAI’s board when it voted last November to oust Altman, though he later embraced Altman’s return.

Bloomberg, which broke the news about Safe Superintelligence Inc., said SSI is “aiming to create a safe powerful intelligence system within a pure research organization that has no near-term intention of selling AI products or services.” 

How will the company be funded, then? “OpenAI began as a non-profit research organization and eventually set up a for-profit subsidiary as its need for financing grew,” Axios noted. “Gross brushed off any concerns over Safe Superintelligence’s ability to raise capital in an interview with Bloomberg.”

The trio of SSI’s co-founders shared their point of view in their mission statement. “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” they wrote. “This way, we can scale in peace.”

The business goal, at least for now, stands in contrast to the drama faced by OpenAI (see the Scarlett Johansson incident), Google (see the AI Overviews flub), Perplexity (see below) and others as they move quickly (too quickly, critics say) and make mistakes in their quest to monetize AI tech.

 Here are the other doings in AI worth your attention.

Claude gets a better sense of humor

The aforementioned AI race is why we’ll continue to see each of the top AI companies innovate quickly and offer updates to their generative AI chatbots. Last week, Anthropic released Claude 3.5 Sonnet, which it said is funnier, a better writer with a deeper grasp of nuance and humor, and a more adept software engineer than the prior generation of its AI models.

CNET AI reviewer Imad Khan called Claude “the chattiest of the AI chatbots,” noting that it “answers questions in easy-to-understand human-like language that makes it the most ideal AI chatbot for most people.”

Anthropic’s model is available for free via Claude.ai and the Claude iOS app for the iPhone, CNET’s Lisa Lacy reported. Claude Pro and Team plan subscribers, who pay $20 and $30 per user per month, respectively, get access to Claude 3.5 Sonnet with higher daily rate limits. 

The company told Lacy that Claude 3.5 Sonnet “has strong levels of comprehension and fluency” in German, Italian, Dutch and Russian, in addition to the Spanish, Japanese and French we saw in Claude 3. Test it out if you’re curious.

Anthropic’s update comes after OpenAI announced a new version of its model, GPT-4o, in May. The following day, Google debuted Gemini 1.5 Pro at its Google I/O developer event. 

A perplexing problem with Perplexity.ai 

The need to outdo your rivals seems to have pushed Perplexity, which has raised $165 million to take on Google in AI search, into an uncomfortable situation. According to investigations by Wired and Forbes, the startup seems to be “stealing content from them and likely other publications to feed its answer engine,” CNET’s Ian Sherr reported.

Wired, which is owned by Condé Nast, said its investigation showed “it’s likely” that Perplexity.ai visited the media brand’s various properties “thousands of times without permission” and then offered up summaries of “unique stories that the AI shouldn’t have access to,” Sherr noted. Some of those summaries were inaccurate, Wired found.

That comes after Forbes also accused Perplexity of ripping off its original reporting without appropriate attribution, which Perplexity’s CEO initially chalked up to “rough edges” in his company’s tech. Forbes is now threatening Perplexity with legal action, Axios reported.

Stealing copyrighted content without permission or compensation to the content owner, and then summarizing it, sometimes incorrectly, for users of your AI search engine seems like more than a problem with “rough edges.”

Perplexity’s CEO told the Associated Press that the company “never ripped off content from anybody” and that instead Perplexity is an “aggregator of information.” A company representative declined to comment further to CNET’s Sherr, apart from saying that Perplexity is “committed to finding ways to partner with media companies to create aligned incentives for all stakeholders.” The representative also said Perplexity is developing a revenue-sharing program for media companies, along with free access to its tools.

Perplexity’s problem comes as publishers, including the Chicago Tribune, push back against AI companies, which they say scrape their sites for content without permission, compensation or attribution in AI results. The New York Times has sued OpenAI and one of its biggest investors, Microsoft, citing copyright claims. Meanwhile, other publishers, including The Financial Times, have been inking deals with OpenAI, licensing their content and agreeing to work with the startup to train its large language models, or LLMs.

UK candidate puts digital avatar — ‘AI Steve’ — on the local ballot

Just a month after Ukraine said it would deploy an AI digital “person” named Victoria Shi to deliver foreign ministry press announcements and save its human officials’ time, a businessman running for local office in the UK has put forward an AI avatar to engage with voters.

“AI Steve” is the digital frontman for businessman Steve Endacott, who’s running for office in a city on England’s southern coast, NBC News reported. AI Steve appears on the ballot as an “independent,” along with the human candidates vying for the seat in a July general election.

“AI Steve is the AI co-pilot,” Endacott told NBC in an interview. “I’m the real politician going into Parliament, but I’m controlled by my co-pilot.”

Endacott, who ran unsuccessfully for local office in 2022, is using AI Steve in part to create buzz — the AI got 1,000 calls in one night after people heard about it, NBC said. But it’s also available 24/7 on a website to answer voters’ questions about Endacott’s policies and plans, and to allow voters to share their opinions.

“I don’t have to go knock on their door, get them out of bed when they don’t want to talk to me,” Endacott told NBC.

Endacott also said he’s not using his digital avatar to boost his own business. AI Steve is one of seven characters created using tech from a company called Neural Voice, which makes personalized voice assistants for businesses. Endacott is chairman of Neural Voice, NBC noted. The company, on its website, describes AI Steve as “the future of political engagement… Neural Voice AI doesn’t just listen — it engages, deliberates, and offers insights” to bring “a new level of clarity to the political discourse.”

I don’t mean to be a naysayer, and I’m all for ways to get people to pay attention to politics so they can make informed voting decisions. But we know these LLMs are built on training data that can be biased and contain misinformation and disinformation, and we know LLMs hallucinate, spewing “facts” that are false or made up. An AI avatar that runs for office and uses an LLM to inform the masses sounds like the plot of a Tom Cruise sci-fi thriller, in which a human ultimately needs to intervene to save the planet.

We’ll see how AI Steve fares at the polls.  

With the coming of AI, workers say job security is No. 1 priority

It’s no secret that AI will disrupt the job market, costing people their jobs even as the tech creates new kinds of roles in the workplace, McKinsey and others have noted.


Which is why there are numerous polls about what workers think about the coming tide of AI and whether it’ll swamp them. In what it’s calling the largest global workforce survey of its kind, The Boston Consulting Group, after interviewing more than 150,000 workers in 188 countries, said workers are feeling pretty good about their job prospects.

“More than 60% of workers surveyed believe they hold the upper hand in labor market negotiations, and 75% are approached multiple times a year with job offers,” BCG said in its summary of the survey’s findings.

In addition, what people want from employers has changed over time, from being appreciated for their work (2014) to having good relationships with co-workers (2021) to prioritizing job security and work/life balance (2023), BCG said. Why is job security No. 1?

“Recent headlines have suggested that the emphasis on job security may stem from restructuring in several industries or increased geopolitical uncertainty,” BCG said. “Instead, we believe that the response mainly reflects workers’ concerns about their long-term employability — because our data connects the desire for job security with increased awareness of technological disruption. Respondents who expressed concern about the impact of GenAI on their jobs were more likely to prioritize job security.”

That’s why employees are now judging their employers on how well they offer training and career development to help them navigate disruption caused by new technologies, including AI. While the majority don’t think gen AI will push them out of their jobs, at least for now, many (70%) know that it’ll change the nature of what they do, requiring them to develop new skills, BCG found.

As for how much workers are embracing new AI tools, the survey found that 39% use gen AI “regularly,” with the top use cases being gathering information, handling writing tasks, and doing administrative work. 

You can download the 36-page report, “How Work Preferences Are Shifting in the Age of GenAI,” as a PDF here.

AI for legal discovery, dating and more

The number of new companies and industries embracing AI to solve human problems keeps growing, which is why I thought I’d highlight three stories worth a read.

First up, CNET contributor Rachel Kane offers details on how AI can refine your dating profile to make you more appealing to potential partners. Why? “If a robot is choosing which potential partners see your profile, maybe a robot is the best judge of how to appeal to the partner you want most, too,” Kane said after putting the AI to the test.

Second, though lawyers have found, much to their embarrassment, that using AI to help write legal briefs is problematic (given the hallucination problem with every LLM), AI can help with electronic discovery. “That’s the process of collecting and exchanging electronic evidence like emails, voicemails, chats and social media posts for civil lawsuits,” CNET’s Lisa Lacy wrote in her recap of the work being done by data company Hanzo.

Hanzo says it can help legal departments sort through unstructured data from resources like Slack, Microsoft Teams and email to identify relevant documents and to make sense of those records in the e-discovery process. The return on investment: saving attorneys’ time — and customers’ money, because manual searching takes a lot of billable hours.

And last, the CNET team has put together its list of editors’ choices for the best AI chatbots and tools, including Claude, Otter AI and OpenAI’s Dall-E text-to-image generator. You can see why they picked what they picked here.  

One more thing: If you want to get this column delivered to your inbox each week, sign up for a newsletter version at our consumer hub, AI Atlas. Subscribers get bonus content, including the terms you should know about from the world of AI. 

Editors’ note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you’re reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.
