TechCrunch AI News: Meta Shakes Up AI Org, Databricks CEO Targets New Market


Big changes are happening over at Meta’s AI division, and it’s not just them. We’re seeing a lot of movement across the tech world when it comes to artificial intelligence. From new organizational structures to how companies are handling their AI models, it’s a busy time. Plus, there’s a lot of talk about how AI is changing the way we write computer code. Let’s break down some of the latest AI news TechCrunch is covering.

Key Takeaways

  • Meta is reorganizing its AI division into four teams, including a new “Superintelligence Labs,” to speed up AI development.
  • The company is making big investments in AI talent and infrastructure, aiming to compete with rivals like OpenAI and Google.
  • Industry leaders from Microsoft and Google are sharing figures on how much AI contributes to their code generation, though measurement methods are unclear.
  • Marketers are looking for more transparency and privacy-friendly tools from Meta’s AI advertising, along with creative guidance.
  • Anthropic has revoked OpenAI’s API access to its Claude models, citing terms of service violations related to benchmarking and competitor analysis.

Meta’s AI Reorganization: A New Structure for Superintelligence


Meta is shaking things up in its AI division again, and this time the focus is on building what they’re calling "superintelligence." It sounds pretty ambitious, right? Basically, they’re reorganizing the whole AI group, which they’re now calling Meta Superintelligence Labs (MSL), into four main parts. This is their latest move to try and catch up with companies like OpenAI and Google in the AI race. It’s a big deal because they’ve been making a lot of changes to their AI teams lately. They’re trying to get all their ducks in a row to make faster progress.


Meta Superintelligence Labs: Four Teams for Accelerated AI Development

The new setup under MSL is pretty structured. You’ve got:

  • TBD Lab: This is where the big language models, like the Llama series, will be managed. Alexandr Wang, who’s the new Chief AI Officer, is leading this part.
  • Fundamental AI Research (FAIR): This lab has been around for a while, focusing on the long-term, more theoretical AI research.
  • Products and Applied Research: Nat Friedman, the former CEO of GitHub, is in charge here. His team’s job is to take all that research and put it into actual products people can use.
  • MSL Infra: This is all about building the massive, and let me tell you, expensive, infrastructure needed to actually run all this AI stuff. Think servers, computing power, the whole nine yards.

New Leadership and Focus on Key AI Areas

Alexandr Wang is now the Chief AI Officer, and he’s the one driving this new structure. He’s really emphasizing that to get to superintelligence, they need to be organized around research, product development, and the infrastructure to support it all. It’s a pretty clear signal about where Meta is putting its energy. They’re trying to consolidate efforts and make sure everything is aligned towards this big goal. It’s a significant shift, especially with Wang coming over from Scale AI.

Investment in Talent and Infrastructure for AI Ambitions

Meta isn’t just reorganizing; they’re also spending big. Mark Zuckerberg has been actively bringing in top AI talent, offering some seriously high compensation packages. We’re talking millions of dollars to lure researchers and executives from other major tech companies. On top of that, they’ve increased their spending forecast for capital expenditures, with a huge chunk of that money going towards building out the necessary AI infrastructure. It’s clear they’re betting heavily on this superintelligence push, and they’re willing to pay for the best people and the best equipment to get there.

Meta’s AI Strategy Amidst Industry Competition


Meta’s been making some big moves in the AI space lately, and honestly, it feels like they’re really trying to catch up. You’ve probably heard about the big reorganization of their AI division, now called Meta Superintelligence Labs, or MSL. It’s basically split into four main teams, all aimed at speeding up their work towards what they’re calling "superintelligence." This whole shake-up comes after a bit of a rocky patch, especially with how people reacted to their Llama 4 model.

It’s clear they’re feeling the heat from companies like OpenAI and Google. Mark Zuckerberg himself has been putting a lot of effort into bringing in top AI talent, even offering some pretty eye-watering compensation packages. He’s really pushing this idea of "personal superintelligence," where AI becomes deeply woven into our daily lives to help us out. It’s an ambitious goal, for sure.

The four-team structure is the one described above: TBD Lab managing the large language models like the Llama series, led by new Chief AI Officer Alexandr Wang (who joined from Scale AI); FAIR, the long-standing fundamental research arm; Products and Applied Research, headed by former GitHub CEO Nat Friedman, which turns that research into products people can actually use; and MSL Infra, which builds the massive, expensive infrastructure powering all of it.

It’s a lot of restructuring, and they’re pouring billions into it. The big question is whether this new organization can really deliver and help Meta stand out in this super competitive AI landscape. We’ll have to wait and see how it all plays out, but they’re definitely not holding back on investment. It’s interesting to see how they’re trying to get ahead, especially after some of the earlier stumbles with their open-source models. Alexandr Wang’s move to Meta as Chief AI Officer is a pretty significant part of this new direction.

AI News TechCrunch: Industry Leaders Discuss AI’s Role in Code Generation

It seems like every tech company is talking about AI writing code these days. Microsoft’s CEO, Satya Nadella, recently mentioned that a significant chunk of their code, somewhere between 20% and 30%, is actually generated by software, meaning AI. He shared this during a chat with Mark Zuckerberg at Meta’s LlamaCon. Nadella noted that the results vary depending on the programming language, with Python showing more progress than, say, C++.

This is a pretty big deal when you consider Microsoft’s CTO, Kevin Scott, has predicted that by 2030, about 95% of all code could be AI-generated. That’s a massive shift! When Zuckerberg was asked about Meta’s own code generation numbers, he admitted he wasn’t sure.

It’s interesting because Google’s CEO, Sundar Pichai, also chimed in on a recent earnings call, stating that AI was generating over 30% of their company’s code. Of course, it’s a bit tricky to know exactly how they’re all measuring this, so we should probably take these percentages with a grain of salt. Still, it shows the direction things are heading.
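None of these companies have explained how they arrive at their percentages, and the answer depends heavily on what you count. A toy sketch below shows one possible metric, share of added lines coming from AI-tagged commits, purely to illustrate why the numbers aren’t comparable. The commit tagging scheme and the `ai_code_share` function here are hypothetical, not anything Microsoft or Google has described:

```python
# Hypothetical sketch: one way a company might estimate the share of
# AI-generated code, assuming each commit is tagged with its origin.
# Real measurement pipelines differ, which is exactly why the published
# percentages are hard to compare across companies.

def ai_code_share(commits):
    """Return the fraction of added lines that came from AI-tagged commits.

    `commits` is a list of dicts like:
        {"origin": "ai" or "human", "lines_added": int}
    """
    total = sum(c["lines_added"] for c in commits)
    if total == 0:
        return 0.0
    ai_lines = sum(c["lines_added"] for c in commits if c["origin"] == "ai")
    return ai_lines / total

# The result shifts depending on what counts as "AI-generated":
# accepted autocomplete suggestions, whole generated files, or
# AI output that a human then edited.
history = [
    {"origin": "ai", "lines_added": 300},
    {"origin": "human", "lines_added": 700},
]
print(ai_code_share(history))  # 0.3
```

Counting lines versus commits, or counting edited AI output as human work, would each produce a different headline number from the same repository, which is the grain of salt mentioned above.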


Market Perspectives on Meta’s AI Advancements

It seems like Meta is really trying to get its AI house in order, and honestly, it’s a bit of a whirlwind. They’ve shuffled things around again, creating these four new teams under the Meta Superintelligence Labs (MSL) umbrella. The big idea is to speed things up, especially since everyone’s looking over their shoulder at OpenAI and Google.

Marketer Concerns Over AI Advertising Transparency

From what I’m hearing, the folks who actually spend money on ads are a bit uneasy. They’re calling Meta’s AI ad tools, like Advantage+, an "AI black box." It sounds like they don’t quite know how it all works behind the scenes, which is a problem when you’re trying to make sure your ad dollars are well-spent. They want to know what’s going on inside the AI.

Demand for Privacy-Safe AI Tools

There’s also this push for AI tools that respect user privacy. You know, the kind that can figure out who might be interested in something without actually handing over personal data. It’s a tricky balance, for sure.

Seeking AI-Powered Creative Guidance

Beyond just looking at what worked in the past, marketers are hoping these new AI efforts can actually predict what kind of creative stuff will hit the mark with customers. It’s less about looking back and more about looking forward, trying to get a leg up on the competition. It’s interesting to see how this plays out, especially with Meta’s massive advertising business funding a lot of this.

Challenges and Criticisms of Meta’s AI Pursuit

Meta’s aggressive push towards what they’re calling "superintelligence" isn’t without its bumps and bruises. It feels like they’re trying to build a rocket ship while also figuring out how to fly it, and not everyone is convinced it’s going to reach the moon. For starters, the company has been shuffling its AI teams around a lot lately – this is the fourth big change in just six months. It really makes you wonder if they have a solid plan or if they’re just reacting to what everyone else is doing.

Sustainability of High Compensation Packages

One of the big talking points is how Meta is paying its top AI talent. We’re talking about some seriously high salaries, even hundreds of millions in total compensation for certain researchers. While it’s great they’re attracting smart people, questions are popping up about whether this is sustainable in the long run. Other companies, like Elon Musk’s xAI, are suggesting they can get top talent without offering quite so much cash. It makes you wonder whether Meta’s approach is more about outspending the competition than building a truly efficient operation.

Ethical Risks of Pursuing Superintelligence

Then there’s the whole "superintelligence" goal itself. It sounds impressive, sure, but what are the actual risks? People are worried about what happens when AI gets too smart. Could it lead to job losses? Could it make existing societal biases even worse? Mark Zuckerberg has this vision of AI being deeply woven into our lives, but that raises a lot of ethical questions that don’t seem to have easy answers yet. It’s a bit like opening Pandora’s Box, and we’re not entirely sure what’s inside.

Concerns Over Open-Source Model Pivot

Meta used to be pretty big on open-source AI, especially with their Llama models. That was a big deal because it let smaller companies and researchers get involved and build on their work. But lately, it feels like they’re pulling back from that. This shift away from open-source has some people worried that Meta might be heading towards a more closed-off approach, which could stifle innovation and make it harder for others to compete. It’s a bit of a shame because open-source was a real differentiator for them.

Anthropic Revokes OpenAI’s Access to Claude Models

It seems like there’s been some drama between AI companies. Anthropic, the folks behind the Claude models, has apparently cut off OpenAI’s access to their systems. This happened because Anthropic believes OpenAI was using Claude in ways that broke their terms of service. Specifically, it looks like OpenAI was using Claude to benchmark against their own models, like GPT-5, especially in areas like coding and safety.

Anthropic’s official statement mentioned that OpenAI’s technical staff were using their coding tools before GPT-5 even launched, which they see as a clear violation. Their commercial terms do say companies can’t use Claude to build competing services. However, Anthropic did say they’d still allow OpenAI access for benchmarking and safety checks, which is a bit of a mixed message, I guess.

OpenAI, on the other hand, called their usage "industry standard" and expressed disappointment, noting that their own API is still available to Anthropic. This whole situation highlights how competitive the AI space is right now. It also brings up questions about how companies share access to their powerful AI models, especially when they’re in direct competition. It’s a tricky balance between collaboration and protecting your own work.

Violation of Terms of Service for Benchmarking

Anthropic’s decision stems from what they perceive as a breach of their service agreement by OpenAI. The core issue appears to be OpenAI’s use of Anthropic’s Claude models for internal comparisons, particularly in evaluating coding and safety performance against their own developing models. This practice is seen by Anthropic as a direct contravention of their terms, which prohibit using their technology to develop competing products or services.

OpenAI’s Response to API Access Cutoff

In response to the access revocation, OpenAI has characterized its actions as standard industry practice. They expressed disappointment with Anthropic’s decision, especially given that their own API remains accessible to Anthropic. This statement suggests a differing interpretation of acceptable usage and collaboration within the AI development community.

Anthropic’s Stance on Competitor Access

Anthropic has previously shown a reluctance to grant access to its models for direct competitors. Executives have voiced that providing their technology to companies like OpenAI would be unusual, especially when those companies are actively developing similar AI capabilities. This stance underscores Anthropic’s strategy to maintain a competitive edge and control over its intellectual property in the rapidly evolving AI landscape.

What’s Next for AI?

So, Meta’s shaking things up again with its AI teams, trying to get ahead in this super competitive space. They’re splitting things up into four new groups, hoping to speed things up. It’s a big move, especially after bringing in a lot of new talent. Meanwhile, Databricks is looking to grab a piece of a new market with its own AI plans. It’s clear everyone’s racing to build the next big thing in AI, and it’s going to be interesting to see who actually pulls it off. We’ll have to keep an eye on how these strategies play out and what it means for the rest of us.
