Hi folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.
This week has been something of a swan song for the Biden administration.
On Monday, the White House announced sweeping new restrictions on the export of AI chips – restrictions that tech giants, including Nvidia, strongly criticized. (Nvidia’s business would be seriously affected by the restrictions, should they go into effect as proposed.) Then on Tuesday, the administration issued an executive order opening up federal land to AI data centers.
But the obvious question is, will these moves have a lasting impact? Will Trump, who takes office on January 20, simply reverse Biden’s actions? So far, Trump has not signaled his intentions either way. But he certainly has the power to undo Biden’s latest AI acts.
Biden’s export rules are expected to take effect after a 120-day comment period. The Trump administration will have wide latitude over how the measures are implemented — and whether they change in any way.
As for the executive order on federal land use, Trump could repeal it. Former PayPal COO David Sacks, Trump’s AI and crypto “czar,” has already committed to rescinding another AI-related Biden executive order that set standards for AI safety and security.
However, there is reason to believe that the incoming administration may not rock the boat too much.
In line with Biden’s move to free up federal resources for data centers, Trump recently promised expedited permitting for companies that invest at least $1 billion in the United States. He has also selected Lee Zeldin, who has promised to cut regulations he sees as burdensome to businesses, to lead the EPA.
Aspects of Biden’s export rules could stay as well. Some of the regulations target China, and Trump has made no secret of the fact that he sees China as the U.S.’s biggest rival in AI.
One point in question is Israel’s inclusion on the list of countries subject to AI hardware trade caps. As recently as October, Trump described himself as a “protector” of Israel, and he is reportedly more permissive toward Israel’s military actions in the region.
In any case, we should have a clearer picture in the weeks ahead.
ChatGPT, remember me…: Paying users of OpenAI’s ChatGPT can now ask the AI assistant to schedule reminders or recurring requests. The new beta feature, called Tasks, will begin rolling out to ChatGPT Plus, Team, and Pro users globally this week.
Meta vs. OpenAI: The executives and researchers leading Meta’s AI efforts were obsessed with beating OpenAI’s GPT-4 model while developing Meta’s own Llama 3 family of models, according to messages unsealed by a court on Tuesday.
OpenAI’s board is growing: OpenAI has appointed to its board of directors Adebayo “Bayo” Ogunlesi, an executive from the investment firm BlackRock. The company’s current board bears little resemblance to OpenAI’s board at the end of 2023, whose members fired CEO Sam Altman only to reinstate him days later.
Blaize goes public: Blaize is set to become the first AI chip startup to go public in 2025. Founded by former Intel engineers in 2011, the company has raised $335 million from investors including Samsung for its chips for cameras, drones, and other edge devices.
A model of “reasoning” that thinks in Chinese: OpenAI’s o1 AI reasoning model “thinks” in languages like Chinese, French, Hindi and Thai at times, even when asked a question in English – and no one really knows why.
A recent study co-authored by Dan Hendrycks, an advisor to billionaire Elon Musk’s AI company xAI, suggests that many AI safety benchmarks are correlated with the capabilities of AI systems. That is, as a system’s overall performance improves, it “scores better” on the benchmarks – making the model appear “safer” than it may be.
“Our analysis reveals that many AI safety benchmarks—about half—often inadvertently capture latent factors closely related to general capabilities and raw training computation,” write the researchers behind the study. “In general, it is difficult to avoid measuring the capabilities of the upstream model in AI security benchmarks.”
In the study, the researchers propose what they describe as an empirical foundation for developing “more meaningful” safety metrics, which they hope will “(advance) the science” of safety evaluations in AI.
In a technical paper published on Tuesday, the Japanese AI company Sakana AI detailed Transformer² (“Transformer-squared”), an AI system that dynamically adapts to new tasks.
Transformer² first analyzes a task – for example, writing code – to understand its requirements. It then applies “task-specific adaptations” and optimizations to tune itself for that task.
Sakana says that Transformer²’s methods can be applied to open models like Meta’s Llama and that they offer “a vision of a future where AI models are no longer static.”
A small team of developers has released an open alternative to AI-powered search engines like Perplexity and OpenAI’s ChatGPT Search.
Called PrAIvateSearch, the project is available on GitHub under an MIT license, which means it can be used largely without restriction. It is powered by openly available AI models and services, including Alibaba’s Qwen family of models and the DuckDuckGo search engine.
The PrAIvateSearch team says their goal is to “implement SearchGPT-like features,” but in an “open, local, and private way.” For tips on how to get started, check out the team’s latest blog post.