
SenseTime Releases Image AI Model That Runs on Chinese Chips

Huma Shazia · 29 April 2026 at 11:23 pm · 4 min read

Key Takeaways

  • SenseNova U1 can reason with images directly instead of converting them to text first, reducing compute requirements
  • Ten Chinese chip makers including Cambricon and Biren Technology have optimized their hardware for U1
  • SenseTime released the model as open source on Hugging Face and GitHub to accelerate iteration through researcher feedback

What SenseTime Released

SenseTime, the Chinese AI company best known for facial recognition technology, released SenseNova U1 on Tuesday. The company claims the model can generate and interpret images faster than top US models. It's available for free on Hugging Face and GitHub.

The key difference from existing models: U1 processes images directly without translating them to text first. Most multimodal AI systems convert visual information into text tokens before reasoning about them. SenseTime's approach skips that step.

The model's entire reasoning process is no longer limited to text. It can reason with images as well.

— Dahua Lin, cofounder and chief scientist at SenseTime

Lin, who also serves as a professor of information engineering at the Chinese University of Hong Kong, says this capability matters for robotics. Models that process images directly could help robots better understand physical environments.

Running on Chinese Hardware

Like DeepSeek's latest flagship model, U1 can run on Chinese-made chips. On release day, ten Chinese chip designers announced their hardware supports the model. The list includes Cambricon and Biren Technology.

This matters because US export controls block Chinese firms from accessing advanced AI chips, particularly the Nvidia GPUs that dominate AI training. Chinese companies have scrambled to make their models work on domestic alternatives.

Lin said SenseTime will continue pushing to train on different chips. But he acknowledged a practical limit: the company may still need the best available chips to maintain iteration speed.

Why Open Source Now

SenseTime was founded in 2014 and became a world leader in computer vision. The company's technology powered facial recognition systems and autonomous driving applications. But when ChatGPT made natural language processing the center of the AI industry, SenseTime struggled to keep up.

Newer Chinese startups like DeepSeek and MiniMax pulled ahead. SenseTime has had trouble turning a profit. The company hopes releasing U1 publicly will help it catch up with both domestic and Western AI players.

In this day and age, being open source or closed source is not the winning factor; the speed of iteration is.

— Dahua Lin, cofounder and chief scientist at SenseTime

Lin said SenseTime made the open source decision last year. The reasoning: external researchers provide feedback that helps the company improve faster. The approach also lets SenseTime continue collaborating with international researchers despite US sanctions.

The Sanctions Context

The US has sanctioned SenseTime repeatedly. These restrictions limit the company's access to advanced chips and complicate business relationships with Western partners. Open source releases offer a partial workaround. Code on GitHub can spread globally regardless of trade restrictions.

Chinese companies have become some of the most active contributors to open source AI. DeepSeek's recent model releases attracted global attention. SenseTime is now following a similar playbook.


What This Means for Image AI

Most current multimodal AI models treat images as something to be described in words. They convert visual data to text, then reason about that text. This works, but it adds latency and computational overhead.
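The distinction can be sketched with a toy example. This is a generic illustration of patch-based visual tokens, not SenseTime's actual architecture: a model that reasons over images directly consumes patch embeddings as tokens, with no captioning or text-conversion step in between.

```python
import numpy as np

def patch_embeddings(image, patch=16):
    """Split an image into non-overlapping patches and flatten each into a
    vector -- a representation a model can reason over directly, skipping
    the image-to-text detour most multimodal systems take."""
    h, w, c = image.shape
    patches = image.reshape(h // patch, patch, w // patch, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)
    return patches

# A 224x224 RGB image becomes 196 visual tokens of dimension 768,
# handed straight to the model with no captioning step.
image = np.zeros((224, 224, 3), dtype=np.float32)
tokens = patch_embeddings(image)
print(tokens.shape)  # (196, 768)
```

In the text-conversion pipeline, the same image would first be summarized as a caption and then tokenized as text, which is where the extra latency and compute come from.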

If SenseTime's claims hold up, direct image reasoning could change how AI systems handle visual tasks. Robotics applications in particular might benefit from faster, more efficient visual processing. Whether U1 delivers on these promises will become clearer as researchers test the model.


Frequently Asked Questions

What is SenseTime SenseNova U1?

SenseNova U1 is an open source AI model from Chinese company SenseTime that can generate and interpret images. Its key feature is processing images directly without converting them to text first.

Can SenseNova U1 run on Chinese chips?

Yes. Ten Chinese chip makers including Cambricon and Biren Technology have optimized their hardware to run U1, helping Chinese firms work around US export controls on advanced Nvidia chips.

Why is SenseTime releasing U1 as open source?

SenseTime says open source releases generate faster iteration through researcher feedback. The approach also helps the sanctioned company maintain international collaboration despite US restrictions.

What happened to SenseTime after ChatGPT launched?

SenseTime was a leader in computer vision and facial recognition, but struggled after natural language processing became the industry focus. It fell behind newer Chinese AI startups like DeepSeek and MiniMax.


Source: Feed: Artificial Intelligence Latest / Zeyi Yang

Huma Shazia

Senior AI & Tech Writer