The standard story is that sensors keep getting better and software keeps fusing them. Hesai’s full-color LiDAR chip points in a different direction: move the fusion into the hardware, at capture time, and make the perception stack deal with a native color 3D point cloud instead of stitching camera and LiDAR streams later.
That is the interesting part. Not “cars can now see like humans.” That line is Hesai’s marketing, and there’s no independent evidence for it yet. The confirmed announcement is narrower and more important: Hesai says its new Picasso SPAD SoC combines color perception and distance measurement in the chip itself, and its next ETX sensors will support configurations up to 4,320 laser channels, with mass production planned for the second half of 2026.
I started out thinking this was just “LiDAR, but more colorful.” The details suggest something more consequential. If the hardware claim holds up in production, the competitive fight shifts a bit away from software-side sensor fusion and toward sensor architecture, yield, and manufacturing scale.
What Hesai actually announced
Here’s the verified core.
On April 17, 2026, at its Technology Open Day, Hesai announced a new chip called Picasso, described as a SPAD SoC: a system-on-chip built around single-photon avalanche diodes, which are extremely sensitive light detectors used in LiDAR. External coverage from CnEVPost and Taibo reports the same headline claims: native fusion of color and depth at the hardware layer, support for up to 4,320 laser channels, and planned integration into Hesai’s next-generation ETX series.
Some of the surrounding language is confirmed because it comes straight from the announcement:
- Confirmed: Picasso is real, was announced publicly, and is intended for ETX-series products.
- Confirmed: Hesai says ETX will support 1,080, 2,160, and 4,320 channel configurations.
- Confirmed: Hesai says mass production and automaker deliveries are planned for H2 2026.
- Confirmed: Hesai claims photon detection efficiency (PDE) above 40%.
What is not independently confirmed is the “world’s first” framing, or the practical performance implied by lines like “recognize traffic lights, lane markings, and construction signs at a glance, just like humans.” That is still a company claim. No public benchmark, teardown, or third-party road test in the source set shows that yet.
A quick table helps separate announcement from proof:
| Claim | Status | What supports it |
|---|---|---|
| Picasso SPAD SoC was announced | Verified | Hesai event coverage from CnEVPost and Taibo |
| ETX supports up to 4,320 laser channels | Verified | Same reporting on the April 17 launch |
| H2 2026 mass production plan | Verified | Company-announced timeline, reported externally |
| PDE exceeds 40% | Plausible | Company technical claim, no independent test cited |
| Native color 3D point cloud reduces software stitching | Plausible | Follows from architecture claim, but not independently benchmarked |
| Cars will “see like humans” | Unverified | Marketing language only |
Why a full-color LiDAR chip matters
Traditional LiDAR gives you geometry: where objects are, how far away they are, and their shape. Cameras give you appearance: color, texture, lane paint, signal lights. Production autonomy stacks usually combine both later in software.
That software fusion works, but it is messy. You have to align sensors with different frame rates, fields of view, lighting sensitivities, and failure modes. A red traffic light might be obvious in the camera but ambiguous in the point cloud. A pedestrian shape might be obvious in LiDAR but partly blown out in sunlight. So the software does the marriage counseling.
Hesai’s full-color LiDAR chip tries to move some of that work earlier. If the sensor can output a native color point cloud, then color is no longer a side channel coming from another device. It is attached to the same spatial measurement at capture time.
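To make the contrast concrete, here is a minimal sketch of the projection step that software fusion performs today, the step a native color point cloud would fold into the sensor. Every calibration value, point, and function name below is a hypothetical illustration, not anything from Hesai’s announcement.

```python
import numpy as np

# -- Hypothetical calibration: all values below are illustrative, not Hesai's. --
R = np.eye(3)                       # lidar-to-camera rotation (assumed identity here)
t = np.array([0.05, 0.0, 0.0])      # lidar-to-camera translation in meters (assumed)
K = np.array([[800.0, 0.0, 640.0],  # pinhole intrinsics fx, fy, cx, cy (assumed)
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

def colorize_by_projection(points_lidar, image):
    """Classic software fusion: project each LiDAR point into the camera
    image and copy the pixel color. This is the step that breaks when
    calibration drifts or the two sensors were not triggered together."""
    pts_cam = points_lidar @ R.T + t          # transform into the camera frame
    uv = pts_cam @ K.T                        # apply camera intrinsics
    uv = uv[:, :2] / uv[:, 2:3]               # perspective divide -> pixel coords
    u = np.clip(uv[:, 0].astype(int), 0, image.shape[1] - 1)
    v = np.clip(uv[:, 1].astype(int), 0, image.shape[0] - 1)
    rgb = image[v, u]                         # sample color at the projected pixel
    return np.hstack([points_lidar, rgb])     # -> rows of (x, y, z, r, g, b)

# Native capture, as described in the announcement, would hand perception the
# (x, y, z, r, g, b) array directly -- no projection step to get wrong.
points = np.array([[10.0, 1.5, 0.2], [42.0, -3.0, 1.1]])   # toy points (meters)
image = np.zeros((720, 1280, 3), dtype=np.uint8)            # toy camera frame
print(colorize_by_projection(points, image).shape)          # (2, 6)
```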
That could matter in three concrete ways.
First, less downstream compute. Not necessarily less compute overall, but less compute spent on registering and reconciling separate camera and LiDAR streams. In a market where every watt and dollar matters, deleting pipeline complexity is often better than adding another heroic model. The AI industry has a habit of assuming software will absorb every hardware problem. Then someone moves the problem into silicon and the software stack suddenly looks a bit overengineered.
Second, simpler failure analysis. When a system misses a lane marking today, was the problem calibration drift, timestamp mismatch, camera glare, bad fusion logic, or the marking itself? Native capture does not remove failure, but it can reduce the number of places failure hides.
Third, different economics. If color-rich 3D perception becomes a hardware feature, then competitive advantage depends more on detector design, packaging, production scale, and cost curves. That is a very different fight from “our perception model fuses six sensors slightly better.”
This is broader than cars, too. Robotics, industrial mapping, and digital twin capture all benefit when the sensor produces data that is easier to use directly. We’ve seen a similar shift elsewhere: in AI video generation, more capability is moving closer to the model’s native output rather than being bolted on as post-processing.
What the technical claims do and don’t prove
The flashy number here is 4,320 laser channels. That sounds like a straight line to better perception. It isn’t.
More channels generally buy you denser sampling. Denser sampling can mean cleaner object contours, better small-object detection, and longer effective range at useful resolution. If you’re trying to distinguish a traffic cone from a weird shadow 120 meters ahead, more measurement points help.
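A back-of-envelope calculation shows what channel count buys. The vertical field of view below is an assumed placeholder, since Hesai has not published ETX optics in the cited coverage; the channel counts are the announced configurations.

```python
import math

# Back-of-envelope: vertical angular resolution from channel count.
# The 25-degree vertical field of view is an assumed placeholder,
# not a published ETX specification.
VERTICAL_FOV_DEG = 25.0
TARGET_RANGE_M = 120.0   # the "traffic cone at 120 meters" scenario

for channels in (1_080, 2_160, 4_320):        # the announced ETX configurations
    res_deg = VERTICAL_FOV_DEG / channels     # degrees between adjacent channels
    # Spacing between adjacent samples on a target at range (small-angle approx).
    spacing_m = TARGET_RANGE_M * math.radians(res_deg)
    print(f"{channels:>5} channels -> {res_deg:.4f} deg, "
          f"~{spacing_m * 100:.1f} cm between samples at 120 m")
```

Under these assumptions, 1,080 channels put samples roughly 5 cm apart on that distant cone, while 4,320 channels tighten it to about 1 cm. That is the sense in which more channels help.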
But channel count is not a magic number any more than camera megapixels are. A 200-megapixel phone sensor can still take mediocre pictures. Same story here. Practical performance depends on things like:
- detector efficiency
- laser power and eye-safety limits
- optical design
- noise characteristics
- weather robustness
- onboard processing
- cost per unit
Hesai says Picasso’s PDE exceeds 40%. If true, that matters because higher photon detection efficiency means more of the returning light actually gets counted. Under the same laser power, that can improve range and clarity. But again: plausible, not independently verified in the materials we have.
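A toy model makes the PDE claim tangible. Assuming returned signal falls off as 1/R² for an extended target at fixed laser power, and detection needs a fixed minimum photon count, maximum range scales with the square root of PDE. The baseline figure below is an assumption for comparison, not a Hesai number.

```python
import math

# Toy model: detected photons ~ PDE / R^2 (extended target, fixed laser power),
# with detection requiring a fixed minimum photon count, so R_max ~ sqrt(PDE).
# The baseline PDE is illustrative; only the ">40%" figure is Hesai's claim.
pde_baseline = 0.20   # assumed older-generation detector, for comparison
pde_claimed = 0.40    # the announced ">40%" photon detection efficiency

range_gain = math.sqrt(pde_claimed / pde_baseline)
print(f"Relative max-range gain under this toy model: {range_gain:.2f}x")  # ~1.41x
```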
The stronger claim is architectural, not biological. Hesai appears to have built a sensor that captures color and distance together. That is meaningful. The weaker claim is anthropomorphic: that this means machine perception now works “just like humans.” Humans do not drive by reading a point cloud with RGB attributes. They use context, priors, motion cues, and common sense, then occasionally still make terrible decisions. “Like humans” is doing a lot of work there.
There is also an unanswered systems question: does native color capture reduce the need for cameras, or just make camera-LiDAR fusion easier? Based on the available evidence, the safe answer is the latter. Cars still need redundancy. A new sensor mode usually joins the stack before it replaces anything.
Why this launch matters for autonomous driving
The business context makes this more credible than a random demo.
Hesai reported 1,620,406 total LiDAR shipments in 2025, up 222.9% year over year, with RMB 3.03 billion in revenue, RMB 435.9 million in net income, and 41.8% gross margin. In January, it said it would expand annual production capacity from 2 million units to more than 4 million in 2026.
Those numbers do not prove the new chip will work as advertised. They prove something else: Hesai is no longer just showing concept hardware. It has scale, improving margins, and a stated plan to manufacture a lot more sensors. In hardware, that matters more than a dramatic demo video. Plenty of companies can build one impressive box. Fewer can ship millions.
| Hesai business metric | 2025 / 2026 figure | Why it matters |
|---|---|---|
| Total LiDAR shipments | 1,620,406 | Shows real deployment scale |
| ADAS LiDAR shipments | 1,381,133 | Most relevant to automotive use |
| FY2025 revenue | RMB 3,027.6 million | Indicates commercial traction |
| FY2025 net income | RMB 435.9 million | First full-year profitability |
| 2026 annual capacity target | 4 million+ units | Suggests rollout ambition is serious |
This is why the launch matters for autonomous driving. Not because one chip suddenly solves perception. Because moving color into the LiDAR hardware could simplify the stack and because Hesai has the manufacturing base to test that idea at scale.
There’s a lesson here for other embodied AI systems as well, from warehouse robots to the sort of machines that show up at a humanoid robot marathon. We keep talking as if intelligence is mostly software. Then hardware changes what the software problem even is. Sensor design is not glamorous, but it keeps having the nerve to matter.
Key Takeaways
- Verified: Hesai announced the Picasso SPAD SoC, ETX integration, support for up to 4,320 laser channels, and planned H2 2026 mass production.
- The important shift is native capture: a full-color LiDAR chip pushes color and depth fusion into the sensor, instead of relying entirely on software stitching later.
- Plausible but unproven: this could reduce compute load and simplify perception pipelines. No public third-party benchmarks in the source set prove that yet.
- Unverified: claims that vehicles will now perceive road scenes “just like humans.” That is marketing, not evidence.
- The bigger story is strategic: if this works, competition moves toward sensor architecture, packaging, and manufacturing scale, not just perception algorithms.
Further Reading
- Hesai releases world’s first full-color LiDAR chip, supporting up to 4,320 laser channels. External coverage of the April 17 announcement, including Picasso, ETX, and channel counts.
- Hesai Q4 and FY2025 Financial Results. Primary source for shipments, revenue, margin, and profitability.
- Hesai Announces Plan to Double Annual LiDAR Production Capacity at CES 2026. Company statement on capacity expansion from 2 million to 4 million-plus units.
- Taibo coverage of Hesai Technology Open Day. Fresh reporting that reiterates the Picasso SPAD SoC and ETX rollout details.
A full-color LiDAR chip does not mean cars suddenly see like people. It means the sensor stack may be getting less software-shaped and more silicon-shaped, which is usually where markets get decided.
