Over the three days from January 9 to January 11, 2026, Panasonic Automotive Systems (PAS) operated a booth at Tokyo Auto Salon 2026, a major event for car lovers held at Makuhari Messe in Chiba Prefecture. The booth's theme was how AI is changing mobility experiences. In this article, we report on the "Joy in Motion" that PAS is providing in the era of AI, and its future potential, drawing on our hands-on experiences with the booth's exhibits.
Signage that lets visitors experience the fusion of sensing technologies and AI
Skeletal structure. Feelings of tension. Even yawning.
AI that can read people in real time.
The PAS booth at the Tokyo Auto Salon venue had a clean aesthetic with a green and white design. What I experienced there was a taste of the future, where cars have evolved to see and understand people.
The booth featured signage for hands-on experiences and exhibits presenting different kinds of mobility. The first thing that caught my eye was a demonstration area near the aisle, so I decided to give it a try.
I held my hand up to the signage, and a camera read my hand's skeletal structure. The system then reproduced my hand as a three-dimensional wireframe model made up of dots and lines. I tested it out by making rock, paper, and scissors gestures. The wireframe model reflected my movements with almost no lag. When I asked the staff about it, they said that the system had used AI to analyze the bone structure of my hand and instantly determine what gestures I was making. This was done entirely with the camera's video images and AI analysis, with no actual X-ray-like scanning of my hand.
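As a rough illustration of how this kind of camera-only gesture recognition can work, here is a minimal sketch using the open-source MediaPipe Hands model. PAS has not disclosed its actual implementation, so the library choice, landmark indices, and rock-paper-scissors rule below are purely illustrative: the model extracts 21 hand landmarks from each video frame, and a simple rule classifies the gesture by counting extended fingers.

```python
# Illustrative sketch only: PAS has not disclosed its implementation.
# Uses the open-source MediaPipe Hands model to extract 21 hand landmarks
# from webcam frames and classifies rock / paper / scissors by counting
# extended fingers.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def count_extended_fingers(landmarks):
    """Count non-thumb fingers whose tip sits above its middle (PIP) joint."""
    # Landmark indices: (tip, pip) for index, middle, ring, pinky.
    finger_joints = [(8, 6), (12, 10), (16, 14), (20, 18)]
    extended = 0
    for tip, pip in finger_joints:
        # Image y grows downward, so a smaller y means the finger is raised.
        if landmarks[tip].y < landmarks[pip].y:
            extended += 1
    return extended

def classify_gesture(landmarks):
    extended = count_extended_fingers(landmarks)
    if extended == 0:
        return "rock"
    if extended == 2:
        return "scissors"
    if extended == 4:
        return "paper"
    return "unknown"

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            print(classify_gesture(lm))
cap.release()
```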
In the next hands-on demonstration, I held onto a steering wheel with embedded sensors. The steering wheel detects the driver's heart rate and minute amounts of perspiration on their palms, and from these signals infers the driver's mental state while driving. This allows the vehicle to notice when drivers aren't in tip-top condition, such as when they're tired or impatient, even before the drivers themselves have noticed. A staff member explained, "Just as people break out in a cold sweat when they get nervous, the palms of their hands also sweat. This, along with heart rate data, is analyzed by the AI to help monitor the driver's state."
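To make the idea concrete, here is a hypothetical sketch of how heart-rate and palm-perspiration readings from a sensor-equipped wheel might be combined into a rough driver-state estimate. The signal names, thresholds, and labels are assumptions for illustration, not PAS's actual algorithm.

```python
# Hypothetical sketch: the real system's signals, thresholds, and model
# are not public. This only illustrates the idea of combining heart rate
# and palm perspiration (skin conductance) into a rough driver-state flag.
from dataclasses import dataclass
from statistics import mean

@dataclass
class WheelSample:
    heart_rate_bpm: float          # from the embedded heart-rate sensor
    skin_conductance_us: float     # palm perspiration, in microsiemens

def estimate_driver_state(baseline: list[WheelSample],
                          current: WheelSample) -> str:
    """Compare the current reading against the driver's own resting baseline."""
    hr_base = mean(s.heart_rate_bpm for s in baseline)
    sc_base = mean(s.skin_conductance_us for s in baseline)

    hr_elevated = current.heart_rate_bpm > hr_base * 1.15
    sc_elevated = current.skin_conductance_us > sc_base * 1.30

    if hr_elevated and sc_elevated:
        return "tense"         # e.g. nervous or impatient
    if not hr_elevated and not sc_elevated:
        return "calm"
    return "uncertain"         # mixed signals; keep monitoring

# Example: resting baseline vs. a sample taken in heavy traffic.
rest = [WheelSample(68, 2.1), WheelSample(70, 2.0), WheelSample(69, 2.2)]
print(estimate_driver_state(rest, WheelSample(84, 3.4)))  # -> "tense"
```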
There was also signage that used a camera to watch people and determine what actions they were taking. It would display whether someone was yawning, making a phone call, or drinking a beverage.
These hands-on displays made it clear that sensors and AI could be combined to assess people’s states with a high degree of accuracy. If my car had features like these, I’d enjoy even greater peace of mind when driving.

WELL Cabin Craie2
Accommodating everyone on board
The vehicle of the future, made possible by sensors and AI

Next, I experienced the “WELL Cabin Craie2”. I sat in the driver’s seat and listened to the staff’s explanation as I enjoyed going on a “virtual drive.”
The “WELL Cabin Craie2” has numerous functions that provide the driver and all of the vehicle’s occupants with greater “Joy in Motion” by using sensing functions, both inside and outside the vehicle, and AI.
First, it proposes driving routes based on the driver's preferences, avoiding not just congested roads but also narrow streets and other areas the driver would find stressful. While driving, the car uses sensors to detect the conditions around it. It even helps the driver relax by offering compliments such as "You're driving smoothly." It uses sensors to issue alerts for areas that tend to be blind spots, such as the rear left corner. It will even detect when you yawn and suggest that you take a break. The car proactively takes all kinds of actions.
On top of that, it has a function that allows people in the back seat to connect to the car via their own smartphone and share route information, so everyone can get even more enjoyment out of the drive. If someone wants to stop somewhere, instead of calling out from the back to let the driver know, they can make their suggestion through their phone and change the route directly. This frees the driver from having to fiddle with the navigation system. The “WELL Cabin Craie2” is designed to give everyone in the vehicle a shared experience. Future possibilities being considered include expanding the car’s functionality so that it can use information stored in individual occupants’ smartphones to make recommendations for places to eat, music to listen to, and the like.
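As a purely hypothetical illustration of how a rear-seat suggestion might travel from a phone to the navigation system, here is a minimal sketch of the kind of message such a feature could exchange. PAS has not published this interface, so the field names and example values are invented.

```python
# Purely hypothetical message shape: PAS has not published this interface.
# Sketches what a rear-seat passenger's stop suggestion, sent from their
# phone to the car's navigation system, might look like.
from dataclasses import dataclass, asdict
import json

@dataclass
class StopSuggestion:
    passenger_id: str     # which occupant's phone sent it
    place_name: str       # the stop they want to add
    latitude: float
    longitude: float
    note: str = ""        # optional message shown to the driver

suggestion = StopSuggestion(
    passenger_id="rear-left",
    place_name="Umihotaru Parking Area",
    latitude=35.464,
    longitude=139.874,
    note="Can we take a short break here?")

# Sent to the in-car navigation, which inserts the stop into the route.
print(json.dumps(asdict(suggestion), ensure_ascii=False))
```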

One unique aspect of the system is that it even uses AI to analyze what can be seen from the car. For example, in a scenario where I was driving at night, I happened to see a firework go off in front of me. I pointed at the firework, and the AI detected this gesture and displayed the following on the tablet screen: "This type of firework is called a 'kiku,' or 'chrysanthemum,' firework." In addition to telling me what kind of firework it was, it also told me the origin of its name. A staff member explained, "Also, if you point at a building, it can tell you what that building is. There's all kinds of information that you can call up using gestures."

As I was watching the fireworks scene, a staff member said, "Look at the camera and smile." When I did, I heard the sound of a camera shutter, and then the car sent a photo to my phone of me, smiling, composited with an image of the fireworks. "It's designed to take a picture when it detects you smiling, and then to composite that with the view in front of the car—in this case, fireworks—to create a photographic memento." I thought about all the times when I'd wanted to take a photo but couldn't because I was driving. With this, I could take a photo to remember the trip by, or to show others, just by pointing or changing my expression. This hands-on demo helped me better imagine what going for a drive would be like in the future.
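For readers curious how a smile-triggered memento could be put together, here is a minimal sketch using OpenCV's bundled Haar cascades to detect a smile and blend the cabin-camera frame with the forward view. The actual PAS pipeline is not public, and the file names and blending weights here are placeholders.

```python
# Illustrative only: PAS's actual pipeline is not public. This sketch detects
# a smile with OpenCV's bundled Haar cascades and, when found, blends the
# cabin-camera frame with the forward-camera view (here, a fireworks scene).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def smiling(frame) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        # Smile detection needs strict parameters to avoid false positives.
        if len(smile_cascade.detectMultiScale(roi, 1.7, 22)) > 0:
            return True
    return False

# Hypothetical file names for the cabin frame and the forward view.
cabin = cv2.imread("cabin_frame.jpg")
scenery = cv2.imread("fireworks_frame.jpg")

if smiling(cabin):
    scenery = cv2.resize(scenery, (cabin.shape[1], cabin.shape[0]))
    # Simple weighted blend as a stand-in for the real compositing.
    memento = cv2.addWeighted(cabin, 0.6, scenery, 0.4, 0)
    cv2.imwrite("memento.jpg", memento)
```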
The “WELL Cabin Craie2” exhibit really impressed me with how the car didn’t just help make driving safer, but also made it more pleasant and fun for everyone on board.

WELL Cabin GranLuxe
Simply going from point A to point B turns into a voyage
The mobility experiences the GranLuxe seeks to offer

PAS has teamed up with the MAHA Group, which provides hospitality services to inbound tourists, to offer premium mobility experiences to visitors to Japan through the "WELL Cabin Luxe," which is based on the Toyota Alphard.
A full-scale floor model of the upcoming next-generation "WELL Cabin GranLuxe" was exhibited at Tokyo Auto Salon. I was able to sit in the vehicle and experience the hospitality provided by its cabin interior and AI. The "WELL Cabin GranLuxe" has been updated with a focus on providing a heightened sightseeing experience.
The first thing that catches your eye is the exterior. Designed in collaboration with the famous Italian design studio Italdesign, it features a traditional Japanese motif of blue ocean waves. It has a distinctly Japanese feel, apparent at first glance, and the moment you step inside, your expectations for your travels in Japan climb even higher.
The cabin is spacious. There are comfortable seats, elegant indirect lighting, a collection of speakers that combine to shape the cabin’s acoustic space, and a large 55-inch transmissive screen. Various elements, centering around the immersive display, add dramatic highlights tailored to individual driving scenarios and vehicle uses.
Occupants can enjoy all kinds of content on the display. Not only can they watch informational tourism videos, but they can also put on ambient videos, such as campfire scenes, and relax as they travel. The crackling sound of the campfire makes it all the more immersive. The lighting is also linked to the video. Various presentation techniques have also been used to help prevent car sickness. One key point is that the screen is transmissive, so the view ahead remains visible. Occupants can see the scenery before them while immersing themselves in the video they are watching. This gentle blend of the real world and video content turns the act of simply moving from one point to another into a high-value experience.

One thing that really stood out to me was WELL Attendant, the AI concierge. By asking the AI avatar questions, occupants can naturally learn information that makes sightseeing destinations even more engaging, such as their history and culture.
I tried it out myself. I selected WELL Attendant on my tablet and asked, "What's a good Japanese restaurant in Asakusa?" WELL Attendant answered instantly. It also answered more detailed questions, like "How did Akihabara turn into an electronics mecca?" and "Are there any ramen restaurants around here that offer extra-large portions for the same price as a regular bowl?"
I was surprised at how natural its responses were. It didn't feel at all mechanical. A staff member explained, "We've carefully tuned the speed of its answers, the way it responds, how long it talks, and other aspects, so that users can talk with it naturally." And, indeed, its answers weren't simply fast; it actually took a brief beat after I finished talking before it answered. It didn't keep me waiting, but it didn't talk over me, either. It didn't feel like talking to an AI; it felt the same as talking to an actual human. Many of the visitors to the booth were representatives from travel planning agencies, municipal governments, and other organizations interested in tourism-related issues. There were even visitors from overseas, and WELL Attendant was able to answer even their unexpected questions.

Another thing that surprised me was its support for multiple languages and dialects. At present, in addition to supporting numerous languages, such as English, Chinese, and French, it also supports various dialects from across Japan. A visitor from China was impressed that the AI concierge spoke not only Mandarin but also Cantonese.
The goals of the AI concierge are, of course, to provide greater "Joy in Motion," but also to help address issues faced by business operators, such as a shortage of tour guides and the difficulty of providing multilingual support. According to the staff, in the future the sightseeing information it provides will even be optimized based on passengers' states, such as their facial expressions. To me, this felt like an ambitious project that sought to design "Joy in Motion" not only for people, but for society itself.

Overall impressions
The future created by “Joy in Motion” design
New, emotionally moving mobility experiences
The PAS booth showed the evolution of the car, from a simple means of transportation to a partner that moves people’s hearts. Cars that read people’s conditions, think ahead and accommodate drivers, and even help set the mood for passengers. This kind of new mobility experience is almost within reach.
Led by the vision of becoming the “Joy in Motion” design company, PAS will further advance its intensely human technologies to turn mobility into a partner that will provide even more wonderful experiences.
