Hon Hai Research Institute Unveils AI-enabled ModeSeQ
July 14, 2025 | Hon Hai Technology Group | Estimated reading time: 2 minutes
Hon Hai Research Institute (HHRI), an R&D powerhouse of Hon Hai Technology Group (Foxconn), the world’s largest electronics manufacturer and technology service provider, has been recognized for its competitive work in trajectory prediction in autonomous driving technology.
ModeSeq took the top spot in the Waymo Open Dataset Challenge and was presented at CVPR 2025, one of the world's most influential AI and computer vision conferences, which gathers top-tier tech firms, research institutions, and academic leaders. These landmark achievements highlight HHRI's growing leadership and technical excellence on the international stage.
“ModeSeq empowers autonomous vehicles with more accurate and diverse predictions of traffic participant behaviors,” said Yung-Hui Li, Director of the Artificial Intelligence Research Center at HHRI. “It directly enhances decision-making safety, reduces computational cost, and introduces unique mode-extrapolation capabilities to dynamically adjust the number of predicted behavior modes based on scenario uncertainty.”
Figure 1: The ModeSeq workflow. The model anticipates multiple possible future trajectories (highlighted by red vehicle icons and arrows), progressively analyzing the scenario and assigning a confidence score (e.g., 0.2) to each potential path.
On June 13, HHRI's Artificial Intelligence Research Center, in collaboration with City University of Hong Kong, presented "ModeSeq: Taming Sparse Multimodal Motion Prediction with Sequential Mode Modeling" at CVPR 2025 (IEEE/CVF Conference on Computer Vision and Pattern Recognition), where the paper was among only the 22% of submissions accepted.
The multimodal trajectory-prediction technology overcomes the limitations of prior methods by both preserving high performance and delivering diverse potential outcome paths. ModeSeq introduces sequential mode modeling and employs an Early-Match-Take-All (EMTA) loss function to reinforce multimodal predictions. It encodes scenes with Factorized Transformers and decodes them with a hybrid architecture combining Memory Transformers and dedicated ModeSeq layers.
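The EMTA loss belongs to the broader family of winner-take-all training objectives for multimodal prediction, in which only the candidate mode that best matches the observed future receives the regression loss, so diverse futures are not averaged into one blurry prediction. The sketch below is a generic winner-take-all illustration of that family, not ModeSeq's actual EMTA formulation; the function name, array shapes, and toy data are hypothetical.

```python
import numpy as np

def wta_loss(pred, conf, gt):
    """Generic winner-take-all loss for multimodal prediction (illustrative).

    pred: (K, T, 2) candidate trajectories over T timesteps.
    conf: (K,) mode probabilities (summing to 1).
    gt:   (T, 2) ground-truth trajectory.
    """
    # Average displacement of each mode from the ground truth.
    ade = np.linalg.norm(pred - gt[None], axis=-1).mean(axis=1)  # (K,)
    best = int(ade.argmin())
    # Regress only the best-matching mode...
    reg = ade[best]
    # ...and push probability mass toward that mode.
    cls = -np.log(conf[best] + 1e-9)
    return reg + cls

# Toy example: mode 0 matches the ground truth exactly, mode 1 is offset.
gt = np.array([[0.0, 0.0], [1.0, 0.0]])
pred = np.array([[[0.0, 0.0], [1.0, 0.0]],
                 [[0.0, 2.0], [1.0, 2.0]]])
loss = wta_loss(pred, np.array([0.9, 0.1]), gt)
```

Averaging the regression loss over all modes instead would penalize every plausible-but-unchosen future, which is why winner-take-all variants dominate in this setting.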
The research team further refined ModeSeq into Parallel ModeSeq, which claimed victory in the prestigious Waymo Open Dataset (WOD) Challenge – Interaction Prediction Track at the CVPR WAD Workshop. The team's winning entry surpassed strong competitors from the National University of Singapore, University of British Columbia, Vector Institute for AI, University of Waterloo, and Georgia Institute of Technology.
Building on their success from last year – where ModeSeq placed second globally in the 2024 CVPR Waymo Motion Prediction Challenge – this year’s Parallel ModeSeq emerged triumphant in the 2025 Interaction Prediction track.
Led by Director Li of HHRI's AI Research Center, in collaboration with Professor Jianping Wang's group at City University of Hong Kong and researchers from Carnegie Mellon University, the team's ModeSeq outperforms previous approaches on the Motion Prediction Benchmark, achieving superior mAP and soft-mAP scores while maintaining comparable minADE and minFDE metrics.
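For readers unfamiliar with the benchmark metrics: minADE is the average displacement error of the candidate trajectory that best matches the ground truth, and minFDE is the displacement error at the final timestep of the best candidate. A minimal numpy sketch (the function name and array shapes are illustrative, not from the benchmark code):

```python
import numpy as np

def min_ade_fde(pred, gt):
    """Compute minADE and minFDE for multimodal trajectory predictions.

    pred: (K, T, 2) array of K candidate trajectories over T timesteps.
    gt:   (T, 2) array, the ground-truth trajectory.
    """
    # Per-timestep Euclidean displacement between each mode and the ground truth.
    disp = np.linalg.norm(pred - gt[None], axis=-1)  # (K, T)
    ade_per_mode = disp.mean(axis=1)                 # average over timesteps
    fde_per_mode = disp[:, -1]                       # final timestep only
    return ade_per_mode.min(), fde_per_mode.min()

# Toy example: two modes, three timesteps; mode 0 matches the ground truth.
gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred = np.array([
    [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],   # perfect mode
    [[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]],   # offset by 1 m throughout
])
min_ade, min_fde = min_ade_fde(pred, gt)
print(min_ade, min_fde)  # 0.0 0.0 (the perfect mode wins both)
```

Because both metrics take the minimum over modes, they reward having at least one good candidate; mAP-style metrics, by contrast, also account for the confidence scores assigned to each mode.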
Figure 2: Director Yung-Hui Li (right) and Researcher Ming-Chien Hsu at CVPR 2025 presenting the latest advances in autonomous driving using ModeSeq.