“Everything that moves in the future will be robotic,” said Jensen Huang, Nvidia’s CEO, during the Nvidia GTC Artificial Intelligence Conference on March 18, 2024, in California.

Nvidia's conference showcased the latest developments in artificial intelligence computing for robotics, reinforcing our conviction that the future lies in intelligent automation. AICA is proud to be at the forefront of turning these advanced technologies into real-world applications that bring efficiency and precision to industry.

What does it mean for AICA?

Artificial intelligence has been a buzzword in the tech industry for a while now. While many companies are still experimenting with it in the lab, AICA has successfully deployed it at real production sites. We work closely with integrators to bring these advances into automation, incorporating cutting-edge technologies such as machine learning, force control, and dynamic motion.

Highlights from the conference

Foundation Models

Foundation models are large networks pre-trained on broad data that can be adapted to many downstream tasks. They let us build more advanced applications faster and combine multiple technologies to enhance performance. Three models stood out at the conference: FoundationPose, SyntheticaDETR, and FoundationGrasp.

FoundationPose is a unified solution for pose estimation and tracking. It enables robots to estimate and track the 6-DoF pose of previously unseen objects without a pre-defined, object-specific 3D model: thanks to large-scale pre-training, the model generalizes and adapts to new objects at run time.
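To make the output of such a model concrete: a pose estimate is a rigid transform, a rotation plus a translation, that maps object coordinates into the camera frame. The short NumPy sketch below is our own illustration of that representation, not FoundationPose code.

```python
import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Example: object rotated 90 degrees about the camera z-axis, 0.5 m in front of it.
theta = np.pi / 2
Rz = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
T_cam_obj = pose_matrix(Rz, np.array([0.0, 0.0, 0.5]))

# A pose estimator outputs exactly this kind of transform; applying it
# maps points from the object frame into the camera frame.
object_points = np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])  # object frame
homogeneous = np.hstack([object_points, np.ones((len(object_points), 1))])
camera_points = (T_cam_obj @ homogeneous.T).T[:, :3]          # camera frame
print(camera_points)
```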

SyntheticaDETR is a powerful tool for object detection in indoor environments. Its transformer-based detection pipeline identifies objects at very high speed, and its detections can be fed to FoundationPose for further pose estimation and tracking.

Finally, we have FoundationGrasp. This transformer-based model builds on pre-trained representations to find a suitable grip for unknown 3D objects. By leveraging deep learning, it can analyze and interpret complex 3D shapes, allowing robots to pick up and manipulate objects they have never encountered before.
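Taken together, the three models suggest a perception-to-grasp pipeline: detect, estimate pose, then grasp. The sketch below is purely illustrative; the three stub functions stand in for the roles of SyntheticaDETR, FoundationPose, and FoundationGrasp (their real APIs differ), and only the data flow between them is the point.

```python
import numpy as np

# Stub functions standing in for the real networks. Every signature here is
# our assumption, chosen only to show how the models' outputs chain together.

def detect_objects(rgb: np.ndarray) -> list[dict]:
    """Detector role (SyntheticaDETR): return bounding boxes and labels."""
    return [{"label": "mug", "box": (120, 80, 220, 200)}]

def estimate_pose(rgb: np.ndarray, depth: np.ndarray, box) -> np.ndarray:
    """Pose-estimation role (FoundationPose): return a 4x4 object-to-camera transform."""
    return np.eye(4)

def propose_grasps(pose: np.ndarray, depth: np.ndarray) -> list[tuple[np.ndarray, float]]:
    """Grasping role (FoundationGrasp): return candidate grasp poses with scores."""
    return [(np.eye(4), 0.92), (np.eye(4), 0.45)]

def perceive_and_pick(rgb: np.ndarray, depth: np.ndarray):
    for detection in detect_objects(rgb):                    # 1. fast 2D detection
        pose = estimate_pose(rgb, depth, detection["box"])   # 2. 6-DoF pose for the detection
        grasps = propose_grasps(pose, depth)                 # 3. grasp candidates for that object
        best_grasp, score = max(grasps, key=lambda g: g[1])  # pick the highest-scoring grasp
        print(f"{detection['label']}: best grasp score {score:.2f}")
        return best_grasp

perceive_and_pick(np.zeros((480, 640, 3)), np.zeros((480, 640)))
```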

Realistic simulation

One of the key topics discussed during the conference was the emerging trend of realistic simulation. The idea is to train robots on experience that would take years to accumulate on physical hardware but can be generated in simulation in minutes. Many simulated robots are trained in parallel, each attempting the assigned task in its own way, so that the best approach can be identified.
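The "many robots, many attempts, keep the best" idea can be shown with a toy random-search loop: a population of simulated policies is perturbed in parallel and only the best performer survives each generation. This is a deliberately simplified stand-in for the large-scale, physics-accurate GPU simulation discussed at the conference.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(params: np.ndarray) -> float:
    """Toy 'simulator': reward is highest when params match a hidden target.
    A real setup would roll out a physics simulation of the task instead."""
    target = np.array([0.3, -0.7, 1.2])
    return -float(np.sum((params - target) ** 2))

n_robots = 64                                    # simulated robots trained in parallel
params = rng.normal(size=(n_robots, 3))          # each robot starts with its own policy

for generation in range(50):
    rewards = np.array([simulate(p) for p in params])
    best = params[rewards.argmax()]              # identify the best approach...
    noise = rng.normal(scale=0.1, size=params.shape)
    params = best + noise                        # ...and let every robot try a variation
    params[0] = best                             # keep the champion unchanged

print("best reward:", simulate(params[0]))
```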

This technology can be leveraged directly through AICA software. We already use it to train our models, as shown on the right side of the image below. Once the most successful model has been identified, we export it into our control framework (shown on the left side of the image) and deploy it on a real robot with confidence.
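One common pattern for that last step, moving a trained policy out of its learning framework and into a runtime controller, is to export it to a portable format such as ONNX and run it with a lightweight inference engine. The sketch below illustrates that general pattern with a tiny stand-in network; it is not AICA's actual export path.

```python
import numpy as np
import torch
import onnxruntime as ort

# A tiny stand-in policy: 6-D observation in, 3-D action out.
policy = torch.nn.Sequential(
    torch.nn.Linear(6, 32), torch.nn.Tanh(), torch.nn.Linear(32, 3)
)

# Export the trained network to a portable file...
example_obs = torch.zeros(1, 6)
torch.onnx.export(policy, example_obs, "policy.onnx",
                  input_names=["obs"], output_names=["action"])

# ...then load it in the deployment environment, decoupled from the training stack.
session = ort.InferenceSession("policy.onnx", providers=["CPUExecutionProvider"])
obs = np.zeros((1, 6), dtype=np.float32)     # observation from the real robot
action, = session.run(None, {"obs": obs})    # runs inside the control loop
print(action)
```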

Potential for AICA

In light of this event, we at AICA are keeping a close eye on the latest advancements in robotic technologies. Our team has identified two crucial elements for the further improvement of our software and the development of AICA's technology.

First, we will continue to use realistic simulation in our process: it lets robots learn complex tasks faster and ensures that only the best-performing model is carried over to the real robot, so that our systems perform optimally and reliably in real-world situations.

Second, we are exploring the potential of foundation models for object detection and pose estimation to develop complex applications alongside our robotic control tools.

As the world moves towards intelligent robotics and automation, AICA's technology becomes increasingly relevant. Our commitment to "Accessible robotics for everyone" means that we will keep making it easier to program robots for complex tasks. The Nvidia technologies described above bring us another step closer to the emerging world of robotics automation.